Sample records for resolution numerical models

  1. Studies regarding the quality of numerical weather forecasts of the WRF model integrated at high-resolutions for the Romanian territory

    DOE PAGES

    Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; ...

    2016-01-01

    Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.
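
    The abstract does not list the specific scores used; as a minimal sketch of the kind of point verification it describes, the following computes mean bias and RMSE for paired forecast and observed 2 m temperatures (arrays and values are hypothetical placeholders, not the study's data):

    ```python
    # Illustrative sketch (not the authors' code): two common point-verification scores,
    # mean bias and RMSE, for forecast vs. observed 2 m temperature.
    import numpy as np

    forecasts = np.array([271.2, 274.8, 280.1, 283.5])     # model 2 m temperature (K), hypothetical
    observations = np.array([270.1, 273.9, 279.0, 282.2])  # station 2 m temperature (K), hypothetical

    errors = forecasts - observations
    bias = errors.mean()                      # positive bias = model overestimates
    rmse = np.sqrt((errors ** 2).mean())      # typical magnitude of the error

    print(f"bias = {bias:.2f} K, RMSE = {rmse:.2f} K")
    ```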

  2. Studies regarding the quality of numerical weather forecasts of the WRF model integrated at high-resolutions for the Romanian territory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia

    Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.

  3. Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

    2015-06-01

    Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 − σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 − σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
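
    For orientation, the scale-awareness argument is usually stated in terms of a top-hat decomposition of a grid cell into a convective and an environmental part; a schematic form (standard notation from the literature, not quoted from AW13 or the comment) is:

    ```latex
    % Top-hat form of the subgrid vertical eddy flux of a quantity \psi, with convective
    % area fraction \sigma, in-cloud values (.)_c and environment values (.)_e:
    \overline{w'\psi'} \;=\; \sigma\,(1-\sigma)\,\bigl(w_c - w_e\bigr)\bigl(\psi_c - \psi_e\bigr)
    % As \sigma \to 0 this recovers the conventional mass-flux limit; as \sigma \to 1 the
    % flux vanishes, which is the scale-awareness property discussed above.
    ```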

  4. An Overview of Numerical Weather Prediction on Various Scales

    NASA Astrophysics Data System (ADS)

    Bao, J.-W.

    2009-04-01

    The increasing public need for detailed weather forecasts, along with the advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or higher for global models and 1 km or higher for regional models, and with ~60 vertical levels or more). The need for running NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamic core with a choice of horizontal grid configurations and vertical coordinates that are appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenge of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or higher, the use of much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or higher only partially resolve the convective energy-containing eddies in the lower troposphere. Parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations for air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane prediction models. In this review presentation, the above issues will be elaborated on and the approaches to address them will be discussed.

  5. Increasing horizontal resolution in numerical weather prediction and climate simulations: illusion or panacea?

    PubMed

    Wedi, Nils P

    2014-06-28

    The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular when convective-scale motions start to be resolved. Blunt increases in the model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty, for each part of the model, and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in spectral and physical space. One path is to reduce the formal accuracy with which the spectral transforms are computed. The other pathway explores the importance of the ratio used for the horizontal resolution in gridpoint space versus wavenumbers in spectral space. This is relevant for both high-resolution simulations and ensemble-based uncertainty estimation.
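
    The gridpoint-versus-wavenumber ratio mentioned here is commonly expressed through the so-called linear, quadratic, and cubic grid rules. A quick illustrative calculation under those standard rules follows; the truncation N = 1279 is an example value, not a number taken from the paper:

    ```python
    # Illustrative only: approximate equatorial grid spacing implied by a triangular spectral
    # truncation N under the standard linear (2N+1), quadratic (3N+1) and cubic (4N+1) rules
    # for the minimum number of gridpoints per latitude circle.
    EARTH_CIRCUMFERENCE_KM = 40075.0
    N = 1279  # example truncation

    for name, points in [("linear", 2 * N + 1), ("quadratic", 3 * N + 1), ("cubic", 4 * N + 1)]:
        dx = EARTH_CIRCUMFERENCE_KM / points
        print(f"{name:9s} grid: {points} points around the equator, dx ~ {dx:.1f} km")
    ```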

  6. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

    We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. Recent ALMA observations have revealed the early phases of binary and multiple star formation at high spatial resolution. These observations should be compared with theoretical models of comparably high spatial resolution. We present two theoretical models for (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto the protobinary. The simulations exhibit asymmetry of the circumbinary disk. Such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.

  7. Application of Raytracing Through the High Resolution Numerical Weather Model HIRLAM for the Analysis of European VLBI

    NASA Technical Reports Server (NTRS)

    Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco

    2010-01-01

    An important limitation on the precision of results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delay caused by the neutral atmosphere, see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions which are used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited-area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial (0.2 deg. x 0.2 deg.) and high temporal resolution (in prediction mode, three hours).
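
    As a reminder of what the raytraced slant delay represents, the standard formulation (not specific to this paper) integrates the refractivity along the bent ray path and adds a geometric bending term:

    ```latex
    % Slant tropospheric delay along the raytraced path s through refractive index n(s):
    \Delta L \;=\; \int_{\text{ray}} \bigl(n(s) - 1\bigr)\,\mathrm{d}s \;+\; \bigl(S - G\bigr),
    % where S is the length of the bent ray path and G the straight-line (vacuum) distance,
    % so the second term accounts for geometric bending of the ray.
    ```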

  8. The importance of vertical resolution in the free troposphere for modeling intercontinental plumes

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiawei; Jacob, Daniel J.; Eastham, Sebastian D.

    2018-05-01

    Chemical plumes in the free troposphere can preserve their identity for more than a week as they are transported on intercontinental scales. Current global models cannot reproduce this transport. The plumes dilute far too rapidly due to numerical diffusion in sheared flow. We show how model accuracy can be limited by either horizontal resolution (Δx) or vertical resolution (Δz). Balancing horizontal and vertical numerical diffusion, and weighing computational cost, implies an optimal grid resolution ratio (Δx/Δz)opt ~ 1000 for simulating the plumes. This is considerably higher than current global models (Δx/Δz ~ 20) and explains the rapid plume dilution in the models as caused by insufficient vertical resolution. Plume simulations with the Geophysical Fluid Dynamics Laboratory Finite-Volume Cubed-Sphere Dynamical Core (GFDL-FV3) over a range of horizontal and vertical grid resolutions confirm this limiting behavior. Our highest-resolution simulation (Δx ≈ 25 km, Δz ≈ 80 m) preserves the maximum mixing ratio in the plume to within 35% after 8 days in strongly sheared flow, a drastic improvement over current models. Adding free-tropospheric vertical levels in global models is computationally inexpensive and would also improve the simulation of water vapor.
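
    A trivial application of the quoted optimum ratio: the value ~1000 is taken from the abstract above, while the horizontal grid spacings below are arbitrary examples.

    ```python
    # Vertical spacing needed to balance horizontal and vertical numerical diffusion,
    # using the optimal ratio (dx/dz)_opt ~ 1000 quoted in the abstract (example dx values).
    OPTIMAL_RATIO = 1000.0

    for dx_km in (100.0, 50.0, 25.0):
        dz_m = dx_km * 1000.0 / OPTIMAL_RATIO
        print(f"dx = {dx_km:5.1f} km  ->  dz ~ {dz_m:.0f} m in the free troposphere")
    ```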

  9. Parameter uncertainty in simulations of extreme precipitation and attribution studies.

    NASA Astrophysics Data System (ADS)

    Timmermans, B.; Collins, W. D.; O'Brien, T. A.; Risser, M. D.

    2017-12-01

    The attribution of extreme weather events, such as heavy rainfall, to anthropogenic influence involves the analysis of their probability in simulations of climate. The climate models used, however, such as the Community Atmosphere Model (CAM), employ approximate physics that gives rise to "parameter uncertainty": uncertainty about the most accurate or optimal values of numerical parameters within the model. In particular, approximate parameterisations for convective processes are well known to be influential in the simulation of precipitation extremes. Towards examining the impact of this source of uncertainty on attribution studies, we investigate the importance of components, through their associated tuning parameters, of parameterisations relating to deep and shallow convection, and cloud and aerosol microphysics in CAM. We hypothesise that, as numerical resolution is increased, the change in the proportion of variance induced by perturbed parameters associated with the respective components is consistent with the decreasing applicability of the underlying hydrostatic assumptions. For example, the relative influence of deep convection should diminish as resolution approaches that at which convection can be resolved numerically (~10 km). We quantify the relationship between the relative proportion of variance induced and numerical resolution by conducting computer experiments that examine precipitation extremes over the contiguous U.S. In order to mitigate the enormous computational burden of running ensembles of long climate simulations, we use variable-resolution CAM and employ both extreme value theory and surrogate modelling techniques ("emulators"). We discuss the implications of the relationship between parameterised convective processes and resolution both in the context of attribution studies and in progression towards models that fully resolve convection.
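
    The extreme value analysis mentioned here is often done by fitting a generalized extreme value (GEV) distribution to block maxima; a minimal sketch with SciPy, using synthetic data rather than the authors' simulations, is:

    ```python
    # Minimal sketch: fit a GEV distribution to synthetic annual precipitation maxima
    # and estimate the 20-year return level. Data are randomly generated placeholders.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    annual_maxima = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=60, random_state=rng)  # mm/day

    shape, loc, scale = genextreme.fit(annual_maxima)
    return_level_20yr = genextreme.isf(1.0 / 20.0, shape, loc=loc, scale=scale)
    print(f"20-year return level ~ {return_level_20yr:.1f} mm/day")
    ```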

  10. Atmospheric model development in support of SEASAT. Volume 1: Summary of findings

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.

    1977-01-01

    Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.

  11. Effects of sounding temperature assimilation on weather forecasting - Model dependence studies

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Halem, M.; Atlas, R.

    1979-01-01

    In comparing various methods for the assimilation of remote sounding information into numerical weather prediction (NWP) models, the problem of model dependence of the different results obtained becomes important. The paper investigates two aspects of the model dependence question: (1) the effect of increasing horizontal resolution within a given model on the assimilation of sounding data, and (2) the effect of using two entirely different models with the same assimilation method and sounding data. Tentative conclusions reached are: first, that model improvement, as exemplified by increased resolution, can act in the same direction as judicious 4-D assimilation of remote sounding information to improve 2-3 day numerical weather forecasts; second, that the time-continuous 4-D methods developed at GLAS have similar beneficial effects when used in the assimilation of remote sounding information into NWP models with very different numerical and physical characteristics.

  12. Consistent three-equation model for thin films

    NASA Astrophysics Data System (ADS)

    Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul

    2017-11-01

    Numerical simulations of thin films of Newtonian fluids flowing down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging the physical equations of fluid mechanics over the fluid depth with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either the momentum balance equation or the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical simulations for neutral stability curves, the speed of kinematic waves and of solitary waves, and the depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.
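
    For context, a generic depth-averaged two-equation (Saint-Venant-type) film model has the schematic form below; the shape factor and wall-shear closure, which are exactly where the consistency issues discussed above arise, are left unspecified, and this is not the authors' three-equation model:

    ```latex
    % Schematic two-equation film model on an incline of angle \theta
    % (h: film thickness, q: depth-integrated flow rate, \Gamma: velocity-profile shape factor,
    %  \tau_w: wall shear closure; surface tension terms omitted for brevity):
    \partial_t h + \partial_x q = 0, \qquad
    \partial_t q + \partial_x\!\left(\Gamma\,\frac{q^2}{h} + \frac{g\cos\theta}{2}\,h^2\right)
      = g\,h\sin\theta - \frac{\tau_w}{\rho}.
    ```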

  13. ON THE IMPACT OF SUPER RESOLUTION WSR-88D DOPPLER RADAR DATA ASSIMILATION ON HIGH RESOLUTION NUMERICAL MODEL FORECASTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiswell, S

    2009-01-11

    Assimilation of radar velocity and precipitation fields into high-resolution model simulations can improve precipitation forecasts with decreased 'spin-up' time and improve the short-term simulation of boundary layer winds (Benjamin, 2004 & 2007; Xiao, 2008), which is critical to improving plume transport forecasts. Accurate description of wind and turbulence fields is essential to useful atmospheric transport and dispersion results, and any improvement in the accuracy of these fields will make consequence assessment more valuable during both routine operation and potential emergency situations. During 2008, the United States National Weather Service (NWS) radars implemented a significant upgrade which increased the real-time level II data resolution to 8 times the previous 'legacy' resolution, from a 1 km range gate and 1.0 degree azimuthal resolution to a 'super resolution' 250 m range gate and 0.5 degree azimuthal resolution (Fig 1). These radar observations provide reflectivity, velocity and returned power spectra measurements at a range of up to 300 km (460 km for reflectivity) at a frequency of 4-5 minutes and yield up to 13.5 million point observations per level in super-resolution mode. The migration of NWS WSR-88D radars to super resolution is expected to improve warning lead times by detecting small-scale features sooner and with increased reliability; however, current operational mesoscale model domains utilize grid spacing several times larger than the legacy data resolution, and therefore the added resolution of radar data is not fully exploited. The assimilation of super resolution reflectivity and velocity data into high-resolution numerical weather model forecasts, where grid spacing is comparable to the radar data resolution, is investigated here to determine the impact of the improved data resolution on model predictions.
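
    The quoted factor of 8 follows directly from the two resolution changes stated above:

    ```python
    # Super-resolution upgrade quoted above: range gates 1 km -> 250 m, azimuth 1.0 -> 0.5 deg.
    range_factor = 1000.0 / 250.0   # 4x finer in range
    azimuth_factor = 1.0 / 0.5      # 2x finer in azimuth
    print(f"resolution increase: {range_factor * azimuth_factor:.0f}x")  # 8x
    ```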

  14. High-resolution observations of the near-surface wind field over an isolated mountain and in a steep river canyon

    Treesearch

    B. W. Butler; N. S. Wagenbrenner; J. M. Forthofer; B. K. Lamb; K. S. Shannon; D. Finn; R. M. Eckman; K. Clawson; L. Bradshaw; P. Sopko; S. Beard; D. Jimenez; C. Wold; M. Vosburgh

    2015-01-01

    A number of numerical wind flow models have been developed for simulating wind flow at relatively fine spatial resolutions (e.g., 100 m); however, there are very limited observational data available for evaluating these high-resolution models. This study presents high-resolution surface wind data sets collected from an isolated mountain and a steep river canyon. The...

  15. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a model for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified by applying it to several computationally intensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
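
    The trade-off described here can be written schematically as a constrained minimization; the form below is a generic sketch of such an error model, not the authors' derived expression:

    ```latex
    % Generic error model: discretization error decaying with grid spacing \Delta at order p,
    % plus Monte Carlo sampling error decaying as N^{-1/2} with N realizations, minimized
    % under a fixed computational budget T_0 (d: spatial dimension of the grid):
    E(\Delta, N) \;\approx\; C_1\,\Delta^{\,p} \;+\; C_2\,N^{-1/2},
    \qquad \min_{\Delta,\,N} E(\Delta, N) \;\;\text{subject to}\;\; T(\Delta, N) \propto N\,\Delta^{-d} \le T_0 .
    ```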

  16. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca; Palmer, Kevin; Deutsch, Clayton V.

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
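
    One standard device for the "complex nonlinear averaging" mentioned above is a power-law (omega) average, shown schematically below; the exponent is calibrated per variable, and this is not necessarily the exact workflow of the paper:

    ```latex
    % Power-law average of a metallurgical property z over a block V discretized into n samples u_i;
    % \omega = 1 recovers linear averaging, and \omega \to 0 tends to the geometric mean:
    \bar{z}(V) \;=\; \left(\frac{1}{n}\sum_{i=1}^{n} z(u_i)^{\,\omega}\right)^{1/\omega}.
    ```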

  17. Atmospheric Test Models and Numerical Experiments for the Simulation of the Global Distributions of Weather Data Transponders III. Horizontal Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molenkamp, C.R.; Grossman, A.

    1999-12-20

    A network of small balloon-borne transponders which gather very high resolution wind and temperature data for use by modern numerical weather prediction models has been proposed to improve the reliability of long-range weather forecasts. The global distribution of an array of such transponders is simulated using LLNL's atmospheric parcel transport model (GRANTOUR) with winds supplied by two different general circulation models. An initial study used winds from CCM3 with a horizontal resolution of about 3 degrees in latitude and longitude, and a second study used winds from NOGAPS with a 0.75 degree horizontal resolution. Results from both simulations show that reasonable global coverage can be attained by releasing balloons from an appropriate set of launch sites.

  18. 3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"

    NASA Technical Reports Server (NTRS)

    Douglass, A.

    2005-01-01

    The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
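
    Since the second lecture covers shock-capturing transport schemes, a minimal illustration of the idea is a 1-D finite-volume advection step with a minmod slope limiter; this is a generic textbook scheme, not the scheme used in any particular model mentioned above:

    ```python
    # Minimal 1-D finite-volume advection step (velocity u > 0, periodic domain) using a
    # MUSCL reconstruction with a minmod slope limiter -- a generic shock-capturing scheme.
    import numpy as np

    def minmod(a, b):
        """Minmod slope limiter: zero at extrema, the smaller slope elsewhere."""
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def advect_step(q, u, dx, dt):
        """Advance cell averages q by one step of dq/dt + u dq/dx = 0 (u > 0)."""
        slopes = minmod(np.roll(q, -1) - q, q - np.roll(q, 1))   # limited cell slopes
        q_face = q + 0.5 * (1.0 - u * dt / dx) * slopes          # upwind face values at i+1/2
        flux = u * q_face                                        # flux at right face of each cell
        return q - dt / dx * (flux - np.roll(flux, 1))           # conservative update

    # Example: advect a square pulse once around a periodic unit domain.
    nx, u = 200, 1.0
    dx = 1.0 / nx
    dt = 0.5 * dx / u
    x = np.arange(nx) * dx
    q = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
    for _ in range(int(1.0 / (u * dt))):
        q = advect_step(q, u, dx, dt)
    print(f"min={q.min():.3f}, max={q.max():.3f}")  # limiter keeps values within [0, 1]
    ```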

  19. Hydrologic downscaling of soil moisture using global data without site-specific calibration

    USDA-ARS?s Scientific Manuscript database

    Numerous applications require fine-resolution (10-30 m) soil moisture patterns, but most satellite remote sensing and land-surface models provide coarse-resolution (9-60 km) soil moisture estimates. The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales soil moistu...

  20. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    NASA Astrophysics Data System (ADS)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

    Accessing or acquiring high quality, low-cost topographic data has never been easier due to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or covering large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate the models' performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and have demonstrated their ability to improve hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, comprising both high-resolution time series and long-term temporal coverage, provided significantly improved calibration routines that refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, simulations calibrated for statistical agreement struggled to represent braided planforms (evolving toward meandering), while parameterizations that ensured braiding produced exaggerated activation and bank erosion rates. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917

  1. Tropical cyclones over the North Indian Ocean: experiments with the high-resolution global icosahedral grid point model GME

    NASA Astrophysics Data System (ADS)

    Kumkar, Yogesh V.; Sen, P. N.; Chaudhari, Hemankumar S.; Oh, Jai-Ho

    2018-02-01

    In this paper, an attempt has been made to conduct numerical experiments with the high-resolution global model GME to predict the tropical storms in the North Indian Ocean during the year 2007. Numerical integrations using the icosahedral hexagonal grid point global model GME were performed to study the evolution of the tropical cyclones Akash, Gonu, Yemyin and Sidr over the North Indian Ocean during 2007. It has been seen that the GME forecast underestimates cyclone intensity, but the model can capture the evolution of cyclone intensity, especially its weakening during landfall, which is primarily due to the cutoff of the water vapor supply in the boundary layer as cyclones approach the coastal region. A series of numerical simulations of tropical cyclones have been performed with GME to examine the model's capability in predicting the intensity and track of the cyclones. The model performance is evaluated by calculating the root mean square errors of the cyclone tracks.
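
    Track errors of the kind mentioned are usually computed as great-circle distances between forecast and best-track positions; a minimal sketch follows (the coordinates are made-up examples, not values from the study):

    ```python
    # Great-circle (haversine) distance between forecast and observed cyclone positions,
    # a common definition of track error. Coordinates below are made-up examples.
    import math

    def track_error_km(lat_f, lon_f, lat_o, lon_o, radius_km=6371.0):
        phi1, phi2 = math.radians(lat_f), math.radians(lat_o)
        dphi = math.radians(lat_o - lat_f)
        dlmb = math.radians(lon_o - lon_f)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2.0 * radius_km * math.asin(math.sqrt(a))

    print(f"{track_error_km(15.0, 68.0, 15.4, 67.2):.0f} km")  # example 24 h track error
    ```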

  2. Improving Numerical Weather Predictions of Summertime Precipitation Over the Southeastern U.S. Through a High-Resolution Initialization of the Surface State

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Krikishen, Jayanthi; Jedlovec, Gary J.

    2011-01-01

    It is hypothesized that high-resolution, accurate representations of surface properties such as soil moisture and sea surface temperature are necessary to improve simulations of summertime pulse-type convective precipitation in high resolution models. This paper presents model verification results of a case study period from June-August 2008 over the Southeastern U.S. using the Weather Research and Forecasting numerical weather prediction model. Experimental simulations initialized with high-resolution land surface fields from the NASA Land Information System (LIS) and sea surface temperature (SST) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) are compared to a set of control simulations initialized with interpolated fields from the National Centers for Environmental Prediction 12-km North American Mesoscale model. The LIS land surface and MODIS SSTs provide a more detailed surface initialization at a resolution comparable to the 4-km model grid spacing. Soil moisture from the LIS spin-up run is shown to respond better to the extreme rainfall of Tropical Storm Fay in August 2008 over the Florida peninsula. The LIS has slightly lower errors and higher anomaly correlations in the top soil layer, but exhibits a stronger dry bias in the root zone. The model sensitivity to the alternative surface initial conditions is examined for a sample case, showing that the LIS/MODIS data substantially impact surface and boundary layer properties.

  3. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. The described difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics at typically used grid resolutions. The described numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in the locations of fine-scale development and removing them in the locations of smooth solution behavior. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the strong numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
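
    The core of a wavelet-based refinement criterion is thresholding of detail (wavelet) coefficients. The toy Haar-based sketch below flags regions of a 1-D field for refinement; it illustrates the principle only and is not the WAMR implementation:

    ```python
    # Toy illustration of a wavelet refinement criterion: compute Haar detail coefficients
    # of a 1-D field and flag cell pairs whose detail magnitude exceeds a threshold.
    import numpy as np

    def haar_refinement_flags(field, threshold):
        """Return one boolean flag per coarse cell (pair of fine cells) marking where to refine."""
        assert field.size % 2 == 0
        pairs = field.reshape(-1, 2)
        details = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # Haar detail coefficients
        return np.abs(details) > threshold

    x = np.linspace(0.0, 1.0, 256)
    field = np.tanh((x - 0.5) / 0.01)          # smooth everywhere except a sharp front at x = 0.5
    flags = haar_refinement_flags(field, 1e-3)
    print(f"{flags.sum()} of {flags.size} coarse cells flagged for refinement")
    ```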

  4. On a turbulent wall model to predict hemolysis numerically in medical devices

    NASA Astrophysics Data System (ADS)

    Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung

    2017-11-01

    Analyzing the degradation of red blood cells is very important for medical devices with blood flows. Blood shear stress has been recognized as the most dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress values in the regions near the wall. When predicting hemolysis numerically, this phenomenon can require a very fine mesh and large computational resources. In order to resolve this issue, the purpose of this study is to develop a turbulent wall model to predict hemolysis more efficiently. In order to decrease the numerical error of hemolysis prediction at coarse grid resolution, we divided the computational domain into two regions and applied different approaches to each region. In the near-wall region with a steep velocity gradient, an analytic approach using a modeled velocity profile is applied to reduce the numerical error and allow a coarse grid resolution. We adopt the Van Driest law as a model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for turbulent flows inside a cannula and in centrifugal pumps. The results show that the proposed turbulent wall model for hemolysis improves the computational efficiency significantly for engineering applications.
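
    The Van Driest mean-velocity profile referred to above is commonly written in wall units as the integral below; the constants are the usual textbook values, and the exact values used by the authors are not stated in the abstract:

    ```latex
    % Van Driest law of the wall in wall units (u^+ = u/u_\tau, y^+ = y u_\tau/\nu),
    % with von Karman constant \kappa \approx 0.41 and damping constant A^+ \approx 26:
    u^+(y^+) \;=\; \int_0^{y^+}
      \frac{2\,\mathrm{d}\xi}
           {1 + \sqrt{\,1 + 4\,\kappa^2 \xi^2 \bigl[\,1 - e^{-\xi/A^+}\bigr]^2\,}} .
    ```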

  5. Large-scale computations in fluid mechanics; Proceedings of the Fifteenth Summer Seminar on Applied Mathematics, University of California, La Jolla, CA, June 27-July 8, 1983. Parts 1 & 2

    NASA Technical Reports Server (NTRS)

    Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)

    1985-01-01

    Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.

  6. Exploring New Challenges of High-Resolution SWOT Satellite Altimetry with a Regional Model of the Solomon Sea

    NASA Astrophysics Data System (ADS)

    Brasseur, P.; Verron, J. A.; Djath, B.; Duran, M.; Gaultier, L.; Gourdeau, L.; Melet, A.; Molines, J. M.; Ubelmann, C.

    2014-12-01

    The upcoming high-resolution SWOT altimetry satellite will provide an unprecedented description of the ocean dynamic topography for studying sub- and meso-scale processes in the ocean. But there is still much uncertainty about the signal that will be observed. Many scientific questions remain unresolved about the observability of very high-resolution altimetry and the dynamical role of the ocean meso- and submesoscales. In addition, SWOT data will raise specific problems due to the size of the data flows. These issues will probably impact the data assimilation approaches for future scientific or operational oceanography applications. In this work, we propose to use a high-resolution numerical model of the Western Pacific Solomon Sea as a regional laboratory to explore such observability and dynamical issues, as well as the new data assimilation challenges raised by SWOT. The Solomon Sea connects subtropical water masses to the equatorial ones through the low-latitude western boundary currents and could potentially modulate the tropical Pacific climate. In the South Western Pacific, the Solomon Sea exhibits very intense eddy kinetic energy levels, while relatively little is known about the mesoscale and submesoscale activities in this region. The complex bathymetry of the region, complicated by the presence of narrow straits and numerous islands, raises specific challenges. So far, a Solomon Sea model configuration has been set up at 1/36° resolution. Numerical simulations have been performed to explore the meso- and submesoscale dynamics. The numerical solutions, which have been validated against available in situ data, show the development of small-scale features, eddies, fronts and filaments. Spectral analysis reveals a behavior that is consistent with SQG theory. There is clear evidence of an energy cascade from the small scales including the submesoscales, although those submesoscales are only partially resolved by the model. In parallel, investigations have been conducted using image assimilation approaches in order to explore the richness of high-resolution altimetry missions. These investigations illustrate the potential benefit of combining tracer fields (SST, SSS and spiciness) with high-resolution SWOT data to estimate the fine-scale circulation.

  7. NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansteen, V.; Pontieu, B. De; Carlsson, M.

    2015-10-01

    Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When including a domain from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models, with different simulated areas, magnetic field strength and topology, and numerical resolution. These are compared in order to characterize the heating in the 3D MHD simulations which self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that heating is episodic and highly structured in space, but occurs along loop-shaped structures and moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, this process occurring more readily in the higher-resolution model, leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local heating shows timescales of the order of two minutes in one model and one minute in the other, higher-resolution, model.

  8. A Conceptual Framework for SAHRA Integrated Multi-resolution Modeling in the Rio Grande Basin

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Gupta, H.; Springer, E.; Wagener, T.; Brookshire, D.; Duffy, C.

    2004-12-01

    The sustainable management of water resources in a river basin requires an integrated analysis of the social, economic, environmental and institutional dimensions of the problem. Numerical models are commonly used for integration of these dimensions and for communication of the analysis results to stakeholders and policy makers. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated multi-resolution models to assess impacts of climate variability and land use change on water resources in the Rio Grande Basin. These models not only couple natural systems such as surface and ground waters, but will also include engineering, economic and social components that may be involved in water resources decision-making processes. This presentation will describe the conceptual framework being developed by SAHRA to guide and focus the multiple modeling efforts and to assist the modeling team in planning, data collection and interpretation, communication, evaluation, etc. One of the major components of this conceptual framework is a Conceptual Site Model (CSM), which describes the basin and its environment based on existing knowledge and identifies what additional information must be collected to develop technically sound models at various resolutions. The initial CSM is based on analyses of basin profile information that has been collected, including a physical profile (e.g., topographic and vegetative features), a man-made facility profile (e.g., dams, diversions, and pumping stations), and a land use and ecological profile (e.g., demographics, natural habitats, and endangered species). Based on the initial CSM, a Conceptual Physical Model (CPM) is developed to guide and evaluate the selection of a model code (or numerical model) for each resolution to conduct simulations and predictions. A CPM identifies, conceptually, all the physical processes and engineering and socio-economic activities occurring (or to occur) in the real system that the corresponding numerical models are required to address, such as riparian evapotranspiration responses to vegetation change and groundwater pumping impacts on soil moisture contents. Simulation results from different resolution models and observations of the real system will then be compared to evaluate the consistency among the CSM, the CPMs, and the numerical models, and feedbacks will be used to update the models. In a broad sense, the evaluation of the models (conceptual or numerical), as well as the linkages between them, can be viewed as a part of the overall conceptual framework. As new data are generated and understanding improves, the models will evolve, and the overall conceptual framework is refined. The development of the conceptual framework becomes an on-going process. We will describe the current state of this framework and the open questions that have to be addressed in the future.

  9. Spectral characteristics of background error covariance and multiscale data assimilation

    DOE PAGES

    Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...

    2016-05-17

    The spatial resolutions of numerical atmospheric and oceanic circulation models have increased steadily over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. The difficulties of current data assimilation algorithms for such fine-resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine-resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
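
    To see why a long background error correlation length limits the scales an analysis can constrain, consider a simple isotropic correlation model; this is a schematic form, not the covariance actually diagnosed in the paper:

    ```latex
    % Schematic isotropic background-error correlation with length scale L:
    \rho(r) \;=\; \exp\!\left(-\frac{r^2}{2L^2}\right),
    % whose spectrum decays like \exp(-k^2 L^2 / 2); analysis increments therefore carry little
    % power at wavelengths much shorter than L (e.g. L \approx 75 km for streamfunction above).
    ```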

  10. Model validation and error estimation of tsunami runup using high resolution data in Sadeng Port, Gunungkidul, Yogyakarta

    NASA Astrophysics Data System (ADS)

    Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo

    2017-07-01

    A tsunami model using high-resolution geometric data is indispensable in efforts toward tsunami mitigation, especially in tsunami-prone areas, since the geometric data are one of the factors that affect the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami generated in the seismic gap. This paper discusses the validation and error estimation of a tsunami model created using high-resolution geometric data for Sadeng Port. The model validation uses the tsunami wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge. The tsunami model will then be used for numerical modeling driven by earthquake-tsunami parameters derived from the seismic gap. The validation results using a Student's t-test show that the tsunami heights from the model and the observations at the Sadeng tide gauge are statistically equal at the 95% confidence level, the values of the RMSE and NRMSE are 0.428 m and 22.12%, and the difference in tsunami wave travel time is 12 minutes.
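
    A minimal sketch of the kind of validation described, combining a paired t-test with RMSE and a range-normalized NRMSE (one common normalization choice); the arrays are placeholders, not the Sadeng tide-gauge record:

    ```python
    # Illustrative validation of modeled vs. observed tsunami wave heights:
    # paired t-test at the 95% level plus RMSE and range-normalized RMSE (NRMSE).
    import numpy as np
    from scipy.stats import ttest_rel

    observed = np.array([0.9, 1.4, 2.1, 1.7, 1.2])   # m, placeholder values
    modeled = np.array([1.1, 1.2, 2.4, 1.5, 1.4])    # m, placeholder values

    rmse = np.sqrt(np.mean((modeled - observed) ** 2))
    nrmse = rmse / (observed.max() - observed.min())
    t_stat, p_value = ttest_rel(modeled, observed)

    print(f"RMSE = {rmse:.3f} m, NRMSE = {100 * nrmse:.1f}%")
    print("means statistically equal at 95% level" if p_value > 0.05 else "means differ")
    ```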

  11. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
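
    The conservation property being checked can be stated as a global column water budget; in schematic form (the generic identity, not the model's specific diagnostic):

    ```latex
    % Global atmospheric water budget: the time change of total column water W (vapor plus
    % condensate, mass-weighted vertical integral) must balance the global surface moisture
    % flux, i.e. evaporation E minus precipitation P:
    \frac{\mathrm{d}}{\mathrm{d}t}\int_{\text{globe}} W \,\mathrm{d}A
      \;=\; \int_{\text{globe}} \bigl(E - P\bigr)\,\mathrm{d}A ,
    % and any residual in this identity is the conservation error quantified above.
    ```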

  12. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surf zone to the shoreline undergoes frequent, active change due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar, to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can vary in temporal and spatial resolution in ways that do not match the desired model parameters and therefore could reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracy of the bathymetric estimates is compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements using sonic altimeters.
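
    For reference, the core of an Ensemble Kalman Filter analysis step used in such inversions can be sketched in a few lines; this is a generic stochastic EnKF update with made-up dimensions, not the specific bathymetry framework described above:

    ```python
    # Generic stochastic EnKF analysis step: update an ensemble of state vectors X
    # (columns = members) with observations y, linear observation operator H, and
    # observation error variance r.
    import numpy as np

    def enkf_analysis(X, y, H, r, rng):
        n_obs, n_ens = y.size, X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)        # state anomalies
        HX = H @ X
        HA = HX - HX.mean(axis=1, keepdims=True)     # anomalies in observation space
        Pyy = HA @ HA.T / (n_ens - 1) + r * np.eye(n_obs)
        Pxy = A @ HA.T / (n_ens - 1)
        K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain from ensemble covariances
        Y = y[:, None] + rng.normal(0.0, np.sqrt(r), size=(n_obs, n_ens))  # perturbed obs
        return X + K @ (Y - HX)

    # Tiny example: 5 unknown depths, 2 observed locations, 20 ensemble members (all made up).
    rng = np.random.default_rng(1)
    X = 5.0 + rng.normal(0.0, 1.0, size=(5, 20))
    H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 3] = 1.0
    Xa = enkf_analysis(X, y=np.array([4.2, 6.1]), H=H, r=0.1, rng=rng)
    print(Xa.mean(axis=1))  # posterior ensemble-mean depths
    ```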

  13. Numerical simulation of the observed near-surface East India Coastal Current on the continental slope

    NASA Astrophysics Data System (ADS)

    Mukherjee, A.; Shankar, D.; Chatterjee, Abhisek; Vinayachandran, P. N.

    2018-06-01

    We simulate the East India Coastal Current (EICC) using two numerical models (resolution 0.1° × 0.1°), an oceanic general circulation model (OGCM) called Modular Ocean Model and a simpler, linear, continuously stratified (LCS) model, and compare the simulated current with observations from moorings equipped with acoustic Doppler current profilers deployed on the continental slope in the western Bay of Bengal (BoB). We also carry out numerical experiments to analyse the processes. Both models simulate well the annual cycle of the EICC, but the performance degrades for the intra-annual and intraseasonal components. In a model-resolution experiment, both models (run at a coarser resolution of 0.25° × 0.25°) simulate well the currents in the equatorial Indian Ocean (EIO), but the performance of the high-resolution LCS model as well as the coarse-resolution OGCM, which is good in the EICC regime, degrades in the eastern and northern BoB. An experiment on forcing mechanisms shows that the annual EICC is largely forced by the local alongshore winds in the western BoB and remote forcing due to Ekman pumping over the BoB, but forcing from the EIO has a strong impact on the intra-annual EICC. At intraseasonal periods, local (equatorial) forcing dominates in the south (north) because the Kelvin wave propagates equatorward in the western BoB. A stratification experiment with the LCS model shows that changing the background stratification from EIO to BoB leads to a stronger surface EICC owing to strong coupling of higher order vertical modes with wind forcing for the BoB profiles. These high-order modes, which lead to energy propagating down into the ocean in the form of beams, are important only for the current and do not contribute significantly to the sea level.

  14. NREL: International Activities - Pakistan Resource Maps

    Science.gov Websites

    The high-resolution (1-km) annual wind power maps were developed using a numerical modeling approach along with NREL's empirical validation methodology. High-resolution (10-km) annual and seasonal maps, together with 40-km resolution annual maps, are provided as downloadable low- and high-resolution images.

  15. Towards a new multiscale air quality transport model using the fully unstructured anisotropic adaptive mesh technology of Fluidity (version 4.1.9)

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-10-01

    An integrated method combining advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes. Performance achieved in a 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.

  16. High-resolution numerical models for smoke transport in plumes from wildland fires

    Treesearch

    Philip Cunningham; Scott Goodrick

    2013-01-01

    A high-resolution large-eddy simulation (LES) model is employed to examine the fundamental structure and dynamics of buoyant plumes arising from heat sources representative of wildland fires. Herein we describe several aspects of the mean properties of the simulated plumes. Mean plume trajectories are apparently well described by the traditional two-thirds law for...
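
    The "two-thirds law" referred to here is usually quoted in the Briggs form for a bent-over buoyant plume in a crosswind (given for orientation; the paper's exact formulation may differ):

      $$ z(x) \;=\; 1.6\, F_b^{1/3}\, U^{-1}\, x^{2/3}, $$

    where z is the plume-centreline rise at downwind distance x, U is the mean wind speed and F_b is the source buoyancy flux; the x^{2/3} scaling is the benchmark against which the simulated mean trajectories are assessed.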

  17. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
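
    As a minimal sketch of what a global mass-conserving fixer of the kind mentioned above does (a generic multiplicative correction, not the EAMv1 code itself): negative tracer values are clipped to zero and the field is rescaled so that the global integral is unchanged.

      import numpy as np

      def clip_and_fix_mass(q, weights):
          """Clip negative tracer values and restore the global integral.

          q       : tracer mixing ratio per cell (any shape)
          weights : matching quadrature weights (e.g. cell area times air mass)
          Returns the corrected field; a purely illustrative global fixer."""
          total_before = np.sum(q * weights)
          q_fixed = np.maximum(q, 0.0)
          total_after = np.sum(q_fixed * weights)
          if total_after > 0.0:
              q_fixed *= total_before / total_after     # multiplicative rescale
          return q_fixed

      # usage with a toy field containing a few spurious negative values
      rng = np.random.default_rng(0)
      q = rng.normal(loc=1.0e-3, scale=5.0e-4, size=1000)
      w = np.full_like(q, 1.0)
      q_fixed = clip_and_fix_mass(q, w)
      print(np.sum(q * w), np.sum(q_fixed * w))          # global integrals agree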

  18. Dances with Membranes: Breakthroughs from Super-resolution Imaging

    PubMed Central

    Curthoys, Nikki M.; Parent, Matthew; Mlodzianoski, Michael; Nelson, Andrew J.; Lilieholm, Jennifer; Butler, Michael B.; Valles, Matthew; Hess, Samuel T.

    2017-01-01

    Biological membrane organization mediates numerous cellular functions and has also been connected with an immense number of human diseases. However, until recently, experimental methodologies have been unable to directly visualize the nanoscale details of biological membranes, particularly in intact living cells. Numerous models explaining membrane organization have been proposed, but testing those models has required indirect methods; the desire to directly image proteins and lipids in living cell membranes is a strong motivation for the advancement of technology. The development of super-resolution microscopy has provided powerful tools for quantification of membrane organization at the level of individual proteins and lipids, and many of these tools are compatible with living cells. Previously inaccessible questions are now being addressed, and the field of membrane biology is developing rapidly. This chapter discusses how the development of super-resolution microscopy has led to fundamental advances in the field of biological membrane organization. We summarize the history and some models explaining how proteins are organized in cell membranes, and give an overview of various super-resolution techniques and methods of quantifying super-resolution data. We discuss the application of super-resolution techniques to membrane biology in general, and also with specific reference to the fields of actin and actin-binding proteins, virus infection, mitochondria, immune cell biology, and phosphoinositide signaling. Finally, we present our hopes and expectations for the future of super-resolution microscopy in the field of membrane biology. PMID:26015281

  19. Numerical simulation of double‐diffusive finger convection

    USGS Publications Warehouse

    Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard

    2005-01-01

    A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double‐diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density‐dependent, saturated‐unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute‐transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute‐transport equations is accounted for and handled using a sequential implicit Picard iterative scheme. High‐resolution data from a double‐diffusive Hele‐Shaw experiment, initially in a density‐stable configuration, is used to verify the numerical model. The temporal and spatial evolution of simulated double‐diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond closest to experimental results when element sizes adequately define the spatial resolution of observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.

  20. Benchmarking urban flood models of varying complexity and scale using high resolution terrestrial LiDAR data

    NASA Astrophysics Data System (ADS)

    Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.

    2011-01-01

    This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK which experienced significant surface water flooding in the June and July floods of 2007 in the UK. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution are analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
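
    For context, the inertial formulation tested here is usually written, following Bates et al. (2010), as an explicit update of the unit-width discharge q with a semi-implicit Manning friction term (quoted from memory; the exact form should be checked against that paper):

      $$ q^{\,t+\Delta t} \;=\; \frac{q^{\,t} \;-\; g\,h_f\,\Delta t\,\partial(h+z)/\partial x}{1 \;+\; g\,h_f\,\Delta t\,n^2\,|q^{\,t}|\,/\,h_f^{10/3}}, $$

    where h_f is the flow depth between cells, z the bed elevation and n Manning's roughness coefficient. Retaining the stored discharge q^t (the local acceleration term) is what distinguishes this scheme from the purely diffusive variant, which is consistent with the differences reported above being concentrated at the wetting front.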

  1. Quantifying errors in trace species transport modeling.

    PubMed

    Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M

    2008-12-16

    One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO(2) using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.

  2. Numerical simulations of a reduced model for blood coagulation

    NASA Astrophysics Data System (ADS)

    Pavlova, Jevgenija; Fasano, Antonio; Sequeira, Adélia

    2016-04-01

    In this work, the three-dimensional numerical solution of a complex mathematical model for the blood coagulation process is presented. The model was described in Fasano et al. (Clin Hemorheol Microcirc 51:1-14, 2012) and Pavlova et al. (Theor Biol 380:367-379, 2015). It incorporates the action of the biochemical and cellular components of blood as well as the effects of the flow. The model is characterized by a reduction of the biochemical network and considers the impact of blood slip at the vessel wall. Numerical results showing the capacity of the model to predict different perturbations in the hemostatic system are discussed.

  3. Techniques and resources for storm-scale numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert

    1993-01-01

    The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.

  4. Research highlights: June 1990 - May 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Linear instability calculations at MSFC have suggested that the Geophysical Fluid Flow Cell (GFFC) should exhibit classic baroclinic instability at accessible parameter settings. Interest was in the mechanisms of transition to temporal chaos and the evolution of spatio-temporal chaos. In order to understand more about such transitions, high resolution numerical experiments for the physically simplest model of two-layer baroclinic instability were conducted. This model has the advantage that the numerical code is exponentially convergent and can be efficiently run for very long times, enabling the study of chaotic attractors without the often devastating effects of low-order truncation found in many previous studies. Numerical algorithms for implementing an empirical orthogonal function (EOF) analysis of the high resolution numerical results were completed. Under conditions of rapid rotation and relatively low differential heating, convection in a spherical shell takes place as columnar banana cells wrapped around the annular gap, but with axes oriented along the axis of rotation; these were clearly evident in the GFFC experiments. The results of recent numerical simulations of columnar convection and future research plans are presented.
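
    The EOF analysis mentioned here can be illustrated generically (a standard SVD-based sketch, not the MSFC code): the EOFs are the right singular vectors of the time-mean-removed data matrix, and the squared singular values give the variance explained by each mode.

      import numpy as np

      def eof_analysis(data):
          """EOFs of a (time, space) data matrix via the singular value decomposition.

          Returns the spatial patterns (EOFs, one per row), the principal-component
          time series, and the fraction of total variance explained by each mode."""
          anomalies = data - data.mean(axis=0)          # remove the time mean
          u, s, eofs = np.linalg.svd(anomalies, full_matrices=False)
          variance_fraction = s**2 / np.sum(s**2)
          pcs = u * s                                   # PC amplitude time series
          return eofs, pcs, variance_fraction

      # usage on a toy field: a travelling wave plus noise should give two leading modes
      t = np.linspace(0.0, 20.0, 200)[:, None]
      x = np.linspace(0.0, 2.0 * np.pi, 64)[None, :]
      field = np.sin(x - 0.5 * t) + 0.1 * np.random.default_rng(1).normal(size=(200, 64))
      eofs, pcs, frac = eof_analysis(field)
      print("variance explained by the first two modes:", round(float(frac[:2].sum()), 3))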

  5. Numerical simulation of "An American Haboob"

    NASA Astrophysics Data System (ADS)

    Vukovic, A.; Vujadinovic, M.; Pejanovic, G.; Andric, J.; Kumjian, M. R.; Djurdjevic, V.; Dacic, M.; Prasad, A. K.; El-Askary, H. M.; Paris, B. C.; Petkovic, S.; Nickovic, S.; Sprigg, W. A.

    2013-10-01

    A dust storm of fearful proportions hit Phoenix in the early evening hours of 5 July 2011. This storm, an American haboob, was predicted hours in advance because numerical land-atmosphere modeling, computing power and remote sensing of dust events have improved greatly over the past decade. High resolution numerical models are required for accurate simulation of the small scales of the haboob process, with high velocity surface winds produced by strong convection and severe downbursts. Dust productive areas in this region consist mainly of agricultural fields, with soil surfaces disturbed by plowing and tracts of land in the high Sonoran desert laid barren by ongoing drought. Model simulation of the 5 July 2011 dust storm uses the coupled atmospheric-dust model NMME-DREAM with 3.5 km horizontal resolution. A mask of the potentially dust productive regions is obtained from the land cover and the Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). Model results are compared with radar and other satellite-based images and surface meteorological and PM10 observations. The atmospheric model successfully hindcasted the position of the front in space and time, arriving in Phoenix about 1 h late. The dust model predicted the rapid uptake of dust and high values of dust concentration in the ensuing storm. South of Phoenix, over the closest source regions (~ 25 km), the model PM10 surface dust concentration reached ~ 2500 μg m-3, but underestimated the values measured by the PM10 stations within the city. Model results are also validated by the MODIS aerosol optical depth (AOD), employing deep blue (DB) algorithms for aerosol loadings. Model validation included Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO), equipped with the lidar instrument, to reveal the vertical structure of dust aerosols as well as aerosol subtypes. Promising results encourage further research and application of high-resolution modeling and satellite-based remote sensing to warn of approaching severe dust events and reduce risks for safety and health.

  6. Numerical Simulations of Vortex Generator Vanes and Jets on a Flat Plate

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.; Yao, Chung-Sheng; Lin, John C.

    2002-01-01

    Numerical simulations of a single low-profile vortex generator vane, whose height is only a small fraction of the boundary-layer thickness, and a vortex generating jet have been performed for flows over a flat plate. The numerical simulations were obtained by computing steady-state solutions of the Reynolds-averaged Navier-Stokes equations. The vortex generating vane results were evaluated by comparing the strength and trajectory of the streamwise vortex to experimental particle image velocimetry measurements. From the numerical simulations of the vane case, it was observed that the Shear-Stress Transport (SST) turbulence model resulted in a better prediction of the streamwise peak vorticity and trajectory when compared to the Spalart-Allmaras (SA) turbulence model. It is shown in this investigation that the estimation of the turbulent eddy viscosity near the vortex core, for both the vane and jet simulations, was higher for the SA model when compared to the SST model. Even though the numerical simulations of the vortex generating vane were able to predict the trajectory of the streamwise vortex, the initial magnitude and decay of the peak streamwise vorticity were significantly underpredicted. A comparison of the positive circulation associated with the streamwise vortex showed that while the numerical simulations produced a more diffused vortex, the vortex strength compared very well to the experimental observations. A grid resolution study for the vortex generating vane was also performed showing that the diffusion of the vortex was not a result of insufficient grid resolution. Comparisons were also made between a fully modeled trapezoidal vane with finite thickness and a simply modeled thin rectangular vane. The comparisons showed that the simply modeled rectangular vane produced a streamwise vortex with a strength and trajectory very similar to those of the fully modeled trapezoidal vane.

  7. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    NASA Astrophysics Data System (ADS)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
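
    The elliptic solve described above - restarted GMRES with a preconditioner - follows a pattern that can be sketched with a generic sparse system in SciPy (illustrative only; the model's own solver and preconditioner will differ):

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Toy 2-D Poisson-like operator standing in for the discretized elliptic equation
      n = 50
      lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(sp.identity(n), lap1d) + sp.kron(lap1d, sp.identity(n))).tocsr()
      b = np.ones(A.shape[0])

      # Incomplete-LU preconditioner wrapped as a LinearOperator
      ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
      M = spla.LinearOperator(A.shape, matvec=ilu.solve)

      # Restarted, preconditioned GMRES (restart length 30 chosen arbitrarily here)
      x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
      print("converged" if info == 0 else f"gmres info = {info}",
            "residual =", np.linalg.norm(b - A @ x))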

  8. Mesoscale eddies in a high resolution OGCM and coupled ocean-atmosphere GCM

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Liu, H.; Lin, P.

    2017-12-01

    The present study describes high-resolution climate modeling efforts, including oceanic, atmospheric and coupled general circulation models (GCMs), at the state key laboratory of numerical modeling for atmospheric sciences and geophysical fluid dynamics (LASG), Institute of Atmospheric Physics (IAP). The high-resolution OGCM is based on the latest version of the LASG/IAP Climate system Ocean Model (LICOM2.1), but its horizontal and vertical resolutions are increased to 1/10° and 55 layers, respectively. Forced by surface fluxes from reanalysis and observed data, the model has been integrated for more than 80 model years. Compared with the simulation of the coarse-resolution OGCM, the eddy-resolving OGCM not only better simulates the spatial-temporal features of mesoscale eddies and the paths and positions of the western boundary currents but also reproduces the large meander of the Kuroshio Current and its interannual variability. The complex structures of the equatorial Pacific currents and of currents in the coastal seas of China are also better captured owing to the increased horizontal and vertical resolution. The high-resolution OGCM was then coupled to NCAR CAM4 at 25 km resolution, in which configuration the mesoscale air-sea interaction processes are better captured.

  9. Surface acoustic impediography: a new technology for fingerprint mapping and biometric identification: a numerical study

    NASA Astrophysics Data System (ADS)

    Schmitt, Rainer M.; Scott, W. Guy; Irving, Richard D.; Arnold, Joe; Bardons, Charles; Halpert, Daniel; Parker, Lawrence

    2004-09-01

    A new type of fingerprint sensor is presented. The sensor maps the acoustic impedance of the fingerprint pattern by estimating the electrical impedance of its sensor elements. The sensor substrate, made of 1-3 piezo-ceramic, which can be fabricated inexpensively at large scales, provides a resolution of up to 50 μm over an area of 20 × 25 mm². Using FE modeling, the paper presents the numerical validation of the basic principle. It evaluates an optimized pillar aspect ratio and estimates the spatial resolution and the point spread function for 100 μm and 50 μm pitch models. In addition, the first fingerprints obtained with the prototype sensor are presented.

  10. Validation of an arterial tortuosity measure with application to hypertension collection of clinical hypertensive patients

    PubMed Central

    2011-01-01

    Background Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM based measurement’s ability to detect increases in arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions which may affect the tortuosity measurement, different depths of the area imaged, and different artifacts of imaging that require filtering. Methods The stability and accuracy of alternative centerline algorithms was validated in numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. Results In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition one negative control population of different ethnicity had significantly less arterial tortuosity than the other two. Conclusions Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations. PMID:22166145
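
    The distance factor metric used in this study is, in its usual form, the ratio of the arc length of the vessel centreline to the straight-line (chord) distance between its endpoints, so a straight vessel scores 1 and more tortuous vessels score higher. A short illustrative sketch (not the authors' implementation):

      import numpy as np

      def distance_factor_metric(centerline_points):
          """Tortuosity of an ordered 3-D centreline as arc length / chord length.

          A perfectly straight vessel gives 1.0; more tortuous vessels give larger values."""
          pts = np.asarray(centerline_points, dtype=float)
          arc_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
          chord_length = np.linalg.norm(pts[-1] - pts[0])
          return float(arc_length / chord_length)

      # usage: a helix of fixed height becomes more tortuous as it is coiled more tightly
      t = np.linspace(0.0, 10.0, 2000)
      for turns_per_unit_length in (0.5, 1.0, 2.0):
          angle = 2.0 * np.pi * turns_per_unit_length * t
          helix = np.column_stack([np.cos(angle), np.sin(angle), t])
          print(turns_per_unit_length, round(distance_factor_metric(helix), 3))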

  11. Validation of an arterial tortuosity measure with application to hypertension collection of clinical hypertensive patients.

    PubMed

    Diedrich, Karl T; Roberts, John A; Schmidt, Richard H; Kang, Chang-Ki; Cho, Zang-Hee; Parker, Dennis L

    2011-10-18

    Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM based measurement's ability to detect increases in arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions which may affect the tortuosity measurement, different depths of the area imaged, and different artifacts of imaging that require filtering. The stability and accuracy of alternative centerline algorithms was validated in numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition one negative control population of different ethnicity had significantly less arterial tortuosity than the other two. Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations.

  12. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling the reacting flow problems and discovered that for the very stiff case most of the current finite difference methods developed for non-reacting flows would produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiment using this scheme shows excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.

  13. Aerodynamic force measurement on a large-scale model in a short duration test facility

    NASA Astrophysics Data System (ADS)

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

    A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.

  14. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    NASA Astrophysics Data System (ADS)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales that cold regions hydrological processes operate at to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely, the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR with very low computational cost was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  15. Convergence and divergence in spherical harmonic series of the gravitational field generated by high-resolution planetary topography—A case study for the Moon

    NASA Astrophysics Data System (ADS)

    Hirt, Christian; Kuhn, Michael

    2017-08-01

    Theoretically, spherical harmonic (SH) series expansions of the external gravitational potential are guaranteed to converge outside the Brillouin sphere enclosing all field-generating masses. Inside that sphere, the series may be convergent or may be divergent. The series convergence behavior is a highly unstable quantity that is little studied for high-resolution mass distributions. Here we shed light on the behavior of SH series expansions of the gravitational potential of the Moon. We present a set of systematic numerical experiments where the gravity field generated by the topographic masses is forward-modeled in spherical harmonics and with numerical integration techniques at various heights and different levels of resolution, increasing from harmonic degree 90 to 2160 (≈61 to 2.5 km scales). The numerical integration is free from any divergence issues and therefore suitable to reliably assess convergence versus divergence of the SH series. Our experiments provide unprecedented detailed insights into the divergence issue. We show that the SH gravity field of degree-180 topography is convergent anywhere in free space. When the resolution of the topographic mass model is increased to degree 360, divergence starts to affect very high degree gravity signals over regions deep inside the Brillouin sphere. For degree 2160 topography/gravity models, severe divergence (with amplitudes of several thousand mGal) prohibits accurate gravity modeling over most of the topography. As a key result, we formulate a new hypothesis to predict divergence: if the potential degree variances show a minimum, then the SH series expansions diverge somewhere inside the Brillouin sphere and modeling of the internal potential becomes relevant.
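
    For reference, the external potential series whose convergence is being probed has the familiar form (standard notation, not reproduced from the paper):

      $$ V(r,\varphi,\lambda) \;=\; \frac{GM}{r}\sum_{n=0}^{N}\left(\frac{R}{r}\right)^{n}\sum_{m=0}^{n}\left(\bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda\right)\bar{P}_{nm}(\sin\varphi). $$

    Each degree-n term carries the factor (R/r)^n, which decays for evaluation points outside the Brillouin sphere of radius R but grows without bound for r < R; this is exactly where the high-degree divergence documented above can appear.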

  16. Spatial variability of the Black Sea surface temperature from high resolution modeling and satellite measurements

    NASA Astrophysics Data System (ADS)

    Mizyuk, Artem; Senderov, Maxim; Korotaev, Gennady

    2016-04-01

    A large number of numerical ocean models have been implemented for the Black Sea basin during the last two decades. They reproduce a rather similar structure of the synoptic variability of the circulation. Since the 2000s, numerical studies of the mesoscale structure have been carried out using high performance computing (HPC). With the growing capacity of computing resources, it is now possible to reconstruct the Black Sea currents with a spatial resolution of several hundred meters. However, how realistic can these results be? In the proposed study an attempt is made to understand which spatial scales are reproduced by an ocean model in the Black Sea. Simulations are made using a parallel version of NEMO (Nucleus for European Modelling of the Ocean). Two regional configurations with spatial resolutions of 5 km and 2.5 km are described. Comparison of the SST from the simulations at the two spatial resolutions shows a qualitative difference in the spatial structures. Results of the high-resolution simulation are also compared with satellite observations and observation-based products from Copernicus using spatial correlation and spectral analysis. The spatial scales of the correlation functions for simulated and observed SST are rather close to each other and differ substantially from those of the satellite SST reanalysis. The evolution of the spectral density for the modelled SST and the reanalysis shows agreement in the periods of small-scale intensification. Applying spectral analysis to the satellite measurements is complicated by data gaps. The research leading to these results has received funding from the Russian Science Foundation (project № 15-17-20020).

  17. Urban pluvial flood prediction: a case study evaluating radar rainfall nowcasts and numerical weather prediction models as model inputs.

    PubMed

    Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer

    2016-12-01

    Flooding produced by high-intensity local rainfall and exceedance of drainage system capacity can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate these events numerically, both historically and in real time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar observations with different spatial and temporal resolutions, radar nowcasts with 0-2 h lead time, and numerical weather models with lead times of up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative difference between the inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real time with high-resolution radar rainfall data, but that forecast performance is rather limited for lead times of more than half an hour.

  18. Proper Generalized Decomposition (PGD) for the numerical simulation of polycrystalline aggregates under cyclic loading

    NASA Astrophysics Data System (ADS)

    Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck

    2018-02-01

    The numerical modelling of the behaviour of materials at the microstructural scale has developed greatly over the last two decades. Unfortunately, conventional solution methods cannot simulate polycrystalline aggregates beyond a few tens of loading cycles, and they do not remain quantitative because of the plastic behaviour. This work presents the development of a numerical solver for the finite element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first consists in maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and performance of a space-time separated representation, the simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two numbers of time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes increasingly faster than the standard solver.
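
    The time/space model reduction referred to here is the separated representation at the heart of the Proper Generalized Decomposition: the unknown field is sought as a finite sum of products of space and time functions,

      $$ u(\mathbf{x},t) \;\approx\; \sum_{i=1}^{N} X_i(\mathbf{x})\,T_i(t), $$

    with each new pair (X_i, T_i) computed by an alternating fixed-point iteration between a spatial problem and a comparatively cheap temporal problem. This is what keeps the cost of simulating many loading cycles low compared with a step-by-step incremental solver.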

  19. The sensitivity of precipitation simulations to the soot aerosol presence

    NASA Astrophysics Data System (ADS)

    Palamarchuk, Iuliia; Ivanov, Sergiy; Mahura, Alexander; Ruban, Igor

    2016-04-01

    The role of aerosols in nonlinear feedbacks on atmospheric processes is a focus of much research. In particular, the importance of black carbon particles for the evolution of weather, including precipitation formation and release, is investigated with numerical modelling as well as observation networks. However, certain discrepancies between results obtained by different methods remain. The increasing complexity of numerical weather modelling systems enlarges the volume of output data and promises to reveal new aspects of the complexity of interactions and feedbacks. The Harmonie-38h1.2 model with the AROME physical package is used to study changes in the precipitation life cycle under black-carbon-polluted conditions. The model configuration includes a radar data assimilation procedure on a high-resolution domain covering the Scandinavia region. Model results show that the precipitation rate and distribution, as well as other variables of atmospheric dynamics and physics over the domain, are sensitive to aerosol concentrations. Attention should also be paid to numerical aspects, such as the set of observation types involved in the assimilation. The use of high-resolution radar information allows mesoscale features to be included in the initial conditions and decreases the growth rate of the model error with lead time.

  20. High-resolution modeling of a marine ecosystem using the FRESCO hydroecological model

    NASA Astrophysics Data System (ADS)

    Zalesny, V. B.; Tamsalu, R.

    2009-02-01

    The FRESCO (Finnish Russian Estonian Cooperation) mathematical model describing a marine hydroecosystem is presented. The methodology of the numerical solution is based on the method of multicomponent splitting into physical and biological processes, spatial coordinates, etc. The model is used for the reproduction of physical and biological processes proceeding in the Baltic Sea. Numerical experiments are performed with different spatial resolutions for four marine basins that are enclosed into one another: the Baltic Sea, the Gulf of Finland, the Tallinn-Helsinki water area, and Tallinn Bay. Physical processes are described by the equations of nonhydrostatic dynamics, including the k-ω parametrization of turbulence. Biological processes are described by the three-dimensional equations of an aquatic ecosystem with the use of a size-dependent parametrization of biochemical reactions. The main goal of this study is to illustrate the efficiency of the developed numerical technique and to demonstrate the importance of a high spatial resolution for water basins that have complex bottom topography, such as the Baltic Sea. Detailed information about the atmospheric forcing, bottom topography, and coastline is very important for the description of coastal dynamics and specific features of a marine ecosystem. Experiments show that the spatial inhomogeneity of hydroecosystem fields is caused by the combined effect of upwelling, turbulent mixing, surface-wave breaking, and temperature variations, which affect biochemical reactions.

  1. A stochastic ensemble-based model to predict crop water requirements from numerical weather forecasts and VIS-NIR high resolution satellite images in Southern Italy

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Falanga Bolognesi, Salvatore; De Michele, Carlo; Medina Gonzalez, Hanoi; Villani, Paolo; D'Urso, Guido; Battista Chirico, Giovanni

    2015-04-01

    Irrigated agriculture is one of the biggest consumers of water in Europe, especially in southern regions, where it accounts for up to 70% of the total water consumption. The EU Common Agricultural Policy, combined with the Water Framework Directive, requires farmers and irrigation managers to substantially increase the efficiency of agricultural water use over the next decade. Ensemble numerical weather predictions can be valuable data for developing operational advisory irrigation services. We propose a stochastic ensemble-based model providing spatial and temporal estimates of crop water requirements, implemented within an advisory service offering detailed maps of irrigation water requirements and crop water consumption estimates to be used by irrigation water managers and farmers. The stochastic model combines estimates of crop potential evapotranspiration retrieved from ensemble numerical weather forecasts (COSMO-LEPS, 16 members, 7 km resolution) with canopy parameters (LAI, albedo, fractional vegetation cover) derived from high-resolution satellite images in the visible and near-infrared wavelengths. The service provides users with daily estimates of crop water requirements for lead times of up to five days. The temporal evolution of the crop potential evapotranspiration is simulated with autoregressive models. An ensemble Kalman filter is employed to update the model states by assimilating both ground-based meteorological variables (where available) and numerical weather forecasts. The model has been applied in the Campania region (Southern Italy), where a satellite-assisted irrigation advisory service has been operating since 2006. This work presents the results of the system performance for one year of experimental service. The results suggest that the proposed model can be an effective support for the sustainable use and management of irrigation water under conditions of water scarcity and drought. Since the evapotranspiration term is a staple component of the water balance of a catchment, as an outstanding future development the model could also offer advanced support for water resources management decisions at the catchment scale.

  2. Three-dimensional representation of the human cochlea using micro-computed tomography data: presenting an anatomical model for further numerical calculations.

    PubMed

    Braun, Katharina; Böhnke, Frank; Stark, Thomas

    2012-06-01

    We present a complete geometric model of the human cochlea, including the segmentation and reconstruction of the fluid-filled chambers scala tympani and scala vestibuli, the lamina spiralis ossea and the vibrating structure (cochlear partition). Future fluid-structure coupled simulations require a reliable geometric model of the cochlea. The aim of this study was to present an anatomical model of the human cochlea, which can be used for further numerical calculations. Using high resolution micro-computed tomography (µCT), we obtained images of a cut human temporal bone with a spatial resolution of 5.9 µm. Images were manually segmented to obtain the three-dimensional reconstruction of the cochlea. Due to the high resolution of the µCT data, a detailed examination of the geometry of the twisted cochlear partition near the oval and the round window as well as the precise illustration of the helicotrema was possible. After reconstruction of the lamina spiralis ossea, the cochlear partition and the curved geometry of the scala vestibuli and the scala tympani were presented. The obtained data sets were exported as standard lithography (stl) files. These files represented a complete framework for future numerical simulations of mechanical (acoustic) wave propagation on the cochlear partition in the form of mathematical mechanical cochlea models. Additional quantitative information concerning heights, lengths and volumes of the scalae was found and compared with previous results.

  3. Impact of Basal Hydrology Near Grounding Lines: Results from the MISMIP-3D and MISMIP+ Experiments Using the Community Ice Sheet Model

    NASA Astrophysics Data System (ADS)

    Leguy, G.; Lipscomb, W. H.; Asay-Davis, X.

    2017-12-01

    Ice sheets and ice shelves are linked by the transition zone, the region where the grounded ice lifts off the bedrock and begins to float. Adequate resolution of the transition zone is necessary for numerically accurate ice sheet-ice shelf simulations. In previous work we have shown that by using a simple parameterization of the basal hydrology, a smoother transition in basal water pressure between floating and grounded ice improves the numerical accuracy of a one-dimensional vertically integrated fixed-grid model. We used a set of experiments based on the Marine Ice Sheet Model Intercomparison Project (MISMIP) to show that reliable grounding-line dynamics is achievable at resolutions of 1 km. In this presentation we use the Community Ice Sheet Model (CISM) to demonstrate how the representation of basal lubrication impacts three-dimensional models using the MISMIP-3D and MISMIP+ experiments. To this end we will compare three different Stokes approximations: the Shallow Shelf Approximation (SSA), a depth-integrated higher-order approximation, and the Blatter-Pattyn model. The results from our one-dimensional model carry over to the 3-D models; a resolution of 1 km (and in some cases 2 km) remains sufficient to accurately simulate grounding-line dynamics.

  4. Coincidental match of numerical simulation and physics

    NASA Astrophysics Data System (ADS)

    Pierre, B.; Gudmundsson, J. S.

    2010-08-01

    Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete pipeline ruptures. Therefore, accurate predictions of rapid pressure transients in pipelines using numerical simulations are critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
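
    The method of characteristics used here integrates the classical water-hammer compatibility relations, which in the form usually given in standard texts (quoted for orientation, not taken from this paper) read

      $$ \frac{a}{gA}\,\frac{dQ}{dt} \;\pm\; \frac{dH}{dt} \;+\; \frac{f\,a}{2\,g\,D\,A^{2}}\,Q\,|Q| \;=\; 0 \qquad \text{along} \qquad \frac{dx}{dt} = \pm a, $$

    with H the piezometric head, Q the discharge, a the pressure-wave speed, A and D the pipe cross-sectional area and diameter, and f the Darcy friction factor. Unsteady-friction models add further terms to the friction part, and it is the treatment of those terms, together with numerical dispersion, that the comparison with the high-resolution finite volume scheme brings out.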

  5. Validating high-resolution California coastal flood modeling with Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR)

    NASA Astrophysics Data System (ADS)

    O'Neill, A.

    2015-12-01

    The Coastal Storm Modeling System (CoSMoS) is a numerical modeling scheme used to predict coastal flooding due to sea level rise and storms influenced by climate change, currently in use in central California and in development for Southern California (Pt. Conception to the Mexican border). Using a framework of circulation, wave, analytical, and Bayesian models at different geographic scales, high-resolution results are translated as relevant hazards projections at the local scale that include flooding, wave heights, coastal erosion, shoreline change, and cliff failures. Ready access to accurate, high-resolution coastal flooding data is critical for further validation and refinement of CoSMoS and improved coastal hazard projections. High-resolution Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) provides an exceptional data source as appropriately-timed flights during extreme tides or storms provide a geographically-extensive method for determining areas of inundation and flooding extent along expanses of complex and varying coastline. Landward flood extents are numerically identified via edge-detection in imagery from single flights, and can also be ascertained via change detection using additional flights and imagery collected during average wave/tide conditions. The extracted flooding positions are compared against CoSMoS results for similar tide, water level, and storm-intensity conditions, allowing for robust testing and validation of CoSMoS and providing essential feedback for supporting regional and local model improvement.

  6. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  7. Age-of-Air, Tape Recorder, and Vertical Transport Schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.-J.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A numerical-analytic investigation of the impacts of vertical transport schemes on the model simulated age-of-air and the so-called 'tape recorder' will be presented using an idealized 1-D column transport model as well as a more realistic 3-D dynamical model. By comparing to the 'exact' solutions of 'age-of-air' and the 'tape recorder' obtainable in the 1-D setting, useful insight is gained on the impacts of numerical diffusion and dispersion of numerical schemes used in global models. Advantages and disadvantages of Eulerian, semi-Lagrangian, and Lagrangian transport schemes will be discussed. Vertical resolution requirement for numerical schemes as well as observing systems for capturing the fine details of the 'tape recorder' or any upward propagating wave-like structures can potentially be derived from the 1-D analytic model.

  8. The generation and use of numerical shape models for irregular Solar System objects

    NASA Technical Reports Server (NTRS)

    Simonelli, Damon P.; Thomas, Peter C.; Carcich, Brian T.; Veverka, Joseph

    1993-01-01

    We describe a procedure that allows the efficient generation of numerical shape models for irregular Solar System objects, where a numerical model is simply a table of evenly spaced body-centered latitudes and longitudes and their associated radii. This modeling technique uses a combination of data from limbs, terminators, and control points, and produces shape models that have some important advantages over analytical shape models. Accurate numerical shape models make it feasible to study irregular objects with a wide range of standard scientific analysis techniques. These applications include the determination of moments of inertia and surface gravity, the mapping of surface locations and structural orientations, photometric measurement and analysis, the reprojection and mosaicking of digital images, and the generation of albedo maps. The capabilities of our modeling procedure are illustrated through the development of an accurate numerical shape model for Phobos and the production of a global, high-resolution, high-pass-filtered digital image mosaic of this Martian moon. Other irregular objects that have been modeled, or are being modeled, include the asteroid Gaspra and the satellites Deimos, Amalthea, Epimetheus, Janus, Hyperion, and Proteus.
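
    Because such a numerical shape model is literally a table of radii on an evenly spaced latitude-longitude grid, basic geometric quantities follow directly from it. The sketch below (illustrative only, with made-up radii) converts a radius table to Cartesian surface points and estimates the body's volume from V = (1/3) \int r^3 \cos\varphi \, d\varphi \, d\lambda.

      import numpy as np

      def shape_model_to_cartesian(lat_deg, lon_deg, radius_km):
          """Convert a (lat, lon) table of radii to Cartesian surface points (km)."""
          lat = np.radians(lat_deg)[:, None]
          lon = np.radians(lon_deg)[None, :]
          x = radius_km * np.cos(lat) * np.cos(lon)
          y = radius_km * np.cos(lat) * np.sin(lon)
          z = radius_km * np.sin(lat) * np.ones_like(lon)
          return x, y, z

      def shape_model_volume(lat_deg, lon_deg, radius_km):
          """Volume from V = (1/3) * integral of r(lat,lon)^3 * cos(lat) dlat dlon."""
          lat = np.radians(lat_deg)
          dlat = np.radians(lat_deg[1] - lat_deg[0])
          dlon = np.radians(lon_deg[1] - lon_deg[0])
          return float(np.sum(radius_km**3 * np.cos(lat)[:, None]) / 3.0 * dlat * dlon)

      # usage: a slightly flattened toy body sampled on a 2-degree grid
      lat_deg = np.arange(-89.0, 90.0, 2.0)
      lon_deg = np.arange(0.0, 360.0, 2.0)
      radius = 11.0 - 1.5 * np.sin(np.radians(lat_deg))[:, None] ** 2 + 0.0 * lon_deg[None, :]
      x, y, z = shape_model_to_cartesian(lat_deg, lon_deg, radius)
      print("surface grid:", x.shape, " approximate volume (km^3):",
            round(shape_model_volume(lat_deg, lon_deg, radius), 1))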

  9. The atmospheric boundary layer — advances in knowledge and application

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Hess, G. D.; Physick, W. L.; Bougeault, P.

    1996-02-01

    We summarise major activities and advances in boundary-layer knowledge in the 25 years since 1970, with emphasis on the application of this knowledge to surface and boundary-layer parametrisation schemes in numerical models of the atmosphere. Progress in three areas is discussed: (i) the mesoscale modelling of selected phenomena; (ii) numerical weather prediction; and (iii) climate simulations. Future trends are identified, including the incorporation into models of advanced cloud schemes and interactive canopy schemes, and the nesting of high resolution boundary-layer schemes in global climate models.

  10. Integration of Local Observations into the One Dimensional Fog Model PAFOG

    NASA Astrophysics Data System (ADS)

    Thoma, Christina; Schneider, Werner; Masbou, Matthieu; Bott, Andreas

    2012-05-01

    The numerical prediction of fog requires a very high vertical resolution of the atmosphere. Owing to a prohibitive computational effort of high resolution three dimensional models, operational fog forecast is usually done by means of one dimensional fog models. An important condition for a successful fog forecast with one dimensional models consists of the proper integration of observational data into the numerical simulations. The goal of the present study is to introduce new methods for the consideration of these data in the one dimensional radiation fog model PAFOG. First, it will be shown how PAFOG may be initialized with observed visibilities. Second, a nudging scheme will be presented for the inclusion of measured temperature and humidity profiles in the PAFOG simulations. The new features of PAFOG have been tested by comparing the model results with observations of the German Meteorological Service. A case study will be presented that reveals the importance of including local observations in the model calculations. Numerical results obtained with the modified PAFOG model show a distinct improvement of fog forecasts regarding the times of fog formation, dissipation as well as the vertical extent of the investigated fog events. However, model results also reveal that a further improvement of PAFOG might be possible if several empirical model parameters are optimized. This tuning can only be realized by comprehensive comparisons of model simulations with corresponding fog observations.
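
    The nudging (Newtonian relaxation) scheme added to PAFOG can be summarised generically as an extra relaxation term in each prognostic equation (the relaxation time scale tau below is a tunable parameter, not a value taken from the paper):

      $$ \frac{\partial \phi}{\partial t} \;=\; F(\phi) \;+\; \frac{\phi_{\mathrm{obs}} - \phi}{\tau}, $$

    where phi stands for temperature or humidity, F(phi) collects the model's physical tendencies, and phi_obs is the observed profile interpolated to the model grid. A small tau pulls the model strongly toward the observations, while a large tau leaves the model essentially free; choosing tau is exactly the kind of empirical parameter choice alluded to at the end of the abstract.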

  11. The sensitivity of biological finite element models to the resolution of surface geometry: a case study of crocodilian crania

    PubMed Central

    Evans, Alistair R.; McHenry, Colin R.

    2015-01-01

    The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of the input geometry, and in turn influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity analyses of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower-resolution surfaces in a comparative FEA study. As surface mesh resolution links the input geometry with the resulting solid mesh, the implication of these results is that low-resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620

  12. Sea breeze: Induced mesoscale systems and severe weather

    NASA Technical Reports Server (NTRS)

    Nicholls, M. E.; Pielke, R. A.; Cotton, W. R.

    1990-01-01

    Sea-breeze-deep convective interactions over the Florida peninsula were investigated using a cloud/mesoscale numerical model. The objective was to gain a better understanding of sea-breeze and deep convective interactions over the Florida peninsula using a high resolution convectively explicit model and to use these results to evaluate convective parameterization schemes. A 3-D numerical investigation of Florida convection was completed. The Kuo and Fritsch-Chappell parameterization schemes are summarized and evaluated.

  13. Data Analysis and Non-local Parametrization Strategies for Organized Atmospheric Convection

    NASA Astrophysics Data System (ADS)

    Brenowitz, Noah D.

    The intrinsically multiscale nature of moist convective processes in the atmosphere complicates scientific understanding, and, as a result, current coarse-resolution climate models poorly represent convective variability in the tropics. This dissertation addresses this problem by 1) studying new cumulus convective closures in a pair of idealized models for tropical moist convection, and 2) developing innovative strategies for analyzing high-resolution numerical simulations of organized convection. The first two chapters of this dissertation revisit a historical controversy about the use of convective closures based on the large-scale wind field or moisture convergence. In the first chapter, a simple coarse-resolution stochastic model for convective inhibition is designed which includes the non-local effects of wind convergence on convective activity. This model is designed to replicate the convective dynamics of a typical coarse-resolution climate prediction model. The non-local convergence coupling is motivated by the phenomenon of gregarious convection, whereby mesoscale convective systems emit gravity waves which can promote convection at distant locations. Linearized analysis and nonlinear simulations show that this convergence coupling allows for increased interaction between cumulus convection and the large-scale circulation, but does not suffer from the deleterious behavior of traditional moisture-convergence closures. In the second chapter, the non-local convergence coupling idea is extended to an idealized stochastic multicloud model. This model allows for stochastic transitions between three distinct cloud types, and non-local convergence coupling is most beneficial when applied to the transition from shallow to deep convection. This is consistent with recent observational and numerical modeling evidence, and there is a growing body of work highlighting the importance of this transition in tropical meteorology. In a series of idealized Walker cell simulations, convergence coupling enhances the persistence of Kelvin wave analogs in dry regions of the domain while leaving the dynamics in moist regions largely unaltered. The final chapter of this dissertation presents a technique for analyzing the variability of a direct numerical simulation of Rayleigh-Benard convection at large aspect ratio, which is a basic prototype of convective organization. High-resolution numerical models are an invaluable tool for studying atmospheric dynamics, but modern data analysis techniques struggle with the extreme size of the model outputs and the trivial symmetries of the underlying dynamical systems (e.g. shift-invariance). A new data analysis approach which is invariant to spatial symmetries is derived by combining a quasi-Lagrangian description of the data, time-lagged embedding, and manifold learning techniques. The quasi-Lagrangian description is obtained by a straightforward isothermal binning procedure, which compresses the data in a dynamically aware fashion. A small number of orthogonal modes returned by this algorithm are able to explain the highly intermittent dynamics of the bulk heat transfer, as quantified by the Nusselt number.
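
    As a rough illustration of the final chapter's analysis pipeline, the sketch below applies time-lagged embedding to a scalar time series and extracts orthogonal modes with a plain SVD (PCA), used here as a simple stand-in for the manifold-learning step; the quasi-Lagrangian isothermal binning and the dissertation's actual algorithm are not reproduced.

    ```python
    # Time-lagged embedding of a scalar series followed by a linear mode extraction
    # (SVD/PCA). The toy signal below is an assumption; a real application would use
    # a compressed summary of the simulation output.
    import numpy as np

    def lag_embed(series, n_lags):
        """Stack a scalar time series into delay vectors of length n_lags."""
        n = len(series) - n_lags + 1
        return np.stack([series[i:i + n] for i in range(n_lags)], axis=1)

    rng = np.random.default_rng(0)
    t = np.arange(5000)
    series = np.sin(0.05 * t) + 0.3 * rng.standard_normal(t.size)   # toy heat-transfer proxy

    X = lag_embed(series, n_lags=50)
    X -= X.mean(axis=0)
    _, s, _ = np.linalg.svd(X, full_matrices=False)   # leading orthogonal modes
    explained = s**2 / np.sum(s**2)
    print("variance explained by first 3 modes:", explained[:3].round(3))
    ```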

  14. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
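
    The analytical-versus-numerical trade-off can be seen already in the simplest possible case, a single first-order biodegradation reaction dC/dt = -kC, sketched below with made-up parameter values; real screening and reactive-transport codes couple many such reactions to advection and dispersion.

    ```python
    # Closed-form screening answer vs. explicit numerical integration for first-order
    # decay dC/dt = -k*C. Rate, concentration and times are illustrative only.
    import numpy as np

    k = 0.05          # first-order decay rate (1/day)
    c0 = 10.0         # initial concentration (mg/L)
    t = np.linspace(0.0, 100.0, 101)

    c_analytical = c0 * np.exp(-k * t)            # analytical (screening) solution

    c_numerical = np.empty_like(t)                # explicit Euler integration
    c_numerical[0] = c0
    dt = t[1] - t[0]
    for i in range(1, t.size):
        c_numerical[i] = c_numerical[i - 1] * (1.0 - k * dt)

    print("max |analytical - numerical| = %.4f mg/L"
          % np.max(np.abs(c_analytical - c_numerical)))
    ```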

  15. Itzï (version 17.1): an open-source, distributed GIS model for dynamic flood simulation

    NASA Astrophysics Data System (ADS)

    Guillaume Courty, Laurent; Pedrozo-Acuña, Adrián; Bates, Paul David

    2017-05-01

    Worldwide, floods are acknowledged as one of the most destructive hazards. In human-dominated environments, their negative impacts are ascribed not only to the increase in frequency and intensity of floods but also to a strong feedback between the hydrological cycle and anthropogenic development. In order to advance a more comprehensive understanding of this complex interaction, this paper presents the development of a new open-source tool named Itzï that enables the 2-D numerical modelling of rainfall-runoff processes and surface flows integrated with the open-source geographic information system (GIS) software known as GRASS. Therefore, it takes advantage of the ability given by GIS environments to handle datasets with variations in both temporal and spatial resolutions. Furthermore, the presented numerical tool can handle datasets from different sources with varied spatial resolutions, facilitating the preparation and management of input and forcing data. This ability reduces the preprocessing time usually required by other models. Itzï uses a simplified form of the shallow water equations, the damped partial inertia equation, for the resolution of surface flows, and the Green-Ampt model for the infiltration. The source code is now publicly available online, along with complete documentation. The numerical model is verified against three different test cases: firstly, a comparison with an analytic solution of the shallow water equations is introduced; secondly, a hypothetical flooding event in an urban area is implemented, where results are compared to those from an established model using a similar approach; and lastly, the reproduction of a real inundation event that occurred in the city of Kingston upon Hull, UK, in June 2007, is presented. The numerical approach proved its ability to reproduce the analytic and synthetic test cases. Moreover, simulation results of the real flood event showed its suitability for identifying areas affected by flooding, which were verified against those recorded after the event by local authorities.
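
    For readers unfamiliar with the infiltration component, the sketch below steps a Green-Ampt infiltration capacity forward in time under constant rainfall; the soil parameters and rainfall rate are illustrative assumptions, not values taken from the paper or from Itzï's source code.

    ```python
    # Green-Ampt sketch: infiltration capacity f = K * (1 + psi * dtheta / F),
    # limited by the rainfall supply, with F the cumulative infiltration depth.
    K = 1.0e-6        # saturated hydraulic conductivity (m/s), illustrative
    psi = 0.11        # wetting-front suction head (m)
    dtheta = 0.3      # soil moisture deficit (-)
    dt = 60.0         # time step (s)

    F = 1.0e-6        # cumulative infiltration (m), small non-zero start
    rain = 5.0e-6     # rainfall rate (m/s), constant for this toy case

    for _ in range(120):                          # two hours of 60 s steps
        f_cap = K * (1.0 + psi * dtheta / F)      # Green-Ampt infiltration capacity
        f = min(f_cap, rain)                      # cannot infiltrate more than supply
        F += f * dt

    print("cumulative infiltration after 2 h: %.1f mm" % (F * 1000.0))
    ```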

  16. The numerical modeling the sensitivity of coastal wind and ozone concentration to different SST forcing

    NASA Astrophysics Data System (ADS)

    Choi, Hyun-Jung; Lee, Hwa Woon; Jeon, Won-Bae; Lee, Soon-Hwan

    2012-01-01

    This study used an atmospheric and air quality model to evaluate the spatial variability in low-level coastal winds and ozone concentration, which are affected by sea surface temperature (SST) forcing with different thermal gradients. Several numerical experiments examined the effect of SST forcing on the coastal atmosphere and air quality. In this study, the RAMS-CAMx model was used to estimate the sensitivity to two different resolutions of SST forcing during the episode day, as well as to simulate the low-level coastal winds and ozone concentration over a complex coastal area. The regional model reproduced the qualitative effect of SST forcing and thermal gradients on the coastal flow. The high-resolution SST derived from NGSST-O (New Generation Sea Surface Temperature Open Ocean) forcing, which resolves the warm SST, appeared to enhance the mean response of low-level winds in coastal regions. These wind variations have important implications for coastal air quality. A higher ozone concentration was forecast when high-resolution SST data were used, together with an appropriate representation of temperature, regional wind circulation, vertical mixing height and the nocturnal boundary layer (NBL) near coastal areas.

  17. Interannual Tropical Rainfall Variability in General Circulation Model Simulations Associated with the Atmospheric Model Intercomparison Project.

    NASA Astrophysics Data System (ADS)

    Sperber, K. R.; Palmer, T. N.

    1996-11-01

    The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil has been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño-Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) has also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall variability was also best reproduced. However, for all regions the skill was less than that of the ECMWF model. The relationships of the all-India and Sahel rainfall/SST teleconnections with horizontal resolution, convection scheme closure, and numerics have been evaluated. Models with resolution of T42 or higher performed more poorly than lower-resolution models. The higher-resolution models were predominantly spectral. At low resolution, spectral versus gridpoint numerics performed with nearly equal verisimilitude. At low resolution, moisture convergence closure was slightly preferable to other convective closure techniques. At high resolution, the models that used moisture convergence closure performed very poorly, suggesting that moisture convergence may be problematic for models with horizontal resolution of T42 or higher.

  18. Experimental and Numerical Correlation of Gravity Sag in Solar Sail Quality Membranes

    NASA Technical Reports Server (NTRS)

    Black, Jonathan T.; Leifer, Jack; DeMoss, Joshua A.; Walker, Eric N.; Belvin, W. Keith

    2004-01-01

    Solar sails are among the most studied members of the ultra-lightweight and inflatable (Gossamer) space structures family due to their potential to provide propellantless propulsion. They are composed of ultra-thin membrane panels that, to date, have proven very difficult to characterize experimentally and to model numerically due to their reflectivity and flexibility, and the effects of gravity sag and air damping. Numerical models must be correlated with experimental measurements of sub-scale solar sails to verify that the models can be scaled up to represent full-sized solar sails. In this paper, the surface shapes of five horizontally supported 25 micron thick aluminized Kapton membranes were measured to a 1.0 mm resolution using photogrammetry. Several simple numerical models closely match the experimental data, demonstrating the ability of finite element simulations to predict the actual behavior of solar sails.

  19. Ultra high energy resolution focusing monochromator for inelastic X-ray scattering spectrometer

    DOE PAGES

    Suvorov, Alexey; Cunsolo, Alessandro; Chubar, Oleg; ...

    2015-11-25

    Further development of a focusing monochromator concept for X-ray energy resolution of 0.1 meV and below is presented. Theoretical analysis of several optical layouts based on this concept was supported by numerical simulations performed in the “Synchrotron Radiation Workshop” software package using the physical-optics approach and careful modeling of partially-coherent synchrotron (undulator) radiation. Along with the energy resolution, the spectral shape of the energy resolution function was investigated. We show that under certain conditions the decay of the resolution function tails can be faster than that of the Gaussian function.

  20. High-resolution numerical approximation of traffic flow problems with variable lanes and free-flow velocities.

    PubMed

    Zhang, Peng; Liu, Ru-Xun; Wong, S C

    2005-05-01

    This paper develops macroscopic traffic flow models for a highway section with variable lanes and free-flow velocities, that involve spatially varying flux functions. To address this complex physical property, we develop a Riemann solver that derives the exact flux values at the interface of the Riemann problem. Based on this solver, we formulate Godunov-type numerical schemes to solve the traffic flow models. Numerical examples that simulate the traffic flow around a bottleneck that arises from a drop in traffic capacity on the highway section are given to illustrate the efficiency of these schemes.
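
    A minimal supply-demand (Godunov-type) update for the LWR traffic model with a spatially varying flux function is sketched below; it mimics a capacity drop at a bottleneck, but it is a generic illustration under assumed parameters, not the paper's exact Riemann solver.

    ```python
    # Godunov/supply-demand scheme for q(rho) = rho * v_f(x) * (1 - rho / rho_max(x)),
    # with free-flow speed and jam density dropping mid-section to form a bottleneck.
    import numpy as np

    nx, dx, dt = 200, 25.0, 0.5                    # cells, cell size (m), time step (s)
    x = (np.arange(nx) + 0.5) * dx
    v_f = np.where(x < 2500.0, 25.0, 15.0)         # free-flow speed (m/s)
    rho_max = np.where(x < 2500.0, 0.20, 0.12)     # jam density (veh/m), fewer lanes downstream

    def flux(rho, vf, rmax):
        return rho * vf * (1.0 - rho / rmax)

    rho_c = rho_max / 2.0                          # critical density of the concave flux
    q_max = flux(rho_c, v_f, rho_max)              # cell capacity

    rho = np.full(nx, 0.05)                        # initial density (veh/m); ends act as ghost cells
    for _ in range(2000):
        demand = np.where(rho < rho_c, flux(rho, v_f, rho_max), q_max)   # outflow capacity
        supply = np.where(rho < rho_c, q_max, flux(rho, v_f, rho_max))   # inflow capacity
        f_int = np.minimum(demand[:-1], supply[1:])                      # interface fluxes
        rho[1:-1] += dt / dx * (f_int[:-1] - f_int[1:])                  # conservative update

    print("max density near the bottleneck: %.3f veh/m" % rho[90:110].max())
    ```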

  1. Normal modes of the world's oceans: A numerical investigation using Proudman functions

    NASA Technical Reports Server (NTRS)

    Sanchez, Braulio V.; Morrow, Dennis

    1993-01-01

    The numerical modeling of the normal modes of the global oceans is addressed. The results of such modeling could be expected to serve as a guide in the analysis of observations and measurements intended to detect these modes. The numerical computation of normal modes of the global oceans is a field in which several investigations have obtained results during the past 15 years. The results seem to be model-dependent to an unsatisfactory extent. Some modeling areas, such as higher resolution of the bathymetry, inclusion of self-attraction and loading, the role of the Arctic Ocean, and systematic testing by means of diagnostic models are addressed. The results show that the present state of the art is such that a final solution to the normal mode problem still lies in the future. The numerical experiments show where some of the difficulties are and give some insight as to how to proceed in the future.

  2. Numerical Simulation of Regional Circulation in the Monterey Bay Region

    NASA Technical Reports Server (NTRS)

    Tseng, Y. H.; Dietrich, D. E.; Ferziger, J. H.

    2003-01-01

    The objective of this study is to produce a high-resolution numerical model of the Monterey Bay area in which the dynamics are determined by the complex geometry of the coastline, steep bathymetry, and the influence of the water masses that constitute the CCS. Our goal is to simulate the regional-scale ocean response with realistic dynamics (annual cycle), forcing, and domain. In particular, we focus on non-hydrostatic effects (by comparing the results of hydrostatic and non-hydrostatic models) and the role of complex geometry, i.e. the bay and submarine canyon, on the nearshore circulation. To the best of our knowledge, the current study is the first to simulate the regional circulation in the vicinity of Monterey Bay using a non-hydrostatic model. Section 2 introduces the high-resolution Monterey Bay area regional model (MBARM). Section 3 provides the results and verification with mooring and satellite data. Section 4 compares the results of hydrostatic and non-hydrostatic models.

  3. The MM5 Numerical Model to Correct PSInSAR Atmospheric Phase Screen

    NASA Astrophysics Data System (ADS)

    Perissin, D.; Pichelli, E.; Ferretti, R.; Rocca, F.; Pierdicca, N.

    2010-03-01

    In this work we perform an experimental analysis of the capability of Numerical Weather Prediction (NWP) models such as MM5 to produce high-resolution (1 km-500 m) maps of Integrated Water Vapour (IWV) in the atmosphere, in order to mitigate the well-known disturbances that affect the radar signal while travelling from the sensor to the ground and back. Experiments have been conducted over the area surrounding Rome using ERS data acquired during the three-day phase in 1994 and using Envisat data acquired in recent years. By means of the PS technique, SAR data have been processed and the Atmospheric Phase Screens (APSs) of the Slave images with respect to a reference Master have been extracted. MM5 IWV maps have a much lower resolution than the PSInSAR APSs: the turbulent term of the atmospheric vapour field cannot be well resolved by MM5, at least with the low-resolution ECMWF inputs. However, the vapour distribution term that depends on the local topography has been found to be in good agreement.

  4. Testing high resolution numerical models for analysis of contaminant storage and release from low permeability zones

    NASA Astrophysics Data System (ADS)

    Chapman, Steven W.; Parker, Beth L.; Sale, Tom C.; Doner, Lee Ann

    2012-08-01

    It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents, where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients, and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores the validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers, which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean water flushing, and 2) the two-layer analytical solution of Sale et al. (2008) involving a relatively simple scenario with an aquifer and underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization is applied to represent the problem geometry, resolve the flow fields, capture advective transport in the sands and diffusive transfer with low permeability layers, and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models, capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making.
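
    The diffusive loading of a low permeability layer from an overlying aquifer held at constant concentration has the classic one-dimensional solution C(z, t) = C0 erfc(z / (2*sqrt(D*t))), sketched below with illustrative parameters; the paper's tank and two-layer cases additionally include advection in the sand and finite loading and flushing periods.

    ```python
    # One-dimensional diffusion into a low-permeability layer below a constant-
    # concentration interface. D, C0 and the depths are illustrative assumptions.
    import numpy as np
    from scipy.special import erfc

    D = 1.0e-10                 # effective diffusion coefficient in the clay (m^2/s)
    c0 = 100.0                  # source concentration held at the interface (mg/L)
    z = np.linspace(0.0, 2.0, 201)          # depth below the sand/clay interface (m)

    for years in (1, 10, 30):
        t = years * 3.15e7                  # seconds
        c = c0 * erfc(z / (2.0 * np.sqrt(D * t)))
        depth_1pct = np.interp(1.0, c[::-1], z[::-1])   # depth where C falls to 1 mg/L
        print("after %2d years the 1%% front is near %.2f m" % (years, depth_1pct))
    ```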

  5. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

    High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
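
    The general structure, discretize the frequency-domain wave equation into a sparse linear system and solve it once per frequency, is sketched below for the 1-D Helmholtz equation with a plain second-order stencil; the NAD/DNAD stencils and their optimized coefficients are not reproduced, and the velocity model and damping are assumptions.

    ```python
    # 1-D Helmholtz equation u'' + (w/c)^2 u = s discretised with a standard
    # three-point stencil and solved as a sparse linear system per frequency.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    nx, dx = 400, 10.0                         # grid points and spacing (m)
    c = np.full(nx, 2000.0)
    c[nx // 2:] = 3000.0                       # two-layer velocity model (m/s)
    omega = 2.0 * np.pi * 10.0                 # 10 Hz
    k2 = (omega * (1.0 + 0.01j) / c) ** 2      # small imaginary part mimics absorption

    main = -2.0 / dx**2 + k2                   # diagonal of the Helmholtz operator
    off = np.full(nx - 1, 1.0 / dx**2, dtype=complex)
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

    s = np.zeros(nx, dtype=complex)
    s[20] = 1.0                                # point source near the left boundary

    u = spla.spsolve(A, s)                     # monochromatic wavefield (Dirichlet ends)
    print("wavefield magnitude at receiver index 300: %.3e" % abs(u[300]))
    ```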

  6. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints and then minimizing it through numerical optimization can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.
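
    The adjoint machinery mentioned above can be shown on a deliberately tiny problem: a steady 1-D diffusion system K(m)u = f with one scalar parameter m, a least-squares misfit J(m), and a gradient obtained from a single adjoint solve, checked against a finite difference. This is only a structural sketch under these simplified assumptions, not the mantle-convection or seismic codes discussed above.

    ```python
    # Adjoint gradient for J(m) = 0.5*||u(m) - d||^2 with (m*K) u = f:
    # solve (m*K)^T lam = -(u - d), then dJ/dm = lam^T (K u).
    import numpy as np

    n, dx = 50, 1.0 / 50
    K = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / dx**2
    f = np.ones(n)

    def solve_state(m):
        return np.linalg.solve(m * K, f)

    m_true, m_guess = 3.0, 2.0
    d = solve_state(m_true)                              # synthetic "observations"

    u = solve_state(m_guess)
    residual = u - d
    J = 0.5 * residual @ residual

    lam = np.linalg.solve((m_guess * K).T, -residual)    # adjoint solve
    grad = lam @ (K @ u)                                 # dJ/dm from the adjoint

    eps = 1.0e-6                                         # finite-difference check
    J_pert = 0.5 * np.sum((solve_state(m_guess + eps) - d) ** 2)
    print("adjoint gradient: %.6e   finite-difference: %.6e" % (grad, (J_pert - J) / eps))
    ```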

  7. Stress and deformation characteristics of sea ice in a high resolution numerical sea ice model.

    NASA Astrophysics Data System (ADS)

    Heorton, Harry; Feltham, Daniel; Tsamados, Michel

    2017-04-01

    The drift and deformation of sea ice floating on the polar oceans is due to the applied wind and ocean currents. The deformations of sea ice over ocean basin length scales have observable patterns; cracks and leads in satellite images and within the velocity fields generated from floe tracking. In a climate sea ice model the deformation of sea ice over ocean basin length scales is modelled using a rheology that represents the relationship between stresses and deformation within the sea ice cover. Here we investigate the link between observable deformation characteristics and the underlying internal sea ice stresses and force balance using the Los Alamos numerical sea ice climate model. In order to mimic laboratory experiments on the deformation of small cubes of sea ice we have developed an idealised square domain that tests the model response at spatial resolutions of up to 500m. We use the Elastic Anisotropic Plastic and Elastic Viscous Plastic rheologies, comparing their stability over varying resolutions and time scales. Sea ice within the domain is forced by idealised winds in order to compare the confinement of wind stresses and internal sea ice stresses. We document the characteristic deformation patterns of convergent, divergent and rotating stress states.

  8. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  9. Robustness of movement models: can models bridge the gap between temporal scales of data sets and behavioural processes?

    PubMed

    Schlägel, Ulrike E; Lewis, Mark A

    2016-12-01

    Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.
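
    As a toy version of the robustness question, the sketch below simulates a plain 2-D random walk, thins the track to coarser temporal resolutions, and refits the diffusion coefficient, which for Brownian motion is approximately invariant under thinning; the paper's movement-based resource selection models require the formal framework described above rather than this special case.

    ```python
    # Simulate a 2-D random walk, thin it, and re-estimate the per-unit-time
    # diffusion coefficient D. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma, n = 1.0, 20000
    steps = rng.normal(0.0, sigma, size=(n, 2))
    track = np.cumsum(steps, axis=0)                 # positions at unit time resolution

    for thin in (1, 2, 5, 10):
        sub = track[::thin]                          # coarser temporal resolution
        disp = np.diff(sub, axis=0)
        d_hat = np.mean(np.sum(disp**2, axis=1)) / (4.0 * thin)
        print("thinning %2d -> estimated D = %.3f (true %.3f)" % (thin, d_hat, sigma**2 / 2.0))
    ```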

  10. New, Improved Bulk-microphysical Schemes for Studying Precipitation Processes in WRF. Part 1; Comparisons with Other Schemes

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Shi, J.; Chen, S. S.; Lang, S.; Hong, S.-Y.; Thompson, G.; Peters-Lidard, C.; Hou, A.; Braun, S.

    2007-01-01

    Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly more sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models. Over the past decade, both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into the state-of-the-art, next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) particle with a very fast fall speed (over 10 m/s). For an Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).

  11. Solid immersion lenses for enhancing the optical resolution of thermal and electroluminescence mapping of GaN-on-SiC transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pomeroy, J. W., E-mail: James.Pomeroy@Bristol.ac.uk; Kuball, M.

    2015-10-14

    Solid immersion lenses (SILs) are shown to greatly enhance optical spatial resolution when measuring AlGaN/GaN High Electron Mobility Transistors (HEMTs), taking advantage of the high refractive index of the SiC substrates commonly used for these devices. Solid immersion lenses can be applied to techniques such as electroluminescence emission microscopy and Raman thermography, aiding the development of device physics models. Focused ion beam milling is used to fabricate solid immersion lenses in SiC substrates with a numerical aperture of 1.3. A lateral spatial resolution of 300 nm is demonstrated at an emission wavelength of 700 nm, and an axial spatial resolution of 1.7 ± 0.3 μm is demonstrated at a laser wavelength of 532 nm; this is an improvement of 2.5× and 5×, respectively, when compared with a conventional 0.5 numerical aperture objective lens without a SIL. These results highlight the benefit of applying the solid immersion lens technique to the optical characterization of GaN HEMTs. Further improvements may be gained through aberration compensation and increasing the SIL numerical aperture.
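
    A back-of-envelope check of the quoted gain, using the Rayleigh criterion (lateral resolution ≈ 0.61 λ / NA), is given below; the exact prefactor depends on the imaging mode, so this is only an order-of-magnitude sketch.

    ```python
    # Rayleigh-criterion estimate of lateral resolution for the bare objective
    # (NA 0.5) and the objective plus SIL (NA 1.3) at the 700 nm emission wavelength.
    wavelength_nm = 700.0
    for na in (0.5, 1.3):
        print("NA = %.1f -> ~%.0f nm lateral resolution" % (na, 0.61 * wavelength_nm / na))
    ```

    The ratio of the two estimates is about 2.6, consistent with the reported 2.5× improvement.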

  12. Revisiting the Rossby Haurwitz wave test case with contour advection

    NASA Astrophysics Data System (ADS)

    Smith, Robert K.; Dritschel, David G.

    2006-09-01

    This paper re-examines a basic test case used for spherical shallow-water numerical models, and underscores the need for accurate, high resolution models of atmospheric and ocean dynamics. The Rossby-Haurwitz test case, first proposed by Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow-water equations on the sphere, J. Comput. Phys. (1992) 221-224], has been examined using a wide variety of shallow-water models in previous papers. Here, two contour-advective semi-Lagrangian (CASL) models are considered, and results are compared with previous test results. We go further by modifying this test case in a simple way to initiate a rapid breakdown of the basic wave state. This breakdown is accompanied by the formation of sharp potential vorticity gradients (fronts), placing far greater demands on the numerics than the original test case does. We also go further by examining other dynamical fields besides the height and potential vorticity, to assess how well the models deal with gravity waves. Such waves are sensitive to the presence or not of sharp potential vorticity gradients, as well as to numerical parameter settings. In particular, large time steps (convenient for semi-Lagrangian schemes) can seriously affect gravity waves but can also have an adverse impact on the primary fields of height and velocity. These problems are exacerbated by a poor resolution of potential vorticity gradients.

  13. Modeling the periodic stratification and gravitational circulation in San Francisco Bay, California

    USGS Publications Warehouse

    Cheng, Ralph T.; Casulli, Vincenzo

    1996-01-01

    A high resolution, three-dimensional (3-D) hydrodynamic numerical model is applied to San Francisco Bay, California to simulate the periodic tidal stratification caused by tidal straining and stirring and their long-term effects on gravitational circulation. The numerical model is formulated using fixed levels in the vertical and uniform computational mesh on horizontal planes. The governing conservation equations, the 3-D shallow water equations, are solved by a semi-implicit finite-difference scheme. Numerical simulations for estuarine flows in San Francisco Bay have been performed to reproduce the hydrodynamic properties of tides, tidal and residual currents, and salt transport. All simulations were carried out to cover at least 30 days, so that the spring-neap variance in the model results could be analyzed. High grid resolution used in the model permits the use of a simple turbulence closure scheme which has been shown to be sufficient to reproduce the tidal cyclic stratification and well-mixed conditions in the water column. Low-pass filtered 3-D time-series reveals the classic estuarine gravitational circulation with a surface layer flowing down-estuary and an up-estuary flow near the bottom. The intensity of the gravitational circulation depends upon the amount of freshwater inflow, the degree of stratification, and spring-neap tidal variations.

  14. Key issues review: numerical studies of turbulence in stars

    NASA Astrophysics Data System (ADS)

    Arnett, W. David; Meakin, Casey

    2016-10-01

    Three major problems of single-star astrophysics are convection, magnetic fields and rotation. Numerical simulations of convection in stars now have sufficient resolution to be truly turbulent, with effective Reynolds numbers of Re > 10^4, and some turbulent boundary layers have been resolved. Implications of these developments are discussed for stellar structure, evolution and explosion as supernovae. Methods for three-dimensional (3D) simulations of stars are compared and discussed for 3D atmospheres, solar rotation, core-collapse and stellar boundary layers. Reynolds-averaged Navier-Stokes (RANS) analysis of the numerical simulations has been shown to provide a novel and quantitative estimate of resolution errors. Present treatments of stellar boundaries require revision, even for early burning stages (e.g. for mixing regions during He-burning). As stellar core-collapse is approached, asymmetry and fluctuations grow, rendering spherically symmetric models of progenitors more unrealistic. The numerical resolution of several different types of 3D stellar simulations is compared; it is suggested that core-collapse simulations may be under-resolved. The Rayleigh-Taylor instability in explosions has a deep connection to convection, for which the abundance structure in supernova remnants may provide evidence.

  15. Macro-actor execution on multilevel data-driven architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Najjar, W.

    1988-12-31

    The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.

  16. On some limitations on temporal resolution in imaging subpicosecond photoelectronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shchelev, M Ya; Andreev, S V; Degtyareva, V P

    2015-05-31

    Numerical modelling is used to analyse some effects restricting the enhancement of temporal resolution to better than 100 fs in streak image tubes and photoelectron guns. Particular attention is paid to the broadening of an electron bunch as a result of Coulomb interaction. Possible ways to overcome the limitations under consideration are discussed. (extreme light fields and their applications)

  17. Resolution and contrast in Kelvin probe force microscopy

    NASA Astrophysics Data System (ADS)

    Jacobs, H. O.; Leuchtmann, P.; Homan, O. J.; Stemmer, A.

    1998-08-01

    The combination of atomic force microscopy and Kelvin probe technology is a powerful tool to obtain high-resolution maps of the surface potential distribution on conducting and nonconducting samples. However, resolution and contrast transfer of this method have not been fully understood, so far. To obtain a better quantitative understanding, we introduce a model which correlates the measured potential with the actual surface potential distribution, and we compare numerical simulations of the three-dimensional tip-specimen model with experimental data from test structures. The observed potential is a locally weighted average over all potentials present on the sample surface. The model allows us to calculate these weighting factors and, furthermore, leads to the conclusion that good resolution in potential maps is obtained by long and slender but slightly blunt tips on cantilevers of minimal width and surface area.

  18. Effects of the bottom boundary condition in numerical investigations of dense water cascading on a slope

    NASA Astrophysics Data System (ADS)

    Berntsen, Jarle; Alendal, Guttorm; Avlesen, Helge; Thiem, Øyvind

    2018-05-01

    The flow of dense water along continental slopes is considered. There is a large literature on the topic based on observations and laboratory experiments. In addition, there are many analytical and numerical studies of dense water flows. In particular, there is a sequence of numerical investigations using the dynamics of overflow mixing and entrainment (DOME) setup. In these papers, the sensitivity of the solutions to numerical parameters such as grid size and numerical viscosity coefficients and to the choices of methods and models is investigated. In earlier DOME studies, three different bottom boundary conditions and a range of vertical grid sizes are applied. In other parts of the literature on numerical studies of oceanic gravity currents, there are statements that appear to contradict choices made on bottom boundary conditions in some of the DOME papers. In the present study, we therefore address the effects of the bottom boundary condition and vertical resolution in numerical investigations of dense water cascading on a slope. The main finding of the present paper is that it is feasible to capture the bottom Ekman layer dynamics adequately and cost efficiently by using a terrain-following model system using a quadratic drag law with a drag coefficient computed to give near-bottom velocity profiles in agreement with the logarithmic law of the wall. Many studies of dense water flows are performed with a quadratic bottom drag law and a constant drag coefficient. It is shown that when using this bottom boundary condition, Ekman drainage will not be adequately represented. In other studies of gravity flow, a no-slip bottom boundary condition is applied. With no-slip and a very fine resolution near the seabed, the solutions are essentially equal to the solutions obtained with a quadratic drag law and a drag coefficient computed to produce velocity profiles matching the logarithmic law of the wall. However, with coarser resolution near the seabed, there may be a substantial artificial blocking effect when using no-slip.
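
    The bottom boundary condition favoured by the study, a quadratic drag law with C_D = (kappa / ln(z_b / z_0))^2, is sketched below for a few heights of the lowest model level; the roughness length and velocity are illustrative assumptions, and the sketch simply shows why a single constant drag coefficient cannot be consistent with the logarithmic law at different near-bottom resolutions.

    ```python
    # Quadratic bottom drag with a log-law-consistent drag coefficient.
    import numpy as np

    kappa = 0.4            # von Karman constant
    z0 = 0.003             # bottom roughness length (m), illustrative
    rho = 1027.0           # seawater density (kg/m^3)

    def bottom_stress(u_b, z_b):
        """Stress from the velocity u_b at height z_b above the bed."""
        cd = (kappa / np.log(z_b / z0)) ** 2
        return rho * cd * abs(u_b) * u_b, cd

    for z_b in (0.5, 2.0, 10.0):               # height of the lowest model level (m)
        tau, cd = bottom_stress(0.3, z_b)
        print("z_b = %4.1f m: C_D = %.4f, stress = %.3f N/m^2" % (z_b, cd, tau))
    ```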

  19. Numerical techniques for the solution of the compressible Navier-Stokes equations and implementation of turbulence models. [separated turbulent boundary layer flow problems

    NASA Technical Reports Server (NTRS)

    Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.

    1975-01-01

    The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.
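
    The forward-predictor/backward-corrector pattern of the MacCormack scheme is sketched below for 1-D linear advection with periodic boundaries; the paper applies the time-split variant to the full compressible Navier-Stokes equations with turbulence-model equations, which is far beyond this illustration.

    ```python
    # MacCormack predictor-corrector step for u_t + a u_x = 0 on a periodic grid.
    import numpy as np

    nx, a = 200, 1.0
    dx = 1.0 / nx
    dt = 0.4 * dx / a                            # CFL = 0.4
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)          # smooth initial pulse

    for _ in range(200):
        up = u - a * dt / dx * (np.roll(u, -1) - u)                  # predictor: forward difference
        u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))     # corrector: backward difference

    print("pulse peak after 200 steps: %.3f (initial 1.000)" % u.max())
    ```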

  20. Final Report: Closeout of the Award NO. DE-FG02-98ER62618 (M.S. Fox-Rabinovitz, P.I.)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox-Rabinovitz, M. S.

    The final report describes a study aimed at exploring the variable-resolution stretched-grid (SG) approach to decadal regional climate modeling using advanced numerical techniques. The results obtained have shown that variable-resolution SG-GCMs, using stretched grids with fine resolution over the area(s) of interest, are a viable, established approach to regional climate modeling. The developed SG-GCMs have been extensively used for regional climate experimentation. The SG-GCM simulations are aimed at studying U.S. regional climate variability, with an emphasis on anomalous summer climate events such as U.S. droughts and floods.

  1. A Comparison of the Forecast Skills among Three Numerical Models

    NASA Astrophysics Data System (ADS)

    Lu, D.; Reddy, S. R.; White, L. J.

    2003-12-01

    Three numerical weather forecast models, MM5, COAMPS and WRF, operated through a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, have been chosen to study their forecast skill against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length and spatial resolution. The AVN global dataset has been ingested to provide initial conditions. A grid resolution of 27 km is chosen, representative of current mesoscale models. Forecasts with a length of 36 h are performed, with output at 12-h intervals. The key parameters used to evaluate forecast skill include 12-h accumulated precipitation, sea level pressure, wind, surface temperature and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12-h rainfall observations, whereas other statistical measures such as the Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
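
    The verification measures named above can be computed from a 2x2 contingency table (hits, false alarms, misses) for a chosen rainfall threshold, plus RMSE for continuous variables; the sketch below uses made-up forecast and observation values.

    ```python
    # Threat Score (TS), Bias Score (BS) and RMSE from toy 12-h rainfall data.
    import numpy as np

    forecast = np.array([0.0, 3.2, 8.1, 0.4, 12.0, 0.0, 6.5, 2.2])   # mm
    observed = np.array([0.0, 5.0, 7.0, 0.0, 15.0, 1.2, 0.0, 2.5])   # mm

    threshold = 2.0
    f_yes, o_yes = forecast >= threshold, observed >= threshold
    hits = np.sum(f_yes & o_yes)
    false_alarms = np.sum(f_yes & ~o_yes)
    misses = np.sum(~f_yes & o_yes)

    threat_score = hits / (hits + false_alarms + misses)     # TS (critical success index)
    bias_score = (hits + false_alarms) / (hits + misses)     # BS > 1 means over-forecasting events

    rmse = np.sqrt(np.mean((forecast - observed) ** 2))      # for continuous parameters
    print("TS = %.2f, BS = %.2f, RMSE = %.2f mm" % (threat_score, bias_score, rmse))
    ```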

  2. Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale

    NASA Astrophysics Data System (ADS)

    González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.

    2017-12-01

    Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded inland by the tsunami wave. To this end, some input data are required, e.g. a high-resolution topobathymetry of the study area, the earthquake focal mechanism parameters, etc. The computational cost of these kinds of simulations is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative and traditional method consists of the application of empirical-analytical formulations to calculate run-up at several coastal profiles (e.g. Synolakis, 1987), combined with numerical simulations offshore that do not include coastal inundation. In this case, the numerical simulations are faster, but some limitations are added, as the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models that were coupled ad hoc for this work: a non-linear shallow water equations model (NLSWE) for the offshore part of the propagation and a Volume of Fluid model (VOF) for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized by their height and period. As an application of the numerical flume methodology, the parameterized coastal profiles and tsunami waves have been combined to build a populated database of run-up calculations; the combinations were computed by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers realistic profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the calculation of the run-up of any new tsunami wave, in a short period of time, by interpolation on the database, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale domain (regional or national scale).
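
    The empirical-analytical alternative cited above (Synolakis, 1987) gives, for non-breaking solitary waves on a plane beach, the run-up law R/d = 2.831 sqrt(cot beta) (H/d)^(5/4); the sketch below evaluates it for an illustrative 1:10 beach, which is exactly the kind of idealization that the profile-parameterized NLSWE-VOF database is meant to improve upon.

    ```python
    # Synolakis (1987) run-up law for non-breaking solitary waves on a plane beach.
    # Depth, slope and wave heights below are illustrative assumptions.
    import numpy as np

    d = 50.0                                   # offshore depth (m)
    cot_beta = 10.0                            # plane beach of slope 1:10

    for H in (0.5, 1.0, 2.0):                  # offshore wave height (m)
        R = d * 2.831 * np.sqrt(cot_beta) * (H / d) ** 1.25
        print("H = %.1f m -> run-up R = %.2f m" % (H, R))
    ```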

  3. The "Grey Zone" cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel

    2017-03-01

    A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amount of both cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. But also regulation by precipitation and mixed-phase cloud microphysical processes play an important role in the case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.

  4. Numerical Simulation of The Mediterranean Sea Using Diecast: Interaction Between Basin, Sub-basin and Local Scale Features and Natural Variability.

    NASA Astrophysics Data System (ADS)

    Fernández, V.; Dietrich, D. E.; Haney, R. L.; Tintoré, J.

    In situ and satellite data obtained during the last ten years have shown that the circulation in the Mediterranean Sea is extremely complex in space, with significant features ranging from the mesoscale to sub-basin and basin scales, and highly variable in time, with mesoscale to seasonal and interannual signals. Also, the steep bottom topography and the atmospheric conditions, which vary from one sub-basin to another, make the circulation composed of numerous energetic and narrow coastal currents, density fronts and mesoscale structures that interact at sub-basin scale with the large-scale circulation. To simulate these features numerically and understand them better, besides high grid resolution, an ocean model with low numerical dispersion and low physical dissipation is required. We present the results from a 1/8° horizontal resolution numerical simulation of the Mediterranean Sea using the DieCAST ocean model, which meets the above requirements since it is stable with low general dissipation and uses accurate fourth-order approximations with low numerical dispersion. The simulations are carried out with climatological surface forcing using monthly mean winds and relaxation towards climatological values of temperature and salinity. The model reproduces the main features of the large basin-scale circulation, as well as the seasonal variability of sub-basin-scale currents that are well documented by observations in straits and channels. In addition, DieCAST brings out natural fronts and eddies that usually do not appear in numerical simulations of the Mediterranean and that lead to a natural interannual variability. The role of this intrinsic variability in the general circulation will be discussed.

  5. Modeling of Long-Term Evolution of Hydrophysical Fields of the Black Sea

    NASA Astrophysics Data System (ADS)

    Dorofeyev, V. L.; Sukhikh, L. I.

    2017-11-01

    The long-term evolution of the Black Sea dynamics (1980-2020) is reconstructed by numerical simulation. The model of the Black Sea circulation has 4.8 km horizontal spatial resolution and 40 levels in z-coordinates. The mixing processes in the upper layer are parameterized by the Mellor-Yamada turbulence model. For the sea surface boundary conditions, atmospheric forcing functions were used, provided for the Black Sea region by the Euro-Mediterranean Center on Climate Change (CMCC) from the COSMO-CLM regional climate model. These data have a spatial resolution of 14 km and a daily temporal resolution. To evaluate the quality of the hydrodynamic fields derived from the simulation, they were compared with in-situ hydrological measurements and similar results from a physical reanalysis of the Black Sea.

  6. Turbulence sources, character, and effects in the stable boundary layer: Insights from multi-scale direct numerical simulations and new, high-resolution measurements

    NASA Astrophysics Data System (ADS)

    Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale

    2013-04-01

    A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.

  7. High-resolution modelling of waves, currents and sediment transport in the Catalan Sea.

    NASA Astrophysics Data System (ADS)

    Sánchez-Arcilla, Agustín; Grifoll, Manel; Pallares, Elena; Espino, Manuel

    2013-04-01

    In order to investigate coastal shelf dynamics, a sequence of high-resolution multi-scale models has been implemented for the Catalan shelf (North-western Mediterranean Sea). The suite consists of a set of increasing-resolution nested models, based on the circulation model ROMS (Regional Ocean Modelling System), the wave model SWAN (Simulating WAves Nearshore) and the sediment transport model CSTM (Community Sediment Transport Model), covering different ranges of spatial scales (from ~1 km at shelf-slope regions to ~40 m around river mouths or local beaches) and temporal scales (from storm events to seasonal variability). Contributions to the understanding of local processes, such as along-shelf dynamics in the inner shelf, sediment dispersal from river discharge or bi-directional wave-current interactions under different synoptic conditions and resolutions, have been obtained using the Catalan Coast as a pilot site. Numerical results have been compared with "ad-hoc" intensive field campaigns, data from observational models and remote sensing products. The results exhibit acceptable agreement with observations, and the investigation has allowed the development of generic knowledge and more efficient (process-based) strategies for coastal and shelf management.

  8. A new multiscale air quality transport model (Fluidity, 4.1.9) using fully unstructured anisotropic adaptive mesh technology

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-06-01

    A new anisotropic hr-adaptive mesh technique has been applied to the modelling of multiscale transport phenomena; it is based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the advantage of the anisotropic hr-adaptive model is its ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and efficiently. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes.
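
    The mesh-adaptation idea described above can be illustrated with a toy 1-D h-refinement driven by a solution-gradient indicator. This is only a sketch of the general concept under simple assumptions (node insertion at mid-points, linear re-interpolation); it is not the anisotropic hr-adaptive algorithm used in Fluidity.

        import numpy as np

        def refine_by_gradient(x, c, rel_threshold=0.5, max_levels=3):
            """Toy 1-D h-refinement: split cells where the concentration jump between
            neighbouring nodes exceeds a fraction of the global maximum jump."""
            for _ in range(max_levels):
                jumps = np.abs(np.diff(c))
                if jumps.max() == 0.0:
                    break
                flag = jumps > rel_threshold * jumps.max()
                if not flag.any():
                    break
                new_x = [x[0]]
                for i, split in enumerate(flag):
                    if split:
                        new_x.append(0.5 * (x[i] + x[i + 1]))   # insert a mid-point node
                    new_x.append(x[i + 1])
                new_x = np.array(new_x)
                c = np.interp(new_x, x, c)                      # carry the field onto the refined mesh
                x = new_x
            return x, c

        # illustrative usage: refine around a sharp plume edge
        x = np.linspace(0.0, 1.0, 21)
        c = np.where(x < 0.5, 1.0, 0.0)                         # idealized pollutant front
        x_ref, c_ref = refine_by_gradient(x, c)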

  9. The precipitation forecast sensitivity to data assimilation on a very high resolution domain

    NASA Astrophysics Data System (ADS)

    Palamarchuk, Iuliia; Ivanov, Sergiy; Ruban, Igor

    2016-04-01

    Recent developments in computing technology allow very high resolutions to be used in numerical weather prediction models. As a result, simulation and quantitative analysis of mesoscale processes with horizontal scales of a few kilometres, which are crucially important for studies of precipitation and its life cycle, have become feasible. However, these new opportunities also require existing knowledge, both in meteorology and in numerics, to be revisited. The latter concerns, in particular, the formulation of the initial conditions through data assimilation: the precipitation forecast turns out to be quite sensitive to the assimilation technique, the observation types used and the spatial resolution. The impact of data assimilation on the resulting fields is presented using the Harmonie-38h1.2 model with the AROME physics package. The numerical experiments were performed for a Finland domain with a horizontal grid spacing of 2.5 km and 65 vertical levels for the August 2010 period covering the BaltRad experiment. The initial conditions were formulated either by downscaling from the MARS archive or by assimilating observations through 3DVAR. Both conventional and radar observations were treated in the numerical experiments; the former included the SYNOP, SHIP, PILOT, TEMP, AIREP and DRIBU types. The background error covariances required for the variational assimilation had been computed by the HIRLAM community from an ensemble of perturbed analyses with a purely statistical balance. Deviations among the model runs started from the MARS downscaling, the conventional assimilation and the radar assimilation were complex, so the focus is on how the model system reacts to the introduction of observations. The contribution from observed variables included in the control vector, such as humidity and temperature, was expected to be largest; nevertheless, revealing such an impact is not a straightforward task. Major changes occur within the lowest 3 km of the atmosphere for all predicted variables. However, those changes were not directly associated with observation locations, as single-observation experiments often suggest. Moreover, with increasing lead time the model response to observations produces weak mesoscale spots of opposite signs. Special attention is paid to the precipitation, cloud and rain water, and vertical velocity fields. A complex chain of interactions among radiation, temperature, humidity, stratification and other atmospheric characteristics results in changes of local updrafts and downdrafts and in the subsequent cloud formation and precipitation release. One can assume that these features arise from both atmospheric physics and numerical effects; the latter become more evident in simulations on very high resolution domains.
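
    The 3DVAR step referred to above minimizes a cost function that balances departures from the background against departures from the observations. A minimal sketch, assuming a linear observation operator and toy dimensions; nothing here reflects Harmonie's operational configuration.

        import numpy as np
        from scipy.optimize import minimize

        # Toy 3DVAR: x_a = argmin_x 0.5*(x-x_b)^T B^-1 (x-x_b) + 0.5*(H x - y)^T R^-1 (H x - y)
        n, m = 8, 3                                   # state and observation sizes (illustrative)
        rng = np.random.default_rng(0)
        x_b = rng.normal(size=n)                      # background (first guess)
        H = np.zeros((m, n)); H[0, 1] = H[1, 4] = H[2, 6] = 1.0   # linear observation operator
        y = H @ x_b + rng.normal(scale=0.1, size=m)   # synthetic observations
        B_inv = np.eye(n) / 1.0**2                    # inverse background error covariance
        R_inv = np.eye(m) / 0.1**2                    # inverse observation error covariance

        def cost_and_grad(x):
            db = x - x_b
            dy = H @ x - y
            J = 0.5 * db @ B_inv @ db + 0.5 * dy @ R_inv @ dy
            grad = B_inv @ db + H.T @ R_inv @ dy
            return J, grad

        res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B")
        x_analysis = res.x                            # 3DVAR analysis state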

  10. Hurricane Forecasting with the High-resolution NASA Finite-volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Reale, O.; Shen, B.-W.; Lin, S.-J.; Chern, J.-D.; Putman, W.; Lee, T.; Yeh, K.-S.; Bosilovich, M.; Radakovich, J.

    2004-01-01

    A high-resolution finite-volume General Circulation Model (fvGCM), resulting from a development effort of more than ten years, is now being run operationally at the NASA Goddard Space Flight Center and Ames Research Center. The model is based on a finite-volume dynamical core with terrain-following Lagrangian control-volume discretization and performs efficiently on massively parallel architectures. The computational efficiency allows simulations at a resolution of a quarter of a degree, which is double the resolution currently adopted by most global models in operational weather centers. Such fine global resolution brings us closer to overcoming a fundamental barrier in global atmospheric modeling for both weather and climate, because tropical cyclones and even tropical convective clusters can be more realistically represented. In this work, preliminary results of the fvGCM are shown. Fifteen simulations of four Atlantic tropical cyclones in 2002 and 2004 are chosen because of the strong and varied difficulties they presented to numerical weather forecasting. It is shown that the fvGCM, run at a resolution of a quarter of a degree, can produce very good forecasts of these tropical systems, adequately resolving problems like erratic tracks, abrupt recurvature, intense extratropical transition, multiple landfalls and reintensification, and interaction among vortices.

  11. A comparative verification of high resolution precipitation forecasts using model output statistics

    NASA Astrophysics Data System (ADS)

    van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees

    2017-04-01

    Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution mesoscale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both the double-penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, as a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the three post-processed models, but larger differences for individual lead times. In addition, the Fractions Skill Score is computed using the three deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they only perform similarly to or somewhat better than precipitation forecasts from the two lower-resolution models, at least in the Netherlands.
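
    Extended logistic regression treats the precipitation threshold itself as a predictor, so a single fitted equation returns exceedance probabilities for any threshold. A minimal sketch with synthetic data follows, assuming a single area-mean precipitation predictor and a square-root threshold transform; both are illustrative choices, not the predictors used in the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # ELR sketch: the threshold q is itself a predictor, so one fitted equation
        # yields P(obs >= q) for any q.
        rng = np.random.default_rng(1)
        n_cases = 2000
        fcst = rng.gamma(shape=2.0, scale=2.0, size=n_cases)            # area-mean forecast precip (mm)
        obs = fcst * rng.lognormal(mean=0.0, sigma=0.6, size=n_cases)   # synthetic "observed" precip
        thresholds = np.array([0.3, 1.0, 5.0, 10.0])                    # mm

        # Stack one training row per (case, threshold) pair; target is the exceedance indicator.
        X, y = [], []
        for q in thresholds:
            X.append(np.column_stack([np.sqrt(fcst), np.full(n_cases, np.sqrt(q))]))
            y.append((obs >= q).astype(int))
        X, y = np.vstack(X), np.concatenate(y)

        elr = LogisticRegression().fit(X, y)

        # probability of exceeding 5 mm for a new forecast with area-mean 3 mm
        p_exceed = elr.predict_proba([[np.sqrt(3.0), np.sqrt(5.0)]])[0, 1]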

  12. The Spatial Resolution in the Computer Modelling of Atmospheric Flow over a Double-Hill Forested Region

    NASA Astrophysics Data System (ADS)

    Palma, J. L.; Rodrigues, C. V.; Lopes, A. S.; Carneiro, A. M. C.; Coelho, R. P. C.; Gomes, V. C.

    2017-12-01

    With the ever increasing accuracy required from numerical weather forecasts, there is pressure to increase the resolution and fidelity employed in computational micro-scale flow models. However, numerical studies of complex terrain flows are fundamentally bound by the digital representation of the terrain and land cover. This work assesses the impact of the surface description on micro-scale simulation results at a highly complex site in Perdigão, Portugal, characterized by a twin parallel ridge topography, densely forested areas and an operating wind turbine. Although Coriolis and stratification effects cannot be ignored, the study is done under a neutrally stratified atmosphere and static inflow conditions. The understanding gained here will later carry over to WRF-coupled simulations, where those conditions do not apply and the flow physics is more accurately modelled. With access to very fine digital mappings (<1 m horizontal resolution) of both topography and land cover (roughness and canopy cover, both obtained through aerial LIDAR scanning of the surface), the impact of each element of the surface description on simulation results can be assessed individually, in order to estimate the resolution required to resolve it satisfactorily. Starting from the bare topographic description, in its coarsest form, these elements include: a) the surface roughness mapping, b) the operating wind turbine, c) the canopy cover, as either body forces or added surface roughness (akin to meso-scale modelling), d) high-resolution topography and surface cover mapping. Each of these individually will have an impact near the surface, including within the rotor-swept area of modern wind turbines. Combined, they will considerably change the flow up to boundary-layer heights. Sensitivity to these elements cannot be generalized and should be assessed case by case. This type of in-depth study, unfeasible using WRF-coupled simulations, should provide considerable insight when spatially allocating mesh resolution for accurate resolution of complex flows.

  13. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine-scale) flow and transport models with lower-resolution (coarse) models to locally refine both the spatial resolution and the transport models. The fine-scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  14. Multiple Scales in Fluid Dynamics and Meteorology: The DFG Priority Programme 1276 MetStröm

    NASA Astrophysics Data System (ADS)

    von Larcher, Th; Klein, R.

    2012-04-01

    Geophysical fluid motions are characterized by a very wide range of length and time scales, and by a rich collection of varying physical phenomena. The mathematical description of these motions reflects this multitude of scales and mechanisms in that it involves strong non-linearities and various scale-dependent singular limit regimes. Considerable progress has been made in recent years in the mathematical modelling and numerical simulation of such flows in detailed process studies, numerical weather forecasting, and climate research. One task of outstanding importance in this context has been, and will remain for the foreseeable future, the subgrid-scale parameterization of the net effects of non-resolved processes that take place on spatio-temporal scales not resolvable even by the largest and most recent supercomputers. Since the advent of numerical weather forecasting some 60 years ago, one simple but efficient means to achieve improved forecasting skill has been increased spatio-temporal resolution. At first glance this seems quite consistent with the concept of convergence of numerical methods in Applied Mathematics and Computational Fluid Dynamics (CFD). Yet, the very notion of increased resolution in atmosphere-ocean science is very different from the one used in Applied Mathematics: for the mathematician, increased resolution provides the benefit of getting closer to the ideal of a converged solution of some given partial differential equations. The atmosphere-ocean scientist, on the other hand, would naturally refine the computational grid and adjust the mathematical model, such that it better represents the relevant physical processes that occur at smaller scales. This conceptual contradiction remains largely irrelevant as long as geophysical flow models operate with fixed computational grids and time steps and with subgrid-scale parameterizations optimized accordingly. The picture changes fundamentally when modern techniques from CFD involving spatio-temporal grid adaptivity are invoked in order to further improve the net efficiency in exploiting the given computational resources. In the setting of geophysical flow simulation one must then employ subgrid-scale parameterizations that dynamically adapt to the changing grid sizes and time steps, implement ways to judiciously control and steer the newly available flexibility of resolution, and invent novel ways of quantifying the remaining errors. The DFG priority programme MetStröm combines expertise in Meteorology, Fluid Dynamics, and Applied Mathematics to develop model- as well as grid-adaptive numerical simulation concepts in multidisciplinary projects. The goal of this priority programme is to provide simulation models which combine scale-dependent (mathematical) descriptions of key physical processes with adaptive flow discretization schemes. Deterministic continuous approaches as well as discrete and/or stochastic closures, and their possible interplay, are taken into consideration. Research focuses on the theory and methodology of multiscale meteorological-fluid mechanics modelling. Accompanying reference experiments support model validation.

  15. Observed and modeled mesoscale variability near the Gulf Stream and Kuroshio Extension

    NASA Astrophysics Data System (ADS)

    Schmitz, William J.; Holland, William R.

    1986-08-01

    Our earliest intercomparisons between western North Atlantic data and eddy-resolving two-layer quasi-geostrophic symmetric-double-gyre steady wind-forced numerical model results focused on the amplitudes and largest horizontal scales in patterns of eddy kinetic energy, primarily abyssal. Here, intercomparisons are extended to recent eight-layer model runs and new data which allow expansion of the investigation to the Kuroshio Extension and throughout much of the water column. Two numerical experiments are shown to have realistic zonal, vertical, and temporal eddy scales in the vicinity of the Kuroshio Extension in one case and the Gulf Stream in the other. Model zonal mean speeds are larger than observed, but vertical shears are in general agreement with the data. A longitudinal displacement between the maximum intensity in surface and abyssal eddy fields as observed for the North Atlantic is not found in the model results. The numerical simulations examined are highly idealized, notably with respect to basin shape, topography, wind-forcing, and of course dissipation. Therefore the zero-order agreement between modeled and observed basic characteristics of mid-latitude jets and their associated eddy fields suggests that such properties are predominantly determined by the physical mechanisms which dominate the models, where the fluctuations are the result of instability processes. The comparatively high vertical resolution of the model is needed to compare with new higher-resolution data as well as for dynamical reasons, although the precise number of layers required either kinematically or dynamically (or numerically) has not been determined; we estimate four to six when no attempt is made to account for bottom- or near-surface-intensified phenomena.

  16. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    NASA Astrophysics Data System (ADS)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods and concepts (such as effective resolution), to assess the behaviour of numerical models in complex hydrodynamical regimes and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing the evaluation of the coherence of the model kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design and the time-stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnostics, and lastly for simulating a key oceanic process in a controlled environment. Idealized test cases (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) allow properties of numerical schemes to be verified and bring to light numerical issues that remain undetected in realistic configurations (trajectory of a barotropic vortex, current-topography interaction). When the complexity of the simulated dynamics grows (internal wave, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful to get a measure of the model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting test bed for continuing research in numerical approaches as well as an efficient tool to maintain any oceanic code and assure users of a validated model over a certain range of hydrodynamical regimes. Thanks to a common netCDF format, this suite is complemented by a python library that encompasses all the tools and metrics used to assess the efficiency of the numerical methods. References - Couvelard X., F. Dumas, V. Garnier, A.L. Ponte, C. Talandier, A.M. Treguier (2015). Mixed layer formation and restratification in presence of mesoscale and submesoscale turbulence. Ocean Modelling, Vol 96-2, p 243-253. doi:10.1016/j.ocemod.2015.10.004. - Soufflet Y., P. Marchesiello, F. Lemarié, J. Jouanno, X. Capet, L. Debreu, R. Benshila (2016). On effective resolution in ocean models. Ocean Modelling, in press. doi:10.1016/j.ocemod.2015.12.004
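
    A flavour of the idealized tracer-advection tests mentioned above can be given with a minimal 1-D periodic advection check against the exact translated solution. This is purely illustrative and is not one of the pycomodo test cases.

        import numpy as np

        # Advect a Gaussian tracer on a periodic 1-D domain with a first-order upwind
        # scheme and measure the error against the exact (translated) solution.
        nx, u, L, T = 200, 1.0, 1.0, 0.5           # cells, velocity, domain length, end time
        dx = L / nx
        dt = 0.4 * dx / u                          # CFL number of 0.4
        x = (np.arange(nx) + 0.5) * dx
        c = np.exp(-((x - 0.3) / 0.05) ** 2)       # initial Gaussian tracer

        t = 0.0
        while t < T - 1e-12:
            c = c - u * dt / dx * (c - np.roll(c, 1))   # upwind update (u > 0, periodic)
            t += dt

        d = np.mod(x - 0.3 - u * t + 0.5 * L, L) - 0.5 * L   # periodic distance to the exact centre
        exact = np.exp(-(d / 0.05) ** 2)
        l2_error = np.sqrt(np.mean((c - exact) ** 2))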

  17. Wavelet data compression for archiving high-resolution icosahedral model data

    NASA Astrophysics Data System (ADS)

    Wang, N.; Bao, J.; Lee, J.

    2011-12-01

    With the increase of the resolution of global circulation models, it becomes ever more important to develop highly effective solutions for archiving the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it can only achieve a limited reduction in data size. Wavelet-transform-based data compression offers significant potential for data size reduction, and it has been shown to be very effective in transmitting data for remote visualizations. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on the datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments, for the summer and winter seasons. An icosahedral-grid weather model and highly efficient wavelet data compression software were used for this study. Initial conditions were compressed and input to the model, which was run out to 10 days. The forecast results were then compared to the forecast results from the model run with the original, uncompressed initial conditions. Several visual comparisons, as well as statistics of the numerical comparisons, are presented. These results indicate that, with specified minimum accuracy losses, wavelet data compression achieves significant data size reduction while maintaining minimal numerical impact on the datasets. In addition, some issues are discussed regarding increasing archive efficiency while retaining a complete set of metadata for each archived file.
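
    The thresholding step at the heart of wavelet compression can be sketched with PyWavelets: decompose a 2-D field, discard the smallest coefficients, and reconstruct. The wavelet, decomposition level and retention fraction below are illustrative assumptions and do not correspond to the software used in the study.

        import numpy as np
        import pywt  # PyWavelets

        def compress_field(field, wavelet="db4", level=3, keep=0.05):
            """Lossy wavelet compression sketch: keep only the largest `keep` fraction
            of wavelet coefficients (hard threshold), then reconstruct."""
            coeffs = pywt.wavedec2(field, wavelet, level=level)
            flat, slices = pywt.coeffs_to_array(coeffs)
            cutoff = np.quantile(np.abs(flat), 1.0 - keep)       # magnitude cut-off
            flat_c = pywt.threshold(flat, cutoff, mode="hard")
            restored = pywt.waverec2(
                pywt.array_to_coeffs(flat_c, slices, output_format="wavedec2"), wavelet)
            return restored[: field.shape[0], : field.shape[1]]

        # illustrative usage on a synthetic smooth 2-D field
        yy, xx = np.mgrid[0:256, 0:256]
        field = np.sin(xx / 40.0) * np.cos(yy / 25.0)
        restored = compress_field(field)
        max_abs_err = np.max(np.abs(restored - field))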

  18. Numerical simulation of severe convective phenomena over Croatian and Hungarian territory

    NASA Astrophysics Data System (ADS)

    Mahović, Nataša Strelec; Horvath, Akos; Csirmaz, Kalman

    2007-02-01

    Squall lines and supercells cause severe weather and huge damage over the territory of Croatia and Hungary. These long-lived events are recognised very well by radar, but the problem of early warning, and especially of successful numerical forecasting of these phenomena, has not yet been solved in this region. Two case studies are presented here in which a dynamical modelling approach gives promising results: a squall line preceding a cold front and a single supercell generated by prefrontal instability. The numerical simulation is performed using the PSU/NCAR meso-scale model MM5, with a horizontal resolution of 3 km. Lateral boundary conditions are taken from the ECMWF model. The moist processes are resolved by the Reisner mixed-phase explicit moisture scheme, and a rapid radiative transfer model is applied for radiation. The analysis nudging technique is applied for the first two hours of the model run. The results of the simulation are very promising. The MM5 model reconstructed the appearance of the convective phenomena and showed the development of a thunderstorm into the supercell phase. The model results give very detailed insight into wind changes showing the rotation of supercells, clearly distinguish the warm core of the cell and give a rather good precipitation estimate. The successful simulation of convective phenomena by the high-resolution MM5 model showed that even smaller-scale conditions are contained in the synoptic-scale patterns, represented in this case by the ECMWF model.

  19. Testing high resolution numerical models for analysis of contaminant storage and release from low permeability zones.

    PubMed

    Chapman, Steven W; Parker, Beth L; Sale, Tom C; Doner, Lee Ann

    2012-08-01

    It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents, where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients, and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores the validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers, which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean water flushing, and 2) the two-layer analytical solution of Sale et al. (2008), involving a relatively simple scenario with an aquifer and an underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization is applied to represent the problem geometry, resolve the flow fields, capture advective transport in the sands and diffusive transfer with the low permeability layers, and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models: capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making. Copyright © 2012 Elsevier B.V. All rights reserved.
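
    The storage-and-release behaviour described above can be illustrated with a 1-D explicit finite-difference model of diffusion into, and back out of, a low-permeability layer bounded by an aquifer. Parameters and boundary conditions are illustrative only; this is not one of the cited codes.

        import numpy as np

        De, R = 1e-10, 1.0                       # effective diffusion coeff [m2/s], retardation
        L, nz = 0.5, 101                         # layer thickness [m], grid points
        dz = L / (nz - 1)
        dt = 0.4 * dz**2 * R / De                # stable explicit time step
        c = np.zeros(nz)                         # relative concentration in the low-K layer

        def step(c, c_top):
            c = c.copy()
            c[0] = c_top                         # aquifer boundary at the top of the layer
            c[1:-1] += De * dt / (R * dz**2) * (c[2:] - 2.0 * c[1:-1] + c[:-2])
            c[-1] = c[-2]                        # no-flux base
            return c

        years = 365.25 * 86400.0
        for _ in np.arange(0.0, 20.0 * years, dt):        # 20 years of loading (source on)
            c = step(c, c_top=1.0)
        for _ in np.arange(0.0, 20.0 * years, dt):        # 20 years of release (clean water flushing)
            c = step(c, c_top=0.0)
        mass_remaining = np.trapz(c, dx=dz)               # mass still stored per unit area (relative units)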

  20. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
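
    For context, the relaxation approach that the new method improves upon is a Gauss-Seidel sweep over the grid. The sketch below shows plain Gauss-Seidel on a linear 2-D Poisson problem, purely to illustrate the iterative idea; the actual modified-gravity solvers work on a nonlinear scalar-field equation and use the Newton-Gauss-Seidel variant, which this toy does not reproduce.

        import numpy as np

        def gauss_seidel(f, h, n_sweeps=200):
            """Gauss-Seidel relaxation for nabla^2 u = f on the unit square with
            homogeneous Dirichlet boundaries; updates use already-refreshed neighbours."""
            n = f.shape[0]
            u = np.zeros_like(f)
            for _ in range(n_sweeps):
                for i in range(1, n - 1):
                    for j in range(1, n - 1):
                        u[i, j] = 0.25 * (u[i + 1, j] + u[i - 1, j] +
                                          u[i, j + 1] + u[i, j - 1] - h * h * f[i, j])
            return u

        n = 33
        h = 1.0 / (n - 1)
        f = np.ones((n, n))                 # uniform source term (illustrative)
        u = gauss_seidel(f, h)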

  1. Developing Local Scale, High Resolution, Data to Interface with Numerical Storm Models

    NASA Astrophysics Data System (ADS)

    Witkop, R.; Becker, A.; Stempel, P.

    2017-12-01

    High resolution, physical storm models that can rapidly predict storm surge, inundation, rainfall, wind velocity and wave height at the intra-facility scale for any storm affecting Rhode Island have been developed by researchers at the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) (Ginis et al., 2017). At the same time, URI's Marine Affairs Department has developed methods that incorporate individual geographic points into GSO's models and enable the models to accurately use local-scale, high-resolution data (Stempel et al., 2017). This combination allows URI's storm models to predict any storm's impacts on individual Rhode Island facilities in near real time. The research presented here determines how the critical facility managers (FMs) of a coastal Rhode Island town perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale, and explores methods to elicit this information from FMs in a format usable for incorporation into URI's storm models.

  2. Waterspout Forecasting Method Over the Eastern Adriatic Using a High-Resolution Numerical Weather Model

    NASA Astrophysics Data System (ADS)

    Renko, Tanja; Ivušić, Sarah; Telišman Prtenjak, Maja; Šoljan, Vinko; Horvat, Igor

    2018-03-01

    In this study, a synoptic and mesoscale analysis was performed and Szilagyi's waterspout forecasting method was tested on ten waterspout events in the period 2013-2016. Data regarding waterspout occurrences were collected from weather stations, an online survey at the official website of the National Meteorological and Hydrological Service of Croatia, and eyewitness reports from newspapers and the internet. Synoptic weather conditions were analyzed using surface pressure fields, 500 hPa level synoptic charts, SYNOP reports and atmospheric soundings. For all observed waterspout events, a synoptic type was determined using the 500 hPa geopotential height chart. The occurrence of lightning activity was determined from the LINET lightning database, and waterspouts were divided into thunderstorm-related and "fair weather" ones. Mesoscale characteristics (with a focus on thermodynamic instability indices) were determined using a high-resolution (500 m grid length) mesoscale numerical weather model, and the model results were compared with the available observations. Because thermodynamic instability indices are usually insufficient for forecasting waterspout activity, the performance of the Szilagyi Waterspout Index (SWI) was tested using vertical atmospheric profiles provided by the mesoscale numerical model. The SWI successfully forecasted all waterspout events, even the winter events. This indicates that Szilagyi's waterspout prognostic method could be used as a valid prognostic tool for the eastern Adriatic.

  3. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    NASA Astrophysics Data System (ADS)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and to advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better, providing new details on the properties and morphology of the ice pack across basin scales. For example, the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent/divergent ice zones, (ii) provide datasets that support enhanced parameterizations in numerical models as well as model initialization and validation, (iii) provide parameters of interest to Arctic stakeholders for marine navigation and ice engineering studies, and (iv) provide statistics that support algorithm development for the next generation of airborne and satellite altimeters, including NASA's ICESat-2 mission. We describe the potential contribution our results can make towards the improvement of coupled ice-ocean numerical models, and discuss how data synthesis and integration with high-resolution models may improve our understanding of sea ice variability and our capabilities in predicting the future state of the ice pack.

  4. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of the turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable-resolution approaches would be RANS models with two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).

  5. The Thick Level-Set model for dynamic fragmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  6. Impact of tropical cyclones on modeled extreme wind-wave climate

    DOE PAGES

    Timmermans, Ben; Stone, Daithi; Wehner, Michael; ...

    2017-02-16

    Here, the effect of forcing wind resolution on the extremes of global wind-wave climate is investigated in numerical simulations. Forcing winds from the Community Atmosphere Model at horizontal resolutions of ~1.0° and ~0.25° are used to drive Wavewatch III. Differences in extreme wave height are found to manifest most strongly in tropical cyclone (TC) regions, emphasizing the need for high-resolution forcing in those areas. Comparisons with observations typically show improvement in performance with increased forcing resolution, with a strong influence in the tail of the distribution, although simulated extremes can exceed observations. A simulation for the end of the 21st century under an RCP 8.5-type emission scenario suggests further increases in extreme wave height in TC regions.

  7. Impact of tropical cyclones on modeled extreme wind-wave climate

    NASA Astrophysics Data System (ADS)

    Timmermans, Ben; Stone, Dáithí; Wehner, Michael; Krishnan, Harinarayan

    2017-02-01

    The effect of forcing wind resolution on the extremes of global wind-wave climate is investigated in numerical simulations. Forcing winds from the Community Atmosphere Model at horizontal resolutions of ˜1.0° and ˜0.25° are used to drive Wavewatch III. Differences in extreme wave height are found to manifest most strongly in tropical cyclone (TC) regions, emphasizing the need for high-resolution forcing in those areas. Comparisons with observations typically show improvement in performance with increased forcing resolution, with a strong influence in the tail of the distribution, although simulated extremes can exceed observations. A simulation for the end of the 21st century under an RCP 8.5-type emission scenario suggests further increases in extreme wave height in TC regions.

  8. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and the WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
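
    The analog-ensemble idea can be sketched in a few lines: for a new coarse-model forecast, find the most similar historical forecasts and use the matching observations as an ensemble. The predictors, distance metric and synthetic data below are illustrative and do not reproduce the method's published configuration.

        import numpy as np

        def analog_ensemble(hist_fcst, hist_obs, new_fcst, k=20):
            """Minimal analog-ensemble sketch: find the k historical forecasts closest
            to the new forecast vector and return the matching observations as an
            ensemble (mean as the deterministic estimate)."""
            d = np.linalg.norm(hist_fcst - new_fcst, axis=1)   # similarity metric
            idx = np.argsort(d)[:k]                            # k closest analogs
            ensemble = hist_obs[idx]
            return ensemble.mean(), ensemble

        # illustrative usage with synthetic data (predictors and units are invented)
        rng = np.random.default_rng(2)
        hist_fcst = rng.normal(size=(5000, 3))
        hist_obs = hist_fcst[:, 0] * 1.1 + rng.normal(scale=0.3, size=5000)
        mean_est, members = analog_ensemble(hist_fcst, hist_obs,
                                            new_fcst=np.array([0.5, 0.1, -0.2]))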

  9. The Thick Level-Set model for dynamic fragmentation

    DOE PAGES

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    2017-01-04

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  10. Topological characterization of antireflective and hydrophobic rough surfaces: are random process theory and fractal modeling applicable?

    NASA Astrophysics Data System (ADS)

    Borri, Claudia; Paggi, Marco

    2015-02-01

    The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has so far been only partially undertaken. The present experimental and numerical study provides a critical comparison on this matter, offering some insight into the capabilities and limitations of applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out, and comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD with changing resolution than expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of the joint PDFs of textured surfaces in the case of special surface treatments, not accounted for by fractal modeling.

  11. A Coastal Bay Summer Breeze Study, Part 2: High-resolution Numerical Simulation of Sea-breeze Local Influences

    NASA Astrophysics Data System (ADS)

    Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier

    2018-04-01

    We complete the analysis of the data obtained during the experimental campaign around the semi-circular bay of Quiberon, France, during two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains, with increasing horizontal resolution down to 100 m and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.

  12. Resolution requirements for numerical simulations of transition

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Krist, Steven E.; Hussaini, M. Yousuff

    1989-01-01

    The resolution requirements for direct numerical simulations of transition to turbulence are investigated. A reliable resolution criterion is determined from the results of several detailed simulations of channel and boundary-layer transition.

  13. Performance Modeling of an Airborne Raman Water Vapor Lidar

    NASA Technical Reports Server (NTRS)

    Whiteman, D. N.; Schwemmer, G.; Berkoff, T.; Plotkin, H.; Ramos-Izquierdo, L.; Pappalardo, G.

    2000-01-01

    A sophisticated Raman lidar numerical model has been developed. The model has been used to simulate the performance of two ground-based Raman water vapor lidar systems. After tuning the model using these ground-based measurements, it is used to simulate the water vapor measurement capability of an airborne Raman lidar under both day- and night-time conditions for a wide range of water vapor conditions. The results indicate that, under many circumstances, the daytime measurements possess resolution comparable to that of an existing airborne differential absorption water vapor lidar, while the nighttime measurements have higher resolution. In addition, a Raman lidar is capable of measurements not possible using a differential absorption system.

  14. Analysis of small-angle X-ray scattering data in the presence of significant instrumental smearing

    PubMed Central

    Bergenholtz, Johan; Ulama, Jeanette; Zackrisson Oskolkova, Malin

    2016-01-01

    A laboratory-scale small-angle X-ray scattering instrument with pinhole collimation has been used to assess smearing effects due to instrumental resolution. A new, numerically efficient method to smear ideal model intensities is developed and presented. It allows for directly using measured profiles of isotropic but otherwise arbitrary beams in smearing calculations. Samples of low-polydispersity polymer spheres have been used to show that scattering data can in this way be quantitatively modeled even when there is substantial distortion due to instrumental resolution. PMID:26937235
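
    The smearing operation amounts to convolving the ideal model intensity with a normalized instrumental resolution kernel. A 1-D sketch is given below, using a homogeneous-sphere form factor and a stand-in Gaussian "measured" beam; the paper's method handles arbitrary isotropic 2-D beam profiles, which this simplified example does not.

        import numpy as np

        def smear_intensity(ideal_intensity, beam_profile):
            """Convolve an ideal model intensity I(q) with a normalized resolution
            kernel sampled on the same q-grid spacing."""
            kernel = beam_profile / beam_profile.sum()          # normalize the measured profile
            return np.convolve(ideal_intensity, kernel, mode="same")

        # illustrative usage: smear the form factor of a homogeneous sphere of radius R
        q = np.linspace(1e-3, 0.3, 600)                         # scattering vector (arbitrary units)
        R = 250.0                                               # sphere radius (same length units)
        qR = q * R
        ideal = (3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3) ** 2
        beam = np.exp(-0.5 * np.linspace(-3, 3, 41) ** 2)       # stand-in Gaussian "measured" beam
        smeared = smear_intensity(ideal, beam)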

  15. Changing the scale of hydrogeophysical aquifer heterogeneity characterization

    NASA Astrophysics Data System (ADS)

    Paradis, Daniel; Tremblay, Laurie; Ruggeri, Paolo; Brunet, Patrick; Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Holliger, Klaus; Irving, James; Molson, John; Lefebvre, Rene

    2015-04-01

    Contaminant remediation and management require the quantitative predictive capabilities of groundwater flow and mass transport numerical models. Such models have to encompass source zones and receptors, and thus typically cover several square kilometers. To predict the path and fate of contaminant plumes, these models have to represent the heterogeneous distribution of hydraulic conductivity (K). However, hydrogeophysics has generally been used to image relatively restricted areas of the subsurface (small fractions of a km²), so there is a need for approaches defining heterogeneity at larger scales and providing data to constrain conceptual and numerical models of aquifer systems. This communication describes a workflow for defining aquifer heterogeneity that was applied over a 12 km² sub-watershed surrounding a decommissioned landfill emitting landfill leachate. The aquifer is a shallow, 10 to 20 m thick, highly heterogeneous and anisotropic assemblage of littoral sand and silt. Field work involved the acquisition of a broad range of data: geological, hydraulic, geophysical, and geochemical. The emphasis was put on high-resolution and continuous hydrogeophysical data, the use of direct-push fully screened wells and the acquisition of targeted high-resolution hydraulic data covering the range of observed aquifer materials. The main methods were: 1) surface geophysics (ground-penetrating radar and electrical resistivity); 2) direct-push operations with a geotechnical drilling rig (cone penetration tests with soil moisture resistivity, CPT/SMR; full-screen well installation); and 3) borehole operations, including high-resolution hydraulic tests and geochemical sampling. New methods were developed to acquire high vertical resolution hydraulic data in direct-push wells, including both vertical and horizontal K (Kv and Kh). Various data integration approaches were used to represent aquifer properties in 1D, 2D and 3D. Using relevance vector machines (RVM), the mechanical and geophysical CPT/SMR measurements were used to recognize hydrofacies (HF) and obtain high-resolution 1D vertical profiles of hydraulic properties. Bayesian sequential simulation of the low-resolution surface-based geoelectrical measurements as well as the high-resolution direct-push measurements of electrical and hydraulic conductivities provided realistic estimates of the spatial distribution of K along a 250-m-long 2D survey line. Following a similar approach, all 1D vertical profiles of K derived from CPT/SMR soundings were integrated with the available 2D geoelectrical profiles to obtain the 3D distribution of K over the study area. Numerical models were developed to understand flow and mass transport and to assess how indicators could constrain model results and their K distributions. A 2D vertical section model was first developed based on a conceptual representation of heterogeneity, which showed a significant effect of layering on flow and transport. The model demonstrated that solute and age tracers provide key model constraints. Additional 2D vertical section models with synthetic representations of low- and high-K hydrofacies were also developed on the basis of CPT/SMR soundings. These models showed that high-resolution profiles of hydraulic head could help constrain the spatial distribution and continuity of hydrofacies. History matching approaches are still required to simulate geostatistical models of K using hydrogeophysical data, while considering their impact on flow and transport with constraints provided by tracers of solutes and groundwater age.
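
    The hydrofacies-recognition step can be illustrated with a small supervised-classification sketch on synthetic CPT/SMR-like predictors. The study uses relevance vector machines; since scikit-learn provides no RVM, a probabilistic kernel SVM is used below purely as a stand-in, and the predictor names and values are invented for illustration.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # synthetic two-facies training data: [cone tip resistance (MPa), resistivity (ohm-m)]
        rng = np.random.default_rng(3)
        n = 600
        tip = np.concatenate([rng.normal(8, 2, n // 2), rng.normal(2, 1, n // 2)])
        res = np.concatenate([rng.normal(120, 30, n // 2), rng.normal(40, 15, n // 2)])
        X = np.column_stack([tip, res])
        y = np.array(["sand"] * (n // 2) + ["silt"] * (n // 2))      # hydrofacies labels

        clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)
        # classes_ are sorted alphabetically, so column 0 is "sand"
        proba_sand = clf.predict_proba([[6.0, 90.0]])[0, 0]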

  16. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    NASA Astrophysics Data System (ADS)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher-resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations, including the Madden-Julian oscillation (MJO), merely as a case-study approach. Thanks to the big leap in the computational performance of the K computer, we could greatly increase the number of MJO events covered by the numerical simulations, in addition to extending the integration time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  17. CHARRON: Code for High Angular Resolution of Rotating Objects in Nature

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Zorec, J.; Vakili, F.

    2012-12-01

    Rotation is one of the fundamental physical parameters governing stellar physics and evolution. At the same time, spectrally resolved optical/IR long-baseline interferometry has proven to be an important observing tool to measure many physical effects linked to rotation, in particular stellar flattening, gravity darkening, and differential rotation. In order to interpret the high angular resolution observations from modern spectro-interferometers, such as VLTI/AMBER and VEGA/CHARA, we have developed an interferometry-oriented numerical model: CHARRON (Code for High Angular Resolution of Rotating Objects in Nature). We present here the characteristics of CHARRON, which is faster (≈10-30 s per model) and thus better suited to model-fitting than the first version of the code presented by Domiciano de Souza et al. (2002).

  18. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modelling of the polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios, where high-current and high-electron-density plasma regimes are expected. In this work a method is provided that enables the consistent resolution of the inverse equilibrium reconstruction problem in the framework of a non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
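
    Sequential quadratic programming is available off the shelf, for example as scipy's SLSQP method. The toy constrained inverse problem below only illustrates the SQP fitting idea on a simple, invented forward model; it bears no relation to the free-boundary equilibrium code or to the Stokes model equation.

        import numpy as np
        from scipy.optimize import minimize

        # identify two profile parameters p from noisy synthetic "measurements"
        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 1.0, 50)
        p_true = np.array([1.5, 0.7])

        def forward(p):                       # simple nonlinear forward model (illustrative)
            return p[0] * np.exp(-p[1] * t)

        data = forward(p_true) + rng.normal(scale=0.02, size=t.size)

        def misfit(p):
            return 0.5 * np.sum((forward(p) - data) ** 2)

        cons = ({"type": "ineq", "fun": lambda p: p[1]},)        # require a non-negative decay rate
        res = minimize(misfit, x0=np.array([1.0, 0.1]), method="SLSQP", constraints=cons)
        p_identified = res.x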

  19. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag model based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of the drag models on bubble size and voidage distributions and a much weaker dependence on the numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but at a large increase in computational cost.
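
    A minimal sketch of connected-component-based bubble detection on a 2-D voidage field is given below. It is purely illustrative: the threshold value, field shapes and the circle-equivalent diameter definition are assumptions, not the algorithm used in the paper.

        import numpy as np
        from scipy import ndimage

        def detect_bubbles(voidage, cell_area, void_threshold=0.8):
            """Label connected regions where the void fraction exceeds a threshold
            and return an equivalent diameter for each detected bubble."""
            bubble_mask = voidage > void_threshold           # candidate bubble cells
            labels, n_bubbles = ndimage.label(bubble_mask)   # connected components
            diameters = []
            for k in range(1, n_bubbles + 1):
                area = np.count_nonzero(labels == k) * cell_area
                diameters.append(2.0 * np.sqrt(area / np.pi))   # circle-equivalent diameter
            return labels, np.array(diameters)

        # example on a synthetic voidage field
        field = np.random.default_rng(1).uniform(0.4, 1.0, size=(100, 60))
        labels, d_eq = detect_bubbles(field, cell_area=0.005 ** 2)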

  20. Improved High Resolution Models of Subduction Dynamics: Use of transversely isotropic viscosity with a free-surface

    NASA Astrophysics Data System (ADS)

    Liu, X.; Gurnis, M.; Stadler, G.; Rudi, J.; Ratnaswamy, V.; Ghattas, O.

    2017-12-01

    Dynamic topography, or uncompensated topography, is controlled by internal dynamics and provides constraints on the buoyancy structure and rheological parameters in the mantle. Compared with other surface manifestations such as the geoid, dynamic topography is very sensitive to shallower and more regional mantle structure. For example, the significant dynamic topography above subduction zones is potentially a rich source of information for inferring rheological and mechanical properties such as plate coupling, flow, and lateral viscosity variations, all critical in plate tectonics. However, employing subduction zone topography in inversion studies requires a better understanding of the topography from forward models, especially the influence of the viscosity formulation, numerical resolution, and other factors. One common approach to formulating a fault between the subducted slab and the overriding plate in viscous flow models assumes a thin weak zone. However, due to the large lateral variation in viscosity, topography from free-slip numerical models typically has an artificially large magnitude as well as high-frequency undulations over the subduction zone, which adds to the difficulty in comparing model results with observations. In this study, we formulate a weak zone with a transversely isotropic viscosity (TI) in which the tangential viscosity is much smaller than the viscosity in the normal direction. Like isotropic weak zone models, TI models effectively decouple subducted slabs from the overriding plates. However, we find that the topography in TI models is greatly reduced compared with that in weak zone models assuming an isotropic viscosity. Moreover, the artificial `toothpaste' squeezing effect observed in isotropic weak zone models vanishes in TI models, although the difference becomes less significant when the dip angle is small. We also implement a free-surface condition in our numerical models, which has a smoothing effect on the topography. With the improved model configuration, we can use the adjoint inversion method in a high-resolution model and employ topography, in addition to other observables such as plate motion, to infer critical mechanical and rheological parameters in the subduction zone.
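
    In a local frame aligned with the weak plane (normal n, tangent t), the transversely isotropic idea described above can be written schematically as follows; this is a simplified statement of the concept, not the exact tensor formulation used in the study:

        \[
        \tau_{nt} \;=\; 2\,\eta_{S}\,\dot{\varepsilon}_{nt},
        \qquad
        \tau_{nn} \;=\; 2\,\eta_{N}\,\dot{\varepsilon}_{nn},
        \qquad
        \eta_{S} \ll \eta_{N},
        \]

    so that shear parallel to the slab interface is only weakly resisted, while deformation normal to the interface still feels the full viscosity.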

  1. High spatial resolution passive microwave sounding systems

    NASA Technical Reports Server (NTRS)

    Staelin, D. H.; Rosenkranz, P. W.; Bonanni, P. G.; Gasiewski, A. W.

    1986-01-01

    Two extensive series of flights aboard the ER-2 aircraft were conducted with the MIT 118 GHz imaging spectrometer, together with a 53.6 GHz nadir channel and a TV camera record of the mission. Other microwave sensors, including a 183 GHz imaging spectrometer, were flown simultaneously by other research groups. Work also continued on evaluating the impact of high-resolution passive microwave soundings upon numerical weather prediction models.

  2. Developing a regional retrospective ensemble precipitation dataset for watershed hydrology modeling, Idaho, USA

    NASA Astrophysics Data System (ADS)

    Flores, A. N.; Smith, K.; LaPorte, P.

    2011-12-01

    Applications like flood forecasting, military trafficability assessment, and slope stability analysis necessitate the use of models capable of resolving hydrologic states and fluxes at spatial scales of hillslopes (e.g., 10s to 100s of m). These models typically require precipitation forcings at spatial scales of kilometers or better and time intervals of hours. Yet in the especially rugged terrain that typifies much of the Western US, and throughout much of the developing world, precipitation data at these spatiotemporal resolutions are difficult to come by. Ground-based weather radars have significant problems in high-relief settings and are sparsely located, leaving significant gaps in coverage and high uncertainties. Precipitation gages provide accurate data at points but are very sparsely located, and their placement is often not representative, yielding significant coverage gaps in a spatial and physiographic sense. Numerical weather prediction efforts have made precipitation data, including critically important information on precipitation phase, available globally and in near real-time. However, these datasets present watershed modelers with two problems: (1) the spatial scales of many of these datasets are tens of kilometers or coarser, and (2) the numerical weather models used to generate these datasets include a land surface parameterization that in some circumstances can significantly affect precipitation predictions. We report on the development of a regional precipitation dataset for Idaho that leverages: (1) a dataset derived from a numerical weather prediction model, (2) gages within Idaho that report hourly precipitation data, and (3) a long-term precipitation climatology dataset. Hourly precipitation estimates from the Modern Era Retrospective-analysis for Research and Applications (MERRA) are stochastically downscaled using a hybrid orographic and statistical model from their native resolution (1/2 x 2/3 degrees) to a resolution of approximately 1 km. Downscaled precipitation realizations are conditioned on hourly observations from reporting gages and then conditioned again on the Parameter-elevation Regressions on Independent Slopes Model (PRISM) at the monthly timescale to reflect orographic precipitation trends common to watersheds of the Western US. While this methodology potentially introduces cross-pollination of errors due to the re-use of precipitation gage data, it nevertheless achieves an ensemble-based precipitation estimate and appropriate measures of uncertainty at a spatiotemporal resolution appropriate for watershed modeling.
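
    A schematic of the monthly conditioning step described above is sketched below under assumed array shapes; the actual downscaling and conditioning scheme is considerably more involved, and the multiplicative adjustment shown here is only one plausible illustration.

        import numpy as np

        def condition_on_monthly_climatology(hourly_precip, monthly_climatology):
            """Rescale downscaled hourly precipitation (time, y, x) so that its
            monthly total matches a PRISM-style monthly climatology (y, x).
            Purely illustrative multiplicative adjustment."""
            monthly_total = hourly_precip.sum(axis=0)          # accumulated mm per cell
            with np.errstate(divide="ignore", invalid="ignore"):
                scale = np.where(monthly_total > 0.0,
                                 monthly_climatology / monthly_total, 1.0)
            return hourly_precip * scale[np.newaxis, :, :]

        # e.g. one ensemble member, one month of hourly fields on a ~1 km grid
        member = np.random.default_rng(2).gamma(0.1, 1.0, size=(720, 200, 300))
        climatology = member.sum(axis=0) * 1.1                 # stand-in climatology for the demo
        conditioned = condition_on_monthly_climatology(member, climatology)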

  3. Mesoscale spiral vortex embedded within a Lake Michigan snow squall band - High resolution satellite observations and numerical model simulations

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.; Keen, Cecil S.; Hjelmfelt, Mark; Pease, Steven R.

    1988-01-01

    It is known that Great Lakes snow squall convection occurs in a variety of different modes depending on various factors such as air-water temperature contrast, boundary-layer wind shear, and geostrophic wind direction. An exceptional and often neglected source of data for mesoscale cloud studies is the ultrahigh resolution multispectral data produced by Landsat satellites. On October 19, 1972, a clearly defined spiral vortex was noted in a Landsat-1 image near the southern end of Lake Michigan during an exceptionally early cold air outbreak over a still very warm lake. In a numerical simulation using a three-dimensional Eulerian hydrostatic primitive equation mesoscale model with an initially uniform wind field, a definite analog to the observed vortex was generated. This suggests that intense surface heating can be a principal cause in the development of a low-level mesoscale vortex.

  4. rpe v5: an emulator for reduced floating-point precision in large numerical simulations

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.

    2017-06-01

    This paper describes the rpe (reduced-precision emulator) library, which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the results of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program that allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the weather and climate modelling community, but the software could be used with numerical simulations from other domains.
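
    rpe itself is a Fortran library; as a conceptual analogue only (this is not the rpe API), reduced significand precision can be emulated in Python by rounding the mantissa of each value to a chosen number of bits:

        import numpy as np

        def reduce_precision(x, significand_bits):
            """Round x to roughly `significand_bits` bits of significand,
            emulating a lower-precision floating-point type."""
            x = np.asarray(x, dtype=np.float64)
            mantissa, exponent = np.frexp(x)       # x = mantissa * 2**exponent, |mantissa| in [0.5, 1)
            scale = 2.0 ** significand_bits
            return np.ldexp(np.round(mantissa * scale) / scale, exponent)

        a = np.array([0.1, 1.0 / 3.0, 123456.789])
        print(reduce_precision(a, 10))             # values rounded to ~10 significand bits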

  5. Assimilation of Sea Surface Temperature in a doubly, two-way nested primitive equation model of the Ligurian Sea

    NASA Astrophysics Data System (ADS)

    Barth, A.; Alvera-Azcarate, A.; Rixen, M.; Beckers, J.-M.; Testut, C.-E.; Brankart, J.-M.; Brasseur, P.

    2003-04-01

    The GHER 3D primitive equation model is implemented at three different resolutions: a low-resolution model (1/4°) covering the whole Mediterranean Sea, an intermediate-resolution model (1/20°) of the Liguro-Provençal basin, and a high-resolution model (1/60°) simulating the fine mesoscale structures of the Ligurian Sea. Boundary conditions and averaged fields (feedback) are exchanged between successive nesting levels. The model of the Ligurian Sea is also coupled with the assimilation package SESAM, which allows the assimilation of satellite data and in situ observations using the local adaptive SEEK (Singular Evolutive Extended Kalman) filter. Instead of evolving the error space through the numerically expensive Lyapunov equation, a simplified algebraic equation depending on the misfit between observations and the model forecast is used. Starting on 1 January 1998, the low- and intermediate-resolution models are spun up for 18 months. The initial conditions for the Ligurian Sea are interpolated from the intermediate-resolution model. The three models are then integrated until August 1999. During this period, AVHRR Sea Surface Temperature of the Ligurian Sea is assimilated. The results are validated using CTD and XBT profiles from the SIRENA cruise of the SACLANT Centre. The overall objective of this study is pre-operational. It should help to identify limitations and weaknesses of forecasting methods and to suggest improvements to existing operational models.
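
    For context, the analysis step underlying reduced-rank Kalman filters such as SEEK has the generic form shown below (standard notation, not the specific SESAM implementation):

        \[
        \mathbf{x}^{a} \;=\; \mathbf{x}^{f} + \mathbf{K}\bigl(\mathbf{y} - \mathbf{H}\mathbf{x}^{f}\bigr),
        \qquad
        \mathbf{K} \;=\; \mathbf{P}^{f}\mathbf{H}^{\mathsf{T}}\bigl(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\bigr)^{-1},
        \]

    where the forecast error covariance P^f is approximated by a low-rank factorisation whose basis is propagated by the simplified algebraic update mentioned above rather than by the full Lyapunov equation.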

  6. First results of high-resolution modeling of Cenozoic subduction orogeny in Andes

    NASA Astrophysics Data System (ADS)

    Liu, S.; Sobolev, S. V.; Babeyko, A. Y.; Krueger, F.; Quinteros, J.; Popov, A.

    2016-12-01

    The Andean Orogeny is the result of upper-plate crustal shortening during the Cenozoic subduction of the Nazca plate beneath the South American plate. With up to 300 km of shortening, the Earth's second-highest plateau, the Altiplano-Puna, was formed with a pronounced N-S oriented diversity of deformation. Furthermore, the tectonic shortening in the Southern Andes was much less intense and started much later. The mechanism of the shortening and the nature of the N-S variation of its magnitude remain controversial. Previous studies of the Central Andes suggested that they might be related to N-S variations in the strength of the lithosphere and in the frictional coupling at the slab interface, and are probably influenced by the interaction of the climate and tectonic systems. However, the exact nature of the strength variation was not explored, due to the lack of high numerical resolution and 3D numerical models at that time. Here we employ large-scale subduction models with high resolution to reveal and quantify the factors controlling the strength of lithospheric structures and their effect on the magnitude of tectonic shortening in the South American plate between 18°S and 35°S. These high-resolution models are performed using the highly scalable parallel 3D code LaMEM (Lithosphere and Mantle Evolution Model). This code is based on a finite-difference staggered-grid approach and employs massively parallel linear and non-linear solvers from the PETSc library for high-performance MPI-based parallelization of geodynamic modeling. Currently, in addition to benchmark models, we are developing high-resolution (< 1 km) 2D subduction models with application to Nazca-South America convergence. In particular, we will present models focusing on the effect of friction reduction in the Paleozoic-Cenozoic sediments above the uppermost crust in the Subandean Ranges. Future work will focus on the origin of the different styles of deformation and topography evolution in the Altiplano-Puna Plateau and the Central and Southern Andes through 3D modeling of the large-scale interaction of the subducting and overriding plates.

  7. Inter-Comparison of WRF Model Simulated Winds and MISR Stereoscopic Winds Embedded within Mesoscale von Kármán Wake Vortices

    NASA Astrophysics Data System (ADS)

    Horvath, A.; Nunalee, C. G.; Mueller, K. J.

    2014-12-01

    Several distinct wake regimes are possible when considering atmospheric flow past a steep mountainous island. Of these regimes, coherent vortex shedding in low-Froude-number flow is particularly interesting because it can produce laterally focused paths of counter-rotating eddies capable of extending downstream for hundreds of kilometers (i.e., a von Kármán vortex street). Given the spatial scales of atmospheric von Kármán vortices, which typically lie at the interface of the meso-scale and the micro-scale, they are uniquely challenging to model using conventional numerical weather prediction platforms. In this presentation, we present high-resolution (1 km horizontal) numerical modeling results, using the Weather Research and Forecasting (WRF) model, for multiple real-world von Kármán vortex shedding events associated with steep islands (e.g., Madeira and Gran Canaria). In parallel, we also present corresponding cloud-motion wind and cloud-top height measurements from the satellite-based Multiangle Imaging SpectroRadiometer (MISR) instrument. The MISR stereo algorithm enables experimental retrieval of the horizontal wind vector (both along-track and cross-track components) at 4.4-km resolution, in addition to the operational 1.1-km resolution cross-track wind and cloud-top height products. These products offer the fidelity appropriate for inter-comparison with the numerically simulated vortex streets. In general, we find agreement between the instantaneous simulated cloud-level winds and the MISR stereoscopic winds; however, discrepancies in the vortex street length and localized horizontal wind shear were documented. In addition, the simulated fields demonstrate sensitivity to the turbulence closure and the input terrain height data.

  8. Numerical simulation of "an American haboob"

    NASA Astrophysics Data System (ADS)

    Vukovic, A.; Vujadinovic, M.; Pejanovic, G.; Andric, J.; Kumjian, M. R.; Djurdjevic, V.; Dacic, M.; Prasad, A. K.; El-Askary, H. M.; Paris, B. C.; Petkovic, S.; Nickovic, S.; Sprigg, W. A.

    2014-04-01

    A dust storm of fearful proportions hit Phoenix in the early evening hours of 5 July 2011. This storm, an American haboob, was predicted hours in advance because numerical land-atmosphere modeling, computing power and remote sensing of dust events have improved greatly over the past decade. High-resolution numerical models are required for accurate simulation of the small scales of the haboob process, with high-velocity surface winds produced by strong convection and severe downbursts. Dust-productive areas in this region consist mainly of agricultural fields, with soil surfaces disturbed by plowing, and tracts of land in the high Sonoran Desert laid barren by ongoing drought. The model simulation of the 5 July 2011 dust storm uses the coupled atmospheric-dust model NMME-DREAM (Non-hydrostatic Mesoscale Model on E grid, Janjic et al., 2001; Dust REgional Atmospheric Model, Nickovic et al., 2001; Pérez et al., 2006) with 4 km horizontal resolution. A mask of the potentially dust-productive regions is obtained from the land cover and the normalized difference vegetation index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The scope of this paper is validation of the dust model performance, not use of the model as a tool to investigate mechanisms related to the storm. The results demonstrate the potential technical capacity and availability of the relevant data to build an operational system for dust storm forecasting as part of a warning system. Model results are compared with radar and other satellite-based images and with surface meteorological and PM10 observations. The atmospheric model successfully hindcasted the position of the front in space and time, with arrival in Phoenix about 1 h late. The dust model predicted the rapid uptake of dust and high values of dust concentration in the ensuing storm. South of Phoenix, over the closest source regions (~25 km), the modeled PM10 surface dust concentration reached ~2500 μg m-3, but underestimated the values measured by the PM10 stations within the city. Model results are also validated against the MODIS aerosol optical depth (AOD), employing deep blue (DB) algorithms for aerosol loadings. Model validation also included the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO), whose lidar instrument reveals the vertical structure of dust aerosols as well as aerosol subtypes. These promising results encourage further research and application of high-resolution modeling and satellite-based remote sensing to warn of approaching severe dust events and reduce risks to safety and health.

  9. Micro-computed tomography pore-scale study of flow in porous media: Effect of voxel resolution

    NASA Astrophysics Data System (ADS)

    Shah, S. M.; Gray, F.; Crawshaw, J. P.; Boek, E. S.

    2016-09-01

    A fundamental understanding of flow in porous media at the pore scale is necessary to be able to upscale average displacement processes from the core to the reservoir scale. The study of fluid flow in porous media at the pore scale consists of two key procedures: imaging - the reconstruction of three-dimensional (3D) pore space images - and modelling, such as single- and two-phase flow simulations with Lattice-Boltzmann (LB) or Pore-Network (PN) modelling. Here we analyse pore-scale results to predict petrophysical properties such as porosity, single-phase permeability and multi-phase properties at different length scales. The fundamental issue is to understand the image-resolution dependency of transport properties, in order to upscale the flow physics from the pore to the core scale. In this work, we use a high-resolution micro-computed tomography (micro-CT) scanner to image and reconstruct three-dimensional pore-scale images of five sandstones (Bentheimer, Berea, Clashach, Doddington and Stainton) and five complex carbonates (Ketton, Estaillades, Middle Eastern sample 3, Middle Eastern sample 5 and Indiana Limestone 1) at four different voxel resolutions (4.4 μm, 6.2 μm, 8.3 μm and 10.2 μm), scanning the same physical field of view. Implementing three-phase segmentation (macro-pore phase, intermediate phase and grain phase) on the pore-scale images helps to understand the importance of connected macro-porosity to fluid flow for the samples studied. We then compute the petrophysical properties for all the samples using PN and LB simulations in order to study the influence of voxel resolution on these properties. We also introduce a numerical coarsening scheme which is used to coarsen a high-resolution image (4.4 μm) to lower resolutions (6.2 μm, 8.3 μm and 10.2 μm) and study the impact of coarsening the data on macroscopic and multi-phase properties. Numerical coarsening of high-resolution data is found to be superior to using a lower-resolution scan because it avoids the problem of partial volume effects and reduces the scaling effect by preserving the pore-space properties that influence the transport properties. This is demonstrated by comparing several pore-network properties, such as the number of pores and throats, the average pore and throat radius, and the coordination number, for both the scan-based and the numerically coarsened data.
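
    A minimal sketch of block-averaging a high-resolution voxel volume to a coarser voxel size is given below. It is illustrative only: it uses an integer coarsening factor (the study's resolutions are not integer multiples of one another), and the paper's coarsening scheme and subsequent segmentation are more sophisticated.

        import numpy as np

        def coarsen_volume(volume, factor):
            """Block-average a 3-D voxel volume by an integer factor per axis,
            e.g. factor=2 roughly doubles the voxel edge length."""
            nz, ny, nx = (s - s % factor for s in volume.shape)   # crop to a multiple of factor
            v = volume[:nz, :ny, :nx].astype(float)
            return v.reshape(nz // factor, factor,
                             ny // factor, factor,
                             nx // factor, factor).mean(axis=(1, 3, 5))

        binary_pore_space = np.random.default_rng(3).random((128, 128, 128)) > 0.7
        coarse = coarsen_volume(binary_pore_space, 2)   # coarse values are local porosities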

  10. Topographic gravity modeling for global Bouguer maps to degree 2160: Validation of spectral and spatial domain forward modeling techniques at the 10 microGal level

    NASA Astrophysics Data System (ADS)

    Hirt, Christian; Reußner, Elisabeth; Rexer, Moritz; Kuhn, Michael

    2016-09-01

    Over the past years, spectral techniques have become a standard to model Earth's global gravity field to 10 km scales, with the EGM2008 geopotential model being a prominent example. For some geophysical applications of EGM2008, particularly Bouguer gravity computation with spectral techniques, a topographic potential model of adequate resolution is required. However, current topographic potential models have not yet been successfully validated to degree 2160, and notable discrepancies between spectral modeling and Newtonian (numerical) integration well beyond the 10 mGal level have been reported. Here we accurately compute and validate gravity implied by a degree 2160 model of Earth's topographic masses. Our experiments are based on two key strategies, both of which require advanced computational resources. First, we construct a spectrally complete model of the gravity field generated by the degree 2160 Earth topography model. This involves expansion of the topographic potential to the 15th integer power of the topography and modeling of short-scale gravity signals to the ultrahigh degree of 21,600, translating into unprecedented fine scales of 1 km. Second, we apply Newtonian integration in the space domain with high spatial resolution to reduce discretization errors. Our numerical study demonstrates excellent agreement (8 μGal RMS) between gravity from both forward modeling techniques and provides insight into the convergence process associated with spectral modeling of gravity signals at very short scales (a few km). As a key conclusion, our work successfully validates the spectral domain forward modeling technique for degree 2160 topography and increases the confidence in new high-resolution global Bouguer gravity maps.
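
    Schematically, the spectral-domain forward modelling referred to above expands the topographic potential into integer powers of the topography H, so that each spherical-harmonic coefficient takes a form like the following (the degree-dependent factors are shown only symbolically as c_{n,k}; see the harmonic forward-modelling literature for their exact expressions):

        \[
        \bar{V}_{nm} \;\propto\; \sum_{k=1}^{k_{\max}} \frac{c_{n,k}}{k!\,R^{k}}\;\overline{\bigl(H^{k}\bigr)}_{nm},
        \qquad k_{\max} = 15 \text{ in the study above},
        \]

    where (H^k)_nm are the spherical-harmonic coefficients of the k-th power of the topography and R is a reference radius.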

  11. Statistical Properties of Differences between Low and High Resolution CMAQ Runs with Matched Initial and Boundary Conditions

    EPA Science Inventory

    The difficulty in assessing errors in numerical models of air quality is a major obstacle to improving their ability to predict and retrospectively map air quality. In this paper, using simulation outputs from the Community Multi-scale Air Quality Model (CMAQ), the statistic...

  12. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  13. COSP: Satellite simulation software for model assessment

    DOE PAGES

    Bodas-Salcedo, A.; Webb, M. J.; Bony, S.; ...

    2011-08-01

    Errors in the simulation of clouds in general circulation models (GCMs) remain a long-standing issue in climate projections, as discussed in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report. This highlights the need for developing new analysis techniques to improve our knowledge of the physical processes at the root of these errors. The Cloud Feedback Model Intercomparison Project (CFMIP) pursues this objective, and under that framework the CFMIP Observation Simulator Package (COSP) has been developed. COSP is a flexible software tool that enables the simulation of several satellite-borne active and passive sensor observations from model variables. The flexibility of COSP and a common interface for all sensors facilitates its use in any type of numerical model, from high-resolution cloud-resolving models to the coarser-resolution GCMs assessed by the IPCC, and the scales in between used in weather forecast and regional models. The diversity of model parameterization techniques makes the comparison between model and observations difficult, as some parameterized variables (e.g., cloud fraction) do not have the same meaning in all models. The approach followed in COSP permits models to be evaluated against observations and compared against each other in a more consistent manner. This thus permits a more detailed diagnosis of the physical processes that govern the behavior of clouds and precipitation in numerical models. The World Climate Research Programme (WCRP) Working Group on Coupled Modelling has recommended the use of COSP in a subset of climate experiments that will be assessed by the next IPCC report. Here we describe COSP, present some results from its application to numerical models, and discuss future work that will expand its capabilities.

  14. Multi-scale coupled modelling of waves and currents on the Catalan shelf.

    NASA Astrophysics Data System (ADS)

    Grifoll, M.; Warner, J. C.; Espino, M.; Sánchez-Arcilla, A.

    2012-04-01

    Catalan shelf circulation is characterized by a background along-shelf flow to the southwest (including some meso-scale features) plus episodic storm driven patterns. To investigate these dynamics, a coupled multi-scale modeling system is applied to the Catalan shelf (North-western Mediterranean Sea). The implementation consists of a set of increasing-resolution nested models, based on the circulation model ROMS and the wave model SWAN as part of the COAWST modeling system, covering from the slope and shelf region (~1 km horizontal resolution) down to a local area around Barcelona city (~40 m). The system is initialized with MyOcean products in the coarsest outer domain, and uses atmospheric forcing from other sources for the increasing resolution inner domains. Results of the finer resolution domains exhibit improved agreement with observations relative to the coarser model results. Several hydrodynamic configurations were simulated to determine dominant forcing mechanisms and hydrodynamic processes that control coastal scale processes. The numerical results reveal that the short term (hours to days) inner-shelf variability is strongly influenced by local wind variability, while sea-level slope, baroclinic effects, radiation stresses and regional circulation constitute second-order processes. Additional analysis identifies the significance of shelf/slope exchange fluxes, river discharge and the effect of the spatial resolution of the atmospheric fluxes.

  15. Hyper-Resolution Groundwater Modeling using MODFLOW 6

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; Langevin, C.

    2017-12-01

    MODFLOW 6 is the latest version of the U.S. Geological Survey's modular hydrologic model. MODFLOW 6 was developed to synthesize many of the recent versions of MODFLOW into a single program, to improve the way different process models are coupled, and to provide an object-oriented framework for adding new types of models and packages. The object-oriented framework and the underlying numerical solver make it possible to tightly couple any number of hyper-resolution models within coarser regional models. The hyper-resolution models can be used to evaluate local-scale groundwater issues that may be affected by regional-scale forcings. In MODFLOW 6, hyper-resolution meshes can be maintained as separate model datasets, similar to MODFLOW-LGR, which simplifies the development of embedded hyper-resolution models from an existing coarse regional model. For example, the South Atlantic Coastal Plain regional water availability model was converted from a MODFLOW-2000 model to a MODFLOW 6 model. The horizontal discretization of the original model is approximately 3,218 m x 3,218 m. Hyper-resolution models of the Aiken and Sumter County water budget areas in South Carolina, with a horizontal discretization of approximately 322 m x 322 m, were developed and tightly coupled to a modified version of the original coarse regional model that excluded these areas. Hydraulic property and aquifer geometry data from the coarse model were mapped to the hyper-resolution models. The discretization of the hyper-resolution models is fine enough to permit detailed analyses of the effect that changes in groundwater withdrawals in the production aquifers have on the water table and on surface-water/groundwater interactions. The approach used in this analysis could be applied to other regional water availability models that have been developed by the U.S. Geological Survey to evaluate local-scale groundwater issues.

  16. The impact of vertical resolution in mesoscale model AROME forecasting of radiation fog

    NASA Astrophysics Data System (ADS)

    Philip, Alexandre; Bergot, Thierry; Bouteloup, Yves; Bouyssel, François

    2015-04-01

    Short-term forecasting of fog at airports has safety and economic impacts. Numerical simulations have been performed with the mesoscale model AROME (Application of Research to Operations at Mesoscale) (Seity et al. 2011). Three vertical resolutions (60, 90 and 156 levels) are used to show the impact of vertical resolution on the numerical forecasting of radiation fog. Observations at Roissy Charles de Gaulle airport are compared to the simulations. Significant differences in the onset, evolution and dissipation of fog were found. The high-resolution simulation is in better agreement with the observations than the coarser ones. The surface boundary layer and incoming long-wave radiation are better represented. A more realistic evolution of the liquid water content allows better anticipation of low-visibility procedures (ceiling < 60 m and/or visibility < 600 m). The case study of radiation fog shows that a well-defined vertical grid is necessary to better represent local phenomena. A statistical study over 6 months (October 2011 - March 2012) using the different configurations was also carried out; statistically, the results were the same as in the case study. Seity Y., P. Brousseau, S. Malardel, G. Hello, P. Bénard, F. Bouttier, C. Lac, V. Masson, 2011: The AROME-France convective scale operational model. Mon. Wea. Rev., 139, 976-991.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Sownak; Li, Baojiu; He, Jian-hua

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  18. P-CSI v1.0, an accelerated barotropic solver for the high-resolution ocean model component in the Community Earth System Model v2.0

    NASA Astrophysics Data System (ADS)

    Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen

    2016-11-01

    In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
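
    A bare-bones, unpreconditioned Chebyshev iteration for a symmetric positive-definite system is sketched below to illustrate why such methods are attractive at scale: inside the loop they need only matrix-vector products and vector updates, with no inner products and hence fewer global reductions. This is a generic textbook sketch, not the P-CSI solver itself, and it assumes eigenvalue bounds are known.

        import numpy as np

        def chebyshev_solve(A, b, lam_min, lam_max, n_iter=200, x0=None):
            """Chebyshev iteration for SPD A with eigenvalue bounds [lam_min, lam_max]."""
            theta = 0.5 * (lam_max + lam_min)   # centre of the spectrum
            delta = 0.5 * (lam_max - lam_min)   # half-width of the spectrum
            sigma = theta / delta
            x = np.zeros_like(b) if x0 is None else x0.copy()
            r = b - A @ x
            rho_old = 1.0 / sigma
            d = r / theta                       # first step = optimal Richardson step
            for _ in range(n_iter):
                x = x + d
                r = r - A @ d
                rho = 1.0 / (2.0 * sigma - rho_old)
                d = rho * rho_old * d + (2.0 * rho / delta) * r
                rho_old = rho
            return x

        # tiny demo on a 1-D Laplacian-like SPD matrix
        n = 50
        A = (np.diag(2.0 * np.ones(n))
             + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1))
        eigs = np.linalg.eigvalsh(A)
        x = chebyshev_solve(A, np.ones(n), eigs[0], eigs[-1])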

  19. Modeling the depth-sectioning effect in reflection-mode dynamic speckle-field interferometric microscopy

    PubMed Central

    Zhou, Renjie; Jin, Di; Hosseini, Poorya; Singh, Vijay Raj; Kim, Yang-hyo; Kuang, Cuifang; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments have demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction-limited resolution as under structured illumination. However, there has not been a physical model that rigorously describes the speckle imaging process, in particular one explaining the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on diffraction tomography theory and speckle statistics. Using this model, we calculate the system response function, which is used to further obtain the depth resolution limit in reflection-mode DSIM. The theoretically calculated depth resolution limit is in excellent agreement with experimental results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better design of such systems for depth-resolved measurements in biological cells and tissues. PMID:28085800

  20. A new scheme for the parameterization of the turbulent planetary boundary layer in the GLAS fourth order GCM

    NASA Technical Reports Server (NTRS)

    Helfand, H. M.

    1985-01-01

    Methods being used to increase the horizontal and vertical resolution and to implement more sophisticated parameterization schemes for general circulation models (GCMs) run on newer, more powerful computers are described. Attention is focused on the NASA Goddard Laboratory for Atmospheric Sciences (GLAS) fourth-order GCM. A new planetary boundary layer (PBL) model has been developed which features explicit resolution of two or more layers. Numerical models are presented for parameterizing the turbulent vertical heat, momentum and moisture fluxes at the earth's surface and between the layers of the PBL model. An extended Monin-Obukhov similarity scheme is applied to express the relationships between the lowest levels of the GCM and the surface fluxes. On-line weather prediction experiments are to be run to test the effects of the higher resolution thereby obtained for dynamic atmospheric processes.
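
    For reference, the Monin-Obukhov similarity scaling that such surface-flux schemes build on relates the surface fluxes to the mean profiles through the Obukhov length (standard definitions, not the specific extended scheme of the paper):

        \[
        L \;=\; -\,\frac{u_{*}^{3}\,\bar{\theta}_{v}}{\kappa\, g\, \overline{\bigl(w'\theta_{v}'\bigr)}_{s}},
        \qquad
        \frac{\kappa z}{u_{*}}\,\frac{\partial \bar{u}}{\partial z} \;=\; \phi_{m}\!\left(\frac{z}{L}\right),
        \]

    where u_* is the friction velocity, θ_v the virtual potential temperature, κ the von Kármán constant, and φ_m an empirical stability function of z/L.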

  1. Dust devil characteristics and associated dust entrainment based on large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Klose, Martina; Kwidzinski, Nick; Shao, Yaping

    2015-04-01

    The characteristics of dust devils, such as occurrence frequency, lifetime, size, and intensity, are usually inferred from in situ field measurements and remote sensing. Numerical models, e.g. large-eddy simulation (LES) models, have also been established as a tool to investigate dust devils and their structures. However, most LES models do not contain a dust module. Here, we present results from simulations using the WRF-LES model coupled to the convective turbulent dust emission (CTDE) scheme of Klose et al. (2014). The scheme describes the stochastic process of aerodynamic dust entrainment in the absence of saltation and therefore allows for dust emission even below the threshold friction velocity for saltation. Numerical experiments have been conducted for different atmospheric stability and background wind conditions at 10 m horizontal resolution. A dust devil tracking algorithm is used to identify dust devils in the simulation results. The detected dust devils are statistically analyzed with regard to, e.g., radius, pressure drop, lifetime, and turbulent wind speeds. An additional simulation with higher horizontal resolution (2 m) is conducted for conditions that are especially favorable for dust devil development, i.e. unstable atmospheric stratification and weak mean winds. The higher resolution enables the identification of smaller dust devils and a more detailed structure analysis. Dust emission fluxes, dust concentrations, and dust mass budgets are calculated from the simulations. The results are compared to field observations reported in the literature.

  2. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is able to generate fine-scale soil moisture fields across large extents based on coarse-scale observations. A likely application of this approach is in generating fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
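
    The nudging term described above has the generic continuous-data-assimilation form shown below (schematic only; I_h denotes an interpolant of the coarse observations onto the model grid, and F(u) stands for the original model dynamics, here the HYDRUS vadose-zone model):

        \[
        \frac{\partial u}{\partial t} \;=\; F(u) \;-\; \mu\,\bigl(I_h(u) - I_h(u_{\mathrm{obs}})\bigr),
        \]

    where μ > 0 is the nudging (relaxation) coefficient that pulls the fine-scale solution towards the coarse-scale observations without prescribing its fine-scale structure.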

  3. Validation of New Wind Resource Maps

    NASA Astrophysics Data System (ADS)

    Elliott, D.; Schwartz, M.

    2002-05-01

    The National Renewable Energy Laboratory (NREL) recently led a project to validate updated state wind resource maps for the northwestern United States produced by a private U.S. company, TrueWind Solutions (TWS). The independent validation project was a cooperative activity among NREL, TWS, and meteorological consultants. The independent validation concept originated at a May 2001 technical workshop held at NREL to discuss updating the Wind Energy Resource Atlas of the United States. Part of the workshop, which included more than 20 attendees from the wind resource mapping and consulting community, was dedicated to reviewing the latest techniques for wind resource assessment. It became clear that the numerical modeling approach to wind resource mapping was rapidly gaining ground as a preferred technique and, if the trend continues, it will soon become the most widely used technique around the world. The numerical modeling approach is relatively fast compared to older mapping methods and, in theory, should be quite accurate because it directly estimates the magnitude of the boundary-layer processes that affect the wind resource of a particular location. Numerical modeling output combined with high-resolution terrain data can produce useful wind resource information at a resolution of 1 km or finer. However, because the use of the numerical modeling approach is new (the last 3-5 years) and relatively unproven, meteorological consultants question the accuracy of the approach. It was clear that new state or regional wind maps produced by this method would have to undergo independent validation before the results would be accepted by the wind energy community and developers.

  4. Sub-radian-accuracy gravitational waveforms of coalescing binary neutron stars in numerical relativity

    NASA Astrophysics Data System (ADS)

    Kiuchi, Kenta; Kawaguchi, Kyohei; Kyutoku, Koutarou; Sekiguchi, Yuichiro; Shibata, Masaru; Taniguchi, Keisuke

    2017-10-01

    Extending our previous studies, we perform high-resolution simulations of inspiraling binary neutron stars in numerical relativity. We carry out a thorough convergence study within our currently available computational resources, with the smallest grid spacing of ≈63-86 m for neutron-star radii of 10.9-13.7 km. The estimated total error in the gravitational-wave phase is of order 0.1 rad for a total phase of ≳210 rad over the last ~15-16 inspiral orbits. We then compare the waveforms (without resolution extrapolation) with those calculated by the latest effective-one-body formalism (the tidal SEOBv2 model, referred to as the TEOB model). We find that for all of our binary neutron star models, the waveforms calculated by the TEOB formalism agree with the numerical-relativity waveforms up to ≈3 ms before the peak of the gravitational-wave amplitude is reached: for this late inspiral stage, the total phase error is ≲0.1 rad. Although the gravitational waveforms have an inspiral-type feature for the last ~3 ms, this stage cannot be well reproduced by the current TEOB formalism, in particular for neutron stars with large tidal deformability (i.e., larger radii). The reason for this is discussed.

  5. A numerical model for the whole Wadden Sea: results on the hydrodynamics

    NASA Astrophysics Data System (ADS)

    Gräwe, Ulf; Duran-Matute, Matias; Gerkema, Theo; Flöser, Götz; Burchard, Hans

    2015-04-01

    A high-resolution baroclinic three-dimensional numerical model for the entire Wadden Sea of the German Bight in the southern North Sea is first validated against field data for surface elevation, current velocity, temperature and salinity at selected stations, and is then used to calculate fluxes of volume, heat and salt inside the Wadden Sea and the exchange between the Wadden Sea and the adjacent North Sea through the major tidal inlets. The General Estuarine Transport Model (GETM) simulates the reference years 2009-2011. The numerical grid has a resolution of 200 m x 200 m and 30 adaptive vertical layers. It is the final stage of a multi-nested setup, starting from the North Atlantic. The atmospheric forcing is taken from the operational forecast of the German Weather Service. Additionally, the freshwater discharge of 23 local rivers and creeks is included. For validation, we use observations from a ship of opportunity measuring sea surface properties, tidal gauge stations, high-frequency salinity records, and volume transport estimates for the Marsdiep and Spiekeroog inlets. Finally, the estuarine overturning circulation in three tidal gullies is quantified. Regional differences between the gullies are assessed and drivers of the estuarine circulation are identified. Moreover, we give a consistent estimate of the tidal prisms for all tidal inlets in the entire Wadden Sea.

  6. Assessment of temperature and precipitation over Mediterranean Area and Black Sea with non hydrostatic high resolution regional climate model

    NASA Astrophysics Data System (ADS)

    Mercogliano, P.; Montesarchio, M.; Zollo, A.; Bucchignani, E.

    2012-12-01

    In the framework of the Italian GEMINA Project (the program of expansion and development of the Euro-Mediterranean Center for Climate Change, CMCC), high-resolution climate simulations have been performed with the aim of furthering knowledge in the field of climate variability at the regional scale, its causes and its impacts. CMCC is a non-profit centre whose aims are the promotion and coordination of research and scientific activities in the field of climate change. In this work, we show results of numerical simulations performed over a very wide area (13°W-46°E; 29°N-56°N), which includes the whole Mediterranean Sea, at a spatial resolution of 14 km, using the regional climate model COSMO-CLM. COSMO-CLM is a non-hydrostatic model for the simulation of atmospheric processes, developed by the German Weather Service (DWD) for weather forecasting and subsequently extended by the CLM-Community for climate applications. It is the only documented numerical model system in Europe designed for spatial resolutions down to 1 km, with a range of applicability encompassing operational numerical weather prediction, regional climate modelling, the dispersion of trace gases and aerosols, and idealised studies, and it is applicable in all regions of the world, driven by a wide range of available global climate and NWP models. Different reasons justify the development of a regional model. The first is the increasing number of studies in the literature asserting that regional models can also provide a more detailed description of climate extremes, which are often more important than mean values for natural and human systems. The second is that high-resolution modelling is well suited to providing information for impact assessment studies. At CMCC, regional climate modelling is part of an integrated simulation system and has been used in different European and African projects to provide qualitative and quantitative evaluations of hydrogeological and public health risks. A simulation covering the period 1971-2000 and driven by ERA40 reanalysis has been performed in order to assess the capability of the model to reproduce the present climate with "perfect boundary conditions". A comparison with the E-OBS dataset, in terms of 2-metre temperature and precipitation, will be shown and discussed, in order to analyze the capability of the model to simulate the main features of the observed climate over a wide area at high spatial resolution. Then, a comparison between the results of COSMO-CLM driven by the global model CMCC-MED (whose atmospheric component is ECHAM5) and by ERA40 will be provided, to characterize the errors induced by the global model. Finally, climate projections for the examined area for the 21st century, under the RCP4.5 emission scenario, will be provided. In this work special emphasis will be placed on the capability to reproduce not only the average climate patterns but also the extremes of the present and future climate, in terms of temperature, precipitation and wind.

  7. Computer programs to assist in high resolution thermal denaturation and circular dichroism studies on nucleic acids

    PubMed Central

    Goodman, Thomas C.; Hardies, Stephen C.; Cortez, Carlos; Hillen, Wolfgang

    1981-01-01

    Computer programs are described that direct the collection, processing, and graphical display of numerical data obtained from high resolution thermal denaturation (1-3) and circular dichroism (4) studies. Besides these specific applications, the programs may also be useful, either directly or as programming models, in other types of spectrophotometric studies employing computers, programming languages, or instruments similar to those described here (see Materials and Methods). PMID:7335498

  8. A characteristics-based method for solving the transport equation and its application to the process of mantle differentiation and continental root growth

    NASA Astrophysics Data System (ADS)

    de Smet, Jeroen H.; van den Berg, Arie P.; Vlaar, Nico J.; Yuen, David A.

    2000-03-01

    Purely advective transport of composition is of major importance in the Geosciences, and efficient and accurate solution methods are needed. A characteristics-based method is used to solve the transport equation. We employ a new hybrid interpolation scheme, which allows for the tuning of stability and accuracy through a threshold parameter ε_th. Stability is established by bilinear interpolation, and bicubic splines are used to maintain accuracy. With this scheme, numerical instabilities can be suppressed by allowing numerical diffusion to act in time and locally in space. The scheme can be applied efficiently for preliminary modelling purposes, which can then be followed by detailed high-resolution experiments. First, the principal effects of this hybrid interpolation method are illustrated and some tests are presented for numerical solutions of the transport equation. Second, we illustrate that this approach works successfully for a previously developed continental evolution model of the convecting upper mantle. In this model the transport equation contains a source term, which describes the melt production in pressure-released partial melting, and a characteristic phenomenon of small-scale melting diapirs is observed (De Smet et al. 1998, 1999). High-resolution experiments with grid cells down to 700 m horizontally and 515 m vertically result in highly detailed observations of the diapiric melting phenomenon.
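
    A 1-D sketch of a characteristics-based (semi-Lagrangian) advection step with a hybrid linear/cubic interpolation is given below; a simple threshold decides between the stable (linear) and accurate (cubic spline) interpolant. The blending criterion used here is an invented stand-in for the paper's ε_th-based scheme, and the grid, velocity and tracer profile are made up for the example.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def semi_lagrangian_step(x, c, u, dt, eps_th=0.1):
            """Advect concentration c(x) with velocity u over one time step by
            tracing characteristics back and interpolating at the departure points."""
            x_dep = x - u * dt                              # departure points of the characteristics
            lin = np.interp(x_dep, x, c)                    # stable (linear) interpolant
            cub = CubicSpline(x, c)(x_dep)                  # accurate (cubic spline) interpolant
            # crude local-smoothness indicator: fall back to linear where curvature is large
            curvature = np.abs(np.gradient(np.gradient(c, x), x))
            use_linear = curvature > eps_th * (curvature.max() + 1e-30)
            return np.where(use_linear, lin, cub)

        x = np.linspace(0.0, 1.0, 401)
        c0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)      # a sharp slug of tracer
        c1 = semi_lagrangian_step(x, c0, u=0.5, dt=0.0005)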

  9. Modeling Stokes flow in real pore geometries derived by high resolution micro CT imaging

    NASA Astrophysics Data System (ADS)

    Halisch, M.; Müller, C.

    2012-04-01

    Numerical modeling of rock properties now forms an important part of modern petrophysics. In essence, equivalent rock models are used to describe and assess specific properties and phenomena, such as fluid transport or complex electrical properties. In recent years, non-destructive X-ray computed tomography has become more and more important - not only for taking a quick, three-dimensional look into rock samples, but also for gaining access to in-situ sample information for highly accurate modeling. Thanks to the very high resolution of current 3D CT data sets (micron to sub-micron range), even very small structures and sample features - e.g. micro-porosity - can be visualized and used for numerical models of very high accuracy. Particular care is needed, however, before numerical modeling can take place. Inappropriate filtering (e.g. an improper type of filter or a wrong kernel) may lead to significant corruption of the spatial sample structure and/or of the sample or void-space volume. Because of these difficulties, small-scale mineral and pore-space textures in particular are very often lost, and valuable in-situ information is erased. Segmentation of important sample features - porosity as well as rock matrix - based upon grayscale values strongly depends on the scan quality and on the experience of the application engineer. If the threshold for matrix-porosity separation is set too low, porosity is quickly underestimated (all the more so due to restrictions of scanning resolution). Conversely, a threshold that is too high overestimates porosity, and small void-space features as well as interfaces are changed and falsified. Image-based phase separation in close combination with "conventional" analytics, such as scanning electron microscopy or thin sectioning, greatly increases the reliability of this preliminary work. For segmentation and quantification purposes, a special CT imaging and processing software (Avizo Fire) has been used. With this tool, 3D rock data can be assessed and interpreted by petrophysical means. Furthermore, pore structures can be segmented directly and hence used for a so-called image-based modeling approach. The XLabHydro module provides a finite-volume solver for the direct assessment of Stokes flow (incompressible fluid, constant dynamic viscosity, stationary conditions and laminar flow) in real pore geometries. Pore-network extraction and numerical modeling with standard FE or lattice-Boltzmann solvers are also possible. By using the achieved voxel resolution as the smallest node distance, fluid flow properties can be analyzed even in very small sample structures, and hence with very high accuracy, including their interaction with larger parts of the pore network. The results so derived, in combination with a direct 3D visualization within the structures, offer new insights into and understanding of meso- and microscopic pore-space phenomena.
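
    Under the assumptions listed above (incompressible fluid, constant dynamic viscosity, stationary conditions, laminar flow), the equations solved in the pore space are the Stokes equations:

        \[
        \mu \nabla^{2}\mathbf{u} \;=\; \nabla p,
        \qquad
        \nabla\cdot\mathbf{u} \;=\; 0,
        \]

    with no-slip conditions on the grain surfaces. One common way of post-processing such simulations is to recover the permeability from Darcy's law, k = μ⟨u⟩L/Δp, where ⟨u⟩ is the volume-averaged (Darcy) flux, L the sample length, and Δp the applied pressure drop.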

  10. Mesoscale Numerical Simulations of the IAS Circulation

    NASA Astrophysics Data System (ADS)

    Mooers, C. N.; Ko, D.

    2008-05-01

    Real-time nowcasts and forecasts of the Intra-Americas Sea (IAS) circulation have been made for several years with mesoscale resolution using the Navy Coastal Ocean Model (NCOM) implemented for the IAS. The system, commonly called IASNFS, is driven by the lower-resolution Global NCOM on the open boundaries, synoptic atmospheric forcing from the Navy Operational Global Atmospheric Prediction System (NOGAPS), and assimilated satellite-derived sea surface height anomalies and sea surface temperatures. Here, examples of the model output are presented, e.g., Gulf of Mexico Loop Current eddy-shedding events and the meandering Caribbean Current jet with its associated eddies. Overall, IASNFS is ready for further analysis, application to a variety of studies, and downscaling to even higher-resolution shelf models. Its output fields are available online through NOAA's National Coastal Data Development Center (NCDDC), located at the Stennis Space Center.

  11. Modeling the spatio-temporal variability in subsurface thermal regimes across a low-relief polygonal tundra landscape: Modeling Archive

    DOE Data Explorer

    Kumar, Jitendra; Collier, Nathan; Bisht, Gautam; Mills, Richard T.; Thornton, Peter E.; Iversen, Colleen M.; Romanovsky, Vladimir

    2016-01-27

    This Modeling Archive is in support of an NGEE Arctic discussion paper under review and available at http://www.the-cryosphere-discuss.net/tc-2016-29/. Vast carbon stocks stored in the permafrost soils of Arctic tundra are at risk of release to the atmosphere under a warming climate. Ice-wedge polygons in the low-gradient polygonal tundra create a complex mosaic of microtopographic features. The microtopography plays a critical role in regulating the fine-scale variability in thermal and hydrological regimes in the polygonal tundra landscape underlain by continuous permafrost. Modeling the thermal regimes of this sensitive ecosystem is essential for understanding the landscape behaviour under the current as well as a changing climate. We present here an end-to-end effort for high resolution numerical modeling of thermal hydrology at real-world field sites, utilizing the best available data to characterize and parameterize the models. We develop approaches to model the thermal hydrology of polygonal tundra and apply them at four study sites at Barrow, Alaska, spanning low-centered, transitional, and high-centered polygons and representative of the broader polygonal tundra landscape. A multi-phase subsurface thermal hydrology model (PFLOTRAN) was developed and applied to study the thermal regimes at the four sites. Using a high resolution LiDAR DEM, microtopographic features of the landscape were characterized and represented in the high resolution model mesh. The best available soil data from field observations and the literature were used to represent the complex heterogeneous subsurface in the numerical model. This data collection provides the complete set of input files, forcing data sets and computational meshes for simulations using PFLOTRAN for the four sites at the Barrow Environmental Observatory. It also documents the complete computational workflow for this modeling study to allow verification, reproducibility and follow-up studies.

  12. Continuous and Discrete Structured Population Models with Applications to Epidemiology and Marine Mammals

    NASA Astrophysics Data System (ADS)

    Tang, Tingting

    In this dissertation, we develop structured population models to examine how changes in the environment affect population processes. In Chapter 2, we develop a general continuous-time size-structured model describing a susceptible-infected (SI) population coupled with the environment. This model applies to problems arising in ecology, epidemiology, and cell biology. The model consists of a system of quasilinear hyperbolic partial differential equations coupled with a system of nonlinear ordinary differential equations that represent the environment. We develop a second-order high resolution finite difference scheme to solve the model numerically. Convergence of this scheme to a weak solution with bounded total variation is proved. We numerically compare the second-order high resolution scheme with a first-order finite difference scheme; the higher order of convergence and the high resolution property are observed in the second-order scheme. In addition, we apply our model to a multi-host wildlife disease problem, and questions regarding the impact of the initial population structure and the transition rates within each host are explored numerically. In Chapter 3, we use a stage-structured matrix model for a wildlife population to study the recovery process of the population after an environmental disturbance. We focus on the time it takes for the population to recover to its pre-event level and develop general formulas to calculate the sensitivity or elasticity of the recovery time to changes in the initial population distribution, the vital rates, and the event severity. Our results suggest that the recovery time is independent of the initial population size but is sensitive to the initial population structure. Moreover, it is more sensitive to the proportional reduction in the vital rates caused by the catastrophic event than to the duration of the event's impact. We present potential applications of the model to amphibian population dynamics and to the recovery of a plant population. In addition, we explore in detail the application of the model to the sperm whale population in the Gulf of Mexico after the Deepwater Horizon oil spill. In Chapter 4, we summarize the results from Chapters 2 and 3 and explore some further avenues of our research.
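    A hedged sketch of the recovery-time idea from Chapter 3 (the projection matrix, initial distribution and disturbances below are hypothetical, not the dissertation's parameters): the time to regain the pre-event abundance does not change when the whole population is scaled, but it does change when the disturbance hits the stages unevenly.

```python
# Hedged sketch: recovery time of a stage-structured population after a disturbance.
import numpy as np

A = np.array([[0.0, 1.5, 2.0],     # stage-structured projection matrix (fecundities
              [0.4, 0.0, 0.0],     # on the first row, survival/transition below)
              [0.0, 0.6, 0.8]])

def recovery_time(A, n0, reduction, max_steps=500):
    """Steps until total abundance regains its pre-disturbance level."""
    pre_event = n0.sum()
    n = n0 * (1.0 - reduction)      # stage-specific proportional reduction
    for t in range(1, max_steps + 1):
        n = A @ n
        if n.sum() >= pre_event:
            return t
    return np.inf

n0 = np.array([100.0, 50.0, 25.0])
print(recovery_time(A, n0, reduction=np.array([0.5, 0.5, 0.5])))        # uniform event
print(recovery_time(A, 10 * n0, reduction=np.array([0.5, 0.5, 0.5])))   # same, larger N
print(recovery_time(A, n0, reduction=np.array([0.1, 0.1, 0.9])))        # hits adults hardest
```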

  13. Baroclinic stabilization effect of the Atlantic-Arctic water exchange simulated by the eddy-permitting ocean model and global atmosphere-ocean model

    NASA Astrophysics Data System (ADS)

    Moshonkin, Sergey; Bagno, Alexey; Gritsun, Andrey; Gusev, Anatoly

    2017-04-01

    Numerical experiments were performed with the global atmosphere-ocean model INMCM5 (the version prepared for the international CMIP6 project, with an atmospheric resolution of 2°x1.5° and 21 levels) and with a three-dimensional, free-surface, sigma-coordinate, eddy-permitting ocean circulation model of the Atlantic (from 30°S), Arctic and Bering Sea domain at 0.25° resolution (the Institute of Numerical Mathematics Ocean Model, INMOM). The spatial resolution of the INMCM5 oceanic component is 0.5°x0.25°. Both models have 40 s-levels in the ocean. The INMCM5 simulations were first spun up to a stable state of the climate system and the model was then run for 180 years. In the INMOM experiment, CORE-II data for 1948-2009 were used. To compare the results of the two models, we analyzed the evolution of density and velocity anomalies in the 0-300 m active ocean layer near Fram Strait in the Greenland Sea, where the oceanic cyclonic circulation influences the Atlantic-Arctic water exchange. Anomalies were computed with the climatic seasonal cycle removed, for time scales shorter than 30 years. Singular Value Decomposition (SVD) analysis was applied to the density-velocity anomalies with time lags from minus one to six months. Both models give the same physically robust result. They reveal that changes of the heat and salt transports by the West Spitsbergen and East Greenland currents, caused by atmospheric forcing, produce baroclinic modes of velocity anomalies in the 0-300 m layer, thereby stabilizing the ocean response to the atmospheric forcing and keeping the water exchange between the North Atlantic and the Arctic Ocean near its climatological level. The first SVD mode of the density-velocity anomalies is responsible for the variability of the cyclonic circulation. The second and third SVD modes stabilize the existing ocean circulation by generating anticyclonic vorticity. In both the INMCM5 and INMOM results, the second and third SVD modes contribute 35% of the total variance of the density anomalies and 16-18% of the total variance of the velocity anomalies. The contribution of the first SVD mode to the total variance of the velocity anomalies is 50% for INMCM5 but only 19% for INMOM. The research was done at the INM RAS. The INMOM model was supported by the Russian Foundation for Basic Research (grant №16-05-00534), and the INMCM model was supported by the Russian Science Foundation (grant №14-27-00126).
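    As a hedged illustration of the coupled-mode analysis mentioned above (purely synthetic anomaly fields; the sizes and the lag handling are invented), an SVD of the density-velocity cross-covariance matrix can be set up as follows:

```python
# Hedged sketch: SVD of the cross-covariance between two anomaly fields with a time lag.
import numpy as np

rng = np.random.default_rng(1)
ntime, npts = 360, 500                     # monthly anomalies, grid points in the box
density = rng.standard_normal((ntime, npts))
velocity = rng.standard_normal((ntime, npts))

def svd_coupled_modes(a, b, lag=0):
    """Leading coupled modes of fields a and b, with b lagged by `lag` time steps."""
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    cov = a.T @ b / (a.shape[0] - 1)       # cross-covariance matrix
    u, s, vt = np.linalg.svd(cov, full_matrices=False)
    frac = s**2 / np.sum(s**2)             # squared-covariance fraction per mode
    return u, vt.T, frac

u, v, frac = svd_coupled_modes(density, velocity, lag=1)
print("squared covariance fraction of first three modes:", frac[:3])
```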

  14. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
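    A hedged illustration of the central idea (invented block size and elevations, not the authors' implementation): a cell-block average smears out a narrow channel, whereas a cell-face statistic taken along the shared interface keeps the channel hydraulically visible at the coarse scale.

```python
# Hedged sketch: cell-block versus cell-face upscaling of a fine elevation grid.
import numpy as np

def upscale(elev, block):
    """Return block-averaged elevations and the minimum elevation on each east face."""
    ny, nx = elev.shape
    NY, NX = ny // block, nx // block
    blocks = elev[:NY * block, :NX * block].reshape(NY, block, NX, block)
    cell_block = blocks.mean(axis=(1, 3))
    # East-facing interface of each coarse cell: last fine column of the block.
    face_min = blocks[:, :, :, -1].min(axis=1)
    return cell_block, face_min

fine = np.full((8, 8), 10.0)
fine[3, :] = 2.0                     # a one-cell-wide channel crossing the domain
cell_block, face_min = upscale(fine, block=4)
print(cell_block)                    # channel smeared into the block averages (8.0 vs 10.0)
print(face_min)                      # channel preserved on the faces it crosses (2.0)
```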

  15. Climate simulations and projections with a super-parameterized climate model

    DOE PAGES

    Stan, Cristiana; Xu, Li

    2014-07-01

    The mean climate and its variability are analyzed in a suite of numerical experiments with a fully coupled general circulation model in which subgrid-scale moist convection is explicitly represented through embedded 2D cloud-system resolving models. Control simulations forced by the present day, fixed atmospheric carbon dioxide concentration are conducted using two horizontal resolutions and validated against observations and reanalyses. The mean state simulated by the higher resolution configuration has smaller biases. Climate variability also shows some sensitivity to resolution, though not as uniformly as the mean state. The interannual and seasonal variability are better represented in the simulation at lower resolution, whereas the subseasonal variability is more accurate in the higher resolution simulation. The equilibrium climate sensitivity of the model is estimated from a simulation forced by an abrupt quadrupling of the atmospheric carbon dioxide concentration. The equilibrium climate sensitivity temperature of the model is 2.77 °C, and this value is slightly smaller than the mean value (3.37 °C) of contemporary models using conventional representations of cloud processes. As a result, the climate change simulation forced by the representative concentration pathway 8.5 scenario projects an increase in the frequency of severe droughts over most of North America.

  16. The structure and rainfall features of Tropical Cyclone Rammasun (2002)

    NASA Astrophysics Data System (ADS)

    Ma, Leiming; Duan, Yihong; Zhu, Yongti

    2004-12-01

    Tropical Rainfall Measuring Mission (TRMM) data [TRMM Microwave Imager/Precipitation Radar/Visible and Infrared Scanner (TMI/PR/VIRS)] and a numerical model are used to investigate the structure and rainfall features of Tropical Cyclone (TC) Rammasun (2002). Based on the analysis of the TRMM data, diagnosed together with NCEP/AVN [Aviation (global model)] analysis data, some typical features of the TC structure and rainfall are identified in a preliminary way. Because the TRMM data are limited in time resolution and coverage, the snapshots observed by TRMM at a few instants cannot be taken as representative of the whole TC life cycle, so the full picture has to be reproduced by a high-quality numerical model. To better understand the structure and rainfall features of TC Rammasun, a numerical simulation is therefore carried out with the mesoscale model MM5 and validated against the TRMM data and the NCEP/AVN analysis.

  17. Singular boundary method for global gravity field modelling

    NASA Astrophysics Data System (ADS)

    Cunderlik, Robert

    2014-05-01

    The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid the singular numerical integration as well as the mesh generation of the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS, its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we apply SBM to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The results are compared with the numerical solutions obtained by MFS and the direct BEM, indicating the efficiency of all three methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the disturbing gravity tensor observed by the GOCE satellite mission. Determining the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
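    For orientation, a hedged sketch of the plain MFS that SBM improves upon (a toy 2D Dirichlet problem for the Laplace equation, not the geopotential application): source points are placed on a fictitious circle outside the domain, whereas SBM would keep them on the real boundary via the origin intensity factors.

```python
# Hedged sketch: method of fundamental solutions for the 2D Laplace equation.
import numpy as np

n = 48                                        # collocation points on the real boundary
theta = 2 * np.pi * np.arange(n) / n
xb = np.column_stack([np.cos(theta), np.sin(theta)])   # unit circle (real boundary)
xs = 1.5 * xb                                          # fictitious source circle, R = 1.5

def G(p, q):
    """Fundamental solution of the 2D Laplace equation between point sets p and q."""
    r = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return -np.log(r) / (2.0 * np.pi)

exact = lambda x: x[:, 0] ** 2 - x[:, 1] ** 2  # harmonic test function x^2 - y^2
coeff, *_ = np.linalg.lstsq(G(xb, xs), exact(xb), rcond=None)

x_in = np.array([[0.3, 0.2], [0.0, 0.5]])      # interior evaluation points
u = G(x_in, xs) @ coeff
print(np.max(np.abs(u - exact(x_in))))         # small residual inside the domain
```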

  18. A priori and a posteriori analysis of the flow around a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Cimarelli, A.; Leonforte, A.; Franciolini, M.; De Angelis, E.; Angeli, D.; Crivellini, A.

    2017-11-01

    The definition of a correct mesh resolution and modelling approach for the Large Eddy Simulation (LES) of the flow around a rectangular cylinder is recognized to be a rather elusive problem, as shown by the large scatter of LES results in the literature. In the present work, we aim at assessing this issue by performing an a priori analysis of Direct Numerical Simulation (DNS) data of the flow. This approach allows us to measure the ability of the LES field to reproduce the main flow features as a function of the resolution employed. Based on these results, we define a mesh resolution which balances the competing needs of reducing the computational cost and of adequately resolving the flow dynamics. The effectiveness of the proposed resolution criterion is then verified by means of an a posteriori analysis of actual LES data obtained with the implicit LES approach given by the numerical properties of the Discontinuous Galerkin spatial discretization technique. The present work represents a first step towards a best practice for LES of separating and reattaching flows.

  19. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is set in terms of the frictional scaling, Δx+ and Δz+, using strongly anisotropic cells with the first off-wall grid point at Δy+ ≈ 1, thus aiming for wall-resolved LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with a quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and to varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and choices of SGS model.
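    A hedged sketch of a regression-based gPCE surrogate (synthetic error data and invented parameter ranges, not the study's simulations): a Legendre polynomial metamodel of an error measure over the (Δx+, Δz+) range that can then be queried without running further LES.

```python
# Hedged sketch: Legendre polynomial chaos metamodel fitted by least squares.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(2)

def to_unit(v, lo, hi):
    """Map a physical parameter to the [-1, 1] interval used by Legendre polynomials."""
    return 2.0 * (v - lo) / (hi - lo) - 1.0

# Training "simulations": assume dx+ in [12, 100], dz+ in [6, 50], and a synthetic
# error that grows with both spacings (stands in for, e.g., the friction-velocity error).
dxp = rng.uniform(12, 100, 40)
dzp = rng.uniform(6, 50, 40)
err = 0.002 * dxp + 0.01 * dzp + 0.0004 * dxp * dzp + rng.normal(0, 0.05, 40)

deg = (3, 3)
V = L.legvander2d(to_unit(dxp, 12, 100), to_unit(dzp, 6, 50), deg)
coef, *_ = np.linalg.lstsq(V, err, rcond=None)
C = coef.reshape(deg[0] + 1, deg[1] + 1)

# Query the metamodel at a new grid resolution instead of running another LES.
print(L.legval2d(to_unit(40.0, 12, 100), to_unit(20.0, 6, 50), C))
```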

  20. A NON-OSCILLATORY SCHEME FOR OPEN CHANNEL FLOWS. (R825200)

    EPA Science Inventory

    In modeling shocks in open channel flows, the traditional finite difference schemes become inefficient and warrant special numerical treatment for smooth computations. This paper provides a general introduction to the non-oscillatory high-resolution methodology, coupled with the ...

  1. Monitoring Marine Weather Systems Using Quikscat and TRMM Data

    NASA Technical Reports Server (NTRS)

    Liu, W.; Tang, W.; Datta, A.; Hsu, C.

    1999-01-01

    We neither understand nor are able to predict marine storms, particularly tropical cyclones, sufficiently well, because ground-based measurements are sparse and operational numerical weather prediction models have neither sufficient spatial resolution nor accurate parameterizations of the physics.

  2. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced, for the momentum flux, the heat flux, the Lorentz force and the magnetic induction. The model was previously used for convection-driven dynamos in a rotating plane layer and in a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonics truncation of L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification by comparing these simulations, and the role of the small-scale fields in shaping the large-scale fields through the SGS terms in the LES.

  3. The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test

    DOE PAGES

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures us that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  4. The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures us that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  5. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures us that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  6. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, we also explored the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolution (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of the atmospheric circulation from the 2.5° resolution NCEP reanalysis at 6-hourly intervals, with SSTs from the NCEP reanalysis used as lower boundary forcing. The 60 km resolution CCAM run was downscaled to 8 km (Schmidt factor 24.75), and the 8 km simulation was then downscaled to 1 km (Schmidt factor 200) over an area of approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high resolution model runs was to obtain a deeper understanding of the linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  7. A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage

    NASA Astrophysics Data System (ADS)

    Watson, F. E.; Doster, F.

    2017-12-01

    In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security, numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced-complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than fully resolved models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.

  8. Painting models

    NASA Astrophysics Data System (ADS)

    Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.

    2015-12-01

    The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models and show how transforming a numerical model into an interactive painting can both provide insight and solve real-world problems. The cases used as examples include forensic reconstructions, dredging optimization and barrier design. The system can be fed with any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear) and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different time and spatial scales. High resolution numerical models become interactive paintings by exchanging their velocity fields with a high resolution (>=1M cells) image-based flow visualization that runs in an html5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resulting image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz on modest graphics cards. The software is provided as open source software. By using different sources for the drawing one can gain insight into several aspects of the velocity fields. These aspects include not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.

  9. Analysis of Surface Heterogeneity Effects with Mesoscale Terrestrial Modeling Platforms

    NASA Astrophysics Data System (ADS)

    Simmer, C.

    2015-12-01

    An improved understanding of the full variability in the weather and climate system is crucial for reducing the uncertainty in weather forecasting and climate prediction, and for aiding policy makers in developing adaptation and mitigation strategies. An as yet unknown part of the uncertainty in the predictions from numerical models is caused by the neglect of non-resolved land surface heterogeneity and subsurface dynamics and their potential impact on the state of the atmosphere. At the same time, mesoscale numerical models using finer horizontal grid resolution [O(1) km] can suffer from inconsistencies and neglected scale dependencies in ABL parameterizations and from non-resolved effects of integrated surface-subsurface lateral flow at this scale. Our present knowledge suggests large-eddy simulation (LES) as an eventual solution to overcome the inadequacy of the atmospheric physical parameterizations in this transition scale, yet we are constrained by computational resources, memory management and big data when using LES over regional domains. For the present, there is a need for scale-aware parameterizations not only in the atmosphere but also in the land surface and subsurface model components. In this study, we use the recently developed Terrestrial Systems Modeling Platform (TerrSysMP) as a numerical tool to analyze the uncertainty in the simulation of surface exchange fluxes and boundary layer circulations at grid resolutions of the order of 1 km, and explore the sensitivity of the atmospheric boundary layer evolution and convective rainfall processes to land surface heterogeneity.

  10. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    NASA Astrophysics Data System (ADS)

    Wild, Oliver; Prather, Michael J.

    2006-06-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
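    As a hedged back-of-the-envelope check using only the percentages quoted above (and approximate truncation ratios), one can ask whether the near-source production errors shrink by a roughly constant factor per refinement, i.e. geometrically:

```python
# Hedged sketch: apparent convergence order from the quoted regional production errors.
import numpy as np

trunc = np.array([21.0, 42.0, 106.0])       # spectral truncations T21, T42, T106
errors = np.array([0.27, 0.13, 0.05])       # overestimate of near-source production

ratios = errors[:-1] / errors[1:]
orders = np.log(ratios) / np.log(trunc[1:] / trunc[:-1])
print("error reduction factors:", ratios)   # near-constant factor -> geometric convergence
print("apparent convergence order:", orders)
```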

  11. Evaluation and Sensitivity Analysis of an Ocean Model Response to Hurricane Ivan (PREPRINT)

    DTIC Science & Technology

    2009-05-18

    ... analysis of upper-limb meridional overturning circulation interior ocean pathways in the tropical/subtropical Atlantic. In: Interhemispheric Water ... diminishing returns are encountered when either resolution is increased ... Coupled ocean-atmosphere general circulation models have become ... northwest Caribbean Sea and GOM. Evaluation is difficult because ocean general circulation models incorporate a large suite of numerical algorithms ...

  12. On the role of Sea Surface Temperature forcing in the numerical simulation of a Tropical-Like Cyclone event in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Ricchi, Antonio; Marcello Miglietta, Mario; Barbariol, Francesco; Benetazzo, Alvise; Bonaldo, Davide; Falcieri, Francesco M.; Russo, Aniello; Sclavo, Mauro; Carniel, Sandro

    2017-04-01

    Between 19 and 22 January 2014, a baroclinic wave coming from the Atlantic region was cut off over the Strait of Gibraltar. The resulting depression remained active for approximately 80 hours, passing offshore of the North African coast and crossing the Tyrrhenian Sea and the Adriatic Sea before turning south. During the first phase (close to the Balearic Islands) and while passing over the Adriatic, the depression assumed the characteristics of a TLC (Tropical-Like Cyclone). Sea Surface Temperature (SST) is a very important factor for a proper numerical simulation of these events, hence we chose to model this TLC event using the COAWST suite (Coupled Ocean Atmosphere Wave Sediment Transport Modelling System). In the first phase of our work we identified the best model configuration to reproduce the phenomenon, extensively testing the different microphysics and PBL (Planetary Boundary Layer) schemes available in the numerical model WRF (Weather Research and Forecasting). In the second phase, in order to evaluate the impact of the SST, we used the best physical set-up reproducing the phenomenon in terms of intensity, trajectory and timing, with four different ways of prescribing the SST in the model: i) from a spectroradiometer at 8.3 km resolution, updated every six hours; ii) from a dataset provided by "MyOcean" at 1 km resolution, updated every 6 hours; iii) from the COAWST suite run in coupled atmosphere-ocean configuration; iv) from the COAWST suite in fully coupled atmosphere-ocean-wave configuration. The results show the importance of the selected microphysics scheme for correctly reproducing the TLC trajectory, and of the use of high-resolution, high-frequency SST fields, updated every hour, in order to reproduce the diurnal cycle. The coupled numerical runs produce less intense heat fluxes, which in turn result in better TLC trajectories and more realistic timing and intensity when compared with the standalone simulations, even though the latter use a high-resolution SST. Finally, a temporary increase of the mixed-layer depth along the trajectory of the TLC was exhibited by the fully coupled run during the two phases of maximum intensity of the phenomenon, when the wave field is more developed and acts more intensely on the vertical mixing. We will discuss how these results can be improved or further validated in proximity to land by using the satellite information that will become available within the framework of the H2020 CEASELESS project.

  13. Surfzone alongshore advective accelerations: observations and modeling

    NASA Astrophysics Data System (ADS)

    Hansen, J.; Raubenheimer, B.; Elgar, S.

    2014-12-01

    The sources, magnitudes, and impacts of non-linear advective accelerations on alongshore surfzone currents are investigated with observations and a numerical model. Previous numerical modeling results have indicated that advective accelerations are an important contribution to the alongshore force balance, and are required to understand spatial variations in alongshore currents (which may result in spatially variable morphological change). However, most prior observational studies have neglected advective accelerations in the alongshore force balance. Using a numerical model (Delft3D) to predict optimal sensor locations, a dense array of 26 colocated current meters and pressure sensors was deployed between the shoreline and 3-m water depth over a 200 by 115 m region near Duck, NC in fall 2013. The array included 7 cross- and 3 alongshore transects. Here, observational and numerical estimates of the dominant forcing terms in the alongshore balance (pressure and radiation-stress gradients) and the advective acceleration terms will be compared with each other. In addition, the numerical model will be used to examine the force balance, including sources of velocity gradients, at a higher spatial resolution than possible with the instrument array. Preliminary numerical results indicate that at O(10-100 m) alongshore scales, bathymetric variations and the ensuing alongshore variations in the wave field and subsequent forcing are the dominant sources of the modeled velocity gradients and advective accelerations. Additional simulations and analysis of the observations will be presented. Funded by NSF and ASDR&E.

  14. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features control rods that move during the reaction. Therefore, the (piecewise constant) cross-sections are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by an appropriate application of the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-control-rod-position model, while the fine propagator is a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
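    A hedged sketch of the parareal predictor-corrector iteration on a scalar decay equation (not the neutron diffusion solver; the propagators and parameters below are invented): a cheap coarse propagator sweeps sequentially, while the expensive fine propagator can be evaluated on all time slices in parallel at each iteration.

```python
# Hedged sketch: parareal iteration for du/dt = lam * u with implicit-Euler propagators.
import numpy as np

lam, T, N = -5.0, 1.0, 10            # decay rate, time horizon, coarse time slices
dT = T / N
u0 = 1.0

def G(u, dt):                        # coarse propagator: one implicit Euler step
    return u / (1.0 - lam * dt)

def F(u, dt, substeps=100):          # fine propagator: many implicit Euler substeps
    h = dt / substeps
    for _ in range(substeps):
        u = u / (1.0 - lam * h)
    return u

# Initial prediction with the coarse propagator alone.
U = np.empty(N + 1)
U[0] = u0
for n in range(N):
    U[n + 1] = G(U[n], dT)

# Parareal corrections: the fine sweeps are independent and thus parallelizable.
for k in range(5):
    Fvals = np.array([F(U[n], dT) for n in range(N)])   # parallelizable loop
    Gold = np.array([G(U[n], dT) for n in range(N)])
    for n in range(N):                                  # sequential coarse sweep
        U[n + 1] = G(U[n], dT) + Fvals[n] - Gold[n]

exact = u0 * np.exp(lam * T)
print(abs(U[-1] - exact))            # converges toward the fine-solver accuracy
```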

  15. A mixture-energy-consistent six-equation two-phase numerical model for fluids with interfaces, cavitation and evaporation waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw

    2014-02-15

    We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxation terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.

  16. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by the computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack was reduced to several dozen minutes for a simulation period of three months (compared with a day and a half for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme thus proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high resolution modeling.
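    A hedged sketch of the two-step MacCormack scheme on a toy 1D kinematic-wave problem (invented parameters, not the DHSVM implementation): a forward-difference predictor followed by a backward-difference corrector, averaged together.

```python
# Hedged sketch: MacCormack predictor-corrector for dh/dt + dq/dx = r, q = alpha*h**(5/3).
import numpy as np

nx, dx, dt = 200, 1.0, 0.05          # grid cells, cell size [m], time step [s]
alpha, m = 1.0, 5.0 / 3.0            # Manning-type rating parameters (hypothetical)
rain = 1e-4                          # effective rainfall rate [m/s]
h = np.zeros(nx)                     # flow depth [m]

def flux(h):
    return alpha * np.maximum(h, 0.0) ** m

for _ in range(2000):
    q = flux(h)
    # Predictor: forward spatial difference (last cell left unchanged for simplicity).
    hp = h.copy()
    hp[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * rain
    qp = flux(hp)
    # Corrector: backward spatial difference, averaged with the predictor.
    hn = h.copy()
    hn[1:] = 0.5 * (h[1:] + hp[1:] - dt / dx * (qp[1:] - qp[:-1]) + dt * rain)
    hn[0] = 0.0                      # upstream boundary: no inflow
    h = np.maximum(hn, 0.0)

print(h[::40])                       # depth profile building up along the slope
```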

  17. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method; here the basic ENO scheme and Harten's modification using subcell resolution (SR), the ENO/SR scheme, are extended in this way. The other approach is a direct method and a modification of ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell, and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially ENO/SRCD, which produces perfect resolution at the discontinuity.

  18. LITE microscopy: Tilted light-sheet excitation of model organisms offers high resolution and low photobleaching

    PubMed Central

    Gerbich, Therese M.; Rana, Kishan; Suzuki, Aussie; Schaefer, Kristina N.; Heppert, Jennifer K.; Boothby, Thomas C.; Allbritton, Nancy L.; Gladfelter, Amy S.; Maddox, Amy S.

    2018-01-01

    Fluorescence microscopy is a powerful approach for studying subcellular dynamics at high spatiotemporal resolution; however, conventional fluorescence microscopy techniques are light-intensive and introduce unnecessary photodamage. Light-sheet fluorescence microscopy (LSFM) mitigates these problems by selectively illuminating the focal plane of the detection objective by using orthogonal excitation. Orthogonal excitation requires geometries that physically limit the detection objective numerical aperture (NA), thereby limiting both light-gathering efficiency (brightness) and native spatial resolution. We present a novel live-cell LSFM method, lateral interference tilted excitation (LITE), in which a tilted light sheet illuminates the detection objective focal plane without a sterically limiting illumination scheme. LITE is thus compatible with any detection objective, including oil immersion, without an upper NA limit. LITE combines the low photodamage of LSFM with high resolution, high brightness, and coverslip-based objectives. We demonstrate the utility of LITE for imaging animal, fungal, and plant model organisms over many hours at high spatiotemporal resolution. PMID:29490939

  19. A Numerical Model for Trickle Bed Reactors

    NASA Astrophysics Data System (ADS)

    Propp, Richard M.; Colella, Phillip; Crutchfield, William Y.; Day, Marcus S.

    2000-12-01

    Trickle bed reactors are governed by equations of flow in porous media such as Darcy's law and the conservation of mass. Our numerical method for solving these equations is based on a total-velocity splitting, sequential formulation which leads to an implicit pressure equation and a semi-implicit mass conservation equation. We use high-resolution finite-difference methods to discretize these equations. Our solution scheme extends previous work in modeling porous media flows in two ways. First, we incorporate physical effects due to capillary pressure, a nonlinear inlet boundary condition, spatial porosity variations, and inertial effects on phase mobilities. In particular, capillary forces introduce a parabolic component into the recast evolution equation, and the inertial effects give rise to hyperbolic nonconvexity. Second, we introduce a modification of the slope-limiting algorithm to prevent our numerical method from producing spurious shocks. We present a numerical algorithm for accommodating these difficulties, show the algorithm is second-order accurate, and demonstrate its performance on a number of simplified problems relevant to trickle bed reactor modeling.

  20. Energy Conservation and Conversion in NIMROD Computations of Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Maddox, J. A.; Sovinec, C. R.

    2017-10-01

    Previous work modeling magnetic relaxation during non-inductive start-up at the Pegasus spherical tokamak indicates an order-of-magnitude gap between the measured experimental temperature and the simulated temperature in NIMROD. Potential causes of the plasma temperature gap include insufficient transport modeling, too low a modeled injector power input, and numerical loss of energy, as energy is not algorithmically conserved in NIMROD simulations. Simple 2D nonlinear MHD simulations explore numerical energy conservation discrepancies in NIMROD, because understanding numerical loss of energy is fundamental to addressing the physical problems behind the other potential causes of energy loss. Evolution of these configurations induces magnetic reconnection, which transfers magnetic energy to heat and kinetic energy. The kinetic energy is eventually damped, so magnetic energy loss must correspond to an increase in internal energy. Results in the 2D geometries indicate that numerical energy loss during reconnection depends on the temporal resolution of the dynamics. Work supported by the U.S. Department of Energy through a subcontract from the Plasma Science and Innovation Center.

  1. Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution

    NASA Astrophysics Data System (ADS)

    Wild, O.; Prather, M. J.

    2005-12-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63 and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.

  2. A deterministic model of electron transport for electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Bünger, J.; Richter, S.; Torrilhon, M.

    2018-01-01

    Within the last decades, significant improvements in the spatial resolution of electron probe microanalysis (EPMA) have been obtained through instrumental enhancements. In contrast, the quantification procedures have essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (the M1 model) for the quantification procedure, based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials using a deterministic approach to solving the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure and present first numerical results for three different test cases (homogeneous, thin film and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M1 model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M1 model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.

  3. A Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF)

    NASA Astrophysics Data System (ADS)

    Trotta, Francesco; Fenu, Elisa; Pinardi, Nadia; Bruciaferri, Diego; Giacomelli, Luca; Federico, Ivan; Coppini, Giovanni

    2016-11-01

    We present a numerical platform named Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF). The platform is developed for short-term forecasts and is designed to be embedded, via downscaling, in any region of the large-scale Mediterranean Forecasting System (MFS). We employ CTD data collected during a campaign around Elba Island to calibrate and validate SURF. The model requires an initial spin-up period of a few days in order to adapt the initial interpolated fields and the subsequent solutions to the higher-resolution nested grids adopted by SURF. Through a comparison with the CTD data, we quantify the improvement obtained by the SURF model compared to the coarse-resolution MFS model.

  4. High-resolution DEM Effects on Geophysical Flow Models

    NASA Astrophysics Data System (ADS)

    Williams, M. R.; Bursik, M. I.; Stefanescu, R. E. R.; Patra, A. K.

    2014-12-01

    Geophysical mass flow models are numerical models that approximate pyroclastic flow events and can be used to assess the volcanic hazards certain areas may face. One such model, TITAN2D, approximates granular-flow physics based on a depth-averaged analytical model using inputs of basal and internal friction, material volume at a coordinate point, and a GIS in the form of a digital elevation model (DEM). The volume of modeled material propagates over the DEM in a way that is governed by the slope and curvature of the DEM surface and the basal and internal friction angles. Results from TITAN2D are highly dependent upon the inputs to the model. Here we focus on a single input: the DEM, which can vary in resolution. High-resolution DEMs are advantageous in that they contain more surface detail than lower-resolution models, presumably allowing modeled flows to propagate in a way more true to the real surface. However, very high-resolution DEMs can create undesirable artifacts in the slope and curvature that corrupt flow calculations. With high-resolution DEMs becoming more widely available and preferable for use, determining the point at which high-resolution data become less advantageous than lower-resolution data becomes important. We find that for high-resolution, integer-valued DEMs, very high resolution is detrimental to good model output when moderate-to-low (<10-15°) slope angles are involved. At these slope angles, multiple adjacent DEM cell elevation values are equal because the DEM must approximate the low slope with a limited set of integer elevation values. The first derivative of the elevation surface thus becomes zero. In these cases, flow propagation is inhibited by these spurious zero-slope conditions. Here we present evidence for this "terracing effect" from 1) a mathematically defined simulated elevation model, to demonstrate the terracing effects of integer-valued data, and 2) a real-world DEM where terracing must be addressed. We discuss the effect on the flow model output and present possible solutions for rectification of the problem.
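
    A brief sketch reproducing the terracing effect on a synthetic, integer-valued DEM; the 5° slope, 1 m grid spacing and simple finite differencing are illustrative assumptions, not parameters from the study:

```python
import numpy as np

# A gentle planar slope (5 degrees) sampled on a 1 m grid and stored as integer
# elevations, mimicking an integer-valued DEM.
dx = 1.0                                   # grid spacing (m)
x = np.arange(0.0, 200.0, dx)
z_true = np.tan(np.radians(5.0)) * x       # smooth 5-degree slope
z_int = np.round(z_true).astype(int)       # integer-valued DEM

slope_true = np.gradient(z_true, dx)
slope_int = np.gradient(z_int.astype(float), dx)

# Fraction of cells where the integer DEM reports zero slope even though the true
# surface is uniformly inclined; these cells stall a slope-driven flow model.
print("zero-slope fraction (integer DEM):", np.mean(np.isclose(slope_int, 0.0)))
print("zero-slope fraction (true surface):", np.mean(np.isclose(slope_true, 0.0)))
```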

  5. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    NASA Astrophysics Data System (ADS)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. Lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on an unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using the reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25% reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of a solution in the region of interest.

  6. Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.

    PubMed

    Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P

    2015-02-20

    Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
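
    A small sketch of the underlying resolution-scaling relation for the single-FFT Fresnel reconstruction, where the output pixel pitch is λz/(NΔx), so zero-padding to a larger N shrinks the numerical pixel; the wavelength, distance and pixel pitch below are assumed, not taken from the paper:

```python
# Illustrative numbers only: a He-Ne wavelength, a 25 cm reconstruction distance
# and a typical camera pixel pitch.
wavelength = 632.8e-9   # m
z = 0.25                # reconstruction distance (m)
dx = 4.65e-6            # hologram (camera) pixel pitch (m)
N = 1024                # recorded hologram width (samples)

def fresnel_pixel(n_samples):
    """Output pixel pitch of the single-FFT Fresnel transform."""
    return wavelength * z / (n_samples * dx)

for n_pad in (N, 2 * N, 4 * N):   # zero-pad to 2x and 4x the original width
    print(n_pad, "samples ->", round(fresnel_pixel(n_pad) * 1e6, 2), "um reconstruction pixel")
```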

  7. Impact of wildfire-induced land cover modification on local meteorology: A sensitivity study of the 2003 wildfires in Portugal

    NASA Astrophysics Data System (ADS)

    Hernandez, Charles; Drobinski, Philippe; Turquety, Solène

    2015-10-01

    Wildfires alter land cover, creating changes in the dynamic, vegetative, radiative, thermal and hydrological properties of the surface. However, how such drastic wildfire-induced changes and the age of the burnt scar affect small- and meso-scale atmospheric boundary layer dynamics is largely unknown. These questions are relevant for process analysis, meteorological and air quality forecasting, and also for regional climate analysis. They are addressed numerically in this study using the 2003 Portugal wildfires as a testbed. In order to study the effects of burnt scars, an ensemble of numerical simulations using the Weather Research and Forecasting modeling system (WRF) has been performed with different surface properties mimicking the surface state immediately after the fire, a few days after the fire and a few months after the fire. In order to investigate this issue in a seamless approach, the same modelling framework has been used with various horizontal resolutions of the model grid and land use, ranging from 3.5 km, which can be considered the typical resolution of state-of-the-art regional numerical weather prediction models, to 14 km, which is now the typical target resolution of regional climate models. The study shows that the combination of high surface heat fluxes over the burnt area, large differential heating with respect to the preserved surroundings and lower surface roughness produces very intense frontogenesis, with vertical velocities reaching a few meters per second. This powerful meso-scale circulation can pump more humid air from the surroundings not impacted by the wildfire and produce more cloudiness over the burnt area. The influence of soil temperature immediately after the wildfire ceases is mainly seen at night, as the boundary layer remains unstably stratified, and lasts only a few days. The intensity of the induced meso-scale circulation therefore decreases with time, even though it persists until full recovery of the vegetation. Finally, all these effects are simulated regardless of the land cover and model resolution and are thus robust processes in both regional climate simulations and process studies or short-time forecasts. However, the impact of burnt scars on the precipitation signal remains very uncertain, especially because the precipitation amounts involved are low.

  8. The Space-Time Conservation Element and Solution Element Method: A New High-Resolution and Genuinely Multidimensional Paradigm for Solving Conservation Laws. 1; The Two Dimensional Time Marching Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen

    1998-01-01

    A new high-resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods and was built from the ground up with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.

  9. Measurement and modeling of CO2 mass transfer in brine at reservoir conditions

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Wen, B.; Hesse, M. A.; Tsotsis, T. T.; Jessen, K.

    2018-03-01

    In this work, we combine measurements and modeling to investigate the application of pressure-decay experiments towards delineation and interpretation of CO2 solubility, uptake and mass transfer in water/brine systems at elevated pressures of relevance to CO2 storage operations in saline aquifers. Accurate measurements and modeling of mass transfer in this context are crucial to an improved understanding of the longer-term fate of CO2 that is injected into the subsurface for storage purposes. Pressure-decay experiments are presented for CO2/water and CO2/brine systems with and without the presence of unconsolidated porous media. We demonstrate, via high-resolution numerical calculations in 2-D, that natural convection will complicate the interpretation of the experimental observations if the particle size is not sufficiently small. In such settings, we demonstrate that simple 1-D interpretations can result in an overestimation of the uptake (diffusivity) by two orders of magnitude. Furthermore, we demonstrate that high-resolution numerical calculations agree well with the experimental observations for settings where natural convection contributes substantially to the overall mass transfer process.

  10. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: (1) the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; (2) intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and (3) long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  11. Numerical solution of special ultra-relativistic Euler equations using central upwind scheme

    NASA Astrophysics Data System (ADS)

    Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul

    2018-06-01

    This article is concerned with the numerical approximation of the one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations. These equations describe perfect fluid flow in terms of the particle density, the four-velocity and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the scheme makes use of information about the local propagation speeds. By using a Runge-Kutta time-stepping method and MUSCL-type initial reconstruction, we obtain second-order accuracy for the proposed scheme. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. For all the numerical test cases, our proposed scheme demonstrates very good agreement with the results obtained by well-established algorithms, even in the case of highly relativistic 2D test problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The robustness and efficiency of the central upwind scheme are demonstrated by the numerical results.
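
    To make the flux construction concrete, here is a hedged sketch of a semi-discrete central upwind (Kurganov-type) interface flux, written for the scalar Burgers equation rather than the relativistic Euler system treated in the paper; the local one-sided speeds play the role described above:

```python
import numpy as np

# Scalar Burgers equation u_t + (u^2/2)_x = 0 used as a stand-in system.
def flux(u):
    return 0.5 * u**2

def wave_speed(u):
    return u  # f'(u) for Burgers

def central_upwind_flux(u_minus, u_plus):
    """Central upwind interface flux from reconstructed left/right states.
    The one-sided local speeds a+/a- control the amount of numerical diffusion."""
    zero = np.zeros_like(u_minus)
    a_plus = np.maximum.reduce([wave_speed(u_minus), wave_speed(u_plus), zero])
    a_minus = np.minimum.reduce([wave_speed(u_minus), wave_speed(u_plus), zero])
    denom = a_plus - a_minus
    safe = np.where(denom > 1e-14, denom, 1.0)
    H = (a_plus * flux(u_minus) - a_minus * flux(u_plus)) / safe \
        + (a_plus * a_minus / safe) * (u_plus - u_minus)
    return np.where(denom > 1e-14, H, flux(u_minus))

# u_minus / u_plus would come from a MUSCL reconstruction with a limiter in practice.
uL = np.array([1.0, 0.5, -0.2])
uR = np.array([0.8, 0.4, -0.3])
print(central_upwind_flux(uL, uR))
```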

  12. Sensitivity of an Antarctic Ice Sheet Model to Sub-Ice-Shelf Melting

    NASA Astrophysics Data System (ADS)

    Lipscomb, W. H.; Leguy, G.; Urban, N. M.; Berdahl, M.

    2017-12-01

    Theory and observations suggest that marine-based sectors of the Antarctic ice sheet could retreat rapidly under ocean warming and increased melting beneath ice shelves. Numerical models of marine ice sheets vary widely in sensitivity, depending on grid resolution and the parameterization of key processes (e.g., calving and hydrofracture). Here we study the sensitivity of the Antarctic ice sheet to ocean warming and sub-shelf melting in standalone simulations of the Community Ice Sheet Model (CISM). Melt rates either are prescribed based on observations and high-resolution ocean model output, or are derived from a plume model forced by idealized ocean temperature profiles. In CISM, we vary the model resolution (between 1 and 8 km), Stokes approximation (shallow-shelf, depth-integrated higher-order, or 3D higher-order) and calving scheme to create an ensemble of plausible responses to sub-shelf melting. This work supports a broader goal of building statistical and reduced models that can translate large-scale Earth-system model projections to changes in Antarctic ocean temperatures and ice sheet discharge, thus better quantifying uncertainty in Antarctic-sourced sea-level rise.

  13. Biological production models as elements of coupled, atmosphere-ocean models for climate research

    NASA Technical Reports Server (NTRS)

    Platt, Trevor; Sathyendranath, Shubha

    1991-01-01

    Process models of phytoplankton production are discussed with respect to their suitability for incorporation into global-scale numerical ocean circulation models. Exact solutions are given for the integrals, over the mixed layer and over the day, of analytic, wavelength-independent models of primary production. Within this class of model, the bias incurred by using a triangular approximation (rather than a sinusoidal one) to the variation of surface irradiance through the day is computed. Efficient computation algorithms are given for the nonspectral models. More exact calculations require a spectrally sensitive treatment. Such models exist but must be integrated numerically over depth and time. For these integrations, resolution in wavelength, depth, and time is considered and recommendations are made for efficient computation. The extrapolation of the one-(spatial)-dimension treatment to large horizontal scales is discussed.
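
    A short worked example of the irradiance part of that bias: the daily integral of a half-sinusoidal irradiance curve versus a triangle with the same peak and daylength. The paper computes the corresponding bias in production within its analytic models; this sketch only shows the roughly 21% deficit in integrated irradiance, with arbitrary units assumed:

```python
import numpy as np

I_max = 1.0   # noon irradiance (arbitrary units)
D = 1.0       # daylength (arbitrary units)

sinusoidal_integral = 2.0 / np.pi * I_max * D   # integral of I_max*sin(pi*t/D) over [0, D]
triangular_integral = 0.5 * I_max * D           # area of a triangle with the same peak and base

bias = (triangular_integral - sinusoidal_integral) / sinusoidal_integral
print("relative bias of the triangular approximation: %.1f%%" % (100 * bias))  # about -21%
```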

  14. Physical mechanisms of longitudinal vortexes formation, appearance of zones with high heat fluxes and early transition in hypersonic flow over delta wing with blunted leading edges

    NASA Astrophysics Data System (ADS)

    Alexandrov, S. V.; Vaganov, A. V.; Shalaev, V. I.

    2016-10-01

    Processes of vortex structure formation and their interactions with the boundary layer in hypersonic flow over a delta wing with blunted leading edges are analyzed on the basis of experimental investigations and numerical solutions of the Navier-Stokes equations. The physical mechanisms of longitudinal vortex formation, the appearance of abnormal zones with high heat fluxes and early laminar-turbulent transition are studied. These phenomena were observed in many high-speed wind tunnel experiments; however, they were understood only through detailed analysis of high-resolution numerical modeling results. The presented results explain the experimental observations. The ANSYS CFX code (under the DAFE MIPT license) on a grid with 50 million nodes was used for the numerical modeling. The numerical method was verified by comparing calculated heat flux distributions on the wing surface with experimental data.

  15. High resolution regional climate simulation of the Hawaiian Islands - Validation of the historical run from 2003 to 2012

    NASA Astrophysics Data System (ADS)

    Xue, L.; Newman, A. J.; Ikeda, K.; Rasmussen, R.; Clark, M. P.; Monaghan, A. J.

    2016-12-01

    A high-resolution (a 1.5 km grid spacing domain nested within a 4.5 km grid spacing domain) 10-year regional climate simulation over the entire Hawaiian archipelago is being conducted at the National Center for Atmospheric Research (NCAR) using the Weather Research and Forecasting (WRF) model version 3.7.1. Numerical sensitivity simulations of the Hawaiian Rainband Project (HaRP, a field experiment from July to August 1990) showed that the simulated precipitation properties are sensitive to initial and lateral boundary conditions, sea surface temperature (SST), land surface models, vertical resolution and cloud droplet concentration. Validations of the model-simulated statistics of the trade wind inversion, temperature, wind field, cloud cover, and precipitation over the islands against various observations from soundings, satellites, weather stations and rain gauges during the period from 2003 to 2012 will be presented at the meeting.

  16. Development of high resolution simulations of the atmospheric environment using the MASS model

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Zack, John W.; Karyampudi, V. Mohan

    1989-01-01

    Numerical simulations were performed with a very high resolution (7.25 km) version of the MASS model (Version 4.0) in an effort to diagnose the vertical wind shear and static stability structure during the Shuttle Challenger disaster, which occurred on 28 January 1986. These meso-beta scale simulations reveal that the strongest vertical wind shears were concentrated in the 200 to 150 mb layer at 1630 GMT, i.e., at about the time of the disaster. These simulated vertical shears were the result of two primary dynamical processes. The juxtaposition of both of these processes produced a shallow (30 mb deep) region of strong vertical wind shear, and hence, low Richardson number values during the launch time period. Comparisons with the Cape Canaveral (XMR) rawinsonde indicate that the high-resolution MASS 4.0 simulation more closely emulated nature than did previous simulations of the same event with the GMASS model.
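
    For reference, a minimal sketch of the layer (bulk) Richardson number that links the simulated shear and static stability to instability; all numbers below are illustrative placeholders, not values from the MASS 4.0 simulation:

```python
# Bulk Richardson number across a layer: Ri = (g/theta) * d_theta * dz / (du^2 + dv^2).
g = 9.81               # m s^-2
theta_mean = 330.0     # mean potential temperature of the layer (K), assumed
d_theta = 1.5          # potential temperature difference across the layer (K), assumed
dz = 900.0             # layer depth, roughly a 30 mb layer near 200-150 mb (m), assumed
du, dv = 18.0, 4.0     # wind component differences across the layer (m s^-1), assumed

ri_bulk = (g / theta_mean) * d_theta * dz / (du**2 + dv**2)
print("bulk Richardson number: %.2f" % ri_bulk)   # values below ~0.25 suggest shear instability
```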

  17. Detailed Characterization of Nearshore Processes During NCEX

    NASA Astrophysics Data System (ADS)

    Holland, K.; Kaihatu, J. M.; Plant, N.

    2004-12-01

    Recent technology advances have allowed the coupling of remote sensing methods with advanced wave and circulation models to yield detailed characterizations of nearshore processes. This methodology was demonstrated as part of the Nearshore Canyon EXperiment (NCEX) in La Jolla, CA during Fall 2003. An array of high-resolution, color digital cameras was installed to monitor an alongshore distance of nearly 2 km out to depths of 25 m. This digital imagery was analyzed over the three-month period through an automated process to produce hourly estimates of wave period, wave direction, breaker height, shoreline position, sandbar location, and bathymetry at numerous locations during daylight hours. Interesting wave propagation patterns in the vicinity of the canyons were observed. In addition, directional wave spectra and swash/surf flow velocities were estimated using more computationally intensive methods. These measurements were used to provide forcing and boundary conditions for the Delft3D wave and circulation model, giving additional estimates of nearshore processes such as dissipation and rip currents. An optimal approach for coupling these remotely sensed observations to the numerical model was selected to yield accurate, but also timely, characterizations. This involved assimilation of directional spectral estimates near the offshore boundary to mimic forcing conditions achieved under traditional approaches involving nested domains. Measurements of breaker heights and flow speeds were also used to adaptively tune model parameters to provide enhanced accuracy. Comparisons of model predictions and video observations show significant correlation. As compared to nesting within larger-scale and coarser-resolution models, the advantage of providing boundary condition data using remote sensing is much improved resolution and fidelity. For example, rip current development was both modeled and observed. These results indicate that this approach to data-model coupling is tenable and may be useful in near-real-time characterizations required by many applied scenarios.

  18. High-resolution modelling of atmospheric dispersion of dense gas using TWODEE-2.1: application to the 1986 Lake Nyos limnic eruption

    NASA Astrophysics Data System (ADS)

    Folch, Arnau; Barcons, Jordi; Kozono, Tomofumi; Costa, Antonio

    2017-06-01

    Atmospheric dispersal of a gas denser than air can threaten the environment and surrounding communities if the terrain and meteorological conditions favour its accumulation in topographic depressions, where it can reach toxic concentration levels. Numerical modelling of atmospheric gas dispersion constitutes a useful tool for gas hazard assessment studies, essential for planning risk mitigation actions. In complex terrains, microscale winds and local orographic features can have a strong influence on the gas cloud behaviour, potentially leading to inaccurate results if they are not captured by coarser-scale modelling. We introduce a methodology for microscale wind field characterisation based on transfer functions that couple a mesoscale numerical weather prediction model with a microscale computational fluid dynamics (CFD) model for the atmospheric boundary layer. The resulting time-dependent high-resolution microscale wind field is used as input for a shallow-layer gas dispersal model (TWODEE-2.1) to simulate the time evolution of CO2 gas concentration at different heights above the terrain. The strategy is applied to revisit simulations of the 1986 Lake Nyos event in Cameroon, where a huge CO2 cloud released by a limnic eruption spread downslope from the lake, suffocating thousands of people and animals across the Nyos and adjacent secondary valleys. Besides several new features introduced in the new version of the gas dispersal code (TWODEE-2.1), we have also implemented a novel impact criterion based on the percentage of human fatalities as a function of CO2 concentration and exposure time. New model results are quantitatively validated using the reported percentage of fatalities at several locations. The comparison with previous simulations that assumed coarser-scale steady winds and topography illustrates the importance of high-resolution modelling in complex terrains.

  19. A combination of HARMONIE short time direct normal irradiance forecasts and machine learning: The #hashtdim procedure

    NASA Astrophysics Data System (ADS)

    Gastón, Martín; Fernández-Peruchena, Carlos; Körnich, Heiner; Landelius, Tomas

    2017-06-01

    The present work describes the first version of a new procedure to forecast Direct Normal Irradiance (DNI), #hashtdim, which combines ground information with Numerical Weather Predictions. The system is focused on generating predictions for very short lead times. It combines the outputs of the Numerical Weather Prediction model HARMONIE with an adaptive methodology based on Machine Learning. The DNI predictions are generated at 15-minute and hourly temporal resolutions and are updated every 3 hours. Each update offers forecasts for the next 12 hours; the first nine hours are generated at 15-minute temporal resolution, while the last three hours have hourly temporal resolution. The system is tested at a Spanish site with an operational BSRN station in the south of Spain (the PSA station). The #hashtdim has been implemented in the framework of the Direct Normal Irradiance Nowcasting methods for optimized operation of concentrating solar technologies (DNICast) project, under the European Union's Seventh Framework Programme for research, technological development and demonstration.

  20. Simulation of modern climate with the new version of the INM RAS climate model

    NASA Astrophysics Data System (ADS)

    Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykosov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Yakovlev, N. G.

    2017-03-01

    The INMCM5.0 numerical model of the Earth's climate system is presented, which is an evolution of the previous version, INMCM4.0. A higher vertical resolution for the stratosphere is applied in the atmospheric block. We also raised the upper boundary of the computational domain, added an aerosol block, modified the parameterization of clouds and condensation, and increased the horizontal resolution in the ocean block. The program implementation of the model was also updated. We consider the simulation of the current climate using the new version of the model. Attention is focused on reducing systematic errors as compared to the previous version, on reproducing phenomena that could not be simulated correctly in the previous version, and on the problems that remain unresolved.

  1. Investigating Anomalies in the Output Generated by the Weather Research and Forecasting (WRF) Model

    NASA Astrophysics Data System (ADS)

    Decicco, Nicholas; Trout, Joseph; Manson, J. Russell; Rios, Manny; King, David

    2015-04-01

    The Weather Research and Forecasting (WRF) model is an advanced mesoscale numerical weather prediction (NWP) model comprising two numerical cores, the Nonhydrostatic Mesoscale Model (NMM) core and the Advanced Research WRF (ARW) core. An investigation was done to determine the source of erroneous output generated by the NMM core. Of particular concern were the appearance of zero values at regularly spaced grid cells in output fields and the NMM core's evident (mis)use of static geographic information at a resolution lower than that of the nesting level for which the core is performing computations. A brief discussion of the high-level modular architecture of the model is presented, as well as the methods utilized to identify the cause of these problems. Presented here are the initial results from a research grant, "A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA".

  2. A detailed model for simulation of catchment scale subsurface hydrologic processes

    NASA Technical Reports Server (NTRS)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.

  3. Central Upwind Scheme for a Compressible Two-Phase Flow Model

    PubMed Central

    Ahmed, Munshoor; Saleem, M. Rehan; Zia, Saqib; Qamar, Shamsul

    2015-01-01

    In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative, and the governing equations consist of two equations describing the conservation of mass, one for the overall momentum and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory, upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme. PMID:26039242

  4. Central upwind scheme for a compressible two-phase flow model.

    PubMed

    Ahmed, Munshoor; Saleem, M Rehan; Zia, Saqib; Qamar, Shamsul

    2015-01-01

    In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative, and the governing equations consist of two equations describing the conservation of mass, one for the overall momentum and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high-resolution central upwind scheme is implemented. This is a non-oscillatory, upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme.

  5. Numerical solutions of the semiclassical Boltzmann ellipsoidal-statistical kinetic model equation

    PubMed Central

    Yang, Jaw-Yen; Yan, Chin-Yuan; Huang, Juan-Chen; Li, Zhihui

    2014-01-01

    Computations of rarefied gas dynamical flows governed by the semiclassical Boltzmann ellipsoidal-statistical (ES) kinetic model equation using an accurate numerical method are presented. The semiclassical ES model was derived through the maximum entropy principle and conserves not only the mass, momentum and energy, but also contains additional higher-order moments that differ from the standard quantum distributions. A different decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. The numerical method in phase space combines the discrete-ordinate method in momentum space and the high-resolution shock-capturing method in physical space. Numerical solutions of two-dimensional Riemann problems for two configurations covering various degrees of rarefaction are presented, and various contours of the quantities unique to this new model are illustrated. When the relaxation time becomes very small, the main flow features display behavior similar to that of ideal quantum gas dynamics, and the present solutions are found to be consistent with existing calculations for a classical gas. The effect of a parameter that permits an adjustable Prandtl number in the flow is also studied. PMID:25104904

  6. Laser-optical and numerical Research of the flow inside the lubricating gap of a journal bearing model

    NASA Astrophysics Data System (ADS)

    Nobis, M.; Stücke, P.; Schmidt, M.; Riedel, M.

    2013-04-01

    The laser-optical investigation of the flow inside the lubricating gap of a journal bearing model is one important task in a larger overall project. The long-term objective is the development of an easy-to-use calculation tool which delivers information about the causes and consequences of cavitation processes in hydrodynamically lubricated journal bearings. Hence, it will be possible to identify advantageous and disadvantageous geometrical shapes of the bushings. In conclusion, such a calculation tool can provide important insights for the construction and design of future journal bearings. Current design programs are based on a two-dimensional approach for the lubricating gap. The first dimension is the breadth of the bearing and the second dimension is the circumferential direction of the bearing. The third dimension, the expansion of the gap in the radial direction, is neglected. Instead of an exact resolution of the flow pattern inside the gap, turbulence models are in use. Past numerical and experimental studies have shown that clearly organized and predominantly laminar flow structures can be found inside the lubricating gap. Thus, for a detailed analysis of the reasons for and effects of cavitation bubbles, a three-dimensional resolution of the lubricating gap is inevitable. In addition to the qualitative evaluation of the flow with visualization experiments, it is possible to perform angle-based velocity measurements inside the gap with the help of a triggered Laser Doppler Velocimeter (LDV). The results of these measurements are used to validate three-dimensional CFD flow simulations, and to optimize the numerical mesh structure and the boundary conditions. This paper presents the experimental setup of the bearing model, some exemplary results of the visualization experiments and LDV measurements, as well as a comparison between experimental and numerical results.

  7. The future of EUV lithography: enabling Moore's Law in the next decade

    NASA Astrophysics Data System (ADS)

    Pirati, Alberto; van Schoot, Jan; Troost, Kars; van Ballegoij, Rob; Krabbendam, Peter; Stoeldraijer, Judon; Loopstra, Erik; Benschop, Jos; Finders, Jo; Meiling, Hans; van Setten, Eelco; Mika, Niclas; Dredonx, Jeannot; Stamm, Uwe; Kneer, Bernhard; Thuering, Bernd; Kaiser, Winfried; Heil, Tilmann; Migura, Sascha

    2017-03-01

    While EUV systems equipped with 0.33 numerical aperture lenses are readying to start volume manufacturing, ASML and Zeiss are ramping up their development activities on an EUV exposure tool with a numerical aperture greater than 0.5. The purpose of this scanner, targeting a resolution of 8 nm, is to extend Moore's law throughout the next decade. A novel anamorphic lens design has been developed to provide the required numerical aperture; this lens will be paired with new, faster stages and more accurate sensors to meet the economic requirements of Moore's law, as well as the tight focus and overlay control needed for future process nodes. The tighter focus and overlay control budgets, as well as the anamorphic optics, will drive innovations in imaging and OPC modelling, and possibly in metrology concepts. Furthermore, advances in resist and mask technology will be required to image lithography features at less than 10 nm resolution. This paper presents an overview of the key technology innovations and infrastructure requirements for the next generation of EUV systems.
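
    A back-of-the-envelope sketch of why a numerical aperture above 0.5 is needed for the 8 nm target, using the standard single-exposure scaling resolution ≈ k1·λ/NA; the NA and k1 values below are assumed for illustration:

```python
# Single-exposure resolution scaling: resolution = k1 * lambda / NA.
wavelength_nm = 13.5
for NA in (0.33, 0.55):          # current EUV NA and an assumed >0.5 NA
    for k1 in (0.4, 0.3):        # assumed process factors
        print(f"NA={NA}, k1={k1}: resolution ~ {k1 * wavelength_nm / NA:.1f} nm")
# With NA=0.33 even k1=0.3 gives ~12 nm, while an NA around 0.55 brings the same
# k1 down to ~7-8 nm, which is the motivation for a >0.5 NA optic.
```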

  8. Improving the Non-Hydrostatic Numerical Dust Model by Integrating Soil Moisture and Greenness Vegetation Fraction Data with Different Spatiotemporal Resolutions.

    PubMed

    Yu, Manzhu; Yang, Chaowei

    2016-01-01

    Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for a target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of the dust storm output from NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. Adjusting both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as to reproduce more accurate dust concentrations.
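
    A minimal sketch of the four evaluation metrics named above, applied to paired model/observation samples; the definitions (in particular the factor-of-two form of the fractional gross error) follow common conventions and the sample values are placeholders:

```python
import numpy as np

def metrics(model, obs):
    """Return (MBE, RMSE, correlation coefficient, fractional gross error)."""
    diff = model - obs
    mbe = np.mean(diff)                                   # mean bias error
    rmse = np.sqrt(np.mean(diff**2))                      # root mean square error
    r = np.corrcoef(model, obs)[0, 1]                     # correlation coefficient
    fge = 2.0 * np.mean(np.abs(diff) / (model + obs))     # fractional gross error
    return mbe, rmse, r, fge

# Placeholder AOD samples, not data from the study.
model = np.array([0.35, 0.50, 0.42, 0.61])
obs = np.array([0.30, 0.38, 0.45, 0.52])
print(metrics(model, obs))
```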

  9. Resolution dependence of precipitation statistical fidelity in hindcast simulations

    DOE PAGES

    O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik; ...

    2016-06-19

    Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are “realistic”; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental design has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5 day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations, (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al., which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.

  10. Resolution dependence of precipitation statistical fidelity in hindcast simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik

    Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are “realistic”; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental design has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5 day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations, (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al., which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.

  11. The effect of numerical methods on the simulation of mid-ocean ridge hydrothermal models

    NASA Astrophysics Data System (ADS)

    Carpio, J.; Braack, M.

    2012-01-01

    This work considers the effect of the numerical method on the simulation of a 2D model of hydrothermal systems located in the high-permeability axial plane of mid-ocean ridges. The behavior of hot plumes, formed in a porous medium between volcanic lava and the ocean floor, is very irregular due to convective instabilities. Therefore, we discuss and compare two different numerical methods for solving the mathematical model of this system. Concretely, we consider two ways to treat the temperature equation of the model: a semi-Lagrangian formulation of the advective terms in combination with a Galerkin finite element method for the parabolic part of the equations, and a stabilized finite element scheme. Both methods are very robust and accurate. However, due to physical instabilities in the system at high Rayleigh number, the effect of the numerical method is significant with regard to the temperature distribution at a given time instant. The good news is that relevant statistical quantities remain relatively stable and coincide for the two numerical schemes. The agreement is larger in the case of a mathematical model with constant water properties. In the case of a model with nonlinear dependence of the water properties on temperature and pressure, the agreement in the statistics is clearly less pronounced. Hence, the presented work accentuates the need for a strengthened validation of the compatibility between the numerical scheme (accuracy/resolution) and complex (realistic/nonlinear) models.
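
    To illustrate the first of the two treatments, here is a hedged 1D sketch of a semi-Lagrangian advection step with linear interpolation on a periodic grid; the full scheme in the paper couples this to a Galerkin finite element solve of the parabolic part, which is not reproduced here:

```python
import numpy as np

def semi_lagrangian_step(T, u, dx, dt):
    """Advect field T by tracing each grid point's departure point upstream
    (constant velocity u) and linearly interpolating there; periodic domain."""
    n = T.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)        # departure points
    idx = np.floor(x_dep / dx).astype(int)
    w = x_dep / dx - idx                   # linear interpolation weights
    return (1.0 - w) * T[idx % n] + w * T[(idx + 1) % n]

# One step advecting a temperature bump; note u*dt/dx = 2.5 > 1, i.e. the scheme
# is not restricted by the advective CFL number, one reason it suits plume problems.
n, dx, dt, u = 100, 1.0, 2.5, 1.0
T0 = np.exp(-0.05 * (np.arange(n) * dx - 30.0) ** 2)
T1 = semi_lagrangian_step(T0, u, dx, dt)
print(int(T0.argmax()), int(T1.argmax()))   # bump centre moves downstream by roughly u*dt
```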

  12. Secretary of The Navy Professor

    DTIC Science & Technology

    1999-09-30

    The goal of this research is to develop a predictive capability for the upper ocean circulation and atmospheric interactions using numerical models... assimilation techniques to be used in these models. In addition, we are continuing the task of preparing long-term global surface fluxes for use in ocean... NASA, NSF, and NOAA. APPROACH: We are using a suite of models forced with estimates of real winds, with very fine horizontal resolution and realistic...

  13. Adaptive Blending of Model and Observations for Automated Short-Range Forecasting: Examples from the Vancouver 2010 Olympic and Paralympic Winter Games

    NASA Astrophysics Data System (ADS)

    Bailey, Monika E.; Isaac, George A.; Gultepe, Ismail; Heckman, Ivan; Reid, Janti

    2014-01-01

    An automated short-range forecasting system, adaptive blending of observations and model (ABOM), was tested in real time during the 2010 Vancouver Olympic and Paralympic Winter Games in British Columbia. Data at 1-min time resolution were available from a newly established, dense network of surface observation stations. Climatological data were not available at these new stations. This, combined with output from new high-resolution numerical models, provided a unique and exciting setting to test nowcasting systems in mountainous terrain during winter weather conditions. The ABOM method blends extrapolations in time of recent local observations with numerical weather prediction (NWP) model forecasts to generate short-range point forecasts of surface variables out to 6 h. The relative weights of the model forecast and the observation extrapolation are based on performance over recent history. The average performance of ABOM nowcasts during February and March 2010 was evaluated using standard scores and thresholds important for Olympic events. Significant improvements over the model forecasts alone were obtained for continuous variables such as temperature, relative humidity and wind speed. The small improvements to forecasts of variables such as visibility and ceiling, which are subject to discontinuous changes, are attributed to the persistence component of ABOM.
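
    A minimal sketch of the blending idea: weight the observation-trend extrapolation and the NWP forecast by their inverse mean-square errors over a recent window. The actual ABOM weighting scheme may differ in detail; the function and numbers below are illustrative only:

```python
import numpy as np

def blend(nwp_forecast, obs_extrapolation, recent_nwp_err, recent_extrap_err):
    """Weight each source by the inverse of its recent mean-square error."""
    w_nwp = 1.0 / (np.mean(np.square(recent_nwp_err)) + 1e-9)
    w_ext = 1.0 / (np.mean(np.square(recent_extrap_err)) + 1e-9)
    return (w_nwp * nwp_forecast + w_ext * obs_extrapolation) / (w_nwp + w_ext)

# Example: the extrapolation has been better than the model recently, so it dominates.
print(blend(nwp_forecast=-3.0, obs_extrapolation=-1.5,
            recent_nwp_err=np.array([2.1, 1.8, 2.4]),
            recent_extrap_err=np.array([0.6, 0.4, 0.9])))
```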

  14. Developing the Next Generation NATO Reference Mobility Model

    DTIC Science & Technology

    2016-06-27

    Vehicle Dynamics Model...and numerical resolution – for use in vehicle design, acquisition and operational mobility planning (27 June 2016). An open architecture was established...the current empirical methods for simulating vehicle and suspension designs. – Industry-wide shortfall with tire dynamics and soft soil behavior

  15. Experimental characterization and modelization of ion exchange kinetics for a carboxylic resin in infinite solution volume conditions. Application to monovalent-trivalent cations exchange.

    PubMed

    Picart, Sébastien; Ramière, Isabelle; Mokhtari, Hamid; Jobelin, Isabelle

    2010-09-02

    This study is devoted to the characterization of ion exchange inside a microsphere of carboxylic resin. It aims at describing the kinetics of this exchange reaction, which is known to be controlled by interdiffusion in the particle. The fractional attainment of equilibrium as a function of time depends on the concentrations of the cations in the resin, which can be modeled by the Nernst-Planck equation. A powerful approach for the numerical resolution of this equation is introduced in this paper. This modeling is based on the work of Helfferich but involves an implicit numerical scheme which reduces the computational cost. Knowing the diffusion coefficients of the cations in the resin and the radius of the spherical exchanger, the kinetics can hence be completely determined. When those diffusion parameters are missing, they can be deduced by fitting experimental data on the fractional attainment of equilibrium. An efficient optimization tool coupled with the implicit resolution has been developed for this purpose. A monovalent/trivalent cation exchange was experimentally characterized for a carboxylic resin. Diffusion coefficients and concentration profiles in the resin were then deduced through this new model.
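
    As a simplified illustration of the fitting step, the sketch below fits an effective diffusion coefficient to synthetic fractional-attainment data using the classical Fickian series solution for a sphere (Boyd-type), rather than the full Nernst-Planck interdiffusion model of the paper; the bead radius and data points are assumed:

```python
import numpy as np
from scipy.optimize import curve_fit

R = 0.3e-3   # bead radius (m), illustrative

def fractional_attainment(t, D, n_terms=50):
    """F(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / R^2) / n^2 (constant D)."""
    n = np.arange(1, n_terms + 1)[:, None]
    series = np.sum(np.exp(-n**2 * np.pi**2 * D * t[None, :] / R**2) / n**2, axis=0)
    return 1.0 - 6.0 / np.pi**2 * series

t_data = np.array([60.0, 300.0, 900.0, 1800.0, 3600.0])   # s, synthetic
F_data = np.array([0.18, 0.42, 0.66, 0.80, 0.92])          # synthetic measurements

D_fit, _ = curve_fit(fractional_attainment, t_data, F_data, p0=[1e-11])
print("fitted effective diffusion coefficient: %.2e m^2/s" % D_fit[0])
```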

  16. Assimilation of the seabird and ship drift data in the north-eastern sea of Japan into an operational ocean nowcast/forecast system

    PubMed Central

    Miyazawa, Yasumasa; Guo, Xinyu; Varlamov, Sergey M.; Miyama, Toru; Yoda, Ken; Sato, Katsufumi; Kano, Toshiyuki; Sato, Keiji

    2015-01-01

    At present, ocean currents are operationally monitored mainly by the combined use of numerical ocean nowcast/forecast models and satellite remote sensing data. Improvement in the accuracy of ocean current nowcasts/forecasts requires additional measurements with higher spatial and temporal resolution than expected from the current observation network. Here we show the feasibility of assimilating high-resolution seabird and ship drift data into an operational ocean forecast system. Data assimilation of the geostrophic current contained in the observed drift leads to refinement of the gyre mode events of the Tsugaru warm current in the north-eastern sea of Japan represented by the model. Fitting the observed drift to the model depends on the ability of the drift to represent the geostrophic current rather than directly wind-driven components. The preferable horizontal scale of 50 km indicated for the seabird drift data assimilation implies a capability of capturing eddies with a smaller horizontal scale than the minimum scale of 100 km resolved by satellite altimetry. The present study demonstrates that transdisciplinary approaches combining bio-/ship-logging and numerical modeling could be effective for enhancing the monitoring of ocean currents. PMID:26633309

  17. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.

    PubMed

    Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort. 2) We use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer.
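
    For reference, the PNP system in a commonly used form (the notation and the treatment of fixed charges may differ from the paper's formulation):

```latex
% Poisson-Nernst-Planck system: drift-diffusion of each ionic species coupled to
% the electrostatic potential.
\begin{aligned}
\frac{\partial c_i}{\partial t} &= \nabla \cdot \left[ D_i \left( \nabla c_i
      + \frac{z_i F}{RT}\, c_i \nabla \phi \right) \right], \qquad i = 1,\dots,N,\\[4pt]
-\nabla \cdot \left( \epsilon \nabla \phi \right) &= F \sum_{i=1}^{N} z_i c_i + \rho_0 .
\end{aligned}
```

    Here c_i, z_i and D_i are the concentration, valence and diffusion coefficient of species i, φ is the electric potential, ε the permittivity, F the Faraday constant, R the gas constant, T the temperature and ρ_0 a fixed charge density.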

  18. Modeling tidal hydrodynamics of San Diego Bay, California

    USGS Publications Warehouse

    Wang, P.-F.; Cheng, R.T.; Richter, K.; Gross, E.S.; Sutton, D.; Gartner, J.W.

    1998-01-01

    In 1983, current data were collected by the National Oceanic and Atmospheric Administration using mechanical current meters. During 1992 through 1996, acoustic Doppler current profilers as well as mechanical current meters and tide gauges were used. These measurements not only document tides and tidal currents in San Diego Bay, but also provide independent data sets for model calibration and verification. A high-resolution (100-m grid), depth-averaged, numerical hydrodynamic model has been implemented for San Diego Bay to describe the essential tidal hydrodynamic processes in the bay. The model is calibrated using the 1983 data set and verified using the more recent 1992-1996 data. Discrepancies between model predictions and field data in both model calibration and verification are on the order of the magnitude of the uncertainties in the field data. The calibrated and verified numerical model has been used to quantify residence time and the dilution and flushing of contaminant effluent into San Diego Bay. Furthermore, the numerical model has become an important research tool in ongoing hydrodynamic and water quality studies and in guiding future field data collection programs.

  19. Can Regional Climate Models be used in the assessment of vulnerability and risk caused by extreme events?

    NASA Astrophysics Data System (ADS)

    Nunes, Ana

    2015-04-01

    Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for the analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, centered on a regional modeling system consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and the assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct the current climate of South America within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually provide improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making regarding natural and social disasters and the reduction of their impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40 and 25 km for the regional climate model.

  20. Toward self-consistent tectono-magmatic numerical model of rift-to-ridge transition

    NASA Astrophysics Data System (ADS)

    Gerya, Taras; Bercovici, David; Liao, Jie

    2017-04-01

    Natural data from modern and ancient lithospheric extension systems suggest a three-dimensional (3D) character of deformation and a complex relationship between magmatism and tectonics during the entire rift-to-ridge transition. Therefore, self-consistent high-resolution 3D magmatic-thermomechanical numerical approaches stand as a minimum complexity requirement for modeling and understanding this transition. Here we present results from our new high-resolution 3D finite-difference marker-in-cell rift-to-ridge models, which account for magmatic accretion of the crust and use a non-linear strain-weakened visco-plastic rheology of rocks that couples brittle/plastic failure and ductile damage caused by grain size reduction. Numerical experiments suggest that the nucleation of rifting and ridge-transform patterns are decoupled in both space and time. At intermediate stages, the two patterns can coexist and interact, which triggers the development of detachment faults, failed rift arms, hyper-extended margins and oblique proto-transforms. En echelon rift patterns typically develop in the brittle upper-middle crust, whereas proto-ridge and proto-transform structures nucleate in the lithospheric mantle. These deep proto-structures propagate upward, inter-connect and rotate toward mature orthogonal ridge-transform patterns on the timescale of millions of years during incipient thermal-magmatic accretion of the new oceanic-like lithosphere. Ductile damage of the extending lithospheric mantle caused by grain size reduction assisted by Zener pinning plays a critical role in the rift-to-ridge transition by stabilizing detachment faults and transform structures. Numerical results compare well with observations from incipient spreading regions and passive continental margins.

  1. Super-resolution Time-Lapse Seismic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Ovcharenko, O.; Kazei, V.; Peter, D. B.; Alkhalifah, T.

    2017-12-01

    Time-lapse seismic waveform inversion is a technique that allows tracking changes in reservoirs over time. Such monitoring is computationally expensive, and therefore it is barely feasible to perform on the fly. Most of the expense is related to the numerous FWI iterations at high temporal frequencies, which are unavoidable because the low-frequency components cannot resolve fine-scale features of a velocity model. Inverted velocity changes are also blurred when there is noise in the data, so the problem of low-resolution images is well known. One of the problems intensively tackled by the computer vision research community is the recovery of high-resolution images from their low-resolution versions. Using artificial neural networks to achieve super-resolution from a single downsampled image is one of the leading solutions to this problem. Each pixel of the upscaled image is affected by all the pixels of its low-resolution version, which enables the workflow to recover features that are likely to occur in the corresponding environment. In the present work, we adopt a machine-learning image-enhancement technique to improve the resolution of time-lapse full-waveform inversion. We first invert the baseline model with conventional FWI. Then we run a few iterations of FWI on a set of monitoring data to find the desired model changes. These changes are blurred, and we enhance their resolution using a deep neural network. The network is trained to map low-resolution model updates predicted by FWI into the real perturbations of the baseline model. For supervised training of the network we generate a set of random perturbations of the baseline model and perform FWI on the noisy data from the perturbed models. We test the approach on a realistic perturbation of the Marmousi II model and demonstrate that it outperforms conventional convolution-based deblurring techniques.
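
    The following is a minimal sketch of the supervised mapping described above: a small convolutional network is trained to turn a blurred, noisy "FWI update" into the sharp perturbation that produced it. The blur operator, network size, grid size and training loop are placeholders standing in for FWI on perturbed baseline models; they are assumptions for illustration, not the authors' configuration.

      import torch
      import torch.nn as nn

      # Small fully convolutional network: blurred update -> sharp perturbation.
      net = nn.Sequential(
          nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 1, kernel_size=3, padding=1),
      )
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      # A fixed box blur stands in for the band-limited, noisy FWI update operator.
      blur = nn.Conv2d(1, 1, kernel_size=9, padding=4, bias=False)
      blur.weight.data.fill_(1.0 / 81.0)
      blur.weight.requires_grad_(False)

      for step in range(200):
          true_pert = torch.randn(8, 1, 64, 64)        # random "true" perturbations of the baseline
          with torch.no_grad():
              fwi_update = blur(true_pert) + 0.05 * torch.randn_like(true_pert)
          pred = net(fwi_update)                       # network's estimate of the sharp perturbation
          loss = loss_fn(pred, true_pert)
          opt.zero_grad()
          loss.backward()
          opt.step()

      # At inference time the trained network would be applied to the blurred
      # update obtained from a few FWI iterations on the monitoring data.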

  2. Mesoscale numerical modeling of meteorological events in a strong topographic gradient in the northeastern part of Mexico

    NASA Astrophysics Data System (ADS)

    Pineda-Martinez, Luis F.; Carbajal, Noel

    2009-08-01

    A series of numerical experiments was carried out to study the effect of meteorological events, such as warm and cold air masses, on the climatic features and variability of an understudied region with strong topographic gradients in the northeastern part of Mexico. We applied the mesoscale model MM5 and investigated the influence of soil moisture availability on the performance of the model for two representative events, one in winter and one in summer. The results showed that a better resolution of land use cover improved the agreement between observed and calculated data. The topography induces atmospheric circulation patterns that determine the spatial distribution of climate and its seasonal behavior. The numerical experiments reveal regions favorable to forced convection on the eastern side of the Eastern Sierra Madre and Sierra de Alvarez mountain chains. These processes affect the vertical and horizontal structure of the meteorological variables along the topographic gradient.

  3. Numerical Modeling of Pulse Detonation Rocket Engine Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This paper presents viewgraphs on the numerical modeling of pulse detonation rocket engines (PDRE), with an emphasis on the gasdynamics and performance analysis of these engines. The topics include: 1) Performance Analysis of PDREs; 2) Simplified PDRE Cycle; 3) Comparison of PDRE and Steady-State Rocket Engine (SSRE) Performance; 4) Numerical Modeling of Quasi-1-D Rocket Flows; 5) Specific PDRE Geometries Studied; 6) Time-Accurate Thrust Calculations; 7) PDRE Performance (Geometries A, B, C, and D); 8) PDRE Blowdown Gasdynamics (Geometries A, B, C, and D); 9) PDRE Geometry Performance Comparison; 10) PDRE Blowdown Time (Geometries A, B, C, and D); 11) Specific SSRE Geometry Studied; 12) Effect of F-R Chemistry on SSRE Performance; 13) PDRE/SSRE Performance Comparison; 14) PDRE Performance Study; 15) Grid Resolution Study; and 16) Effect of F-R Chemistry on SSRE Exit Species Mole Fractions.

  4. Non-monotonic spatial distribution of the interstellar dust in astrospheres: finite gyroradius effect

    NASA Astrophysics Data System (ADS)

    Katushkina, O. A.; Alexashov, D. B.; Izmodenov, V. V.; Gvaramadze, V. V.

    2017-02-01

    High-resolution mid-infrared observations of astrospheres show that many of them have a filamentary (cirrus-like) structure. Using numerical models of dust dynamics in astrospheres, we suggest that their filamentary structure might be related to a specific spatial distribution of the interstellar dust around the stars, caused by the gyrorotation of charged dust grains in the interstellar magnetic field. Our numerical model describes the dust dynamics in astrospheres under the influence of the Lorentz force and the assumption of a constant dust charge. Calculations are performed separately for dust grains of different sizes. It is shown that a non-monotonic spatial dust distribution (viewed as filaments) appears for dust grains whose period of gyromotion is comparable with the characteristic time-scale of the dust motion in the astrosphere. Numerical modelling demonstrates that the number of filaments depends on the charge-to-mass ratio of the dust.
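
    As a hedged illustration of the criterion stated above, written in standard notation rather than the paper's, the motion of a grain of mass m and charge q in electromagnetic fields obeys

      \[
        m \, \frac{d\mathbf{v}}{dt} \;=\; q \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right),
        \qquad
        T_g \;=\; \frac{2 \pi m}{|q| \, |\mathbf{B}|},
      \]

    where T_g is the gyroperiod. Filament-like (non-monotonic) distributions are expected when T_g is comparable to the characteristic time-scale of the dust motion through the astrosphere, roughly L/u for grains crossing a region of size L at speed u; since T_g scales with m/|q|, the resulting number of filaments depends on the charge-to-mass ratio of the grains.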

  5. Targeted numerical simulations of binary black holes for GW170104

    NASA Astrophysics Data System (ADS)

    Healy, J.; Lange, J.; O'Shaughnessy, R.; Lousto, C. O.; Campanelli, M.; Williamson, A. R.; Zlochower, Y.; Calderón Bustillo, J.; Clark, J. A.; Evans, C.; Ferguson, D.; Ghonge, S.; Jani, K.; Khamesra, B.; Laguna, P.; Shoemaker, D. M.; Boyle, M.; García, A.; Hemberger, D. A.; Kidder, L. E.; Kumar, P.; Lovelace, G.; Pfeiffer, H. P.; Scheel, M. A.; Teukolsky, S. A.

    2018-03-01

    In response to LIGO's observation of GW170104, we performed a series of full numerical simulations of binary black holes, each designed to replicate likely realizations of its dynamics and radiation. These simulations were performed at multiple resolutions and with two independent techniques for solving Einstein's equations. For both the nonprecessing and precessing simulations, we demonstrate that the two techniques agree mode by mode, at a precision substantially in excess of the statistical uncertainties in current LIGO observations. Conversely, we demonstrate that our full numerical solutions contain information that is not accurately captured by the approximate phenomenological models commonly used to infer compact binary parameters. To quantify the impact of these differences on parameter inference for GW170104 specifically, we compare the predictions of our simulations and of these approximate models to LIGO's observations of GW170104.

  6. Joint numerical study of the 2011 Tohoku-Oki tsunami: comparative propagation simulations and high resolution coastal models

    NASA Astrophysics Data System (ADS)

    Loevenbruck, Anne; Arpaia, Luca; Ata, Riadh; Gailler, Audrey; Hayashi, Yutaka; Hébert, Hélène; Heinrich, Philippe; Le Gal, Marine; Lemoine, Anne; Le Roy, Sylvestre; Marcer, Richard; Pedreros, Rodrigo; Pons, Kevin; Ricchiuto, Mario; Violeau, Damien

    2017-04-01

    This study is part of the joint actions carried out within TANDEM (Tsunamis in northern AtlaNtic: Definition of Effects by Modeling). This French project, mainly dedicated to the appraisal of coastal effects due to tsunami waves on the French coastlines, was initiated after the catastrophic 2011 Tohoku-Oki tsunami. This event, which tragically struck Japan, drew attention to the importance of tsunami risk assessment, in particular when nuclear facilities are involved. As a contribution to this challenging task, the TANDEM partners intend to provide guidance for the French Atlantic area based on numerical simulation. One of the identified objectives consists in designing, adapting and validating simulation codes for tsunami hazard assessment. Besides an integral benchmarking work package, the outstanding database of the 2011 event offers the TANDEM partners the opportunity to test their numerical tools on a real case. As a prerequisite, among the numerous published seismic source models arising from the inversion of the various available records, a couple of coseismic slip distributions have been selected to provide common initial input parameters for the tsunami computations. After possible adaptations or specific developments, the different codes are employed to simulate the Tohoku-Oki tsunami from its source to the northeast Japanese coastline. The results are tested against the numerous tsunami measurements and, when relevant, comparisons of the different codes are carried out. First, the results related to the oceanic propagation phase are compared with the offshore records. Then, the modeled coastal impacts are tested against the onshore data. Flooding at a regional scale is considered, but high-resolution simulations are also performed with some of the codes. They allow examining in detail the run-up amplitudes and timing, as well as the complexity of the tsunami interaction with the coastal structures. The work is supported by the TANDEM project in the framework of the French PIA grant ANR-11-RSNR-00023.

  7. High resolution modelling and observation of wind-driven surface currents in a semi-enclosed estuary

    NASA Astrophysics Data System (ADS)

    Nash, S.; Hartnett, M.; McKinstry, A.; Ragnoli, E.; Nagle, D.

    2012-04-01

    Hydrodynamic circulation in estuaries is primarily driven by tides, river inflows and surface winds. While tidal and river data can be quite easily obtained for input to hydrodynamic models, sourcing accurate surface wind data is problematic. Firstly, the wind data used in hydrodynamic models are usually measured on land and can be quite different in magnitude and direction from offshore winds. Secondly, surface winds are spatially varying, but due to a lack of data it is common practice to specify a non-varying wind speed and direction across the full extent of a model domain. These problems can lead to inaccuracies in the surface currents computed by three-dimensional hydrodynamic models. In the present research, a wind forecast model is coupled with a three-dimensional numerical model of Galway Bay, a semi-enclosed estuary on the west coast of Ireland, to investigate the effect of surface wind data resolution on model accuracy. High-resolution and low-resolution wind fields are specified to the model, and the computed surface currents are compared with high-resolution surface current measurements obtained from two high-frequency SeaSonde-type Coastal Ocean Dynamics Applications Radars (CODAR). The wind forecast model used for the research is Harmonie cy361.3, running on 2.5 and 0.5 km spatial grids for the low-resolution and high-resolution configurations respectively. The low-resolution model runs over an Irish domain on 540x500 grid points with 60 vertical levels and a 60 s timestep and is driven by ECMWF boundary conditions. The nested high-resolution model uses 300x300 grid points on 60 vertical levels and a 12 s timestep. EFDC (Environmental Fluid Dynamics Code) is used for the hydrodynamic model. The Galway Bay model has ten vertical layers and is resolved spatially and temporally at 150 m and 4 s respectively. The hydrodynamic model is run for selected hindcast dates when wind fields were highly energetic. Spatially and temporally varying wind data are provided by offline coupling with the wind forecast models. Modelled surface currents show good correlation with the CODAR-observed currents, and the resolution of the surface wind data is shown to be important for model accuracy.

  8. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

    The Nonhydrostatic Icosahedral Model (NIM) implements the latest numerical innovations in three-dimensional finite-volume discretization on a quasi-uniform icosahedral grid suitable for ultra-high-resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations as well as to utilize state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: * a local coordinate system that remaps the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009); * grid points in a table-driven horizontal loop that allow any horizontal point sequence (MacDonald et al., 2010); * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (Lee et al., 2010); * icosahedral grid optimization (Wang and Lee, 2011); * all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP are incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as the real-data simulations will be shown at the conference.
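
    As a generic illustration of the last point (the standard finite-volume identity, not NIM-specific code), evaluating a differential as a finite-volume integral around a control volume of size V with faces f amounts to applying the divergence theorem:

      \[
        \nabla \cdot \mathbf{F}
        \;\approx\; \frac{1}{V} \oint_{\partial V} \mathbf{F} \cdot \mathbf{n} \, dS
        \;\approx\; \frac{1}{V} \sum_{f} \mathbf{F}_f \cdot \mathbf{n}_f \, A_f ,
      \]

    where A_f and n_f are the area and outward unit normal of face f; summing face fluxes in this way is what makes the transport conservative.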

  9. The properties of human body phantoms used in calculations of electromagnetic fields exposure by wireless communication handsets or hand-operated industrial devices.

    PubMed

    Zradziński, Patryk

    2013-06-01

    According to international guidelines, the assessment of the biophysical effects of exposure to electromagnetic fields (EMF) generated by hand-operated sources requires the evaluation of the induced electric field (E(in)) or specific energy absorption rate (SAR) caused by EMF inside a worker's body, and is usually done by numerical simulations with different protocols applied to these two exposure cases. The crucial element of these simulations is the numerical phantom of the human body. Procedures for E(in) and SAR evaluation in compliance analysis against exposure limits have been defined in Institute of Electrical and Electronics Engineers standards and International Commission on Non-Ionizing Radiation Protection guidelines, but a detailed specification of human body phantoms has not been described. An analysis was performed of the properties of over 30 numerical human body phantoms that have been used in recently published investigations related to the assessment of EMF exposure from various sources. The differences in the applicability of these phantoms to the evaluation of E(in) and SAR while operating industrial devices and of SAR while using mobile communication handsets are discussed. The dimensions, posture, spatial resolution and electric contact with the ground of whole-body numerical phantoms constitute the key parameters in modeling exposure related to industrial devices, whereas modeling exposure from mobile communication handsets, which needs to represent only the exposed part of the human body nearest to the handset, mainly depends on the spatial resolution of the phantom. The specification and standardization of these parameters of numerical human body phantoms are key requirements for achieving comparable and reliable results from numerical simulations carried out for compliance analysis against exposure limits or within exposure assessment in EMF-related epidemiological studies.

  10. Numerical solution of the exterior oblique derivative BVP using the direct BEM formulation

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Špir, Róbert; Mikula, Karol

    2016-04-01

    The fixed gravimetric boundary value problem (FGBVP) represents an exterior oblique derivative problem for the Laplace equation. A direct formulation of the boundary element method (BEM) for the Laplace equation leads to a boundary integral equation (BIE) in which a harmonic function is represented as a superposition of the single-layer and double-layer potentials. Such a potential representation is applied to obtain a numerical solution of the FGBVP. The oblique derivative problem is treated by decomposing the gradient of the unknown disturbing potential into its normal and tangential components. Our numerical scheme uses collocation with linear basis functions. It involves a triangulated discretization of the Earth's surface as the computational domain, taking its complicated topography into account. To achieve high-resolution numerical solutions, parallel implementations using MPI subroutines as well as an iterative elimination of far-zone contributions are performed. Numerical experiments present a reconstruction of a harmonic function above the Earth's topography given by the spherical harmonic approach, namely by the EGM2008 geopotential model up to degree 2160. The SRTM30 global topography model is used to approximate the Earth's surface by the triangulated discretization. The obtained BEM solution with a resolution of 0.05 deg (12,960,002 nodes) is compared with EGM2008. The standard deviation of the residuals, 5.6 cm, indicates good agreement. The largest residuals occur, as expected, in high mountainous regions: they are negative, reaching up to -0.7 m in the Himalayas and about -0.3 m in the Andes and Rocky Mountains. A local refinement in the area of Slovakia confirms an improvement of the numerical solution in this mountainous region, where the Earth's topography is considered in more detail.
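
    For orientation, a hedged sketch of the direct boundary integral equation referred to above, in its standard form for a smooth point x on the boundary Γ of the exterior Laplace problem (the paper's exact handling of the oblique derivative and of edge points is not reproduced here):

      \[
        \tfrac{1}{2} \, T(x)
        \;+\; \int_{\Gamma} T(y) \, \frac{\partial G(x,y)}{\partial n_y} \, dS_y
        \;=\; \int_{\Gamma} \frac{\partial T}{\partial n}(y) \, G(x,y) \, dS_y ,
        \qquad
        G(x,y) \;=\; \frac{1}{4 \pi \, |x - y|} .
      \]

    Collocating this identity at the vertices of the triangulated surface with linear basis functions yields the (dense) linear system, and the gravimetric boundary data enter once the gradient of the disturbing potential T is split into its normal and tangential components.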

  11. Tsunami hazard maps of spanish coast at national scale from seismic sources

    NASA Astrophysics Data System (ADS)

    Aniel-Quiroga, Íñigo; González, Mauricio; Álvarez-Gómez, José Antonio; García, Pablo

    2017-04-01

    Tsunamis are a moderately frequent phenomenon in the NEAM (North East Atlantic and Mediterranean) region, and consequently in Spain, as historic and recent events have affected this area. For example, the 1755 earthquake and tsunami affected the Spanish Atlantic coasts of Huelva and Cadiz, and the 2003 Boumerdés earthquake triggered a tsunami that reached the coast of the Balearic Islands in less than 45 minutes. The risk in Spain is real, and its population and tourism rate make it vulnerable to this kind of catastrophic event. The Indian Ocean tsunami in 2004 and the tsunami in Japan in 2011 launched the worldwide development and application of tsunami risk reduction measures, which have since been treated as a priority in this field. On 20 November 2015, the Spanish civil protection agency presented its directive on tsunami emergency planning. As part of the Spanish National Security strategy, this document specifies the structure of the action plans at different levels: national, regional and local. In this sense, the first step is a proper evaluation of the tsunami hazard at national scale. This work deals with the assessment of the tsunami hazard in Spain by means of numerical simulations, focused on the elaboration of tsunami hazard maps at national scale. To this end, following a deterministic approach, the seismic structures whose earthquakes could generate the worst tsunamis affecting the coast of Spain have been compiled and characterized. These worst-case sources have been propagated numerically over a reconstructed bathymetry built from the best-resolution available data. This high-resolution bathymetry was joined with a 25-m resolution DTM to generate a continuous offshore-onshore space, allowing the calculation of the flooded areas produced by each selected source. The numerical model applied for the calculation of the tsunami propagation was COMCOT. The maps resulting from the numerical simulations show not only the tsunami amplitude at coastal areas but also the run-up and inundation length from the coastline. The run-up has been calculated with the numerical model, complemented by an alternative method based on interpolation in a tsunami run-up database created ad hoc. These estimated variables allow the identification of the most affected areas in case of a tsunami, and they are also the basis for local authorities to evaluate the need for new, higher-resolution studies at local scale in specific areas.

  12. Application of Numerical Weather Models to Mitigating Atmospheric Artifacts in InSAR

    NASA Astrophysics Data System (ADS)

    Foster, J. H.; Kealy, J.; Businger, S.; Cherubini, T.; Brooks, B. A.; Albers, S. C.; Lu, Z.; Poland, M. P.; Chen, S.; Mass, C.

    2011-12-01

    A high-resolution weather "hindcasting" system to model the atmosphere at the time of SAR scene acquisitions has been established to investigate and mitigate the impact of atmospheric water vapor on InSAR deformation maps. Variations in the distributions of water vapor in the atmosphere between SAR acquisitions lead to artifacts in interferograms that can mask real ground motion signals. A database of regional numerical weather prediction model outputs generated by the University of Washington and U.C. Davis for times matching SAR acquisitions was used as "background" for higher resolution analyses of the atmosphere for Mount St Helens volcano in Washington, and Los Angeles in southern California. Using this background, we use LAPS to incrementally incorporate all other available meteorological data sets, including GPS, to explore the impact of additional observations on model accuracy. Our results suggest that, even with significant quantities of contemporaneously measured data, high-resolution atmospheric analyses are unable to model the timing and location of water vapor perturbations accurately enough to produce robust and reliable phase screens that can be directly subtracted from interferograms. Despite this, the analyses are able to reproduce the statistical character of the atmosphere with some confidence, suggesting that, in the absence of unusually dense in-situ measurements (such as is the case with GPS data for Los Angeles), weather analysis can play a valuable role in constraining the power-spectrum expected in an interferogram due to the troposphere. This could be used to provide objective weights to scenes during traditional stacking or to tune the filter parameters in time-series analyses.

  13. Numerical simulations of significant orographic precipitation in Madeira island

    NASA Astrophysics Data System (ADS)

    Couto, Flavio Tiago; Ducrocq, Véronique; Salgado, Rui; Costa, Maria João

    2016-03-01

    High-resolution simulations of heavy precipitation events with the MESO-NH model are presented and used to verify that increasing the horizontal resolution in zones of complex orography, such as Madeira island, improves the simulation of the spatial distribution and total amount of precipitation. The simulations succeeded in reproducing the general structure of the cloud systems over the ocean in the four periods of significant accumulated precipitation considered. The accumulated precipitation over Madeira, which occurred under four distinct synoptic situations, was better represented at 0.5 km horizontal resolution. Different spatial patterns of the rainfall distribution over Madeira have been identified.

  14. Performance evaluation of a non-hydrostatic regional climate model over the Mediterranean/Black Sea area and climate projections for the XXI century

    NASA Astrophysics Data System (ADS)

    Mercogliano, Paola; Bucchignani, Edoardo; Montesarchio, Myriam; Zollo, Alessandra Lucia

    2013-04-01

    In the framework of Work Package 4 (Developing integrated tools for environmental assessment) of the PERSEUS Project, high-resolution climate simulations have been performed with the aim of furthering knowledge of climate variability at regional scale, its causes and its impacts. CMCC is a non-profit centre whose aims are the promotion and coordination of research and scientific activities in the field of climate change. In this work, we show results of numerical simulations performed over a very wide area (13W-46E; 29-56N), which includes the Mediterranean and Black Seas, at a spatial resolution of 14 km, using the regional climate model COSMO-CLM. COSMO-CLM is a non-hydrostatic model for the simulation of atmospheric processes, developed by the DWD (Germany) for weather forecast services and subsequently updated by the CLM-Community for climate applications. It is the only documented numerical model system in Europe designed for spatial resolutions down to 1 km, with a range of applicability encompassing operational numerical weather prediction, regional climate modelling, the dispersion of trace gases and aerosols, and idealised studies, and it is applicable in all regions of the world, driven by the wide range of available simulations from global climate and NWP models. Different reasons justify the development of a regional model: the first is the increasing number of works in the literature asserting that regional models also have the capability to provide a more detailed description of climate extremes, which are often more important than mean values for natural and human systems. The second is that high-resolution modelling is well suited to providing information for impact assessment studies. At CMCC, regional climate modelling is part of an integrated simulation system and has been used in different European and African projects to provide qualitative and quantitative evaluations of hydrogeological and public health risks. A simulation covering the period 1971-2000 and driven by ERA40 reanalysis has been performed in order to assess the capability of the model to reproduce the present climate with "perfect boundary conditions". A comparison with the EOBS dataset, in terms of 2-metre temperature and precipitation, will be shown and discussed in order to analyze the capability to simulate the main features of the observed climate over a wide area at high spatial resolution. Then, a comparison between the results of COSMO-CLM driven by the global model CMCC-MED (whose atmospheric component is ECHAM5) and driven by ERA40 will be provided for a characterization of the errors induced by the global model. Finally, climate projections for the examined area for the XXI century, considering the RCP4.5 emission scenario, will be provided. In this work, special emphasis will be placed on the analysis of the capability to reproduce not only the average climate trend but also the extremes of the present and future climate, in terms of temperature, precipitation and wind.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jitendra; Collier, Nathan; Bisht, Gautam

    Vast carbon stocks stored in permafrost soils of Arctic tundra are at risk of release to the atmosphere under warming climate scenarios. Ice-wedge polygons in the low-gradient polygonal tundra create a complex mosaic of microtopographic features. This microtopography plays a critical role in regulating the fine-scale variability in thermal and hydrological regimes in the polygonal tundra landscape underlain by continuous permafrost. Modeling of the thermal regimes of this sensitive ecosystem is essential for understanding the landscape behavior under the current as well as a changing climate. Here, we present an end-to-end effort for high-resolution numerical modeling of thermal hydrology at real-world field sites, utilizing the best available data to characterize and parameterize the models. We also develop approaches to model the thermal hydrology of polygonal tundra and apply them at four study sites near Barrow, Alaska, spanning low-centered to transitional to high-centered polygons and representing a broad polygonal tundra landscape. A multiphase subsurface thermal hydrology model (PFLOTRAN) was developed and applied to study the thermal regimes at the four sites. Using a high-resolution lidar digital elevation model (DEM), microtopographic features of the landscape were characterized and represented in the high-resolution model mesh. The best available soil data from field observations and the literature were utilized to represent the complex heterogeneous subsurface in the numerical model. Simulation results demonstrate the ability of the developed modeling approach to capture, without recourse to model calibration, several aspects of the complex thermal regimes across the sites, and provide insights into the critical role of polygonal tundra microtopography in regulating the thermal dynamics of the carbon-rich permafrost soils. Moreover, areas of significant disagreement between model results and observations highlight the importance of field-based observations of soil thermal and hydraulic properties for modeling-based studies of permafrost thermal dynamics, and provide motivation and guidance for future observations that will help address model and data gaps affecting our current understanding of the system.

  16. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    NASA Astrophysics Data System (ADS)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in the top soil at spatial resolutions of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications for human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that act at a wide range of scales, from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis and exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, to field scales. Here we first compare the SMAP retrievals to Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U.S. at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of the best available data products, including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near-real-time VIC simulation is intended to provide a source of large-scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.

  17. A VAS-numerical model impact study using the Gal-Chen variational approach. [Visible Infrared Spin-Scan Radiometer Atmospheric Sounder (VAS)

    NASA Technical Reports Server (NTRS)

    Aune, Robert M.; Uccellini, Louis W.; Peterson, Ralph A.; Tuccillo, James J.

    1987-01-01

    Numerical experiments were conducted to assess the impact of incorporating temperature data from the VISSR Atmospheric Sounder (VAS) using the assimilation technique developed by Gal-Chen (1986), modified for use in the Mesoscale Atmospheric Simulation System (MASS) model. The scheme is designed to utilize the high temporal and horizontal resolution of satellite retrievals while maintaining the fine vertical structure generated by the model. This is accomplished by adjusting the model lapse rates to reflect thicknesses retrieved from VAS and by applying a three-dimensional variational adjustment that preserves the distribution of the geopotential fields in the model. A nudging technique, whereby the model temperature fields are gradually adjusted toward the updated temperature fields during model integration, is also tested. An adiabatic version of MASS is used in all experiments to better isolate mass-momentum imbalances. The method has a sustained impact over an 18 hr model simulation.
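
    A hedged sketch of the nudging (Newtonian relaxation) idea mentioned above, in generic form rather than the paper's exact formulation: during the integration the model temperature T is relaxed toward the VAS-updated analysis T_VAS on a relaxation time scale τ,

      \[
        \frac{\partial T}{\partial t} \;=\; M(T) \;+\; \frac{T_{\mathrm{VAS}} - T}{\tau} ,
      \]

    where M(T) stands for all other model tendencies; a large τ leaves the model essentially free, while a small τ pulls it strongly toward the retrieved fields.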

  18. Towards Improved Forecasts of Atmospheric and Oceanic Circulations over the Complex Terrain of the Eastern Mediterranean

    NASA Technical Reports Server (NTRS)

    Chronis, Themis; Case, Jonathan L.; Papadopoulos, Anastasios; Anagnostou, Emmanouil N.; Mecikalski, John R.; Haines, Stephanie L.

    2008-01-01

    Forecasting atmospheric and oceanic circulations accurately over the Eastern Mediterranean has proved to be an exceptional challenge. The existence of fine-scale topographic variability (land/sea coverage) and variations in seasonal dynamics can create strong spatial gradients in temperature, wind and other state variables, which numerical models may have difficulty capturing. The Hellenic Center for Marine Research (HCMR) is one of the main operational centers for wave forecasting in the eastern Mediterranean. Currently, HCMR's operational numerical weather/ocean prediction model is based on the coupled Eta/Princeton Ocean Model (POM). Since 1999, HCMR has also operated the POSEIDON floating buoys, providing state-of-the-art, real-time observations of several oceanic and surface atmospheric variables. This study presents a first attempt at improving both atmospheric and oceanic prediction by initializing a regional Numerical Weather Prediction (NWP) model with high-resolution sea surface temperatures (SST) from remotely sensed platforms in order to capture these small-scale characteristics.

  19. Investigation of CO2 capture using solid sorbents in a fluidized bed reactor: Cold flow hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Dietiker, Jean -Francois; Rogers, William

    2016-07-29

    Both experimental tests and numerical simulations were conducted to investigate the fluidization behavior of a solid CO2 sorbent with a mean diameter of 100 μm and a density of about 480 kg/m3, which belongs to Geldart's Group A powders. A carefully designed fluidized bed facility was used to perform a series of experimental tests to study the flow hydrodynamics. Numerical simulations using the two-fluid model indicated that the grid resolution has a significant impact on the bed expansion and bubbling flow behavior. Due to limited computational resources, no good grid-independent results were achieved using the standard models as far as the bed expansion is concerned. In addition, all simulations tended to substantially under-predict the bubble size. Effects of various model settings, including both numerical and physical parameters, were investigated, with no significant improvement observed. The latest filtered sub-grid drag model was then tested in the numerical simulations. Compared to the standard drag model, the filtered drag model with two markers not only predicted reasonable bed expansion but also yielded realistic bubbling behavior. As a result, a grid sensitivity study was conducted for the filtered sub-grid model, and its applicability and limitations were discussed.

  20. Spectral characteristics of mid-latitude continental convection from a global variable-resolution Voronoi-mesh atmospheric model

    NASA Astrophysics Data System (ADS)

    Wong, M.; Skamarock, W. C.

    2015-12-01

    Global numerical weather forecast tests were performed using the global nonhydrostatic atmospheric model, Model for Prediction Across Scales (MPAS), for the NOAA Storm Prediction Center 2015 Spring Forecast Experiment (May 2015) and the Plains Elevated Convection at Night (PECAN) field campaign (June to mid-July 2015). These two sets of forecasts were performed on 50-to-3 km and 15-to-3 km smoothly varying horizontal meshes, respectively. Both variable-resolution meshes have a nominal convection-permitting 3-km grid spacing over the entire continental US. Here we evaluate the limited-area (vs. global) spectra from these NWP simulations. We will show the simulated spectral characteristics of total kinetic energy, vertical velocity variance, and precipitation during these spring and summer periods, when diurnal continental convection is most active over the central US. Spectral characteristics of a high-resolution global 3-km simulation (essentially no nesting) from the 20 May 2013 Moore, OK tornado case are also shown. These characteristics include spectral scaling, shape, and anisotropy, as well as the effective resolution of the representation of continental convection in MPAS.

  1. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics) which are frequently used in geophysical fluid dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from computational fluid dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not treated deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters of the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only for inviscid flows without lateral boundaries in a shallow-water framework and is hence of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information depends on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References: [1] F. Rauser: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010. [2] F. Rauser, J. Marotzke, P. Korn: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted.
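
    As a hedged sketch of the error representation underlying the method (the standard Dual Weighted Residual form; the stochastic reinterpretation is the contribution of the approach and is only paraphrased here), the discretization error in a diagnostic J is written as a weighted sum of local residuals,

      \[
        J(u) - J(u_h) \;\approx\; \sum_{K} \omega_K \, \rho_K(u_h) ,
      \]

    where ρ_K(u_h) is the local residual of the discrete solution on cell K and ω_K is a weight derived from the dual (adjoint) problem associated with J. In the Goal Error Ensemble method the local contributions are not computed deterministically from a dual solution but are treated as realizations of a random process whose parameters are tuned with high-resolution near-initial model information, so the sum yields a distribution of plausible errors in the diagnostic.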

  2. An unstaggered central scheme on nonuniform grids for the simulation of a compressible two-phase flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Touma, Rony; Zeidan, Dia

    In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non-oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid and uses dual cells as an intermediate step while updating the numerical solution, thereby avoiding the resolution of the Riemann problems arising at the cell interfaces. We then apply the numerical scheme to solve a classical drift-flux problem. The obtained results are in good agreement with corresponding ones in the recent literature, thus confirming the potential of the proposed scheme.
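
    For orientation only, the sketch below shows a first-order central (Rusanov/local Lax-Friedrichs) update for a scalar conservation law on a nonuniform 1-D grid. It illustrates the Riemann-solver-free central-flux idea, but it is not the authors' second-order unstaggered dual-cell scheme, and the flux function, wave-speed bound and boundary treatment are placeholders.

      import numpy as np

      def central_step(u, x_centers, dt, flux, wave_speed):
          """One Rusanov (local Lax-Friedrichs) step on a nonuniform grid.

          u          : cell averages, shape (N,)
          x_centers  : strictly increasing cell-center coordinates, shape (N,)
          flux       : callable f(u) for the conservation law u_t + f(u)_x = 0
          wave_speed : callable returning a local bound on |f'(u)|
          """
          dx = np.gradient(x_centers)           # approximate nonuniform cell widths
          f = flux(u)
          # Numerical flux at the interface between cells i and i+1 (no Riemann solver).
          a = np.maximum(wave_speed(u[:-1]), wave_speed(u[1:]))
          f_iface = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (u[1:] - u[:-1])
          un = u.copy()
          un[1:-1] -= dt / dx[1:-1] * (f_iface[1:] - f_iface[:-1])
          return un                              # boundary cells left unchanged (crude BC)

      # Toy usage: linear advection u_t + u_x = 0 on a stretched grid.
      x = np.cumsum(np.linspace(0.01, 0.03, 200))
      u = np.exp(-200.0 * (x - 1.0) ** 2)
      for _ in range(100):
          u = central_step(u, x, dt=0.004, flux=lambda v: v,
                           wave_speed=lambda v: np.ones_like(v))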

  3. Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Klimczak, Marek; Cecot, Witold

    2018-01-01

    We present an enhancement of the multiscale finite element method (MsFEM) obtained by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with rapidly oscillating elasticity coefficients. No assumption on the periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used, and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases the efficiency of the computation. In this paper, details of the modified MsFEM are presented, and a numerical test on a Fichera corner domain is used to validate the proposed approach.
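
    As a hedged sketch of how MsFEM-type shape functions are typically constructed (written here for a generic scalar coefficient a(x); the paper treats elasticity and adds hp-adaptivity, which are not reproduced): on each coarse element K, the multiscale basis function φ_i^K attached to coarse node i solves a local fine-scale problem,

      \[
        -\nabla \cdot \bigl( a(x) \, \nabla \varphi_i^{K} \bigr) = 0 \ \ \text{in } K ,
        \qquad
        \varphi_i^{K} = \varphi_i^{\mathrm{lin}} \ \ \text{on } \partial K ,
        \qquad
        K^{\mathrm{eff}}_{ij} = \int_{K} a(x) \, \nabla \varphi_i^{K} \cdot \nabla \varphi_j^{K} \, dx ,
      \]

    where φ_i^lin is the standard (linear or polynomial) shape function imposed as boundary data and K^eff is the effective coarse stiffness matrix; the fine-scale heterogeneity thus enters the coarse system only through these precomputed basis functions.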

  4. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily covered by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can reproduce photon detector signals with coarse energy resolution. This paper extends that methodology by presenting physics enhancements and numerical treatments which allow an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced by full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
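
    A minimal sketch of the decomposition idea under simplifying assumptions (all arrays below are synthetic placeholders, not the paper's data): each independently simulated component is stored as an energy-binned response matrix, and a detector spectrum for a scenario is obtained by chaining the matrices with a source spectrum instead of rerunning a full transport simulation.

      import numpy as np

      n_bins = 64  # energy bins of the spectroscopic detector (placeholder)
      rng = np.random.default_rng(0)

      # Synthetic stand-ins for precomputed components:
      # rows = outgoing energy bin, columns = incoming energy bin.
      cargo_response = np.tril(rng.random((n_bins, n_bins)))     # shielding/cargo (downscatter only)
      detector_response = np.tril(rng.random((n_bins, n_bins)))  # detector response function

      # Normalize columns so each component attenuates rather than creates counts.
      cargo_response /= cargo_response.sum(axis=0, keepdims=True) * 1.5
      detector_response /= detector_response.sum(axis=0, keepdims=True) * 2.0

      source_spectrum = np.zeros(n_bins)
      source_spectrum[40] = 1.0e4   # an illustrative monoenergetic line source in bin 40

      # Chain the components: source -> cargo -> detector.
      detected_spectrum = detector_response @ (cargo_response @ source_spectrum)
      print(detected_spectrum.sum())  # total detected counts for this scenario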

  5. Toward the S3DVAR data assimilation software for the Caspian Sea

    NASA Astrophysics Data System (ADS)

    Arcucci, Rossella; Celestino, Simone; Toumi, Ralf; Laccetti, Giuliano

    2017-07-01

    Data Assimilation (DA) is an uncertainty quantification technique used to incorporate observed data into a prediction model in order to improve numerical forecast results. The forecasting model used for producing oceanographic predictions of the Caspian Sea is the Regional Ocean Modeling System (ROMS). Here we present the computational issues we are facing in the DA software we are developing (named S3DVAR), which implements a Scalable Three-Dimensional Variational Data Assimilation model for assimilating sea surface temperature (SST) values over the Caspian Sea, using observations provided by the Group for High Resolution Sea Surface Temperature (GHRSST). We present the algorithmic strategies we employ and the numerical issues for data collected in the two months that present the most significant variability in water temperature: August and March.
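
    For context, a hedged sketch of the standard three-dimensional variational cost function that a 3D-Var system of this kind minimizes (generic textbook form; S3DVAR's scalable decomposition and preconditioning are not shown):

      \[
        J(\mathbf{x}) \;=\;
        \tfrac{1}{2} (\mathbf{x} - \mathbf{x}_b)^{\mathsf T} \mathbf{B}^{-1} (\mathbf{x} - \mathbf{x}_b)
        \;+\;
        \tfrac{1}{2} \bigl( \mathbf{y} - H(\mathbf{x}) \bigr)^{\mathsf T} \mathbf{R}^{-1} \bigl( \mathbf{y} - H(\mathbf{x}) \bigr) ,
      \]

    where x_b is the ROMS background state, y the GHRSST SST observations, H the observation operator, and B and R the background and observation error covariance matrices; the analysis is the state x that minimizes J.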

  6. Improvement of High-Resolution Tropical Cyclone Structure and Intensity Forecasts using COAMPS-TC

    DTIC Science & Technology

    2013-09-30

    scientific community including the recent T-PARC/TCS08, ITOP, and HS3 field campaigns to build upon the existing modeling capabilities. We will...heating and cooling rates in developing and non-developing tropical disturbances during TCS-08: radar-equivalent retrievals from mesoscale numerical

  7. Development and testing of a simple inertial formulation of the shallow water equations for flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Fewtrell, Timothy; Bates, Paul; Horritt, Matthew

    2010-05-01

    This abstract describes the development of a new set of equations derived from 1D shallow water theory for use in 2D storage cell inundation models. The new equation set is designed to be solved explicitly at very low computational cost, and is here tested against a suite of four analytical and numerical test cases of increasing complexity. In each case the predicted water depths compare favourably to analytical solutions or to benchmark results from the optimally stable diffusive storage cell code of Hunter et al. (2005). For the most complex test, involving the fine spatial resolution simulation of flow in a topographically complex urban area, the root mean squared difference between the new formulation and the model of Hunter et al. is ~1 cm. However, unlike diffusive storage cell codes, where the stable time step scales with (Δx)², the new equation set developed here represents shallow water wave propagation, so its stability is controlled by the Courant-Friedrichs-Lewy condition and the stable time step instead scales with Δx. This allows use of a stable time step that is 1-3 orders of magnitude greater for typical cell sizes than that possible with diffusive storage cell models and results in commensurate reductions in model run times. The maximum speed-up achieved over a diffusive storage cell model was 1120x in these tests, although the actual value seen will depend on model resolution, water depth and surface gradient. Solutions using the new equation set are shown to be relatively grid-independent for the conditions considered, given the numerical diffusion likely at coarse model resolution. In addition, the inertial formulation appears to have an intuitively correct sensitivity to friction; however, small instabilities and increased errors in predicted depth were noted when Manning's n = 0.01. These small instabilities are likely to be a result of the numerical scheme employed, in which friction acts to stabilise the solution, although this scheme is still widely used in practice. The new equations are likely to find widespread application in many types of flood inundation modelling and should provide a useful additional tool, alongside more established model formulations, for a variety of flood risk management studies.
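
    A hedged sketch of the kind of explicit inertial update and time-step constraint described above, in the form commonly quoted for this family of storage cell schemes (the notation and the semi-implicit friction treatment are standard, but the exact published formulation should be consulted): for the unit-width discharge q between two cells, with flow depth h_f, water-surface slope S and Manning coefficient n,

      \[
        q^{t+\Delta t} \;=\;
        \frac{q^{t} \;-\; g \, h_f \, \Delta t \, S}
             {1 \;+\; g \, h_f \, \Delta t \, n^{2} \, |q^{t}| \, / \, h_f^{10/3}} ,
        \qquad
        \Delta t \;=\; \alpha \, \frac{\Delta x}{\sqrt{g \, h_{\max}}} , \quad 0 < \alpha \le 1 ,
      \]

    so the friction term is handled semi-implicitly for stability, while the time step obeys a CFL-type condition that scales linearly with Δx rather than with (Δx)².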

  8. Streamflow simulation for continental-scale river basins

    NASA Astrophysics Data System (ADS)

    Nijssen, Bart; Lettenmaier, Dennis P.; Liang, Xu; Wetzel, Suzanne W.; Wood, Eric F.

    1997-04-01

    A grid network version of the two-layer variable infiltration capacity (VIC-2L) macroscale hydrologic model is described. VIC-2L is a hydrologically based soil-vegetation-atmosphere transfer scheme designed to represent the land surface in numerical weather prediction and climate models. The grid network scheme allows streamflow to be predicted for large continental rivers. Off-line (observed and estimated surface meteorological and radiative forcings) applications of the model to the Columbia River (1° latitude-longitude spatial resolution) and Delaware River (0.5° resolution) are described. The model performed quite well in both applications, reproducing the seasonal hydrograph and annual flow volumes to within a few percent. Difficulties in reproducing observed streamflow in the arid portion of the Snake River basin are attributed to groundwater-surface water interactions, which are not modeled by VIC-2L.

  9. Numerical modeling of landslide-generated tsunami using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Wilson, Cian; Collins, Gareth; Desousa Costa, Patrick; Piggott, Matthew

    2010-05-01

    Landslides impacting into or occurring under water generate waves, which can have devastating environmental consequences. Depending on the characteristics of the landslide the waves can have significant amplitude and potentially propagate over large distances. Linear models of classical earthquake-generated tsunamis cannot reproduce the highly nonlinear generation mechanisms required to accurately predict the consequences of landslide-generated tsunamis. Also, laboratory-scale experimental investigation is limited to simple geometries and short time-scales before wave reflections contaminate the data. Computational fluid dynamics models based on the nonlinear Navier-Stokes equations can simulate landslide-tsunami generation at realistic scales. However, traditional chessboard-like structured meshes introduce superfluous resolution and hence the computing power required for such a simulation can be prohibitively high, especially in three dimensions. Unstructured meshes allow the grid spacing to vary rapidly from high resolution in the vicinity of small scale features to much coarser, lower resolution in other areas. Combining this variable resolution with dynamic mesh adaptivity allows such high resolution zones to follow features like the interface between the landslide and the water whilst minimising the computational costs. Unstructured meshes are also better suited to representing complex geometries and bathymetries allowing more realistic domains to be simulated. Modelling multiple materials, like water, air and a landslide, on an unstructured adaptive mesh poses significant numerical challenges. Novel methods of interface preservation must be considered and coupled to a flow model in such a way that ensures conservation of the different materials. Furthermore this conservation property must be maintained during successive stages of mesh optimisation and interpolation. In this paper we validate a new multi-material adaptive unstructured fluid dynamics model against the well-known Lituya Bay landslide-generated wave experiment and case study [1]. In addition, we explore the effect of physical parameters, such as the shape, velocity and viscosity of the landslide, on wave amplitude and run-up, to quantify their influence on the landslide-tsunami hazard. As well as reproducing the experimental results, the model is shown to have excellent conservation and bounding properties. It also requires fewer nodes than an equivalent resolution fixed mesh simulation, therefore minimising at least one aspect of the computational cost. These computational savings are directly transferable to higher dimensions and some initial three dimensional results are also presented. These reproduce the experiments of DiRisio et al. [2], where an 80cm long landslide analogue was released from the side of an 8.9m diameter conical island in a 50 × 30m tank of water. The resulting impact between the landslide and the water generated waves with an amplitude of 1cm at wave gauges around the island. The range of scales that must be considered in any attempt to numerically reproduce this experiment makes it an ideal case study for our multi-material adaptive unstructured fluid dynamics model. [1] FRITZ, H. M., MOHAMMED, F., & YOO, J. 2009. Lituya Bay Landslide Impact Generated Mega-Tsunami 50th Anniversary. Pure and Applied Geophysics, 166(1), 153-175. [2] DIRISIO, M., DEGIROLAMO, P., BELLOTTI, G., PANIZZO, A., ARISTODEMO, F.,

  10. Assessing Australian Rainfall Projections in Two Model Resolutions

    NASA Astrophysics Data System (ADS)

    Taschetto, A.; Haarsma, R. D.; Sen Gupta, A.

    2016-02-01

    The Australian climate is projected to change with increases in greenhouse gases. The IPCC reports an increase in extreme daily rainfall across the country. At the same time, mean rainfall over southeast Australia is projected to decrease during austral winter but to increase during austral summer, mainly in association with changes in the surrounding oceans. Climate models agree better on the future reduction of average rainfall over the southern regions of Australia than on the increase in extreme rainfall events. One of the reasons for this disagreement may be related to climate model limitations in simulating the observed mechanisms associated with mid-latitude weather systems, in particular due to coarse model resolutions. In this study we investigate how changes in sea surface temperature (SST) affect Australian mean and extreme rainfall under global warming, using a suite of numerical experiments at two model resolutions: about 126 km (T159) and 25 km (T799). The numerical experiments are performed with the Earth system model EC-EARTH. Two 6-member ensembles are produced, one for present-day conditions and one for a future scenario. The present-day ensemble is forced with observed daily SST from the NOAA National Climatic Data Center from 2002 to 2006. The future scenario simulation is integrated from 2094 to 2098 using the present-day SST field with the future SST change added, the latter derived from a 17-member ensemble based on the RCP4.5 scenario. Preliminary results show an increase in extreme rainfall events over Tasmania associated with enhanced convection driven by Tasman Sea warming. We will further discuss how the projected changes in SST will impact the southern mid-latitude weather systems that ultimately affect Australian rainfall.

  11. Francis-99 turbine numerical flow simulation of steady state operation using RANS and RANS/LES turbulence model

    NASA Astrophysics Data System (ADS)

    Minakov, A.; Platonov, D.; Sentyabov, A.; Gavrilov, A.

    2017-01-01

    We performed numerical simulations of the flow in a laboratory model of a Francis hydroturbine at three operating regimes, using two eddy-viscosity (EVM) and one Reynolds-stress (RSM) RANS models (realizable k-ɛ, k-ω SST, LRR), detached-eddy simulation (DES), and large-eddy simulation (LES). The calculated results were compared with the experimental data. Unlike the linear EVMs, the RSM, DES, and LES reproduced well the mean velocity components and the pressure pulsations in the diffuser draft tube. Despite relatively coarse meshes and insufficient resolution of the near-wall region, LES and DES also reproduced well the intrinsic flow unsteadiness, the dominant flow structures, and the associated pressure pulsations in the draft tube.

  12. Assessment of the Impact of Climate Change on the Water Balances and Flooding Conditions of Peninsular Malaysia watersheds by a Coupled Numerical Climate Model - Watershed Hydrology Model

    NASA Astrophysics Data System (ADS)

    Ercan, A.; Kavvas, M. L.; Ishida, K.; Chen, Z. Q.; Amin, M. Z. M.; Shaaban, A. J.

    2017-12-01

    Impacts of climate change on hydrologic processes under future climate conditions were assessed over various watersheds of Peninsular Malaysia by means of a coupled regional climate and physically based hydrology model that utilized an ensemble of future climate change projections. An ensemble of 15 different future climate realizations from coarse-resolution global climate model (GCM) projections for the 21st century was dynamically downscaled to 6 km resolution over Peninsular Malaysia by a regional numerical climate model, which was then coupled with the watershed hydrology model WEHY through the atmospheric boundary layer over the selected watersheds of Peninsular Malaysia. Hydrologic simulations were carried out at hourly increments and at hillslope scale in order to assess the impacts of climate change on the water balances and flooding conditions of the selected watersheds during the 21st century. The coupled regional climate and hydrology model was run for a duration of 90 years for each of the 15 realizations. It is demonstrated that the increase in mean monthly flows due to the impact of expected climate change during 2040-2100 is statistically significant at the selected watersheds. Furthermore, the flood frequency analyses for the selected watersheds indicate an overall increasing trend in the second half of the 21st century.

  13. Enviro-HIRLAM/HARMONIE Studies in ECMWF HPC EnviroAerosols Project

    NASA Astrophysics Data System (ADS)

    Hansen Sass, Bent; Mahura, Alexander; Nuterman, Roman; Baklanov, Alexander; Palamarchuk, Julia; Ivanov, Serguei; Pagh Nielsen, Kristian; Penenko, Alexey; Edvardsson, Nellie; Stysiak, Aleksander Andrzej; Bostanbekov, Kairat; Amstrup, Bjarne; Yang, Xiaohua; Ruban, Igor; Bergen Jensen, Marina; Penenko, Vladimir; Nurseitov, Daniyar; Zakarin, Edige

    2017-04-01

    The EnviroAerosols on ECMWF HPC project (2015-2017), "Enviro-HIRLAM/HARMONIE model research and development for online integrated meteorology-chemistry-aerosols feedbacks and interactions in weather and atmospheric composition forecasting", aims to analyse the importance of meteorology-chemistry/aerosol interactions and to develop efficient techniques for on-line coupling of numerical weather prediction and atmospheric chemical transport via process-oriented parameterizations and feedback algorithms, which will improve both numerical weather prediction and atmospheric composition forecasts. Two main application areas of on-line integrated modelling are considered: (i) improved numerical weather prediction with short-term feedbacks of aerosols and chemistry on the formation and development of meteorological variables, and (ii) improved atmospheric composition forecasting with an on-line integrated meteorological forecast and two-way feedbacks between aerosols/chemistry and meteorology. During 2015-2016 several research projects were realized. First, the study "On-line Meteorology-Chemistry/Aerosols Modelling and Integration for Risk Assessment: Case Studies" focused on the assessment of scenarios with accidental and continuous emissions of sulphur dioxide for case studies for Atyrau (Kazakhstan), near the northern part of the Caspian Sea, and for metallurgical enterprises on the Kola Peninsula (Russia), with GIS integration of modelling results into the RANDOM (Risk Assessment of Nature Detriment due to Oil spill Migration) system. Second, the studies "The sensitivity of precipitation simulations to the soot aerosol presence" and "The precipitation forecast sensitivity to data assimilation on a very high resolution domain" focused on the sensitivity and changes of the precipitation life cycle under black-carbon-polluted conditions over Scandinavia. Third, the studies "Aerosol effects over China investigated with a high resolution convection permitting weather model" and "Meteorological and chemical urban scale modelling for Shanghai metropolitan area" focused on aerosol effects and the influence of urban areas in China at regional, subregional and urban scales. Fourth, the study "Direct variational data assimilation algorithm for atmospheric chemistry data with transport and transformation model" focused on testing a chemical data assimilation algorithm for in situ concentration measurements on a real-data scenario. Fifth, the study "Aerosol influence on High Resolution NWP HARMONIE Operational Forecasts" focused on the impact of sea salt aerosols on numerical weather prediction during low precipitation events. Finally, the study "Impact of regional afforestation on climatic conditions in metropolitan areas: case study of Copenhagen" focused on the impact of forest and land-cover change on the formation and development of temperature regimes in the Copenhagen metropolitan area of Denmark. Selected results and findings will be presented and discussed.

  14. Experimental High-Resolution Land Surface Prediction System for the Vancouver 2010 Winter Olympic Games

    NASA Astrophysics Data System (ADS)

    Belair, S.; Bernier, N.; Tong, L.; Mailhot, J.

    2008-05-01

    The 2010 Winter Olympic and Paralympic Games will take place in Vancouver, Canada, from 12 to 28 February 2010 and from 12 to 21 March 2010, respectively. In order to provide the best possible guidance achievable with current state-of-the-art science and technology, Environment Canada is currently setting up an experimental numerical prediction system for these special events. This system consists of a 1-km limited-area atmospheric model that will be integrated for 16 h, twice a day, with improved microphysics compared with the system currently operational at the Canadian Meteorological Centre. In addition, several new and original tools will be used to adapt and refine predictions near and at the surface. Very high-resolution two-dimensional surface systems, with 100-m and 20-m grid sizes, will cover the Vancouver Olympic area. Using adaptation methods to improve the forcing from the lower-resolution atmospheric models, these 2D surface models better represent surface processes and thus lead to better predictions of snow conditions and near-surface air temperature. Based on a similar strategy, a single-point model will be implemented to better predict surface characteristics at each station of an observing network especially installed for the 2010 events. The main advantage of this single-point system is that surface observations are used as forcing for the land surface models, and can even be assimilated (although this is not expected in the first version of this new tool) to improve the initial conditions of surface variables such as snow depth and surface temperatures. Another adaptation tool, based on 2D stationary solutions of a simple dynamical system, will be used to produce near-surface winds on the 100-m grid, coherent with the high-resolution orography. The configuration of the experimental numerical prediction system will be presented at the conference, together with preliminary results for winter 2007-2008.

  15. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, guided by a combination of short (<10 days) and longer (1-year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify model feature sensitivity to parameter changes. The CAPT tests have been found effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that the approach is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter used to verify the results in a climate context, along with assessment in greater detail once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.

  16. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.
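    A minimal sketch of the idea described above, under loose assumptions: local discretization errors are drawn as realizations of a random process (here a Gaussian whose spread is taken as given), weighted by goal sensitivities, and accumulated into a posterior ensemble of the goal error. All names, the Gaussian error model, and the parameter values are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def goal_error_ensemble(local_error_std, sensitivities, n_members=50, seed=0):
            """Hedged sketch: posterior ensemble of a goal's discretization error.

            local_error_std : (n_steps, n_cells) std of local errors, assumed estimated
                              from short runs of the model at two resolutions.
            sensitivities   : (n_steps, n_cells) goal sensitivities, assumed obtained
                              via automatic differentiation.
            """
            rng = np.random.default_rng(seed)
            members = np.zeros(n_members)
            for m in range(n_members):
                # one realization of the local-error random process (Gaussian assumption)
                local_err = rng.normal(0.0, local_error_std)
                # weight local errors by their influence on the goal and accumulate
                members[m] = np.sum(sensitivities * local_err)
            return members  # spread of this ensemble ~ goal discretization uncertainty

        # toy usage with made-up magnitudes
        std = np.full((10, 100), 1e-4)
        sens = np.ones((10, 100))
        ens = goal_error_ensemble(std, sens)
        print(ens.mean(), ens.std())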

  17. Computation of Surface Laplacian for tri-polar ring electrodes on high-density realistic geometry head model.

    PubMed

    Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding

    2017-07-01

    Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers to diagnose and understand various brain diseases. Due to its nature, EEG has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the Surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for TCRE based on a realistic geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense parts match the small structural components of the TCRE. Other areas were left without dense mesh in order to reduce the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and assessed possible numerical errors in comparison with a low-density model. Finally, with the achieved accuracy, we present the computed forward lead field of the SL for TCRE for the first time in a realistic geometry head model and demonstrate that it has better spatial resolution than the SL computed from classic EEG recordings.
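    For reference, the quantity being computed is the surface Laplacian of the scalp potential V; a standard textbook definition in local tangent-plane coordinates (x, y) is shown below (this is the usual definition, not an equation taken from the paper):

        \mathrm{SL}(V) \;=\; \nabla_{s}^{2} V \;=\; \frac{\partial^{2} V}{\partial x^{2}} + \frac{\partial^{2} V}{\partial y^{2}}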

  18. Assessment of wind energy potential in Poland

    NASA Astrophysics Data System (ADS)

    Starosta, Katarzyna; Linkowska, Joanna; Mazur, Andrzej

    2014-05-01

    The aim of the presentation is to show the suitability of using numerical model wind speed forecasts for wind power industry applications in Poland. In accordance with the guidelines of the European Union, the use of wind energy in Poland is rapidly increasing. According to the report of the Energy Regulatory Office from 30 March 2013, the installed capacity of wind power in Poland was 2807 MW from 765 wind power stations. Wind energy is strongly dependent on meteorological conditions. Based on climatological wind speed data, potential energy zones within the area of Poland have been delineated (H. Lorenc). They are the first criterion for assessing the location of a wind farm. However, for exact monitoring of a given wind farm location, prognostic data from numerical model forecasts are necessary. For practical interpretation and further post-processing, verification of the model data is very important. The Polish Institute of Meteorology and Water Management - National Research Institute (IMWM-NRI) runs the operational model COSMO (Consortium for Small-scale Modelling, version 4.8) using two nested domains at horizontal resolutions of 7 km and 2.8 km. The model produces 36-hour and 78-hour forecasts from 00 UTC, for the 2.8 km and 7 km domain resolutions respectively. Numerical forecasts were compared with observations from 60 SYNOP and 3 TEMP stations in Poland, using VERSUS2 (Unified System Verification Survey 2) and the R package. For every zone a set of statistical indices (ME, MAE, RMSE) was calculated. Forecast errors for aerological profiles are shown for the Polish TEMP stations at Wrocław, Legionowo and Łeba. The current studies are connected with the COST Action ES1002 WIRE (Weather Intelligence for Renewable Energies).

  19. Turbulence sources in mountain terrain: results from MATERHORN program

    NASA Astrophysics Data System (ADS)

    Di Sabatino, Silvana; Leo, Laura S.; Fernando, Harindra J. S.; Pardyjak, Eric R.; Hocut, Chris M.

    2016-04-01

    Improving high-resolution numerical weather prediction in complex terrain is essential for the many applications involving mountain weather. It is commonly recognized that high-intensity weather phenomena near mountains are a safety hazard to aircraft and unmanned aerial vehicles, yet the prediction of highly variable weather is often unsatisfactory due to inadequate resolution or the lack of the correct dynamics in the model. Improving mountain weather forecasts has been the goal of the interdisciplinary Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) program (2011-2016). In this paper, we report some of the findings, focusing on several mechanisms of turbulence generation in near-surface flows in the vicinity of an isolated mountain. Specifically, we discuss nocturnal flows under weak synoptic forcing. It has been demonstrated that such calm conditions are hard to predict in typical weather prediction models, since the forcing is dominated by local features that are poorly represented in numerical models. It is found that downslope flows in calm and clear nights develop rapidly after sunset and usually persist for a few hours. Owing to multiscale flow interactions, slope flows appear to be intermittent and disturbed, with a tendency to decay through the night yet to be periodically and unexpectedly regenerated. One of the interesting features herein is the presence of oscillations that can be associated with different types of waves (e.g. internal and trapped waves) which may break to produce extra mixing. Pulsations of the katabatic flow at the critical internal-wave frequency, flow intrusions arriving from different topographies, and shear layers of flow fanning out from the gaps all contribute to the weakly or intermittently turbulent state. Understanding of low-frequency contributions to the total kinetic energy represents a step forward in modelling sub-grid effects in numerical models used for aviation applications.

  20. 3D Geodynamic Modelling Reveals Stress and Strain Partitioning within Continental Rifting

    NASA Astrophysics Data System (ADS)

    Rey, P. F.; Mondy, L. S.; Duclaux, G.; Moresi, L. N.

    2014-12-01

    The relative movement between two divergent rigid plates on a sphere can be described using an Euler pole and an angular velocity. On Earth, this typically results in extensional velocities increasing linearly as a function of the distance from the pole (for example in the South Atlantic, North Atlantic, Woodlark Basin, Red Sea Basin, etc.). This property has strong implications for continental rifting and the formation of passive margins, given the role that extensional velocity plays in rift style (wide or narrow), fault pattern, subsidence history, and magmatism. Until now, this scissor-style opening has been approached via suites of 2D numerical models of contrasting extensional velocities, complementing field geology and geophysics. New advances in numerical modelling tools and computational hardware have enabled us to investigate the geodynamics of this problem in a 3D self-consistent high-resolution context. Using Underworld at a grid resolution of 2 km over a domain of 500 km x 500 km x 180 km, we have explored the role of the velocity gradient on the strain pattern, style of rifting, and decompression melting along the margin. We find that the three-dimensionality of this problem is important. The rise of the asthenosphere is enhanced in 2D models compared to 3D numerical solutions, due to the limited volume of material available in 2D. This leads to oceanisation occurring significantly sooner in 2D models. The 3D model shows that there are significant time- and space-dependent flows parallel to the rift axis. A similar picture emerges from the stress field, which shows partitioning in time and space, including regions of compression separating areas dominated by extension. The strain pattern shows strong zonation along the rift axis, with deformation becoming increasingly localised with extension velocity and through time.

  1. SURFEX v8.0 interface with OASIS3-MCT to couple atmosphere with hydrology, ocean, waves and sea-ice models, from coastal to global scales

    NASA Astrophysics Data System (ADS)

    Voldoire, Aurore; Decharme, Bertrand; Pianezze, Joris; Lebeaupin Brossier, Cindy; Sevault, Florence; Seyfried, Léo; Garnier, Valérie; Bielli, Soline; Valcke, Sophie; Alias, Antoinette; Accensi, Mickael; Ardhuin, Fabrice; Bouin, Marie-Noëlle; Ducrocq, Véronique; Faroux, Stéphanie; Giordani, Hervé; Léger, Fabien; Marsaleix, Patrick; Rainaud, Romain; Redelsperger, Jean-Luc; Richard, Evelyne; Riette, Sébastien

    2017-11-01

    This study presents the principles of the new coupling interface based on the SURFEX multi-surface model and the OASIS3-MCT coupler. As SURFEX can be plugged into several atmospheric models, it can be used in a wide range of applications, from global and regional coupled climate systems to high-resolution numerical weather prediction systems or very fine-scale models dedicated to process studies. The objective of this development is to build and share a common structure for the atmosphere-surface coupling of all these applications, involving on the one hand atmospheric models and on the other hand ocean, ice, hydrology, and wave models. The numerical and physical principles of the SURFEX interface between the different component models are described, and the different coupled systems in which the SURFEX OASIS3-MCT-based coupling interface is already implemented are presented.

  2. Simulating seasonal tropical cyclone intensities at landfall along the South China coast

    NASA Astrophysics Data System (ADS)

    Lok, Charlie C. F.; Chan, Johnny C. L.

    2018-04-01

    A numerical method is developed using a regional climate model (RegCM3) and the Weather Research and Forecasting (WRF) model to predict seasonal tropical cyclone (TC) intensities at landfall for the South China region. In designing the model system, three sensitivity tests were performed to identify the optimal choice of the RegCM3 model domain, WRF horizontal resolution and WRF physics packages. Driven by the National Centers for Environmental Prediction Climate Forecast System Reanalysis dataset, the model system can produce a reasonable distribution of TC intensities at landfall on a seasonal scale. Analyses of the model output suggest that the strength and extent of the subtropical ridge in the East China Sea are crucial to simulating TC landfalls in the Guangdong and Hainan provinces. This study demonstrates the potential for predicting TC intensities at landfall on a seasonal basis as well as for projecting future climate changes using numerical models.

  3. Diffusion impact on atmospheric moisture transport

    NASA Astrophysics Data System (ADS)

    Moseley, C.; Haerter, J.; Göttel, H.; Hagemann, S.; Jacob, D.

    2009-04-01

    To ensure numerical stability, many global and regional climate models employ numerical diffusion to dampen short-wavelength modes. Terrain-following sigma diffusion is known to cause unphysical effects near the surface in orographically structured regions. These can be reduced by applying z-diffusion on geopotential height levels. We investigate the effect of the diffusion scheme on atmospheric moisture transport and precipitation formation at different resolutions in the European region. With respect to a better understanding of diffusion in current and future grid-space global models, present-day regional models may serve as the appropriate tool for studies of the impact of diffusion schemes: results can easily be constrained to a small test region and checked against reliable observations, which often are unavailable on a global scale. Special attention is drawn to the Alps - a region of strong topographic gradients and good observational coverage. Our study is further motivated by the appearance of the "summer drying problem" in South Eastern Europe. This too warm and too dry simulation of climate is common to many regional climate models and also to some global climate models, and remains an unsolved problem in the community. We perform a systematic comparison of the two diffusion schemes with respect to the hydrological cycle. In particular, we investigate how local meteorological quantities - such as the atmospheric moisture in the region east of the Alps - depend on the spatial model resolution. Higher model resolution would lead to a more accurate representation of the topography and entail larger gradients in the Alps. This could lead to consequently stronger transport of moisture along the slopes in the case of sigma-diffusion, with subsequent orographic precipitation, whereas the effect could be qualitatively different in the case of z-diffusion. For our study, we analyse a sequence of simulations of the regional climate model REMO employing the different diffusion methods over Europe. For these simulations, REMO was forced at the lateral boundaries with ERA40 reanalysis data for a five-year period. For our higher-resolution simulations we employ the double nesting technique.

  4. Theoretical limit of spatial resolution in diffuse optical tomography using a perturbation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konovalov, A B; Vlasov, V V

    2014-03-28

    We have assessed the limit of spatial resolution of time-domain diffuse optical tomography (DOT) based on a perturbation reconstruction model. From the viewpoint of structure reconstruction accuracy, three different approaches to solving the inverse DOT problem are compared. The first approach involves reconstruction of diffuse tomograms from straight lines, the second from average curvilinear trajectories of photons, and the third from total banana-shaped distributions of photon trajectories. In order to obtain estimates of resolution, we have derived analytical expressions for the point spread function and modulation transfer function, and have performed a numerical experiment on reconstruction of rectangular scattering objects with circular absorbing inhomogeneities. It is shown that in passing from reconstruction from straight lines to reconstruction using distributions of photon trajectories we can improve resolution by almost an order of magnitude and exceed the reconstruction accuracy of the multi-step algorithms used in DOT. (optical tomography)
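    The resolution metrics mentioned above are the standard ones; as a reminder (textbook definitions, not formulas quoted from the paper), the modulation transfer function is the normalized magnitude of the Fourier transform of the point spread function:

        \mathrm{MTF}(\nu) \;=\; \frac{\left|\int \mathrm{PSF}(x)\, e^{-2\pi i \nu x}\, \mathrm{d}x\right|}{\left|\int \mathrm{PSF}(x)\, \mathrm{d}x\right|}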

  5. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    NASA Astrophysics Data System (ADS)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (VDP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory (WENO) scheme, and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-resolution scheme results match well with the Buckley-Leverett (BL) analytical solution without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative study of numerical examples of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. The finite number of lines used in the TBM resulted in a visible banding artifact, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in developing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
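    As an illustration of the ingredients named above, a minimal sketch of the SUPERBEE flux limiter (a standard published formula, not code from the paper), together with a stripped-down lognormal permeability generator that sets the log-permeability spread from the Dykstra-Parsons coefficient; the spatial correlation via autocorrelation lengths used by the authors is deliberately omitted, and all parameter values are made up.

        import numpy as np

        def superbee(r):
            """SUPERBEE flux limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
            r = np.asarray(r, dtype=float)
            return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                              np.minimum(r, 2.0)))

        def lognormal_permeability(nx, ny, v_dp, k_mean=100.0, seed=0):
            """Uncorrelated lognormal permeability field (mD) with a target
            Dykstra-Parsons coefficient, using sigma_ln(k) = -ln(1 - V_DP)
            (standard relation for a lognormal distribution; no spatial
            correlation is imposed in this sketch)."""
            sigma = -np.log(1.0 - v_dp)
            rng = np.random.default_rng(seed)
            return k_mean * np.exp(sigma * rng.standard_normal((nx, ny)))

        print(superbee(np.array([-1.0, 0.5, 1.0, 3.0])))   # [0. 1. 1. 2.]
        print(lognormal_permeability(4, 4, v_dp=0.7).round(1))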

  6. Final Report for ''Numerical Methods and Studies of High-Speed Reactive and Non-Reactive Flows''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwendeman, D W

    2002-11-20

    The work carried out under this subcontract involved the development and use of an adaptive numerical method for the accurate calculation of high-speed reactive flows on overlapping grids. The flow is modeled by the reactive Euler equations with an assumed equation of state and with various reaction rate models. A numerical method has been developed to solve the nonlinear hyperbolic partial differential equations in the model. The method uses an unsplit, shock-capturing scheme, with a Godunov-type scheme to compute fluxes and a Runge-Kutta error control scheme to compute the source term modeling the chemical reactions. An adaptive mesh refinement (AMR) scheme has been implemented in order to locally increase grid resolution. The numerical method uses composite overlapping grids to handle complex flow geometries. The code is part of the ''Overture-OverBlown'' framework of object-oriented codes [1, 2], and the development has occurred in close collaboration with Bill Henshaw and David Brown, and other members of the Overture team within CASC. During the period of this subcontract, a number of tasks were accomplished, including: (1) an extension of the numerical method to handle ''ignition and growth'' reaction models and a JWL equation of state; (2) an improvement in the efficiency of the AMR scheme and the error estimator; (3) the addition of a scheme of numerical dissipation designed to suppress numerical oscillations/instabilities near expanding detonations and along grid overlaps; and (4) an exploration of the evolution to detonation in an annulus and of detonation failure in an expanding channel.

  7. Early Earth plume-lid tectonics: A high-resolution 3D numerical modelling approach

    NASA Astrophysics Data System (ADS)

    Fischer, R.; Gerya, T.

    2016-10-01

    Geological-geochemical evidence points towards higher mantle potential temperature and a different type of tectonics (global plume-lid tectonics) in the early Earth (>3.2 Ga) compared to the present day (global plate tectonics). In order to investigate tectono-magmatic processes associated with plume-lid tectonics and crustal growth under hotter mantle temperature conditions, we conduct a series of 3D high-resolution magmatic-thermomechanical models with the finite-difference code I3ELVIS. No external plate tectonic forces are applied, in order to isolate 3D effects of various plume-lithosphere and crust-mantle interactions. Results of the numerical experiments show two distinct phases in coupled crust-mantle evolution: (1) a longer (80-100 Myr) and relatively quiet 'growth phase', which is marked by growth of crust and lithosphere, followed by (2) a short (∼20 Myr) and catastrophic 'removal phase', where unstable parts of the crust and mantle lithosphere are removed by eclogitic dripping and later delamination. This modelling suggests that the early Earth plume-lid tectonic regime followed a pattern of episodic growth and removal, also called episodic overturn, with a periodicity of ∼100 Myr.

  8. Multibeam interferometric illumination as the primary source of resolution in optical microscopy

    NASA Astrophysics Data System (ADS)

    Ryu, J.; Hong, S. S.; Horn, B. K. P.; Freeman, D. M.; Mermelstein, M. S.

    2006-04-01

    High-resolution images of a fluorescent target were obtained using a low-resolution optical detector by illuminating the target with interference patterns produced with 31 coherent beams. The beams were arranged in a cone with 78° half angle to produce illumination patterns consistent with a numerical aperture of 0.98. High-resolution images were constructed from low-resolution images taken with 930 different illumination patterns. Results for optical detectors with numerical apertures of 0.1 and 0.2 were similar, demonstrating that the resolution is primarily determined by the illuminator and not by the low-resolution detector. Furthermore, the long working distance, large depth of field, and large field of view of the low-resolution detector are preserved.
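    A quick consistency check of the quoted numbers (simple arithmetic, air immersion assumed): a beam cone of 78° half-angle gives an illumination numerical aperture of

        \mathrm{NA}_{\mathrm{ill}} \;=\; n \sin\theta \;\approx\; 1 \times \sin 78^{\circ} \;\approx\; 0.98,

    which matches the stated value of 0.98 and is roughly five to ten times the numerical aperture of the low-resolution detectors used.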

  9. 3D hydrodynamic simulations of carbon burning in massive stars

    NASA Astrophysics Data System (ADS)

    Cristini, A.; Meakin, C.; Hirschi, R.; Arnett, D.; Georgy, C.; Viallet, M.; Walkington, I.

    2017-10-01

    We present the first detailed 3D hydrodynamic implicit large eddy simulations of turbulent convection of carbon burning in massive stars. Simulations begin with radial profiles mapped from a carbon-burning shell within a 15 M⊙ 1D stellar evolution model. We consider models with 128³, 256³, 512³, and 1024³ zones. The turbulent flow properties of these carbon-burning simulations are very similar to those of the oxygen-burning case. We performed a mean-field analysis of the kinetic energy budgets within the Reynolds-averaged Navier-Stokes framework. For the upper convective boundary region, we find that the numerical dissipation is insensitive to resolution for linear mesh resolutions above 512 grid points. For the stiffer, more stratified lower boundary, our highest resolution model still shows signs of decreasing sub-grid dissipation, suggesting it is not yet numerically converged. We find that the widths of the upper and lower boundaries are roughly 30 per cent and 10 per cent of the local pressure scale heights, respectively. The shape of the boundaries is significantly different from those used in stellar evolution models. As in past oxygen-shell-burning simulations, we observe entrainment at both boundaries in our carbon-shell-burning simulations. In the large Péclet number regime found in the advanced phases, the entrainment rate is roughly inversely proportional to the bulk Richardson number, RiB (∝ RiB^-α, 0.5 ≲ α ≲ 1.0). We thus suggest the use of RiB as a means to take into account the results of 3D hydrodynamics simulations in new 1D prescriptions of convective boundary mixing.
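    Written out, the entrainment law quoted above reads as below, with u_e the entrainment speed, u_rms a turbulent velocity scale, Δb the buoyancy jump across the boundary and ℓ an integral length scale; the exact normalisation of Ri_B follows the usual convective-boundary-mixing convention and is an assumption here, not taken from the record:

        \frac{u_{e}}{u_{\mathrm{rms}}} \;\propto\; \mathrm{Ri}_{B}^{-\alpha}, \qquad 0.5 \lesssim \alpha \lesssim 1.0, \qquad \mathrm{Ri}_{B} \;=\; \frac{\Delta b\,\ell}{u_{\mathrm{rms}}^{2}}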

  10. High resolution simulations of a variable HH jet

    NASA Astrophysics Data System (ADS)

    Raga, A. C.; de Colle, F.; Kajdič, P.; Esquivel, A.; Cantó, J.

    2007-04-01

    Context: In many papers, the flows in Herbig-Haro (HH) jets have been modeled as collimated outflows with a time-dependent ejection. In particular, a supersonic variability of the ejection velocity leads to the production of "internal working surfaces" which (for appropriate forms of the time-variability) can produce emitting knots that resemble the chains of knots observed along HH jets. Aims: In this paper, we present axisymmetric simulations of an "internal working surface" in a radiative jet (produced by an ejection velocity variability). We concentrate on a given parameter set (i.e., a jet with a constant ejection density, and a sinusoidal velocity variability with a 20 yr period and a 40 km s^-1 half-amplitude), and carry out a study of the behaviour of the solution for increasing numerical resolutions. Methods: In our simulations, we solve the gasdynamic equations together with a 17-species atomic/ionic network, and we are therefore able to compute emission coefficients for different emission lines. Results: We compute 3 adaptive grid simulations, with 20, 163 and 1310 grid points (at the highest grid resolution) across the initial jet radius. From these simulations we see that successively more complex structures are obtained for increasing numerical resolutions. Such an effect is seen in the stratifications of the flow variables as well as in the predicted emission line intensity maps. Conclusions: We find that while the detailed structure of an internal working surface depends on resolution, the predicted emission line luminosities (integrated over the volume of the working surface) are surprisingly stable. This is definitely good news for the future computation of predictions from radiative jet models for carrying out comparisons with observations of HH objects.

  11. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the IC/BC option. Simulation generally benefits from finer resolutions, up to 5 km. At the 15 km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5 km level. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands of extreme storm event forecasting and analyses for design, operations and risk assessment of large water infrastructures.

  12. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

    Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte, which is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity varies with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be achieved using a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical effort, which can be reduced significantly with the help of analytical models for design optimization and for guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
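    For orientation, the classical gradient-focusing result underlying such models (a textbook relation, not an equation from this paper): an analyte accumulates where its total velocity u(x) crosses zero, and near that point a locally linear gradient g = -du/dx yields a Gaussian peak of standard deviation

        \sigma \;=\; \sqrt{\frac{D}{g}}, \qquad g \;=\; -\left.\frac{\mathrm{d}u}{\mathrm{d}x}\right|_{u=0},

    where D is the analyte diffusion coefficient. Steeper local gradients therefore sharpen peaks while shallower gradients increase peak capacity, which is the trade-off a bilinear profile is designed to balance.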

  13. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  14. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  15. EIT forward problem parallel simulation environment with anisotropic tissue and realistic electrode models.

    PubMed

    De Marco, Tommaso; Ries, Florian; Guermandi, Marco; Guerrieri, Roberto

    2012-05-01

    Electrical impedance tomography (EIT) is an imaging technology based on impedance measurements. To retrieve meaningful insights from these measurements, EIT relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of current flows therein. The nonhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility, which at present severely limits the capabilities of EIT. This work presents a complete algorithmic flow for an accurate EIT modeling environment featuring high anatomical fidelity with a spatial resolution equal to that provided by an MRI and a novel realistic complete electrode model implementation. At the same time, we demonstrate that current graphics processing unit (GPU)-based platforms provide enough computational power that a domain discretized with five million voxels can be numerically modeled in about 30 s.
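    The "complete electrode model" referred to above is a standard formulation in the EIT literature; for reference (textbook equations, not reproduced from the paper), the potential u in the body Ω with conductivity σ satisfies

        \nabla \cdot (\sigma \nabla u) = 0 \quad \text{in } \Omega, \qquad
        u + z_{\ell}\, \sigma \frac{\partial u}{\partial n} = U_{\ell} \quad \text{on } e_{\ell}, \qquad
        \int_{e_{\ell}} \sigma \frac{\partial u}{\partial n}\, \mathrm{d}S = I_{\ell}, \qquad
        \sigma \frac{\partial u}{\partial n} = 0 \quad \text{between electrodes},

    where z_ℓ, U_ℓ and I_ℓ are the contact impedance, potential and injected current of electrode e_ℓ, and the injected currents sum to zero.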

  16. Simulation of the atmospheric thermal circulation of a martian volcano using a mesoscale numerical model.

    PubMed

    Rafkin, Scot C R; Sta Maria, Magdalena R V; Michaels, Timothy I

    2002-10-17

    Mesoscale (<100 km) atmospheric phenomena are ubiquitous on Mars, as revealed by Mars Orbiter Camera images. Numerical models provide an important means of investigating martian atmospheric dynamics, for which data availability is limited. But the resolution of general circulation models, which are traditionally used for such research, is not sufficient to resolve mesoscale phenomena. To provide better understanding of these relatively small-scale phenomena, mesoscale models have recently been introduced. Here we simulate the mesoscale spiral dust cloud observed over the caldera of the volcano Arsia Mons by using the Mars Regional Atmospheric Modelling System. Our simulation uses a hierarchy of nested models with grid sizes ranging from 240 km to 3 km, and reveals that the dust cloud is an indicator of a greater but optically thin thermal circulation that reaches heights of up to 30 km, and transports dust horizontally over thousands of kilometres.

  17. An Experimental High-Resolution Forecast System During the Vancouver 2010 Winter Olympic and Paralympic Games

    NASA Astrophysics Data System (ADS)

    Mailhot, J.; Milbrandt, J. A.; Giguère, A.; McTaggart-Cowan, R.; Erfani, A.; Denis, B.; Glazer, A.; Vallée, M.

    2014-01-01

    Environment Canada ran an experimental numerical weather prediction (NWP) system during the Vancouver 2010 Winter Olympic and Paralympic Games, consisting of nested high-resolution (down to 1-km horizontal grid-spacing) configurations of the GEM-LAM model, with improved geophysical fields, cloud microphysics and radiative transfer schemes, and several new diagnostic products such as density of falling snow, visibility, and peak wind gust strength. The performance of this experimental NWP system has been evaluated in these winter conditions over complex terrain using the enhanced mesoscale observing network in place during the Olympics. As compared to the forecasts from the operational regional 15-km GEM model, objective verification generally indicated significant added value of the higher-resolution models for near-surface meteorological variables (wind speed, air temperature, and dewpoint temperature) with the 1-km model providing the best forecast accuracy. Appreciable errors were noted in all models for the forecasts of wind direction and humidity near the surface. Subjective assessment of several cases also indicated that the experimental Olympic system was skillful at forecasting meteorological phenomena at high-resolution, both spatially and temporally, and provided enhanced guidance to the Olympic forecasters in terms of better timing of precipitation phase change, squall line passage, wind flow channeling, and visibility reduction due to fog and snow.

  18. Investigating the Effects of Grid Resolution of WRF Model for Simulating the Atmosphere for use in the Study of Wake Turbulence

    NASA Astrophysics Data System (ADS)

    Prince, Alyssa; Trout, Joseph; di Mercurio, Alexis

    2017-01-01

    The Weather Research and Forecasting (WRF) Model is a nested-grid, mesoscale numerical weather prediction system maintained by the Developmental Testbed Center. The model simulates the atmosphere by integrating partial differential equations, which use the conservation of horizontal momentum, conservation of thermal energy, and conservation of mass along with the ideal gas law. This research investigated the possible use of WRF in investigating the effects of weather on wing tip wake turbulence. This poster shows the results of an investigation into the accuracy of WRF using different grid resolutions. Several atmospheric conditions were modeled using different grid resolutions. In general, the higher the grid resolution, the better the simulation, but the longer the model run time. This research was supported by Dr. Manuel A. Rios, Ph.D. (FAA) and the grant ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA'' (13-G-006).

  19. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.

  20. Forward modelling of global gravity fields with 3D density structures and an application to the high-resolution (∼2 km) gravity fields of the Moon

    NASA Astrophysics Data System (ADS)

    Šprlák, M.; Han, S.-C.; Featherstone, W. E.

    2017-12-01

    Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. First, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Second, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Third, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourth, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. We also analyse these four different crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling using currently available computational resources up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
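    The spectra discussed above are the coefficients of the usual exterior spherical harmonic expansion of the gravitational potential (standard form, shown here only for context):

        V(r,\vartheta,\lambda) \;=\; \frac{GM}{r}\sum_{n=0}^{N}\left(\frac{R}{r}\right)^{n}\sum_{m=0}^{n}\left(\bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda\right)\bar{P}_{nm}(\cos\vartheta),

    where R is the reference radius, \bar{P}_{nm} are fully normalised associated Legendre functions, and a maximum degree N ≈ 2519 corresponds, via a half-wavelength of roughly πR/N, to the ∼2.2 km surface resolution quoted for the Moon.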

  1. High resolution modelling of extreme precipitation events in urban areas

    NASA Astrophysics Data System (ADS)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    Present-day society needs to adapt to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding. This leads to high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high-resolution information of the sewer system and of the terrain into account [1, 2]; a sketch of this idea is given after this record. The combination of the high-resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high-resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with significant soil consolidation and the low-lying areas are prone to urban flooding. The simulation results are compared with measurements in the sewer network. References [1] Stelling, G.S., 2012. Quadtree flood simulations with subgrid digital elevation models. Water Management 165 (WM1):1329-1354. [2] Casulli, V. and Stelling, G.S., 2013. A semi-implicit numerical model for urban drainage systems. International Journal for Numerical Methods in Fluids 73:600-614. DOI: 10.1002/fld.3817
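    A minimal sketch of the subgrid idea referenced above (illustrative only; function and variable names are hypothetical, not the 3Di API): each coarse computational cell stores a volume-versus-water-level relation built from the high-resolution terrain pixels it contains, so the coarse solver still "sees" the fine terrain.

        import numpy as np

        def subgrid_volume(eta, pixel_elev, pixel_area):
            """Wet volume in one coarse cell at water level eta, summed over the
            high-resolution terrain pixels (elevations pixel_elev) inside the cell."""
            depth = np.maximum(eta - pixel_elev, 0.0)   # only pixels below eta are wet
            return np.sum(depth) * pixel_area

        # toy example: 0.5 m x 0.5 m pixels inside one 10 m x 10 m computational cell
        rng = np.random.default_rng(1)
        elev = rng.uniform(-0.2, 0.6, size=(20, 20))    # made-up pixel elevations (m)
        for eta in (0.0, 0.25, 0.5):
            print(eta, subgrid_volume(eta, elev, pixel_area=0.25))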

  2. Application of fire and evacuation models in evaluation of fire safety in railway tunnels

    NASA Astrophysics Data System (ADS)

    Cábová, Kamila; Apeltauer, Tomáš; Okřinová, Petra; Wald, František

    2017-09-01

    The paper describes an application of numerical simulation of fire dynamics and the evacuation of people in a tunnel. The software tool Fire Dynamics Simulator is used to simulate the temperature evolution and the development of smoke in a railway tunnel. Compared to the temperature curves usually used at the design stage, the results show that the numerical model gives a lower temperature of the hot smoke layer. Outputs of the numerical simulation of fire also make it possible to improve models of the evacuation of people during fires in tunnels. In the presented study, the calculated height of the smoke layer in the tunnel drops, within 10 min after fire ignition, below the level of 2.2 m, which is considered the maximum limit for safe evacuation. Simulation of the evacuation process at a larger scale, together with the fire dynamics, can provide very valuable information about important safety conditions such as the Available Safe Evacuation Time (ASET) versus the Required Safe Evacuation Time (RSET). Using an example in the EXODUS software, the paper summarizes selected results of the evacuation model that a designer should keep in mind when preparing an evacuation plan.

  3. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    PubMed

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  4. A Review of Element-Based Galerkin Methods for Numerical Weather Prediction

    DTIC Science & Technology

    2015-04-01

    with body forces to model the effects of gravity and the Earth’s rotation (i.e. Coriolis force). Although the gravitational force varies with both...more phenomena (e.g. resolving non-hydrostatic effects, incorporating more complex moisture parameterizations), their appetite for High Performance...operation effectively). For instance, the ST-based model NOGAPS, used by the U.S. Navy, could not scale beyond 150 processes at typical resolutions [119

  5. Direct Numerical Simulations of Diffusive Staircases in the Arctic

    DTIC Science & Technology

    2009-03-01

    modeling is the simplest and most obvious tool for evaluating the mixing characteristics in the Arctic Ocean, and it will be extensively used in our...and Kinglear, in addition to Department of Defense (DoD) supercomputer clusters, Babbage, Davinci, and Midnight. Low resolution model runs were...Krishfield, R., Toole, J., Proshutinsky, A., & Timmermans, M.-L. (2008). Automated Ice Tethered Profilers for seawater observations under pack ice in

  6. The impact of mesoscale convective systems on global precipitation: A modeling study

    NASA Astrophysics Data System (ADS)

    Tao, Wei-Kuo

    2017-04-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and the Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) was conducted and resulted in both reduced surface rainfall and evaporation.

  7. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    PubMed

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
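
    As a complement to the analytical treatment described above, the following Monte Carlo sketch (a crude simplification, restricted to the exponential, i.e. Markov, special case with hypothetical parameters) shows how a detection limit xi merges brief closures and thereby inflates the observed xi-open-times.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_observed_open_times(mean_open, mean_closed, xi, n_events=200_000):
        """Alternating-renewal simulation of a two-state channel with limited time
        resolution: closed gaps shorter than xi go undetected, so successive open
        sojourns are concatenated into one observed (xi-)open-time."""
        opens = rng.exponential(mean_open, n_events)
        closes = rng.exponential(mean_closed, n_events)
        observed, current = [], opens[0]
        for o, c in zip(opens[1:], closes[:-1]):
            if c < xi:                  # gap missed: extend the apparent open-time
                current += c + o
            else:                       # gap detected: close out the observed event
                if current >= xi:       # openings shorter than xi are also missed
                    observed.append(current)
                current = o
        return np.asarray(observed)

    # Hypothetical parameters (ms): mean open 1.0, mean closed 0.3, dead time xi = 0.2
    obs = simulate_observed_open_times(1.0, 0.3, 0.2)
    print(f"true mean open-time: 1.0 ms, observed mean xi-open-time: {obs.mean():.2f} ms")
    ```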

  8. Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Lemoine, Grady

    Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
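
    The thesis treats Biot's equations in two and three dimensions; the toy Python sketch below only illustrates the operator-splitting idea it mentions, applied to a 1-D scalar advection equation with a relaxation source term (all parameters are illustrative).

    ```python
    import numpy as np

    # Advance the homogeneous hyperbolic part q_t + a q_x = 0 with an upwind
    # finite-volume step, then integrate the source q_t = -k q exactly over the
    # same time step (Godunov-type splitting).
    nx, a, k = 200, 1.0, 2.0
    dx = 1.0 / nx
    dt = 0.9 * dx / a                       # CFL-limited step
    x = (np.arange(nx) + 0.5) * dx
    q = np.exp(-200 * (x - 0.3) ** 2)       # initial pulse

    for _ in range(100):
        q = q - a * dt / dx * (q - np.roll(q, 1))   # upwind sweep (periodic domain)
        q = q * np.exp(-k * dt)                     # exact source update (splitting)

    print("max amplitude after splitting steps:", q.max())
    ```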

  9. Air-Sea Interaction Processes in Low and High-Resolution Coupled Climate Model Simulations for the Southeast Pacific

    NASA Astrophysics Data System (ADS)

    Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.

    2017-12-01

    The rugged topography of the Andes Cordillera, along with strong coastal upwelling, strong sea surface temperature (SST) gradients, and extensive but geometrically thin stratocumulus decks, makes the Southeast Pacific (SEP) a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1° ocean and 0.5° atmosphere) and (1.125° ocean and 0.9° atmosphere). The SST error growth in the first six days of integration (fast errors) and the errors resulting from model drift (saturated errors) are assessed and compared in order to evaluate the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3 °C) near the continental margin and negative offshore (-0.1 °C). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds, and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and more spatially concentrated. This suggests that the processes driving the errors already become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than in the high-resolution one due to higher cloud cover. Throughout the integration, saturated SST errors become positive and can reach values of up to +4 °C. These are accompanied by a damping of upwelling and a decrease in cloud cover. The high- and low-resolution models presented notable differences in how SST error variability drove atmospheric changes, especially because the high-resolution model is sensitive to upwelling (resurgence) regions. This allows the model to resolve cloud heights and establish different radiative feedbacks.

  10. High-NA EUV lithography enabling Moore's law in the next decade

    NASA Astrophysics Data System (ADS)

    van Schoot, Jan; Troost, Kars; Bornebroek, Frank; van Ballegoij, Rob; Lok, Sjoerd; Krabbendam, Peter; Stoeldraijer, Judon; Loopstra, Erik; Benschop, Jos P.; Finders, Jo; Meiling, Hans; van Setten, Eelco; Kneer, Bernhard; Kuerz, Peter; Kaiser, Winfried; Heil, Tilmann; Migura, Sascha; Neumann, Jens Timo

    2017-10-01

    While EUV systems equipped with 0.33 numerical aperture lenses are readying to start volume manufacturing, ASML and Zeiss are ramping up their activities on an EUV exposure tool with a numerical aperture of 0.55. The purpose of this scanner, targeting an ultimate resolution of 8 nm, is to extend Moore's law throughout the next decade. A novel, anamorphic lens design capable of providing the required numerical aperture has been investigated; this lens will be paired with new, faster stages and more accurate sensors, meeting the economic requirements of Moore's law as well as the tight focus and overlay control needed for future process nodes. The tighter focus and overlay control budgets, as well as the anamorphic optics, will drive innovations in imaging and OPC modelling. Furthermore, advances in resist and mask technology will be required to image lithography features with less than 10 nm resolution. This paper presents an overview of the target specifications, key technology innovations, and imaging simulations demonstrating the advantages compared to 0.33 NA and showing the capabilities of the next-generation EUV systems.

  11. Ensemble flood simulation for a small dam catchment in Japan using 10 and 2 km resolution nonhydrostatic model rainfalls

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo

    2016-08-01

    This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations use a conventional 10 km and a high-resolution 2 km spatial resolution. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km2) and driven with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls exceed the flood discharge of 140 m3 s-1, the threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of the 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with an amplitude similar to the observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall over the Kasahori dam catchment is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.
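
    A minimal sketch of the position-shifting step described above, assuming a gridded cumulative-rainfall field and a boolean catchment mask (both synthetic here); the study itself presumably shifts the JMA-NHM ensemble fields rather than random data.

    ```python
    import numpy as np

    def best_shift(cum_rain, catchment_mask, max_shift=10):
        """Shift the cumulative rainfall field over the catchment mask and return
        the (di, dj) grid offset that maximises the catchment-averaged rainfall."""
        best, best_val = (0, 0), -np.inf
        for di in range(-max_shift, max_shift + 1):
            for dj in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(cum_rain, di, axis=0), dj, axis=1)
                val = shifted[catchment_mask].mean()
                if val > best_val:
                    best_val, best = val, (di, dj)
        return best, best_val

    # Hypothetical 2 km ensemble member and a small catchment mask
    rng = np.random.default_rng(2)
    rain = rng.gamma(2.0, 5.0, size=(60, 60))
    mask = np.zeros((60, 60), bool)
    mask[25:30, 25:30] = True
    print(best_shift(rain, mask, max_shift=5))
    ```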

  12. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
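
    The random probing family described above can be illustrated with a Hutchinson-style estimate of the Hessian diagonal from a handful of Hessian-model applications; the explicit banded matrix below is only a cheap stand-in for the implicit full-waveform-inversion Hessian.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def probe_hessian_diagonal(apply_H, n, n_probes=20):
        """Estimate diag(H) from Hessian applications H v_k to uncorrelated random
        test models v_k with +/-1 entries; the same products can be auto-correlated
        further to characterise local point-spread functions."""
        est = np.zeros(n)
        for _ in range(n_probes):
            v = rng.choice([-1.0, 1.0], size=n)
            est += v * apply_H(v)          # elementwise v * (H v)
        return est / n_probes

    # Toy stand-in for the (implicit) FWI Hessian: banded and diagonally dominant
    n = 300
    H = (np.diag(np.linspace(1.0, 3.0, n))
         + 0.1 * np.diag(np.ones(n - 1), 1)
         + 0.1 * np.diag(np.ones(n - 1), -1))
    est = probe_hessian_diagonal(lambda v: H @ v, n)
    print("max relative error of diagonal estimate:",
          np.max(np.abs(est - np.diag(H)) / np.diag(H)))
    ```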

  13. An intercomparison study of TSM, SEBS, and SEBAL using high-resolution imagery and lysimetric data

    USDA-ARS?s Scientific Manuscript database

    Over the past three decades, numerous remote sensing based ET mapping algorithms were developed. These algorithms provided a robust, economical, and efficient tool for ET estimations at field and regional scales. The Two Source Model (TSM), Surface Energy Balance System (SEBS), and Surface Energy Ba...

  14. Use of artificial landscapes to isolate controls on burn probability

    Treesearch

    Marc-Andre Parisien; Carol Miller; Alan A. Ager; Mark A. Finney

    2010-01-01

    Techniques for modeling burn probability (BP) combine the stochastic components of fire regimes (ignitions and weather) with sophisticated fire growth algorithms to produce high-resolution spatial estimates of the relative likelihood of burning. Despite the numerous investigations of fire patterns from either observed or simulated sources, the specific influence of...

  15. Towards Forming a Primordial Protostar in a Cosmological AMR Simulation

    NASA Astrophysics Data System (ADS)

    Turk, Matthew J.; Abel, Tom; O'Shea, Brian W.

    2008-03-01

    Modeling the formation of the first stars in the universe is a well-posed problem and ideally suited for computational investigation. We have conducted high-resolution numerical studies of the formation of primordial stars. Beginning with primordial initial conditions appropriate for a ΛCDM model, we used the Eulerian adaptive mesh refinement code (Enzo) to achieve unprecedented numerical resolution, resolving cosmological scales as well as sub-stellar scales simultaneously. Building on the work of Abel, Bryan and Norman (2002), we followed the evolution of the first collapsing cloud until molecular hydrogen is optically thick to cooling radiation. In addition, the calculations account for the process of collision-induced emission (CIE) and add approximations to the optical depth in both molecular hydrogen roto-vibrational cooling and CIE. Also considered are the effects of chemical heating/cooling from the formation/destruction of molecular hydrogen. We present the results of these simulations, showing the formation of a 10 Jupiter-mass protostellar core bounded by a strongly aspherical accretion shock. Accretion rates are found to be as high as one solar mass per year.

  16. Numerical Issues for Circulation Control Calculations

    NASA Technical Reports Server (NTRS)

    Swanson, Roy C., Jr.; Rumsey, Christopher L.

    2006-01-01

    Steady-state and time-accurate two-dimensional solutions of the compressible Reynolds-averaged Navier-Stokes equations are obtained for flow over the Lockheed circulation control (CC) airfoil and the General Aviation CC (GACC) airfoil. Numerical issues in computing circulation control flows, such as the effects of grid resolution, boundary and initial conditions, and unsteadiness, are addressed. For the Lockheed CC airfoil, computed solutions are compared with detailed experimental data, which include velocity and Reynolds stress profiles. Three turbulence models, having either one or two transport equations, are considered. Solutions are obtained on a sequence of meshes, with mesh refinement primarily concentrated on the airfoil's circular trailing edge. Several effects related to mesh refinement are identified. For example, sometimes sufficient mesh resolution can exclude nonphysical solutions, which can occur in CC airfoil calculations. Also, sensitivities of the turbulence models to mesh refinement are discussed. In the case of the GACC airfoil the focus is on the difference between steady-state and time-accurate solutions. A specific objective is to determine if there is self-excited vortex shedding from the jet slot lip.

  17. Numerical modeling of an intense precipitation event and its associated lightning activity over northern Greece

    NASA Astrophysics Data System (ADS)

    Pytharoulis, I.; Kotsopoulos, S.; Tegoulias, I.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2016-03-01

    This study investigates an intense precipitation event and its lightning activity that affected northern Greece, and primarily Thessaloniki, on 15 July 2014. The precipitation measurement of 98.5 mm in 15 h at the Aristotle University of Thessaloniki set a new absolute record maximum. The thermodynamic analysis indicated that the event took place in an environment that could support deep thunderstorm activity. The development of this intense event was associated with significant low-level convergence and upper-level divergence, even before its triggering, and a positive vertical gradient of relative vorticity advection. The high-resolution (1.667 km × 1.667 km) non-hydrostatic WRF-ARW numerical weather prediction model was used to simulate this intense precipitation event, while the Lightning Potential Index was utilized to calculate the potential for lightning activity. Sensitivity experiments suggested that although the strong synoptic forcing played the primary role in the occurrence of intense precipitation and lightning activity, their spatiotemporal variability was affected by topography. The application of the very fine resolution topography of the NASA Shuttle Radar Topography Mission improved the simulated precipitation and the calculated lightning potential.

  18. Modeling Photo-Bleaching Kinetics to Create High Resolution Maps of Rod Rhodopsin in the Human Retina

    PubMed Central

    Ehler, Martin; Dobrosotskaya, Julia; Cunningham, Denise; Wong, Wai T.; Chew, Emily Y.; Czaja, Wojtek; Bonner, Robert F.

    2015-01-01

    We introduce and describe a novel non-invasive in-vivo method for mapping local rod rhodopsin distribution in the human retina over a 30-degree field. Our approach is based on analyzing the brightening of detected lipofuscin autofluorescence within small pixel clusters in registered imaging sequences taken with a commercial 488nm confocal scanning laser ophthalmoscope (cSLO) over a 1 minute period. We modeled the kinetics of rhodopsin bleaching by applying variational optimization techniques from applied mathematics. The physical model and the numerical analysis with its implementation are outlined in detail. This new technique enables the creation of spatial maps of the retinal rhodopsin and retinal pigment epithelium (RPE) bisretinoid distribution with an ≈ 50μm resolution. PMID:26196397
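
    The paper uses a variational optimization scheme; the sketch below is only a toy single-exponential fit of the autofluorescence brightening for one hypothetical pixel cluster (synthetic data, SciPy curve_fit), to make the bleaching-kinetics idea concrete.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def brightening(t, f0, f_inf, tau):
        """Single-exponential brightening of detected autofluorescence as the rod
        rhodopsin in front of the RPE lipofuscin bleaches (toy kinetic model)."""
        return f_inf - (f_inf - f0) * np.exp(-t / tau)

    # Hypothetical pixel-cluster time series over a 60 s cSLO sequence
    rng = np.random.default_rng(4)
    t = np.linspace(0, 60, 120)                       # s
    truth = brightening(t, 40.0, 100.0, 12.0)
    signal = truth + rng.normal(0, 2.0, t.size)

    popt, _ = curve_fit(brightening, t, signal, p0=(signal[0], signal[-1], 10.0))
    f0, f_inf, tau = popt
    print(f"initial F={f0:.1f}, plateau F={f_inf:.1f}, bleaching time constant tau={tau:.1f} s")
    # The ratio f_inf/f0 is one simple proxy for the local rhodopsin optical density.
    ```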

  19. Development of the GEOS-5 Atmospheric General Circulation Model: Evolution from MERRA to MERRA2.

    NASA Technical Reports Server (NTRS)

    Molod, Andrea; Takacs, Lawrence; Suarez, Max; Bacmeister, Julio

    2014-01-01

    The Modern-Era Retrospective Analysis for Research and Applications-2 (MERRA2) version of the GEOS-5 (Goddard Earth Observing System Model - 5) Atmospheric General Circulation Model (AGCM) is currently in use in the NASA Global Modeling and Assimilation Office (GMAO) at a wide range of resolutions for a variety of applications. Details of the changes in parameterizations subsequent to the version in the original MERRA reanalysis are presented here. Results of a series of atmosphere-only sensitivity studies are shown to demonstrate changes in simulated climate associated with specific changes in physical parameterizations, and the impact of the newly implemented resolution-aware behavior on simulations at different resolutions is demonstrated. The GEOS-5 AGCM presented here is the model used as part of the GMAO's MERRA2 reanalysis, the global mesoscale "nature run", the real-time numerical weather prediction system, and for atmosphere-only, coupled ocean-atmosphere and coupled atmosphere-chemistry simulations. The seasonal mean climate of the MERRA2 version of the GEOS-5 AGCM represents a substantial improvement over the simulated climate of the MERRA version at all resolutions and for all applications. Fundamental improvements in simulated climate are associated with the increased re-evaporation of frozen precipitation and cloud condensate, resulting in a wetter atmosphere. Improvements in simulated climate are also shown to be attributable to changes in the background gravity wave drag, and to upgrades in the relationship between the ocean surface stress and the ocean roughness. The series of "resolution aware" parameters related to the moist physics were shown to result in improvements at higher resolutions, and result in AGCM simulations that exhibit seamless behavior across different resolutions and applications.

  20. Driving Parameters for Distributed and Centralized Air Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Feron, Eric

    2001-01-01

    This report considers the problem of intersecting aircraft flows under decentralized conflict avoidance rules. Using an Eulerian standpoint (aircraft flow through a fixed control volume), new air traffic control models and scenarios are defined that enable the study of long-term airspace stability problems. Considering a class of two intersecting aircraft flows, it is shown that airspace stability, defined both in terms of safety and performance, is preserved under decentralized conflict resolution algorithms. Performance bounds are derived for the aircraft flow problem under different maneuver models. Besides analytical approaches, numerical examples are presented to test the theoretical results, as well as to generate some insight about the structure of the traffic flow after resolution. Considering more than two intersecting aircraft flows, simulations indicate that flow stability may not be guaranteed under simple conflict avoidance rules. Finally, a comparison is made with centralized strategies to conflict resolution.

  1. Investigating the influence of chromatic aberration and optical illumination bandwidth on fundus imaging in rats

    NASA Astrophysics Data System (ADS)

    Li, Hao; Liu, Wenzhong; Zhang, Hao F.

    2015-10-01

    Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating the pathogenesis and therapeutic strategies. However, due to severe aberrations, the retinal image quality in rodents can be much worse than that in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that the rat retinal image quality decreased with increasing illumination bandwidth. We achieved the retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus camera for rodents. Our method is also beneficial to dispersion compensation in multiwavelength retinal imaging applications.

  2. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.

  3. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, Frederick D.; Chen, Yongping; Singha, Kamini

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error.
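
    For reference, the spatial moments discussed in the two records above are straightforward to compute from a (synthetic) 2-D tomogram slice; the point of the papers is that such estimates inherit the tomogram's resolution limitations, which this sketch does not capture.

    ```python
    import numpy as np

    def plume_moments(conc, x, y):
        """Zeroth, first and second spatial moments of a 2-D concentration field
        (e.g. a tomogram slice): total mass proxy, centroid, and spread."""
        X, Y = np.meshgrid(x, y, indexing="ij")
        m0 = conc.sum()
        xc, yc = (X * conc).sum() / m0, (Y * conc).sum() / m0
        sxx = ((X - xc) ** 2 * conc).sum() / m0
        syy = ((Y - yc) ** 2 * conc).sum() / m0
        return m0, (xc, yc), (sxx, syy)

    # Hypothetical tomogram of a Gaussian plume on a 1 m grid
    x, y = np.arange(0, 20.0), np.arange(0, 10.0)
    X, Y = np.meshgrid(x, y, indexing="ij")
    tomo = np.exp(-((X - 8) ** 2 / 8 + (Y - 4) ** 2 / 2))
    print(plume_moments(tomo, x, y))
    ```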

  4. Moving microphone arrays to reduce spatial aliasing in the beamforming technique: theoretical background and numerical investigation.

    PubMed

    Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello

    2008-12-01

    This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in the beamforming technique. Beamforming's main disadvantages are poor spatial resolution at low frequency and spatial aliasing at higher frequency, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement. In this paper, the proposed approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing spatial aliasing. A number of different array configurations are numerically investigated, together with the most important parameters governing this measurement technique. A set of numerical results for the case of a planar rotating array is shown, together with a first experimental validation of the method.
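
    For context, a classical delay-and-sum beamformer is sketched below with a hypothetical 8-microphone line array and a synthetic 2 kHz source; moving the array (changing mic_xy between snapshots) is the aliasing-reduction idea the paper investigates, which this fixed-geometry sketch does not itself implement.

    ```python
    import numpy as np

    c = 343.0  # speed of sound, m/s

    def delay_and_sum(signals, fs, mic_xy, look_xy):
        """Classical delay-and-sum: time-align the microphone signals for a
        candidate source position and sum them."""
        out = np.zeros(signals.shape[1])
        for sig, (mx, my) in zip(signals, mic_xy):
            delay = np.hypot(look_xy[0] - mx, look_xy[1] - my) / c
            shift = int(round(delay * fs))
            out[: signals.shape[1] - shift] += sig[shift:]   # crude integer-sample alignment
        return out / len(mic_xy)

    # Hypothetical 8-microphone line array and a 2 kHz source at (0, 1) m
    fs, f0 = 48_000, 2000.0
    t = np.arange(0, 0.05, 1 / fs)
    mic_xy = [(x, 0.0) for x in np.linspace(-0.35, 0.35, 8)]
    src = (0.0, 1.0)
    signals = np.array([np.sin(2 * np.pi * f0 * (t - np.hypot(src[0] - mx, src[1] - my) / c))
                        for mx, my in mic_xy])
    steered = delay_and_sum(signals, fs, mic_xy, look_xy=src)
    print("steered output power:", np.mean(steered ** 2))
    ```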

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schalk, W.W. III

    Early actions of emergency responders during hazardous material releases are intended to assess contamination and potential public exposure. As measurements are collected, an integration of model calculations and measurements can assist in better understanding the situation. This study applied a high-resolution version of the operational 3-D numerical models used by Lawrence Livermore National Laboratory to a limited meteorological and tracer data set to assist in the interpretation of the dispersion pattern on a 140 km scale. The data set was collected from a tracer release during the morning surface inversion and transition period in the complex terrain of the Snake River Plain near Idaho Falls, Idaho, in November 1993 by the United States Air Force. Sensitivity studies were conducted to determine the model input parameters that best represented the study environment. These studies showed that mixing and boundary layer heights, atmospheric stability, and rawinsonde data are the most important model input parameters affecting wind field generation and tracer dispersion. Numerical models and limited measurement data were used to interpret dispersion patterns through the use of data analysis, model input determination, and sensitivity studies. Comparison of the best-estimate calculation to measurement data showed that model results compared well with the aircraft data, but had moderate success with the few surface measurements taken. The moderate success of the surface measurement comparison may be due to limited downward mixing of the tracer as a result of the model resolution determined by the domain size selected to study the overall plume dispersion. 8 refs., 40 figs., 7 tabs.

  6. The flow patterning capability of localized natural convection.

    PubMed

    Huang, Ling-Ting; Chao, Ling

    2016-09-14

    Controlling flow patterns to align materials can have various applications in optics, electronics, and the biosciences. In this study, we developed a natural-convection-based method to create desirable spatial flow patterns by controlling the locations of heat sources. Fluid motion in natural convection is induced by the spatial fluid density gradient that is caused by the established spatial temperature gradient. To analyze the patterning resolution capability of this method, we used a mathematical model combined with nondimensionalization to correlate the flow patterning resolution with the experimental operating conditions. The nondimensionalized model suggests that the flow pattern and resolution are influenced only by two dimensionless parameters, Gr and Pr, where Gr is the Grashof number, representing the ratio of buoyancy to the viscous force acting on a fluid, and Pr is the Prandtl number, representing the ratio of momentum diffusivity to thermal diffusivity. We used the model to examine the flow behaviors over a wide range of these two dimensionless parameters and proposed a flow pattern state diagram which suggests a suitable range of operating conditions for flow patterning. In addition, we developed a heating wire with an angular configuration, which enabled us to efficiently examine the pattern resolution capability numerically and experimentally. Consistent resolutions were obtained between the experimental results and model predictions, suggesting that the state diagram and the identified operating range can be used for further applications.
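
    For concreteness, the two dimensionless groups can be evaluated directly from fluid properties; the sketch below uses approximate values for water near room temperature and a hypothetical heater size and temperature difference.

    ```python
    # Approximate properties of water at ~25 degC (illustrative values only)
    g     = 9.81        # m s^-2, gravitational acceleration
    beta  = 2.6e-4      # 1/K, thermal expansion coefficient
    nu    = 8.9e-7      # m^2/s, kinematic viscosity
    alpha = 1.4e-7      # m^2/s, thermal diffusivity
    dT    = 5.0         # K, temperature difference imposed by the local heater
    L     = 1.0e-3      # m, characteristic length of the heated region

    Gr = g * beta * dT * L**3 / nu**2      # buoyancy / viscous forces
    Pr = nu / alpha                        # momentum / thermal diffusivity
    print(f"Gr = {Gr:.3g}, Pr = {Pr:.2f}, Gr*Pr = {Gr * Pr:.3g}")
    ```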

  7. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.

  8. Overflow Simulations using MPAS-Ocean in Idealized and Realistic Domains

    NASA Astrophysics Data System (ADS)

    Reckinger, S.; Petersen, M. R.; Reckinger, S. J.

    2016-02-01

    MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model. A full parameter study is presented that looks at how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Also, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that go further down slope. Sigma coordinates are less sensitive to changes in resolution but as sensitive to vertical viscosity compared to z-coordinates. Additionally, preliminary measurements of overflow diagnostics on global simulations using a realistic oceanic domain are presented.

  9. Protoplanetary Disks and Planet Formation a Computational Perspective

    NASA Astrophysics Data System (ADS)

    Backus, Isaac

    In this thesis I present my research on the early stages of planet formation. Using advanced computational modeling techniques, I study global gas and gravitational dynamics in protoplanetary disks (PPDs) on length scales from the radius of Jupiter to the size of the solar system. In that environment, I investigate the formation of gas giants and the migration, enhancement, and distribution of small solids--the precursors to planetesimals and gas giant cores. I examine numerical techniques used in planet formation and PPD modeling, especially methods for generating initial conditions (ICs) in these unstable, chaotic systems. Disk simulation outcomes may depend strongly on ICs, which may explain discrepant results in the literature. I present the largest suite of high-resolution PPD simulations to date and argue that direct fragmentation of PPDs around M dwarfs is a plausible path to rapidly forming gas giants. I implement dust physics to track the migration of centimeter-sized and smaller dust grains in very high resolution PPD simulations. While current dust methods are slow, with strict resolution and/or time-stepping requirements, and have some serious numerical issues, we can still demonstrate that dust does not concentrate at the pressure maxima of spiral arms, an indication that spiral features observed in the dust component are at least as well resolved as in the gas. Additionally, coherent spiral arms do not limit dust settling. We suggest a novel mechanism for disk fragmentation at large radii driven by dust accretion from the surrounding nebula. We also investigate self-induced dust traps, a mechanism which may help explain the growth of solids beyond meter sizes. We argue that current apparent demonstrations of this mechanism may be due to numerical artifacts and require further investigation.

  10. Using 3-D Numerical Weather Data in Piloted Simulations

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.

    2016-01-01

    This report describes the process of acquiring and using 3-D numerical model weather data sets in NASA Langley's Research Flight Deck (RFD). A set of software tools implements the process and can be used for other purposes as well. Given time and location information for a weather phenomenon of interest, the user can download the associated numerical weather model data. These data are created by the National Oceanic and Atmospheric Administration (NOAA) High Resolution Rapid Refresh (HRRR) model, and are then processed using a set of Mathworks' Matlab(TradeMark) scripts to create the usable 3-D weather data sets. Each data set includes radar reflectivity, water vapor, component winds, temperature, supercooled liquid water, turbulence, pressure, altitude, land elevation, relative humidity, and water phases. An open-source data processing program, wgrib2, is available from NOAA online and is used along with the Matlab scripts. These scripts are described in sufficient detail to allow future modifications. These software tools have been used to generate 3-D weather data for various RFD experiments.
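
    A minimal sketch of the GRIB2-to-NetCDF step, assuming wgrib2 is installed and an HRRR file has already been downloaded; file names and the field selection are hypothetical, and the report's own post-processing scripts use Matlab rather than Python.

    ```python
    import subprocess

    grib_file = "hrrr.t12z.wrfprsf00.grib2"          # hypothetical downloaded HRRR file
    fields = ":(TMP|RH|UGRD|VGRD|REFD):"             # temperature, humidity, winds, reflectivity

    # Extract the matching records into a NetCDF file for easier post-processing.
    subprocess.run(
        ["wgrib2", grib_file, "-match", fields, "-netcdf", "hrrr_subset.nc"],
        check=True,
    )
    ```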

  11. Rigorous numerical modeling of scattering-type scanning near-field optical microscopy and spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun

    2017-11-01

    Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and material research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation based on realistic experimental parameters and signal extraction procedures. By directly comparing to experiments as well as to other simulation efforts, our method offers a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.
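
    The signal-extraction procedure referred to above typically demodulates the detected intensity at harmonics of the tip tapping frequency; the toy sketch below (synthetic signal, hypothetical parameters) shows why higher harmonics suppress the far-field background.

    ```python
    import numpy as np

    fs, Omega, T = 2_000_000, 250_000.0, 2e-3        # sample rate, tapping frequency (Hz), record length (s)
    t = np.arange(0, T, 1 / fs)
    z = 40e-9 * (1 + np.cos(2 * np.pi * Omega * t))  # tip height above the sample (0-80 nm)

    near_field = np.exp(-z / 20e-9)                  # toy nonlinear near-field interaction (~20 nm decay)
    background = 0.5 + 0.3 * np.cos(2 * np.pi * Omega * t)   # far-field background, linear in z
    detected = near_field + background

    def demodulate(signal, n):
        """Lock-in style extraction of the harmonic amplitude at n*Omega."""
        ref = np.exp(-2j * np.pi * n * Omega * t)
        return 2 * np.abs(np.mean(signal * ref))

    for n in (1, 2, 3):
        print(f"S{n} = {demodulate(detected, n):.4f}")
    # The linear background only contributes at the first harmonic, so S2 and S3
    # are dominated by the nonlinear near-field term, as in s-SNOM practice.
    ```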

  12. Research on regional numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Kreitzberg, C. W.

    1976-01-01

    Extension of the predictive power of dynamic weather forecasting to scales below the conventional synoptic or cyclonic scales in the near future is assessed. Lower costs per computation, more powerful computers, and a 100 km mesh over the North American area (with coarser mesh extending beyond it) are noted at present. Doubling the resolution even locally (to 50 km mesh) would entail a 16-fold increase in costs (including vertical resolution and halving the time interval), and constraints on domain size and length of forecast. Boundary conditions would be provided by the surrounding 100 km mesh, and time-varying lateral boundary conditions can be considered to handle moving phenomena. More physical processes to treat, more efficient numerical techniques, and faster computers (improved software and hardware) backing up satellite and radar data could produce further improvements in forecasting in the 1980s. Boundary layer modeling, initialization techniques, and quantitative precipitation forecasting are singled out among key tasks.

  13. Spatial and temporal variability of clouds and precipitation over Germany: multiscale simulations across the "gray zone"

    NASA Astrophysics Data System (ADS)

    Barthlott, C.; Hoose, C.

    2015-11-01

    This paper assesses the resolution dependence of clouds and precipitation over Germany by numerical simulations with the COnsortium for Small-scale MOdeling (COSMO) model. Six intensive observation periods of the HOPE (HD(CP)2 Observational Prototype Experiment) measurement campaign conducted in spring 2013 and one summer day of the same year are simulated. By means of a series of grid-refinement resolution tests (horizontal grid spacings of 2.8 km, 1 km, 500 m, and 250 m), the applicability of the COSMO model to represent real weather events in the gray zone, i.e., the scale range between the mesoscale limit (no turbulence resolved) and the large-eddy simulation limit (energy-containing turbulence resolved), is tested. To the authors' knowledge, this paper presents the first non-idealized COSMO simulations in the peer-reviewed literature at the 250-500 m scale. It is found that the kinetic energy spectra derived from model output show the expected -5/3 slope, as well as a dependence on model resolution, and that the effective resolution lies between 6 and 7 times the nominal resolution. Although the representation of a number of processes is enhanced with resolution (e.g., boundary-layer thermals, low-level convergence zones, gravity waves), their influence on the temporal evolution of precipitation is rather weak. However, rain intensities vary with resolution, leading to differences in the total rain amount of up to +48 %. Furthermore, the location of rain is similar for the springtime cases with moderate and strong synoptic forcing, whereas significant differences are obtained for the summertime case with air-mass convection. Domain-averaged liquid water paths and cloud condensate profiles are used to analyze the temporal and spatial variability of the simulated clouds. Finally, probability density functions of convection-related parameters are analyzed to investigate their dependence on model resolution and their impact on cloud formation and subsequent precipitation.
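
    A kinetic-energy-spectrum diagnostic of the kind used above can be sketched as follows; the wind field here is synthetic, constructed to have an approximately -5/3 spectrum, so the fitted slope simply checks the diagnostic rather than reproducing the paper's results.

    ```python
    import numpy as np

    def ke_spectrum_1d(u, v, dx):
        """1-D kinetic energy spectrum along x, averaged over rows. Such spectra are
        used to locate the drop-off relative to a -5/3 reference and hence estimate
        the effective resolution (a few times the nominal grid spacing)."""
        n = u.shape[1]
        k = np.fft.rfftfreq(n, d=dx)
        Eu = np.mean(np.abs(np.fft.rfft(u, axis=1)) ** 2, axis=0)
        Ev = np.mean(np.abs(np.fft.rfft(v, axis=1)) ** 2, axis=0)
        return k[1:], (Eu + Ev)[1:] / n              # drop the k = 0 (mean) component

    # Synthetic wind components with an approximately -5/3 spectrum (stand-in for model output)
    rng = np.random.default_rng(5)
    n, dx = 512, 250.0                               # 250 m grid spacing
    kref = np.fft.rfftfreq(n, d=dx)
    kref[0] = kref[1]                                # avoid division by zero at k = 0
    amp = kref ** (-5.0 / 6.0)                       # |u_hat| ~ k^(-5/6)  =>  E(k) ~ k^(-5/3)
    u = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random((64, kref.size))), n, axis=1)
    v = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random((64, kref.size))), n, axis=1)
    k, E = ke_spectrum_1d(u, v, dx)
    slope = np.polyfit(np.log(k), np.log(E), 1)[0]
    print(f"fitted spectral slope: {slope:.2f} (reference: -5/3 = -1.67)")
    ```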

  14. Simulations of Madden-Julian Oscillation in High Resolution Atmospheric General Circulation Model

    NASA Astrophysics Data System (ADS)

    Deng, Liping; Stenchikov, Georgiy; McCabe, Matthew; Bangalath, HamzaKunhu; Raj, Jerry; Osipov, Sergey

    2014-05-01

    The simulation of tropical signals, especially the Madden-Julian Oscillation (MJO), is one of the major deficiencies in current numerical models. The unrealistic features in the MJO simulations include the weak amplitude, more power at higher frequencies, displacement of the temporal and spatial distributions, eastward propagation speed being too fast, and a lack of coherent structure for the eastward propagation from the Indian Ocean to the Pacific (e.g., Slingo et al. 1996). While some improvement in simulating MJO variance and coherent eastward propagation has been attributed to model physics, the model mean background state, and air-sea interaction, studies have shown that model resolution, especially higher horizontal resolution, may play an important role in producing a more realistic simulation of the MJO (e.g., Sperber et al. 2005). In this study, we employ unique high-resolution (25-km) simulations conducted using the Geophysical Fluid Dynamics Laboratory global High Resolution Atmospheric Model (HIRAM) to evaluate the MJO simulation against the European Center for Medium-range Weather Forecasts (ECMWF) Interim re-analysis (ERAI) dataset. We specifically focus on the ability of the model to represent the MJO-related amplitude, spatial distribution, eastward propagation, and horizontal and vertical structures. Additionally, as the HIRAM output covers not only a historic period (1979-2012) but also a future period (2012-2050), the impact of future climate change on the MJO is illustrated. The possible changes in intensity and frequency of extreme weather and climate events (e.g., strong wind and heavy rainfall) in the western Pacific, the Indian Ocean and the Middle East North Africa (MENA) region are highlighted.

  15. Numerical Simulations of Two-Phase Reacting Flow in a Single-Element Lean Direct Injection (LDI) Combustor Using NCC

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Shih, Tsan-Hsing; Wey, C. Thomas

    2011-01-01

    A series of numerical simulations of Jet-A spray reacting flow in a single-element lean direct injection (LDI) combustor have been conducted by using the National Combustion Code (NCC). The simulations have been carried out using the time filtered Navier-Stokes (TFNS) approach ranging from the steady Reynolds-averaged Navier-Stokes (RANS), unsteady RANS (URANS), to the dynamic flow structure simulation (DFS). The sub-grid model employed for turbulent mixing and combustion includes the well-mixed model, the linear eddy mixing (LEM) model, and the filtered mass density function (FDF/PDF) model. The starting condition of the injected liquid spray is specified via empirical droplet size correlation, and a five-species single-step global reduced mechanism is employed for fuel chemistry. All the calculations use the same grid whose resolution is of the RANS type. Comparisons of results from various models are presented.

  16. Quantifying Dynamic Changes on Surface of Comet 67P/Churyumov-Gerasimenko Using High-Resolution Photoclinometry DTMs

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Birch, S.; Hayes, A.; Kirk, R. L.; Kutsop, N. W. S.; Squyres, S. W.

    2017-12-01

    Observations from ESA's Rosetta spacecraft of comet 67P/Churyumov-Gerasimenko (67P) have provided insights into the geological processes that act to modify the surface of a small, primitive body. The landscapes of 67P are shaped both by large-scale violent changes, such as cliff collapses and jet events, and by smaller and more subtle changes such as the formation of pits and ripples within the larger-scale granular deposits. Explosive jets are located by triangulating the same jet in multiple images. They appear to originate from locations close to numerous newly formed, small-scale pits, which were only observed after known jet events (for example, the jet observed on March 11th, 2015, in image N20150311T053737597ID30F22). This implies a possible link between these two dynamical processes. We generated high-resolution photoclinometric digital terrain models (DTMs) of the surface of 67P (at 1.5 m/pixel) in locations where recent jet events were observed and over surfaces where newly formed pits are observed. A comparison of DTMs generated of the surface both before and after the appearance of the pits provides insight into the magnitude of the dynamical changes, including the volume of the ejected material. By tracking the change in the surface topography at such high resolution, we constrain both the volume of material that is ejected from the surface during a jet event and the volume that is retained in nearby deposits. By studying these events and their aftermath, it will be possible to formulate numerical models of jet formation and explain why and how jets occur. We will use this information in conjunction with numerical modeling of the large-scale global transport of sedimentary materials on 67P, to facilitate a better understanding of cometary landscape evolution.

  17. High-resolution modeling assessment of tidal stream resource in Western Passage of Maine, USA

    NASA Astrophysics Data System (ADS)

    Yang, Zhaoqing; Wang, Taiping; Feng, Xi; Xue, Huijie; Kilcher, Levi

    2017-04-01

    Although significant efforts have been made to assess the maximum potential of tidal stream energy at the system-wide scale, accurate assessment of the tidal stream energy resource at the project design scale requires detailed hydrodynamic simulations using high-resolution three-dimensional (3-D) numerical models. Extended model validation against high-quality measured data is essential to minimize the uncertainties of the resource assessment. Western Passage in the State of Maine has been identified as one of the top-ranking sites for tidal stream energy development in U.S. coastal waters, based on a number of criteria including tidal power density, market value, and transmission distance. This study presents an ongoing modeling effort for simulating the tidal hydrodynamics in Western Passage using the 3-D unstructured-grid Finite Volume Community Ocean Model (FVCOM). The model domain covers a large region including the entire Bay of Fundy, with grid resolution varying from 20 m in the Western Passage to approximately 1000 m along the open boundary near the mouth of the Bay of Fundy. Preliminary model validation was conducted using existing NOAA measurements within the model domain. Spatial distributions of tidal power density were calculated, and the extractable tidal energy was estimated using a tidal turbine module embedded in FVCOM under different tidal farm scenarios. Additional field measurements to characterize the resource and support model validation are discussed. This study provides an example of high-resolution resource assessment based on the guidance recommended by the International Electrotechnical Commission Technical Specification.
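
    The tidal power density referred to above is commonly taken as P = 0.5 ρ |U|³; a minimal sketch with hypothetical current speeds over one M2 cycle follows.

    ```python
    import numpy as np

    def tidal_power_density(u, v, rho=1025.0):
        """Instantaneous tidal stream power density P = 0.5 * rho * |U|^3 (W m^-2)
        from depth-averaged (or hub-height) velocity components."""
        speed = np.hypot(u, v)
        return 0.5 * rho * speed ** 3

    # Hypothetical tidal-cycle velocities (m/s) at one model node in Western Passage
    t = np.linspace(0, 12.42 * 3600, 200)            # one M2 tidal period
    u = 2.5 * np.sin(2 * np.pi * t / (12.42 * 3600))
    v = 0.5 * np.sin(2 * np.pi * t / (12.42 * 3600))
    P = tidal_power_density(u, v)
    print(f"mean power density over the cycle: {P.mean() / 1000:.2f} kW m^-2")
    ```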

  18. Sensitivity of sea-level forecasting to the horizontal resolution and sea surface forcing for different configurations of an oceanographic model of the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Bressan, Lidia; Valentini, Andrea; Paccagnella, Tiziana; Montani, Andrea; Marsigli, Chiara; Stefania Tesini, Maria

    2017-04-01

    At the Hydro-meteo-climate service of the Regional environmental agency of Emilia-Romagna, Italy (Arpae-SIMC), the oceanographic numerical model AdriaROMS is used in the operational forecasting suite to compute sea level, temperature, salinity, and 3-D current fields of the Adriatic Sea (northern Mediterranean Sea). In order to evaluate the performance of the sea-level forecast and to study different configurations of the ROMS model, two marine storms that occurred on the Emilia-Romagna coast during the winter of 2015-2016 are investigated. The main focus of this study is to analyse the sensitivity of the model to the horizontal resolution and to the meteorological forcing. To this end, the model is run with two different configurations and with two horizontal grids at 1 and 2 km resolution. To study the influence of the meteorological forcing, the two storms have been reproduced by running ROMS in ensemble mode, forced by the 16 members of the COSMO-LEPS meteorological ensemble system. Possible optimizations of the model set-up are deduced from the comparison of the different run outputs.

  19. The Solomon Sea eddy activity from a 1/36° regional model

    NASA Astrophysics Data System (ADS)

    Djath, Bughsin; Babonneix, Antoine; Gourdeau, Lionel; Marin, Frédéric; Verron, Jacques

    2013-04-01

    In the South West Pacific, the Solomon Sea exhibits the highest levels of eddy kinetic energy, yet relatively little is known about the eddy activity in this region. The Solomon Sea is directly influenced by a monsoonal regime and ENSO variability, and it occupies a strategic location, as the Western Boundary Currents exiting it are known to feed the warm pool and to be the principal sources of the Equatorial UnderCurrent. During their transit through the Solomon Sea, mesoscale eddies are suspected to interact with and influence these water masses. The goal of this study is to give an exhaustive description of this eddy activity. A dual approach, based on both altimetric data and high-resolution modeling, has been chosen for this purpose. First, an algorithm is applied to nearly 20 years of 1/3° x 1/3° gridded SLA maps (provided by the AVISO project). This allows eddies to be automatically detected and tracked, thus providing some basic eddy properties. The preliminary results show that two main and distinct types of eddies are detected. Eddies in the north-eastern part show a variability associated with the mean structure, while those in the southern part are associated with generation/propagation processes. However, the resolution of the AVISO dataset is not well suited to observing fine structures or to representing the numerous islands bordering the Solomon Sea. For this reason, we confront these observations with the outputs of a 1/36° resolution realistic model of the Solomon Sea. The high-resolution numerical model (1/36°) indeed permits the reproduction of very fine-scale features, such as eddies and filaments. The model is two-way embedded in a 1/12° regional model, which is itself one-way embedded in the DRAKKAR 1/12° global model. The NEMO code is used, together with the AGRIF software for model nesting. Validation is performed by comparison with AVISO observations and available in situ data. In preparation for the future wide-swath altimetric SWOT mission, which is expected to provide observations of small-scale sea level variability, spectral analysis is performed on the 1/36° resolution realistic model in order to characterize the finer-scale signals in the Solomon Sea region. The preliminary SSH spectral analysis shows a k^-4 slope, in good agreement with surface quasigeostrophic (SQG) turbulence theory. Keywords: Solomon Sea; mesoscale activity; eddy detection, tracking and properties; wavenumber spectrum.

  20. Wave Dissipation over Nearshore Beach Morphology: Insights from High-Resolution LIDAR Observations and the SWASH Wave Model

    NASA Astrophysics Data System (ADS)

    Mulligan, R. P.; Gomes, E.; McNinch, J.; Brodie, K. L.

    2016-02-01

    Numerical modelling of the nearshore zone can be computationally intensive due to the complexity of wave breaking, and the need for high temporal and spatial resolution. In this study we apply the SWASH non-hydrostatic wave-flow model that phase-resolves the free surface and fluid motions in the water column at high resolution. The model is forced using observed directional energy spectra, and results are compared to wave observations during moderate storm events. Observations are collected outside the surf zone using acoustic wave and currents sensors, and inside the surf zone over a 100 m transect using high-resolution LIDAR measurements of the sea surface from a sensor mounted on a tower on the beach dune at the Field Research Facility in Duck, NC. The model is applied to four cases with different wave conditions and bathymetry, and used to predict the spatial variability in wave breaking, and correlation between energy dissipation and morphologic features. Model results compare well with observations of spectral evolution outside the surf zone, and with the remotely sensed observations of wave transformation inside the surf zone. The results indicate the importance of nearshore bars, rip-channels, and larger features (major scour depression under the pier following large waves from Hurricane Irene) on the location of wave breaking and alongshore variability in wave energy dissipation.

  1. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
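
    The paper's three-tier system is its own construction; the sketch below instead uses the standard two-tier Lorenz '96 model to illustrate the mixed-precision idea, keeping the large-scale X variables in 64-bit while truncating the small-scale Y variables to 16-bit after each step (parameters and integration scheme are illustrative only).

    ```python
    import numpy as np

    K, J = 8, 32                       # 8 large-scale X variables, 32 small-scale Y per X
    F, h, c, b = 8.0, 1.0, 10.0, 10.0  # standard two-tier Lorenz '96 parameters
    dt, n_steps = 0.001, 2000

    def tendencies(X, Y):
        """Two-tier Lorenz '96 tendencies (vectorised, periodic in both tiers)."""
        dX = (np.roll(X, -1) - np.roll(X, 2)) * np.roll(X, 1) - X + F \
             - (h * c / b) * Y.reshape(K, J).sum(axis=1)
        dY = -c * b * np.roll(Y, -1) * (np.roll(Y, -2) - np.roll(Y, 1)) - c * Y \
             + (h * c / b) * np.repeat(X, J)
        return dX, dY

    rng = np.random.default_rng(6)
    X = F + rng.standard_normal(K)
    Y = 0.1 * rng.standard_normal(K * J)

    for _ in range(n_steps):
        # Heun (RK2) step with both tiers advanced together, in double precision
        dX1, dY1 = tendencies(X, Y.astype(np.float64))
        dX2, dY2 = tendencies(X + dt * dX1, Y.astype(np.float64) + dt * dY1)
        X = X + 0.5 * dt * (dX1 + dX2)                                           # large scales: 64-bit
        Y = (Y.astype(np.float64) + 0.5 * dt * (dY1 + dY2)).astype(np.float16)   # small scales: 16-bit
    print("X after integration:", np.round(X, 2))
    ```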

  2. The Impact of Microphysics on Intensity and Structure of Hurricanes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Shi, Jainn; Lang, Steve; Peters-Lidard, Christa

    2006-01-01

    During the past decade, both research and operational numerical weather prediction models, e.g. the Weather Research and Forecasting (WRF) model, have started using more complex microphysical schemes originally developed for high-resolution cloud resolving models (CRMs) with horizontal resolutions of 1-2 km or less. WRF is a next-generation mesoscale forecast model and assimilation system that has incorporated a modern software framework, advanced dynamics, numerics and data assimilation techniques, a multiple moveable nesting capability, and improved physics packages. The WRF model can be used for a wide range of applications, from idealized research to operational forecasting, with an emphasis on horizontal grid sizes in the range of 1-10 km. The current WRF includes several different microphysics options such as the Lin et al. (1983), WSM 6-class and Thompson microphysics schemes. We have recently implemented three sophisticated cloud microphysics schemes into WRF. The cloud microphysics schemes have been extensively tested and applied to different mesoscale systems in different geographical locations. The performance of these schemes has been compared to that of other WRF microphysics options. We are performing sensitivity tests using WRF to examine the impact of six different cloud microphysical schemes on hurricane track, intensity and rainfall forecasts. We are also performing inline tracer calculations to understand the physical processes (i.e., the boundary layer and each quadrant of the boundary layer) related to the development and structure of hurricanes.

  3. Assimilation of drifters' trajectories in velocity fields from coastal radar and model via the Lagrangian assimilation algorithm LAVA.

    NASA Astrophysics Data System (ADS)

    Berta, Maristella; Bellomo, Lucio; Griffa, Annalisa; Gatimu Magaldi, Marcello; Marmain, Julien; Molcard, Anne; Taillandier, Vincent

    2013-04-01

    The Lagrangian assimilation algorithm LAVA (LAgrangian Variational Analysis) is customized for coastal areas in the framework of the TOSCA (Tracking Oil Spills & Coastal Awareness network) Project, to improve the response to maritime accidents in the Mediterranean Sea. LAVA assimilates drifters' trajectories into velocity fields which may come from either coastal radars or numerical models. In the present study, LAVA is applied to the coastal area in front of Toulon (France). Surface currents are available from a WERA radar network (2 km spatial resolution, every 20 minutes) and from the GLAZUR model (1/64° spatial resolution, every hour). The cluster of drifters considered consists of 7 buoys, transmitting every 15 minutes for a period of 5 days. Three assimilation cases are considered: i) correction of the radar velocity field, ii) correction of the model velocity field and iii) reconstruction of the velocity field from drifters only. It is found that drifters' trajectories compare well with the ones obtained from the radar, and the correction to the radar velocity field is therefore minimal. In contrast, observed and numerical trajectories separate rapidly and the correction to the model velocity field is substantial. For the reconstruction from drifters only, the velocity fields obtained are similar to the radar ones, but limited to the neighborhood of the drifter paths.

  4. Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; Hirata, Akimasa

    2012-12-01

    In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and the affected brain areas, depend on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less even when using an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in less than ten iterations, independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.
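
    For readers unfamiliar with why a geometric multigrid iteration converges in a roughly resolution-independent number of cycles, the following minimal sketch applies a V-cycle with weighted-Jacobi smoothing to a 1D Poisson problem. It is a generic textbook illustration under assumed parameters, not the paper's matrix-free finite-element solver.

    ```python
    # Geometric multigrid V-cycle for -u'' = f on [0, 1], u(0) = u(1) = 0.
    import numpy as np

    def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
        for _ in range(sweeps):   # weighted-Jacobi sweeps on interior points
            u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        u = smooth(u, f, h)
        if u.size <= 3:                      # coarsest grid: smoothing suffices
            return smooth(u, f, h, sweeps=20)
        r = residual(u, f, h)
        r_c = r[::2].copy()                  # injection to the coarse grid
        e_c = v_cycle(np.zeros_like(r_c), r_c, 2.0 * h)
        e = np.zeros_like(u)                 # linear interpolation back to fine grid
        e[::2] = e_c
        e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
        return smooth(u + e, f, h)

    n = 2 ** 9 + 1                           # grid points, boundaries included
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution is sin(pi x)
    u = np.zeros(n)
    for it in range(10):
        u = v_cycle(u, f, h)
        print(it, np.max(np.abs(residual(u, f, h))))   # residual drops per cycle
    ```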

  5. MUSE, the Multi-Slit Solar Explorer

    NASA Astrophysics Data System (ADS)

    Lemen, J. R.; Tarbell, T. D.; De Pontieu, B.; Wuelser, J. P.

    2017-12-01

    The Multi-Slit Solar Explorer (MUSE) has been selected for a Phase A study for the NASA Heliophysics Small Explorer program. The science objective of MUSE is to make high spatial and temporal resolution imaging and spectral observations of the solar corona and transition region in order to probe the mechanisms responsible for energy release in the corona and understand the dynamics of the solar atmosphere. These physical processes are responsible for heating the corona, accelerating the solar wind, and rapidly releasing energy in CMEs and flares. The observations will be tightly coupled to state-of-the-art numerical modeling to provide significantly improved estimates for understanding and anticipating space weather. MUSE contains two instruments: an EUV spectrograph and an EUV context imager. Both have similar spatial resolutions and leverage extensive heritage from previous high-resolution instruments such as IRIS and the HiC rocket payload. The MUSE spectrograph employs a novel multi-slit design that enables a 100x improvement in spectral scanning rates, which will reveal crucial information about the dynamics (e.g., temperature, velocities) of the physical processes that are not observable with current instruments. The MUSE investigation builds on the success of IRIS by combining numerical modeling with a uniquely capable observatory: MUSE will obtain EUV spectra and images with the highest resolution in space (1/3 arcsec) and time (1-4 s) ever achieved for the transition region and corona, along 35 slits and a large context FOV simultaneously. The MUSE consortium includes LMSAL, SAO, Stanford, ARC, HAO, GSFC, MSFC, MSU, and ITA Oslo.

  6. Advanced Tsunami Numerical Simulations and Energy Considerations by use of 3D-2D Coupled Models: The October 11, 1918, Mona Passage Tsunami

    NASA Astrophysics Data System (ADS)

    López-Venegas, Alberto M.; Horrillo, Juan; Pampell-Manis, Alyssa; Huérfano, Victor; Mercado, Aurelio

    2015-06-01

    The most recent tsunami observed along the coast of the island of Puerto Rico occurred on October 11, 1918, after a magnitude 7.2 earthquake in the Mona Passage. The earthquake was responsible for initiating a tsunami that mostly affected the northwestern coast of the island. Runup values from a post-tsunami survey indicated the waves reached up to 6 m. A controversy regarding the source of the tsunami has resulted in several numerical simulations involving either fault rupture or a submarine landslide as the most probable cause of the tsunami. Here we follow up on previous simulations of the tsunami from a submarine landslide source off the western coast of Puerto Rico as initiated by the earthquake. Improvements on our previous study include: (1) higher-resolution bathymetry; (2) a 3D-2D coupled numerical model specifically developed for the tsunami; (3) use of the non-hydrostatic numerical model NEOWAVE (non-hydrostatic evolution of ocean WAVE) featuring two-way nesting capabilities; and (4) comprehensive energy analysis to determine the time of full tsunami wave development. The three-dimensional Navier-Stokes tsunami model (a tsunami solution using the Navier-Stokes algorithm with multiple interfaces for two fluids, water and landslide) was used to determine the initial wave characteristics generated by the submarine landslide. Use of NEOWAVE enabled us to solve for coastal inundation, wave propagation, and detailed runup. Our results were in agreement with previous work in which a submarine landslide is favored as the most probable source of the tsunami, and improvement in the resolution of the bathymetry yielded inundation of the coastal areas that compares well with values from a post-tsunami survey. Our unique energy analysis indicates that most of the wave energy is isolated in the wave generation region, particularly at depths near the landslide, and once the initial wave propagates from the generation region its energy begins to stabilize.

  7. Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM)

    NASA Astrophysics Data System (ADS)

    Sinitskiy, Anton V.; Voth, Gregory A.

    2018-01-01

    Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.

  8. Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM).

    PubMed

    Sinitskiy, Anton V; Voth, Gregory A

    2018-01-07

    Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.

  9. Monte-Carlo simulation of spatial resolution of an image intensifier in a saturation mode

    NASA Astrophysics Data System (ADS)

    Xie, Yuntao; Wang, Xi; Zhang, Yujun; Sun, Xiaoquan

    2018-04-01

    In order to investigate the spatial resolution of an image intensifier which is irradiated by a high-energy pulsed laser, a three-dimensional electron avalanche model was built and the cascade process of the electrons was numerically simulated. The influence of positive wall charges, due to the failure to replenish charges extracted from the channel during the avalanche, was considered by calculating their static electric field with a particle-in-cell (PIC) method. By tracing the trajectory of electrons throughout the image intensifier, the energy of the electrons at the output of the microchannel plate and the electron distribution at the phosphor screen are numerically calculated. The simulated energy distribution of output electrons is in good agreement with experimental data from previous studies. In addition, the FWHM extent of the electron spot at the phosphor screen is calculated as a function of the number of incident electrons. The results demonstrate that the spot size increases significantly with the increase in the number of incident electrons. Furthermore, we obtained the MTFs of the image intensifier by Fourier transforming the point spread function at the phosphor screen. Comparison between the MTFs from our model and the MTFs from the analytic method shows that the spatial resolution of the image intensifier decreases significantly as the number of incident electrons increases, and this is particularly evident when the number of incident electrons exceeds 100.
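
    The final analysis step described above (an MTF from a point spread function via a Fourier transform) can be sketched as follows; the Gaussian PSF, pixel pitch, and spot widths are illustrative assumptions standing in for the simulated electron spot.

    ```python
    # Hedged sketch: MTF from a 1D point spread function by Fourier transform.
    import numpy as np

    pixel_um = 1.0                            # assumed pixel pitch in micrometres
    x = (np.arange(512) - 256) * pixel_um
    for sigma_um in (5.0, 15.0):              # narrow vs broadened (saturated) spot
        psf = np.exp(-0.5 * (x / sigma_um) ** 2)
        psf /= psf.sum()                      # normalise so MTF(0) = 1
        mtf = np.abs(np.fft.rfft(psf))
        freq = np.fft.rfftfreq(x.size, d=pixel_um) * 1000.0   # cycles per mm
        # frequency at which contrast falls to 10% -- a common resolution metric
        f10 = freq[np.argmax(mtf < 0.1)]
        print(f"sigma = {sigma_um:4.1f} um -> MTF drops below 0.1 at {f10:.1f} lp/mm")
    ```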

  10. Numerical simulations of island effects on airflow and weather during the summer over the island of Oahu

    Treesearch

    Hiep Van Nguyen; Yie-Leng Chen; Francis Fujioka

    2010-01-01

    The high-resolution (1.5 km) nonhydrostatic fifth-generation Pennsylvania State University–National Center for Atmospheric Research (PSU–NCAR) Mesoscale Model (MM5) and an advanced land surface model (LSM) are used to study the island-induced airflow and weather for the island of Oahu, Hawaii, under summer trade wind conditions. Despite Oahu’s relatively small...

  11. Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) for Breast Tumor Imaging: Numerical Modeling and Simulation

    PubMed Central

    Zhou, Lian; Li, Xu; Zhu, Shanan; He, Bin

    2011-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) was recently introduced as a noninvasive electrical conductivity imaging approach with high spatial resolution close to that of ultrasound imaging. In the present study, we test the feasibility of the MAT-MI method for breast tumor imaging using numerical modeling and computer simulation. Using the finite element method, we have built three-dimensional numerical breast models with a variety of embedded tumors for this simulation study. In order to obtain an accurate and stable forward solution that does not have numerical errors caused by singular MAT-MI acoustic sources at conductivity boundaries, we first derive an integral forward method for calculating MAT-MI acoustic sources over the entire imaging volume. An inverse algorithm for reconstructing the MAT-MI acoustic source is also derived with a spherical measurement aperture, which simulates a practical setup for breast imaging. With the numerical breast models, we have conducted computer simulations under different imaging parameter setups, and all the results suggest that breast tumors that have a large conductivity contrast with their surrounding tissues, as reported in the literature, may be readily detected in the reconstructed MAT-MI images. In addition, our simulations also suggest that the sensitivity of imaging breast tumors using the presented MAT-MI setup depends more on the tumor location and the conductivity contrast between the tumor and its surrounding tissues than on the tumor size. PMID:21364262

  12. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  13. Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-01-25

    The phase appearance and disappearance issue presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied for the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved to be effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and numerical treatments. As a result, the volume fraction of the absent phase can be calculated effectively as zero.
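
    The Jacobian-free Newton-Krylov idea used above only requires a residual function; Jacobian-vector products are approximated by finite differences inside the Krylov solver. A minimal sketch using SciPy's newton_krylov on an assumed stand-in problem (a steady nonlinear reaction-diffusion equation), rather than the two-fluid six-equation system, is given below.

    ```python
    # JFNK illustration: solve u'' = exp(u) on (0, 1) with u(0) = u(1) = 0,
    # supplying only the discrete residual (no Jacobian is ever formed).
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 200
    h = 1.0 / (n + 1)

    def residual(u):
        u_pad = np.concatenate(([0.0], u, [0.0]))          # Dirichlet boundaries
        return (u_pad[2:] - 2.0 * u_pad[1:-1] + u_pad[:-2]) / h**2 - np.exp(u)

    u0 = np.zeros(n)
    u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
    print("max |residual| =", np.max(np.abs(residual(u))))
    print("min u =", u.min())   # the solution is negative in the interior
    ```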

  14. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    NASA Astrophysics Data System (ADS)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90 m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30 m SRTM DEM is now available worldwide, along with higher-accuracy and/or higher-resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves the computational speed typical of coarse grids while preserving, to a significant extent, the accuracy offered by the much higher-resolution DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak corresponds to the return period of the hazard map being produced (e.g. 100 or 500 years). Each numerical simulation models one river reach, except for the longest reaches, which are split into smaller parts. Here we show results for selected river basins worldwide.
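
    The sub-grid idea behind the Up-Scaling Technique can be sketched as follows: each coarse cell keeps a stage-volume curve tabulated from the full-resolution DEM pixels it contains, so stored water volume can be converted to a free-surface elevation without discarding fine topography. This is a hedged toy with synthetic DEM values and an assumed 30 m pixel size, not FM Global's implementation.

    ```python
    # Toy stage-volume relationship for one coarse cell built from sub-grid DEM pixels.
    import numpy as np

    rng = np.random.default_rng(1)
    dem = 10.0 + rng.gamma(2.0, 1.5, size=(30, 30))   # 30x30 fine pixels in one coarse cell
    pixel_area = 30.0 * 30.0                          # assumed 30 m DEM resolution (m^2)

    # Tabulate stored volume as a function of free-surface elevation (stage).
    stages = np.linspace(dem.min(), dem.max() + 5.0, 200)
    volumes = np.array([np.sum(np.clip(s - dem, 0.0, None)) * pixel_area for s in stages])

    def stage_from_volume(v):
        # Invert the monotone stage-volume curve by interpolation.
        return np.interp(v, volumes, stages)

    v_stored = 2.0e5                                  # m^3 of water placed in the cell
    eta = stage_from_volume(v_stored)
    wet_fraction = np.mean(dem < eta)
    print(f"free surface = {eta:.2f} m, wet fraction = {wet_fraction:.2f}")
    ```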

  15. Exploring the impacts of physics and resolution on aqua-planet simulations from a nonhydrostatic global variable-resolution modeling framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chun; Leung, L. Ruby; Park, Sang-Hun

    Advances in computing resources are gradually moving regional and global numerical forecasting simulations towards sub-10 km resolution, but global high resolution climate simulations remain a challenge. The non-hydrostatic Model for Prediction Across Scales (MPAS) provides a global framework to achieve very high resolution using regional mesh refinement. Previous studies using the hydrostatic version of MPAS (H-MPAS) with the physics parameterizations of Community Atmosphere Model version 4 (CAM4) found notable resolution dependent behaviors. This study revisits the resolution sensitivity using the non-hydrostatic version of MPAS (NH-MPAS) with both CAM4 and CAM5 physics. A series of aqua-planet simulations at global quasi-uniform resolutions ranging from 240 km to 30 km and global variable resolution simulations with a regional mesh refinement of 30 km resolution over the tropics are analyzed, with a primary focus on the distinct characteristics of NH-MPAS in simulating precipitation, clouds, and large-scale circulation features compared to H-MPAS-CAM4. The resolution sensitivity of total precipitation and column integrated moisture in NH-MPAS is smaller than that in H-MPAS-CAM4. This contributes importantly to the reduced resolution sensitivity of large-scale circulation features such as the inter-tropical convergence zone and Hadley circulation in NH-MPAS compared to H-MPAS. In addition, NH-MPAS shows almost no resolution sensitivity in the simulated westerly jet, in contrast to the obvious poleward shift in H-MPAS with increasing resolution, which is partly explained by differences in the hyperdiffusion coefficients used in the two models that influence wave activity. With the reduced resolution sensitivity, simulations in the refined region of the NH-MPAS global variable resolution configuration exhibit zonally symmetric features that are more comparable to the quasi-uniform high-resolution simulations than those from H-MPAS that displays zonal asymmetry in simulations inside the refined region. Overall, NH-MPAS with CAM5 physics shows less resolution sensitivity compared to CAM4. These results provide a reference for future studies to further explore the use of NH-MPAS for high-resolution climate simulations in idealized and realistic configurations.

  16. Classical nucleation theory in the phase-field crystal model

    NASA Astrophysics Data System (ADS)

    Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas

    2018-04-01

    A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation takes place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.
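
    For reference, the classical nucleation theory quantities the PFC results are compared against (critical radius, barrier, and steady-state rate for a spherical nucleus) can be evaluated as below; the interfacial energy, driving force, and kinetic prefactor are illustrative assumptions, not values from the paper.

    ```python
    # Classical nucleation theory (spherical nucleus) with illustrative parameters.
    import numpy as np

    kB = 1.380649e-23            # J/K
    gamma = 0.05                 # J/m^2, solid-liquid interfacial free energy (assumed)
    dgv = 1.0e8                  # J/m^3, bulk driving force |Delta g_v| (assumed)
    T = 300.0                    # K
    J0 = 1.0e35                  # m^-3 s^-1, kinetic prefactor (assumed)

    r_star = 2.0 * gamma / dgv                              # critical radius
    dG_star = 16.0 * np.pi * gamma**3 / (3.0 * dgv**2)      # nucleation barrier
    J = J0 * np.exp(-dG_star / (kB * T))                    # steady-state rate
    print(f"r* = {r_star*1e9:.2f} nm, dG* = {dG_star/(kB*T):.1f} kBT, J = {J:.2e} m^-3 s^-1")
    ```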

  17. Classical nucleation theory in the phase-field crystal model.

    PubMed

    Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas

    2018-04-01

    A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation takes place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.

  18. Modeling of Turbulent Free Shear Flows

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.; DeBonis, James R.; Georgiadis, Nicolas J.

    2013-01-01

    The modeling of turbulent free shear flows is crucial to the simulation of many aerospace applications, yet often receives less attention than the modeling of wall boundary layers. Thus, while turbulence model development in general has proceeded very slowly in the past twenty years, progress for free shear flows has been even slower. This paper highlights some of the fundamental issues in modeling free shear flows for propulsion applications, presents a review of past modeling efforts, and identifies areas where further research is needed. Among the topics discussed are differences between planar and axisymmetric flows, development versus self-similar regions, the effect of compressibility and the evolution of compressibility corrections, the effect of temperature on jets, and the significance of turbulent Prandtl and Schmidt numbers for reacting shear flows. Large eddy simulation greatly reduces the amount of empiricism in the physical modeling, but is sensitive to a number of numerical issues. This paper includes an overview of the importance of numerical scheme, mesh resolution, boundary treatment, sub-grid modeling, and filtering in conducting a successful simulation.

  19. Numerical Model Simulation of Atmosphere above A.C. Airport

    NASA Astrophysics Data System (ADS)

    Lutes, Tiffany; Trout, Joseph

    2014-03-01

    In this research project, the Weather Research and Forecasting (WRF) model from the National Center for Atmospheric Research (NCAR) is used to investigate past and present weather conditions. The Atlantic City Airport area in southern New Jersey is the area of interest. Long-term hourly data are analyzed and model simulations are created. Inputting high-resolution surface data is expected to give a more accurate picture of the effects of different weather conditions. Currently, the impact of the gridded model runs is being tested, and the influence of surface characteristics is being investigated.

  20. MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models

    NASA Astrophysics Data System (ADS)

    Ullrich, P.; Jablonowski, C.

    2011-12-01

    The desire for increasingly accurate predictions of the atmosphere has driven numerical models to finer and finer resolutions, while exponentially driving up their computational cost. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while only requiring a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume approach in developing an atmospheric dynamical core on the cubed sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse at future work with our model.

  1. Modeling the spatiotemporal variability in subsurface thermal regimes across a low-relief polygonal tundra landscape

    DOE PAGES

    Kumar, Jitendra; Collier, Nathan; Bisht, Gautam; ...

    2016-09-27

    Vast carbon stocks stored in permafrost soils of Arctic tundra are at risk of release to the atmosphere under warming climate scenarios. Ice-wedge polygons in the low-gradient polygonal tundra create a complex mosaic of microtopographic features. This microtopography plays a critical role in regulating the fine-scale variability in thermal and hydrological regimes in the polygonal tundra landscape underlain by continuous permafrost. Modeling of thermal regimes of this sensitive ecosystem is essential for understanding the landscape behavior under the current as well as a changing climate. Here, we present an end-to-end effort for high-resolution numerical modeling of thermal hydrology at real-world field sites, utilizing the best available data to characterize and parameterize the models. We also develop approaches to model the thermal hydrology of polygonal tundra and apply them at four study sites near Barrow, Alaska, spanning low-centered, transitional, and high-centered polygons, representing a broad polygonal tundra landscape. A multiphase subsurface thermal hydrology model (PFLOTRAN) was developed and applied to study the thermal regimes at the four sites. Using a high-resolution lidar digital elevation model (DEM), microtopographic features of the landscape were characterized and represented in the high-resolution model mesh. The best available soil data from field observations and literature were utilized to represent the complex heterogeneous subsurface in the numerical model. Simulation results demonstrate the ability of the developed modeling approach to capture – without recourse to model calibration – several aspects of the complex thermal regimes across the sites, and provide insights into the critical role of polygonal tundra microtopography in regulating the thermal dynamics of the carbon-rich permafrost soils. Moreover, areas of significant disagreement between model results and observations highlight the importance of field-based observations of soil thermal and hydraulic properties for modeling-based studies of permafrost thermal dynamics, and provide motivation and guidance for future observations that will help address model and data gaps affecting our current understanding of the system.

  2. Numerical viscosity and resolution of high-order weighted essentially nonoscillatory schemes for compressible flows with high Reynolds numbers.

    PubMed

    Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye

    2003-10-01

    A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
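
    The reconstruction step at the heart of the WENO schemes studied above can be illustrated compactly. The sketch below implements the classical fifth-order (Jiang-Shu) left-biased reconstruction of interface values from cell averages; it is only the reconstruction, not the full Navier-Stokes solver or the mesh-refinement study.

    ```python
    # Classical WENO5 (Jiang-Shu) left-biased reconstruction of v_{i+1/2}.
    import numpy as np

    def weno5_left(v):
        # Stencil values v_{i-2..i+2} for each interior interface.
        vm2, vm1, v0, vp1, vp2 = v[:-4], v[1:-3], v[2:-2], v[3:-1], v[4:]
        # Candidate third-order reconstructions on the three sub-stencils.
        p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
        p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
        p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
        # Smoothness indicators.
        b0 = 13 / 12 * (vm2 - 2 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4 * vm1 + 3 * v0) ** 2
        b1 = 13 / 12 * (vm1 - 2 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
        b2 = 13 / 12 * (v0 - 2 * vp1 + vp2) ** 2 + 0.25 * (3 * v0 - 4 * vp1 + vp2) ** 2
        eps = 1e-6
        a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
        s = a0 + a1 + a2
        return (a0 * p0 + a1 * p1 + a2 * p2) / s

    # Profile sampled at cell centres (a stand-in for cell averages), with a jump:
    # the nonlinear weights keep the reconstruction essentially non-oscillatory.
    x = np.linspace(0, 1, 101)
    xc = 0.5 * (x[:-1] + x[1:])
    v = np.where(xc < 0.5, np.sin(2 * np.pi * xc), 2.0)
    vr = weno5_left(v)
    print("overshoot above max cell value:", max(0.0, vr.max() - v.max()))
    ```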

  3. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.

    2015-03-10

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is placed on a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.

  4. Isochronal Ice Sheet Model: a New Approach to Tracer Transport by Explicitly Tracing Accumulation Layers

    NASA Astrophysics Data System (ADS)

    Born, A.; Stocker, T. F.

    2014-12-01

    The long, high-resolution and largely undisturbed depositional record of polar ice sheets is one of the greatest resources in paleoclimate research. The vertical profile of isotopic and other geochemical tracers provides a full history of depositional and dynamical variations. Numerical simulations of this archive could afford great advances both in the interpretation of these tracers and in the improvement of ice sheet models themselves, as successful implementations in oceanography and atmospheric dynamics have shown. However, due to the slow advection velocities, tracer modeling in ice sheets is particularly prone to numerical diffusion, thwarting efforts that employ straightforward solutions. Previous attempts to circumvent this issue follow conceptually and computationally demanding approaches that augment traditional Eulerian models of ice flow with a semi-Lagrangian tracer scheme (e.g. Clarke et al., QSR, 2005). Here, we propose a new vertical discretization for ice sheet models that eliminates numerical diffusion entirely. Vertical motion through the model mesh is avoided by mimicking the real-world ice flow as a thinning of underlying layers. A new layer is added to the surface at equidistant time intervals (isochronally). Therefore, each layer is uniquely identified with an age. Horizontal motion follows the shallow ice approximation using an implicit numerical scheme. Vertical diffusion of heat, which is physically desirable, is also solved implicitly. A simulation of a two-dimensional section through the Greenland ice sheet will be discussed.
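
    The isochronal bookkeeping described above can be caricatured in a few lines: each accumulation step appends a new surface layer and thins the existing ones, so every layer keeps its age and tracers are never advected through a mesh. The accumulation rate and uniform strain rate below are assumptions of this toy, not the authors' Greenland setup.

    ```python
    # Toy isochronal layer bookkeeping: one new layer per step, dynamic thinning below.
    import numpy as np

    accumulation = 0.3        # m of ice added per time step (assumed)
    strain_rate = 0.002       # fractional thinning of each layer per step (assumed)
    steps = 5000

    thickness = []            # thickness[k] belongs to the layer deposited at step k
    for _ in range(steps):
        thickness = [h * (1.0 - strain_rate) for h in thickness]   # thin existing layers
        thickness.append(accumulation)                             # add new surface layer

    thickness = np.array(thickness)
    depth_of_top = np.cumsum(thickness[::-1])[::-1] - thickness    # depth to each layer top
    age = (steps - np.arange(steps)) * 1.0                         # steps since deposition
    print("total thickness: %.1f m" % thickness.sum())
    print("age of layer near 100 m depth: %.0f steps"
          % age[np.searchsorted(-depth_of_top, -100.0)])
    ```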

  5. Unraveling the martian water cycle with high-resolution global climate simulations

    NASA Astrophysics Data System (ADS)

    Pottier, Alizée; Forget, François; Montmessin, Franck; Navarro, Thomas; Spiga, Aymeric; Millour, Ehouarn; Szantai, André; Madeleine, Jean-Baptiste

    2017-07-01

    Global climate modeling of the Mars water cycle is usually performed at relatively coarse resolution (200-300 km), which may not be sufficient to properly represent the impact of waves, fronts, and topography effects on the detailed structure of clouds and surface ice deposits. Here, we present new numerical simulations of the annual water cycle performed at a resolution of 1° × 1° (∼ 60 km in latitude). The model includes the radiative effects of clouds, whose influence on the thermal structure and atmospheric dynamics is significant; thus we also examine simulations with inactive clouds to distinguish the direct impact of resolution on circulation and winds from the indirect impact of resolution via water ice clouds. To first order, we find that the high resolution does not dramatically change the behavior of the system, and that simulations performed at ∼ 200 km resolution capture well the behavior of the simulated water cycle and Mars climate. Nevertheless, a detailed comparison between high and low resolution simulations, with reference to observations, reveals several significant changes that impact our understanding of the water cycle active today on Mars. The key northern cap edge dynamics are affected by an increase in baroclinic wave strength, which complicates the northern summer dynamics. South polar frost deposition is modified, with a westward longitudinal shift, since southern dynamics are also influenced. Baroclinic wave mode transitions are observed. New transient phenomena appear, like spiral and streak clouds, already documented in the observations. Atmospheric circulation cells in the polar region exhibit a large variability and are finely structured, with slope winds. Most modeled phenomena affected by high resolution give a picture of a more turbulent planet, inducing further variability. This is challenging for long-period climate studies.

  6. Impact of the "Symmetric Instability of the Computational Kind" at mesoscale- and submesoscale-permitting resolutions

    NASA Astrophysics Data System (ADS)

    Ducousso, Nicolas; Le Sommer, J.; Molines, J.-M.; Bell, M.

    2017-12-01

    The energy- and enstrophy-conserving momentum advection scheme (EEN) used over the last 10 years in NEMO is subject to a spurious numerical instability. This instability, referred to as the Symmetric Instability of the Computational Kind (SICK), arises from a discrete imbalance between the two components of the vector-invariant form of momentum advection. The properties and the method for removing this instability have been documented by Hollingsworth et al. (1983), but the extent to which the SICK may interfere with processes of interest at mesoscale- and submesoscale-permitting resolutions is still unknown. In this paper, the impact of the SICK in realistic ocean model simulations is assessed by comparing model integrations with different versions of the EEN momentum advection scheme. Investigations are undertaken with a global mesoscale-permitting resolution (1/4°) configuration and with a regional North Atlantic Ocean submesoscale-permitting resolution (1/60°) configuration. At both resolutions, the instability is found to alter primarily the most energetic current systems, such as equatorial jets, western boundary currents and coherent vortices. The impact of the SICK is found to increase with model resolution, with a noticeable impact at mesoscale-permitting resolution and a dramatic impact at submesoscale-permitting resolution. The SICK is shown to distort the normal functioning of current systems, by redirecting the slow energy transfer between balanced motions to a spurious energy transfer to internal inertia-gravity waves and to dissipation. Our results indicate that the SICK is likely to have significantly corrupted NEMO solutions (when run with the EEN scheme) at mesoscale-permitting and finer resolutions over the last 10 years.

  7. A unified high-resolution wind and solar dataset from a rapidly updating numerical weather prediction model

    DOE PAGES

    James, Eric P.; Benjamin, Stanley G.; Marquis, Melinda

    2016-10-28

    A new gridded dataset for wind and solar resource estimation over the contiguous United States has been derived from hourly updated 1-h forecasts from the National Oceanic and Atmospheric Administration High-Resolution Rapid Refresh (HRRR) 3-km model composited over a three-year period (approximately 22 000 forecast model runs). The unique dataset features hourly data assimilation, and provides physically consistent wind and solar estimates for the renewable energy industry. The wind resource dataset shows strong similarity to that previously provided by a Department of Energy-funded study, and it includes estimates in southern Canada and northern Mexico. The solar resource dataset represents an initial step towards application-specific fields such as global horizontal and direct normal irradiance. This combined dataset will continue to be augmented with new forecast data from the advanced HRRR atmospheric/land-surface model.
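
    A minimal sketch of the compositing idea (not NOAA's actual processing chain): average a large stack of hourly 1-h forecast fields to estimate a mean resource and a high percentile, here with synthetic arrays standing in for the HRRR wind fields.

    ```python
    # Toy composite over many forecast runs; synthetic data stand in for HRRR fields.
    import numpy as np

    rng = np.random.default_rng(42)
    n_runs, ny, nx = 1000, 50, 60                              # runs, toy grid
    wind_80m = rng.weibull(2.0, size=(n_runs, ny, nx)) * 8.0   # synthetic wind speeds (m/s)

    mean_wind = wind_80m.mean(axis=0)                          # long-period composite
    p90_wind = np.percentile(wind_80m, 90, axis=0)             # upper-decile resource
    print("domain-mean composite wind:", round(float(mean_wind.mean()), 2), "m/s")
    print("domain-mean 90th percentile:", round(float(p90_wind.mean()), 2), "m/s")
    ```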

  8. Numerical study of base pressure characteristic curve for a four-engine clustered nozzle configuration

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1993-01-01

    The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a three-dimensional pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second and fourth order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve agreed reasonably well with the measurements. A parametric study on the effects of grid resolution, turbulence model, inlet boundary condition, and difference scheme for the convective terms has been performed. The results showed that grid resolution had a strong influence on the accuracy of the base flowfield prediction.

  9. MEETING REPORT ASSESSING HUMAN GERM-CELL MUTAGENESIS IN THE POST-GENOME ERA: A CELEBRATION OF THE LEGACY OF WILLIAM LAWSON (BILL) RUSSELL

    EPA Science Inventory

    Although numerous germ-cell mutagens have been identified in animal model systems, to date, no human germ-cell mutagens have been confirmed. Because the genomic integrity of our germ cells is essential for the continuation of the human species, a resolution of this enduring conu...

  10. ERS-1 SAR and SSM/I Data in the Greenland Sea Odden Region Used with Numerical Simulations to Identify and Monitor Ocean Convection

    NASA Technical Reports Server (NTRS)

    Carsey, Frank D.; Garwood, Ronald W.; Roach, Andrew T.

    1993-01-01

    In this paper we present an interpretation of coarse-resolution passive microwave data for 1989 and 1992 in the context of a simple model of ice-edge retreat to obtain the Nordbukta embayment growth and the formation and migration of an Odden polynya.

  11. A Wind-Forced Modeling Study of the Canary Current System from 30 Degrees N to 42.5 Degrees N

    DTIC Science & Technology

    1998-06-01

    and Haynes and Barton (1990), using high-resolution infrared images from NOAA7 and NOAA9 and numerous in-situ measurements, reveal the existence of... dynamics of the coastal waters of Portugal. Dissertation presented to the Universidade de Lisboa for the degree of Doctor of Physics, specialization...

  12. A Robust Multi-Scale Modeling System for the Study of Cloud and Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    During the past decade, numerical weather and global non-hydrostatic models have started using more complex microphysical schemes originally developed for high resolution cloud resolving models (CRMs) with horizontal resolutions of 1-2 km or less. These microphysical schemes affect the dynamics through the release of latent heat (buoyancy loading and pressure gradients), the radiation through cloud coverage (the vertical distribution of cloud species), and surface processes through rainfall (both amount and intensity). Recently, several major improvements of ice microphysical processes (or schemes) have been developed for a cloud-resolving model (the Goddard Cumulus Ensemble, GCE, model) and a regional-scale model (the Weather Research and Forecasting, WRF, model). These improvements include an improved 3-ICE (cloud ice, snow and graupel) scheme (Lang et al. 2010); a 4-ICE (cloud ice, snow, graupel and hail) scheme; a spectral bin microphysics scheme; and two different two-moment microphysics schemes. The performance of these schemes has been evaluated by using observational data from TRMM and other major field campaigns. In this talk, we will present the high-resolution (1 km) GCE and WRF model simulations and compare the simulated results with observations from recent field campaigns [i.e., midlatitude continental spring season (MC3E; 2010), high latitude cold-season (C3VP, 2007; GCPEx, 2012), and tropical oceanic (TWP-ICE, 2006)].

  13. Hyper-resolution monitoring of urban flooding with social media and crowdsourcing data

    NASA Astrophysics Data System (ADS)

    Wang, Ruo-Qian; Mao, Huina; Wang, Yuan; Rae, Chris; Shaw, Wesley

    2018-02-01

    Hyper-resolution datasets for urban flooding are rare. This problem prevents detailed flood risk analysis, urban flooding control, and the validation of hyper-resolution numerical models. We employed social media and crowdsourcing data to address this issue. Natural Language Processing and Computer Vision techniques are applied to the data collected from Twitter and MyCoast (a crowdsourcing app). We found that these big-data-based flood monitoring approaches can complement existing means of flood data collection. The extracted information is validated against precipitation data and road closure reports to examine the data quality. The two data collection approaches are compared and the two data mining methods are discussed. A series of suggestions is given to improve the data collection strategy.
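
    A hedged sketch of the simplest text-filtering step such a pipeline might start from (not the authors' NLP/computer-vision system): keep messages that mention flooding together with a location cue before any further validation. The sample messages and keyword lists are invented.

    ```python
    # Naive keyword filter for candidate flood reports; a first pass only.
    import re

    FLOOD_TERMS = re.compile(r"\b(flood(ing|ed)?|under\s*water|inundat\w*)\b", re.I)
    LOCATION_CUE = re.compile(r"\b(st\.?|street|ave|avenue|blvd|road|rd|bridge|park)\b", re.I)

    messages = [
        "Mission St completely flooded near 16th, cars stalled",
        "I love rainy days",
        "Flooding on the bridge at Cesar Chavez, avoid the area",
    ]

    def looks_like_flood_report(text):
        # Require both a flood term and a location cue to reduce false positives.
        return bool(FLOOD_TERMS.search(text)) and bool(LOCATION_CUE.search(text))

    for m in messages:
        print(looks_like_flood_report(m), "|", m)
    ```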

  14. Rapid inundation estimates at harbor scale using tsunami wave heights offshore simulation and Green's law approach

    NASA Astrophysics Data System (ADS)

    Gailler, Audrey; Hébert, Hélène; Loevenbruck, Anne

    2013-04-01

    Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are mostly amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle for the routine use of such numerical simulations remains the slowness of the numerical computation, which is strengthened when detailed grids are required for the precise modeling of the coastline response on the scale of an individual harbor. In fact, when facing the problem of the interaction of the tsunami wavefield with a shoreline, any numerical simulation must be performed over an increasingly fine grid, which in turn mandates a reduced time step, and the use of a fully non-linear code. Such calculations then become prohibitively time-consuming, which is clearly unacceptable in the framework of real-time warning. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami wave heights in high seas, and tsunami warning maps at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these deep-water wave height simulations. The method involves an empirical correction relation derived from Green's law, expressing conservation of wave energy flux to extend the gridded wave field into the harbor with respect to the nearby deep-water grid node. The main limitation of this method is that its application to a given coastal area would require a large database of previous observations, in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, a set of synthetic mareograms is calculated for both hypothetical and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling by using several nested bathymetric grids characterized by a coarse resolution over deep water regions and an increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). This synthetic dataset is then used to approximate the empirical parameters of the correction equation. Results of inundation estimates in several French Mediterranean harbors obtained with the fast "Green's law-derived" method are presented and compared with values given by time-consuming nested-grid simulations.
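
    The Green's-law scaling invoked above follows from conservation of wave energy flux in shoaling water: the amplitude grows roughly as the fourth root of the depth ratio. The snippet below shows the bare relation, with an explicit correction factor standing in for the harbor-specific empirical calibration the authors derive from their synthetic dataset; all numbers are illustrative.

    ```python
    # Green's law amplitude scaling: A_coast ~ A_offshore * (h_offshore / h_coast) ** 0.25
    h_offshore = 2500.0     # m, depth at the offshore grid node (assumed)
    h_coast = 10.0          # m, nominal depth near the harbor entrance (assumed)
    a_offshore = 0.15       # m, simulated offshore tsunami amplitude (assumed)
    k = 1.0                 # hypothetical empirical correction factor (site-dependent)

    a_coast = k * a_offshore * (h_offshore / h_coast) ** 0.25
    print(f"estimated coastal amplitude: {a_coast:.2f} m")   # ~0.60 m for these numbers
    ```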

  15. An investigation of the convective region of numerically simulated squall lines

    NASA Astrophysics Data System (ADS)

    Bryan, George Howard

    High resolution numerical simulations are utilized to investigate the thermodynamic and kinematic structure of the convective region of squall lines. A new numerical modeling system was developed for this purpose. The model incorporates several new and/or recent advances in numerical modeling, including: a mass- and energy-conserving equation set, based on the compressible system of equations; third-order Runge-Kutta time integration, with high (third to sixth) order spatial discretization; and a new method for conserved-variable mixing in saturated environments, utilizing an exact definition for ice-liquid water potential temperature. A benchmark simulation for moist environments was designed to evaluate the new model. It was found that the mass- and energy-conserving equation set was necessary to produce acceptable results, and that traditional equation sets have a cool bias that leads to systematic underprediction of vertical velocity. The model was developed to run on massively-parallel distributed memory computing systems. This allows for simulations with very high resolution. In this study, squall lines were simulated with grid spacing of 125 m over a 300 km x 60 km x 18 km domain. Results show that the 125 m simulations contain sub-cloud-scale turbulent eddies that stretch and distort plumes of high equivalent potential temperature (θe) that rise from the pre-squall-line boundary layer. In contrast, with 1 km grid spacing the high θe plumes rise in a laminar manner, and require parameterized subgrid terms to diffuse the high θe air. The high resolution output is used to refine the conceptual model of the structure and lifecycle of moist absolutely unstable layers (MAULs). Moist absolute instability forms in the inflow region of the squall line and is subsequently removed by turbulent processes of varying scales. Three general MAUL regimes (MRs) are identified: a laminar MR, characterized by deep (~2 km) MAULs that extend continuously in both the cross-line and along-line directions; a convective MR, containing deep (~10 km) cellular pulses and plumes; and a turbulent MR, characterized by numerous moist turbulent eddies that are a few km (or smaller) in scale. The character of the laminar MR is of particular interest. Parcels in this region experience moist absolute instability for 11-17 minutes before beginning to overturn. Conventional theory suggests that overturning would ensue immediately in these conditions. Two explanations are offered to elucidate why this layer persists without overturning. First, it is found that buoyancy forcing (defined as the sum of buoyancy and the vertical pressure gradient due to the buoyancy field) is reduced in the laminar MR as compared to that of an isolated parcel. The geometry of the laminar MR is directly responsible for this reduction in buoyancy forcing; specifically, the MAUL extends continuously in the along-line direction and for 10 km in the cross-line direction, which inhibits the development of vertical motions due to mass continuity considerations. (Abstract shortened by UMI.)

  16. Two-Dimensional Model for Reactive-Sorption Columns of Cylindrical Geometry: Analytical Solutions and Moment Analysis.

    PubMed

    Khan, Farman U; Qamar, Shamsul

    2017-05-01

    A set of analytical solutions is presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to first-type (Dirichlet) and third-type (Danckwerts) inlet boundary conditions. A linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of a high-resolution finite-volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. Good agreement was observed between the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment.
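
    The temporal moments used above can be computed directly from any simulated outlet concentration profile; the short example below does so for a synthetic Gaussian breakthrough curve, where the recovered mean residence time and variance are known in advance.

    ```python
    # Temporal moments of a breakthrough curve: zeroth moment, mean residence
    # time (first normalized moment), and variance (second central moment).
    import numpy as np

    t = np.linspace(0.0, 50.0, 2001)
    t_mean, spread = 20.0, 3.0
    c = np.exp(-0.5 * ((t - t_mean) / spread) ** 2)       # synthetic breakthrough curve

    m0 = np.trapz(c, t)                                   # zeroth moment (area)
    mu1 = np.trapz(t * c, t) / m0                         # mean residence time
    mu2 = np.trapz((t - mu1) ** 2 * c, t) / m0            # variance about the mean
    print(f"mean residence time = {mu1:.2f}, variance = {mu2:.2f} (expected 20, 9)")
    ```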

  17. Measurement with microscopic MRI and simulation of flow in different aneurysm models.

    PubMed

    Edelhoff, Daniel; Walczak, Lars; Frank, Frauke; Heil, Marvin; Schmitz, Inge; Weichert, Frank; Suter, Dieter

    2015-10-01

    The impact and development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension. A better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively at high spatial resolution. Magnetic resonance flow imaging was used to visualize fluid exchange in two different models produced with a 3D printer. One aneurysm model was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time-of-flight magnetic resonance imaging. The whole experiment was simulated using fast graphics-processing-unit-based numerical simulations. The simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin-lattice relaxation. The results of both methods compared well for the aneurysm models and flow distributions used. The fluid-exchange analysis showed comparable characteristics in measurement and simulation, and similar symmetry behavior was observed. Based on these results, the amount of fluid exchange was calculated. Depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. The result of the numerical simulations coincides well with the experimentally determined velocity field. The rate of fluid exchange between vessel and aneurysm was well predicted. Hence, the results obtained by simulation could be validated by the experiment. The observed deviations can be caused by noise in the measurement and by the limited resolution of the simulation. The resulting differences are small enough to allow reliable predictions of the flow distribution in vessels with stents and for pulsed blood flow.

  18. Comparison of 2D numerical models for river flood hazard assessment: simulation of the Secchia River flood in January, 2014

    NASA Astrophysics Data System (ADS)

    Shustikova, Iuliia; Domeneghetti, Alessio; Neal, Jeffrey; Bates, Paul; Castellarin, Attilio

    2017-04-01

    Hydrodynamic modeling of inundation events still involves a large array of uncertainties. This effect is especially evident in models run for geographically large areas. Recent studies suggest using fully two-dimensional (2D) models with high resolution in order to avoid uncertainties and limitations stemming from an incorrect interpretation of flood dynamics and an unrealistic reproduction of the terrain topography. This, however, reduces computational efficiency, increasing run time and hardware demands. Concerning this point, our study evaluates and compares numerical models of different complexity by testing them on a flood event that occurred in the basin of the Secchia River, Northern Italy, on 19th January, 2014. The event was characterized by a levee breach and consequent flooding of over 75 km² of the plain behind the dike within 48 hours, causing population displacement, one death and economic losses in excess of 400 million Euro. We test the well-established TELEMAC 2D and LISFLOOD-FP codes, together with the recently launched HEC-RAS 5.0.3 (2D model); all models are implemented using different grid sizes (2-200 m) based on the 1 m digital elevation model resolution. TELEMAC is a fully 2D hydrodynamic model based on the finite-element or finite-volume approach, whereas HEC-RAS 5.0.3 and LISFLOOD-FP are both coupled 1D-2D models. All models are calibrated against observed inundation extent and maximum water depths, which are retrieved from remotely sensed data and field survey reports. Our study quantitatively compares the three modeling strategies, highlighting differences in terms of ease of implementation, accuracy of representation of hydraulic processes within floodplains, and computational efficiency. Additionally, we look into the different grid resolutions in terms of result accuracy and computation time. Our study is a preliminary assessment that focuses on smaller areas in order to identify potential modeling schemes that would be efficient for simulating flooding scenarios for large and very large floodplains. This research aims at contributing to the reduction of uncertainties and limitations in hazard and risk assessment.
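    Calibration against observed inundation extent is commonly summarized with a binary fit statistic F = A/(A+B+C), i.e. hits over hits plus false alarms plus misses on wet/dry maps. The paper does not state its exact skill measure, so the sketch below only illustrates this standard choice on toy data.

```python
import numpy as np

def flood_fit_statistic(model_wet, obs_wet):
    """Binary flood-extent fit F = A / (A + B + C).

    A: cells wet in both model and observation (hits)
    B: cells wet in the model only (false alarms)
    C: cells wet in the observation only (misses)
    """
    model_wet = np.asarray(model_wet, dtype=bool)
    obs_wet = np.asarray(obs_wet, dtype=bool)
    A = np.sum(model_wet & obs_wet)
    B = np.sum(model_wet & ~obs_wet)
    C = np.sum(~model_wet & obs_wet)
    return A / float(A + B + C)

# Toy 1D example: 10 cells, model over-predicts one cell
obs = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]
mod = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
print(flood_fit_statistic(mod, obs))   # 5 hits, 1 false alarm, 0 misses -> ~0.83
```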

  19. Numerical modeling techniques for flood analysis

    NASA Astrophysics Data System (ADS)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and the possible improvements over these models through 3D modeling, are also discussed. It is found that HEC-RAS and FLO 2D are the most economical and accurate options for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly related to floodplain elevation differences and vertical roughness within grid cells, were identified; these could be addressed through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models have recently been developed for open-channel flows but not for floodplains. Hence, it is suggested that a 3D floodplain model be developed, considering the hydrological and high-resolution topographic parameters discussed in this review, to better identify the causes and effects of flooding.

  20. Numerical modeling study of silver nano-filling based on grapefruit-type photonic crystal fiber sensor

    NASA Astrophysics Data System (ADS)

    Zheng, Yibo; Zhang, Lei; Wang, Yuan

    2017-10-01

    In this letter, surface plasmon resonance sensors based on grapefruit-type photonic crystal fiber (PCF) with different silver nano-filling structures have been analyzed and compared through the finite element method (FEM). The variation of the resonant wavelength with the refractive index of the sample has been numerically simulated. The surface plasmon resonance (SPR) sensing properties have been numerically simulated for both resonant-wavelength (spectral) and intensity detection. Numerical results show that an excellent sensor resolution of 4.17×10⁻⁵ RIU can be achieved with the spectral detection method when the radius of the filling silver nanowires is 150 nm. A comprehensive comparison indicates that the 150 nm silver wire filling structure is suitable for spectral detection, while the 30 nm silver film coating structure is suitable for amplitude detection.
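    The quoted resolution is consistent with the usual wavelength-interrogation estimate R = Δλ_min / S, where S is the spectral sensitivity and Δλ_min the assumed wavelength resolution of the interrogation system (commonly 0.1 nm). The numbers in the sketch below are back-computed for illustration and are not taken from the paper.

```python
# Wavelength-interrogation SPR resolution: R = dlambda_min / S
S = 2400.0          # spectral sensitivity [nm/RIU]; illustrative value only
dlambda_min = 0.1   # assumed wavelength resolution of the spectrometer [nm]

R = dlambda_min / S
print(f"sensor resolution ~ {R:.2e} RIU")   # ~4.2e-5 RIU for these assumptions
```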

  1. Non-Markovian closure models for large eddy simulations using the Mori-Zwanzig formalism

    NASA Astrophysics Data System (ADS)

    Parish, Eric J.; Duraisamy, Karthik

    2017-01-01

    This work uses the Mori-Zwanzig (M-Z) formalism, a concept originating from nonequilibrium statistical mechanics, as a basis for the development of coarse-grained models of turbulence. The mechanics of the generalized Langevin equation (GLE) are considered, and insight gained from the orthogonal dynamics equation is used as a starting point for model development. A class of subgrid models is considered which represent nonlocal behavior via a finite memory approximation [Stinis, arXiv:1211.4285 (2012)], the length of which is determined using a heuristic that is related to the spectral radius of the Jacobian of the resolved variables. The resulting models are intimately tied to the underlying numerical resolution and are capable of approximating non-Markovian effects. Numerical experiments on the Burgers equation demonstrate that the M-Z-based models can accurately predict the temporal evolution of the total kinetic energy and the total dissipation rate at varying mesh resolutions. The trajectory of each resolved mode in phase space is accurately predicted for cases where the coarse graining is moderate. Large eddy simulations (LESs) of homogeneous isotropic turbulence and the Taylor-Green Vortex show that the M-Z-based models are able to provide excellent predictions, accurately capturing the subgrid contribution to energy transfer. Last, LESs of fully developed channel flow demonstrate the applicability of M-Z-based models to nondecaying problems. It is notable that the form of the closure is not imposed by the modeler, but is rather derived from the mathematics of the coarse graining, highlighting the potential of M-Z-based techniques to define LES closures.
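    The finite-memory idea can be shown schematically: the non-Markovian term is a convolution of a kernel with the recent history of the resolved variables, truncated to a finite window. The sketch below is a generic scalar toy with an arbitrarily chosen exponential kernel; it is not the M-Z-derived closure nor the Burgers/LES implementation of the paper.

```python
import numpy as np
from collections import deque

def step_with_memory(u, history, rhs, kernel, dt):
    """One explicit Euler step of du/dt = R(u) + integral_0^tau K(s) G(u(t-s)) ds,
    with the memory integral truncated to the stored finite history and
    evaluated by the trapezoidal rule."""
    g_vals = np.array(history)                  # oldest first, most recent last
    s = dt * np.arange(len(g_vals))[::-1]       # lag of each stored sample
    mem = np.trapz(kernel(s) * g_vals, dx=dt) if len(g_vals) > 1 else 0.0
    return u + dt * (rhs(u) + mem)

# Toy scalar example (all functions and constants are illustrative)
rhs = lambda u: -u                        # Markovian part
G = lambda u: u                           # quantity entering the memory integral
kernel = lambda s: -0.5 * np.exp(-s / 0.1)
dt, tau = 0.01, 0.1
history = deque(maxlen=int(tau / dt) + 1)  # finite memory window
u = 1.0
for _ in range(500):
    history.append(G(u))
    u = step_with_memory(u, history, rhs, kernel, dt)
print(u)
```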

  2. Numerical Simulation of Intense Precipitation Events South of the Alps: Sensitivity to Initial Conditions and Horizontal Resolution

    NASA Astrophysics Data System (ADS)

    Cacciamani, C.; Cesari, D.; Grazzini, F.; Paccagnella, T.; Pantone, M.

    In this paper we describe the results of several numerical experiments performed with the limited area model LAMBO, based on a 1989 version of the NCEP (National Centers for Environmental Prediction) ETA model and operational at ARPA-SMR since 1993. The experiments were designed to assess the impact of different horizontal resolutions and initial conditions on the quality and detail of the forecast, especially as regards the precipitation field in the case of severe flood events. For the initial conditions we developed a mesoscale data assimilation scheme based on the nudging technique. The scheme makes use of upper-air and surface meteorological observations to modify ECMWF (European Centre for Medium-Range Weather Forecasts) operational analyses, used as first-guess fields, in order to better describe smaller-scale features, mainly in the lower troposphere. Three flood cases in the Alpine and Mediterranean regions have been simulated with LAMBO, using horizontal grid spacings of 15 and 5 km and starting either from ECMWF initialised analyses or from the result of our mesoscale analysis procedure. The results show that increasing the resolution generally improves the forecast, bringing the precipitation peaks in the flooded areas close to the observed values without producing many spurious precipitation patterns. The use of the mesoscale analysis produces a more realistic representation of precipitation patterns, giving a further improvement to the precipitation forecast. Furthermore, when simulations are started from the mesoscale analysis, some model-simulated thermodynamic indices show greater vertical instability precisely in the regions where the strongest precipitation occurred.
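    Nudging amounts to adding a relaxation term G(x_obs - x) to the model tendency so that the forecast is pulled toward observations or a first-guess analysis. A minimal scalar sketch follows, assuming a constant nudging coefficient and a toy tendency; none of the values correspond to LAMBO's actual configuration.

```python
def nudged_step(x, dt, model_tendency, x_obs, G):
    """Advance one step of dx/dt = f(x) + G * (x_obs - x) with forward Euler.

    model_tendency : callable f(x), the model's own tendency
    x_obs          : observed/analysis value toward which x is relaxed
    G              : nudging coefficient [1/s]; larger G -> stronger pull
    """
    return x + dt * (model_tendency(x) + G * (x_obs - x))

# Toy usage: a slowly decaying scalar nudged toward an observed value of 5.0
f = lambda x: -0.001 * x          # decay timescale ~1000 s (illustrative)
x, dt, G = 0.0, 60.0, 1.0e-3
for _ in range(200):
    x = nudged_step(x, dt, f, x_obs=5.0, G=G)
print(round(x, 2))                # relaxes toward G/(0.001+G)*x_obs = 2.5 here
```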

  3. Atmospheric blocking in the Climate SPHINX simulations: the role of orography and resolution

    NASA Astrophysics Data System (ADS)

    Davini, Paolo; Corti, Susanna; D'Andrea, Fabio; Riviere, Gwendal; von Hardenberg, Jost

    2017-04-01

    The representation of atmospheric blocking in numerical simulations, especially over the Euro-Atlantic region, still represents a main concern for the climate modelling community. We here discuss the Northern Hemisphere winter atmospheric blocking representation in a set of 30-year simulations which have been performed in the framework of the PRACE project "Climate SPHINX". Simulations were run using the EC-Earth Global Climate Model with several ensemble members at 5 different horizontal resolutions (ranging from 125 km to 16 km). Results show that the negative bias in blocking frequency over Europe becomes negligible at resolutions of about 40 km and finer. However, the blocking duration is still underestimated by 1-2 days, suggesting that the correct blocking frequencies are achieved with an overestimation of the number of blocking onsets. The reasons leading to such improvements are then discussed, highlighting the role of orography in shaping the Atlantic jet stream: at higher resolution the jet is weaker and less penetrating over Europe, favoring the breaking of synoptic Rossby waves over the Atlantic stationary ridge and thus increasing the simulated blocking frequency.
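    Blocking frequency in studies of this kind is often diagnosed with a one-dimensional index of the Tibaldi-Molteni type based on meridional gradients of the 500 hPa geopotential height; the paper does not spell out its detection scheme, so the sketch below uses the standard thresholds on a synthetic height field purely as an illustration.

```python
import numpy as np

def blocked_longitudes(z500, lats, lons, deltas=(-4.0, 0.0, 4.0)):
    """Instantaneous blocking detection of the Tibaldi-Molteni type.

    z500 : 2D array of 500 hPa geopotential height [m], shape (nlat, nlon)
    lats, lons : 1D coordinate arrays [deg]
    Returns a boolean array over longitude: True where the flow is blocked.
    """
    def z_at(lat, j):
        i = np.argmin(np.abs(lats - lat))        # nearest grid latitude
        return z500[i, j]

    blocked = np.zeros(lons.size, dtype=bool)
    for j in range(lons.size):
        for d in deltas:
            phi_n, phi_0, phi_s = 80.0 + d, 60.0 + d, 40.0 + d
            ghgs = (z_at(phi_0, j) - z_at(phi_s, j)) / (phi_0 - phi_s)
            ghgn = (z_at(phi_n, j) - z_at(phi_0, j)) / (phi_n - phi_0)
            if ghgs > 0.0 and ghgn < -10.0:      # reversed gradient to the south,
                blocked[j] = True                # strong westerlies to the north
                break
    return blocked

# Toy usage on a synthetic height field (not EC-Earth output)
lats = np.arange(20.0, 90.1, 2.0)
lons = np.arange(0.0, 360.0, 2.0)
z500 = 5500.0 + 200.0 * np.cos(np.deg2rad(lats))[:, None] * np.ones(lons.size)
print(blocked_longitudes(z500, lats, lons).any())
```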

  4. Forecasting sea fog on the coast of southern China

    NASA Astrophysics Data System (ADS)

    Huang, H.; Huang, B.; Liu, C.; Tu, J.; Wen, G.; Mao, W.

    2016-12-01

    Forecasting sea fog is still very challenging. We have performed numerical forecasting of sea fog on the coast of southern China using the operational mesoscale regional model GRAPES (Global/Regional Assimilation and Prediction System). The GRAPES model horizontal resolution was 3 km, with 66 vertical levels. A 72-hour forecast of sea fog was conducted with hourly output over the sea fog event. The results show that the model system can predict reasonable characteristics of typical sea fog events on the coast of southern China. The extent of the sea fog coincides with the observations of meteorological stations, the observations of the Marine Meteorological Science Experiment Base (MMSEB) at Bohe, Maoming, and satellite products of sea fog. The goal of this study is to establish an operational numerical forecasting model system for sea fog on the coast of southern China.

  5. Modeling and numerical simulations of growth and morphologies of three dimensional aggregated silver films

    NASA Astrophysics Data System (ADS)

    Davis, L. J.; Boggess, M.; Kodpuak, E.; Deutsch, M.

    2012-11-01

    We report on a model for the deposition of three dimensional, aggregated nanocrystalline silver films, and an efficient numerical simulation method developed for visualizing such structures. We compare our results to a model system comprising chemically deposited silver films with morphologies ranging from dilute, uniform distributions of nanoparticles to highly porous aggregated networks. Disordered silver films grown in solution on silica substrates are characterized using digital image analysis of high resolution scanning electron micrographs. While the latter technique provides little volume information, plane-projected (two dimensional) island structure and surface coverage may be reliably determined. Three parameters governing film growth are evaluated using these data and used as inputs for the deposition model, greatly reducing computing requirements while still providing direct access to the complete (bulk) structure of the films throughout the growth process. We also show how valuable three dimensional characteristics of the deposited materials can be extracted using the simulated structures.

  6. Unstructured mesh adaptivity for urban flooding modelling

    NASA Astrophysics Data System (ADS)

    Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.

    2018-05-01

    Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process: for example, high-resolution meshes are placed around buildings and steep regions when the flood water reaches them. In this work a flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.

  7. Avoiding numerical pitfalls in social force models

    NASA Astrophysics Data System (ADS)

    Köster, Gerta; Treml, Franz; Gödel, Marion

    2013-06-01

    The social force model of Helbing and Molnár is one of the best known approaches to simulate pedestrian motion, a collective phenomenon with nonlinear dynamics. It is based on the idea that the Newtonian laws of motion mostly carry over to pedestrian motion, so that human trajectories can be computed by solving a set of ordinary differential equations for velocity and acceleration. The beauty and simplicity of this ansatz are strong reasons for its widespread adoption. However, the numerical implementation is not without pitfalls. Oscillations, collisions, and instabilities occur even for very small step sizes. Classic solution ideas from molecular dynamics do not apply to the problem because the system is not Hamiltonian despite its source of inspiration. Looking at the model through the eyes of a mathematician, however, we realize that the right-hand side of the differential equation is nondifferentiable and even discontinuous at critical locations. This produces undesirable behavior in the exact solution and, at best, severe loss of accuracy in efficient numerical schemes even in short-range simulations. We suggest a very simple mollified version of the social force model that conserves the desired dynamic properties of the original many-body system but elegantly and cost-efficiently resolves several of the issues concerning stability and numerical resolution.
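    A minimal sketch of the model structure discussed above: each pedestrian obeys an ODE combining relaxation toward a desired velocity with pairwise repulsion, and the repulsion here is multiplied by a smooth taper so it decays to zero continuously at a finite cutoff. This is one possible mollification under stated assumptions, not necessarily the one proposed in the paper, and all parameter values are illustrative.

```python
import numpy as np

A, B, CUTOFF = 2.0, 0.5, 3.0   # repulsion strength, range [m], cutoff [m] (illustrative)
TAU, V0 = 0.5, 1.3             # relaxation time [s], desired walking speed [m/s]

def smooth_repulsion(r_ij):
    """Mollified pairwise repulsion: exponential kernel times a C^1 taper that
    brings the force smoothly to zero at a finite cutoff distance."""
    d = np.linalg.norm(r_ij)
    if d >= CUTOFF or d < 1e-9:
        return np.zeros(2)
    taper = (1.0 - (d / CUTOFF) ** 2) ** 2
    return A * np.exp(-d / B) * taper * (r_ij / d)

def tendencies(pos, vel, goals):
    """Right-hand side of the social force ODEs for N pedestrians in 2D."""
    acc = np.zeros_like(vel)
    for i in range(pos.shape[0]):
        e_i = goals[i] - pos[i]
        e_i = e_i / (np.linalg.norm(e_i) + 1e-9)      # unit vector toward the goal
        acc[i] = (V0 * e_i - vel[i]) / TAU            # relaxation to desired velocity
        for j in range(pos.shape[0]):
            if j != i:
                acc[i] += smooth_repulsion(pos[i] - pos[j])
    return vel, acc

# Two pedestrians walking toward each other (explicit Euler, small step)
pos = np.array([[0.0, 0.0], [10.0, 0.2]])
vel = np.zeros((2, 2))
goals = np.array([[10.0, 0.0], [0.0, 0.2]])
dt = 0.01
for _ in range(800):
    dpos, dvel = tendencies(pos, vel, goals)
    pos, vel = pos + dt * dpos, vel + dt * dvel
print(pos.round(2))
```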

  8. Numerical Estimation of the Outer Bank Resistance Characteristics in AN Evolving Meandering River

    NASA Astrophysics Data System (ADS)

    Wang, D.; Konsoer, K. M.; Rhoads, B. L.; Garcia, M. H.; Best, J.

    2017-12-01

    Few studies have examined the three-dimensional flow structure and its interaction with bed morphology within elongate loops of large meandering rivers. The present study uses a numerical model to simulate the flow pattern and sediment transport, especially the flow close to the outer bank, at two elongate meander loops of the Wabash River, USA. The numerical grid for the model is based on a combination of airborne LIDAR data on the floodplains and multibeam data within the river channel. A Finite Element Method (FEM) is used to solve the non-hydrostatic RANS equations with a k-epsilon turbulence closure scheme. High-resolution topographic data allow detailed numerical simulation of flow patterns along the outer bank, and model calibration involves comparing simulated velocities to ADCP measurements at 41 cross sections near this bank. Results indicate that flow along the outer bank is strongly influenced by large resistance elements, including woody debris, large erosional scallops within the bank face, and outcropping bedrock. In general, patterns of bank migration conform to zones of high near-bank velocity and shear stress. Using the existing model, different virtual events can be simulated to explore the impacts of different resistance characteristics on patterns of flow, sediment transport, and bank erosion.

  9. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  10. Short-Range Prediction of Monsoon Precipitation by NCMRWF Regional Unified Model with Explicit Convection

    NASA Astrophysics Data System (ADS)

    Mamgain, Ashu; Rajagopal, E. N.; Mitra, A. K.; Webster, S.

    2018-03-01

    There are increasing efforts towards the prediction of high-impact weather systems and the understanding of the related dynamical and physical processes. High-resolution numerical model simulations can be used directly to model the impact at fine-scale detail. Improvement in forecast accuracy can help in disaster management planning and execution. The National Centre for Medium Range Weather Forecasting (NCMRWF) has implemented a high-resolution regional unified modeling system with explicit convection, embedded within a coarser-resolution global model with parameterized convection. The model configurations are based on the UK Met Office unified seamless modeling system. Recent land use/land cover data (2012-2013) obtained from the Indian Space Research Organisation (ISRO) are also used in the model simulations. Results based on one month of short-range forecasts from both the global and regional models over India indicate that the convection-permitting simulations of the high-resolution regional model are able to reduce the dry bias over the southern parts of the West Coast and the monsoon trough zone, with more intense rainfall mainly towards the northern parts of the monsoon trough zone. The regional model with explicit convection has significantly improved the phase of the diurnal cycle of rainfall as compared to the global model. Results from two monsoon depression cases during the study period show substantial improvement in the details of the rainfall pattern. Many rainfall categories defined for operational forecast purposes by Indian forecasters are also well represented by the convection-permitting high-resolution simulations. For the statistics of the number of days within a range of rain categories between `No-Rain' and `Heavy Rain', the regional model outperforms the global model in all ranges. In the very heavy and extremely heavy categories, the regional simulations show an overestimation of rainfall days. The global model with parameterized convection tends to overestimate the light-rainfall days and underestimate the heavy-rain days compared to the observation data.

  11. Variational data assimilation problem for the Baltic Sea thermodynamics

    NASA Astrophysics Data System (ADS)

    Zakharova, Natalia; Agoshkov, Valery; Parmuzin, Eugene

    2015-04-01

    The most versatile and promising technology for solving problems of monitoring and analysis of the natural environment is four-dimensional variational assimilation of observational data. In such problems, not only the development and justification of algorithms for the numerical solution of variational data assimilation problems but also the properties of the optimal solution play an important role. In this work, variational data assimilation problems in the Baltic Sea area were formulated and studied. Numerical experiments on restoring the ocean heat flux and obtaining the solution of the system (temperature, salinity, velocity, and sea surface height) in the Baltic Sea primitive-equation hydrodynamics model with the assimilation procedure were carried out. In the calculations we used daily sea surface temperature observations from the Danish Meteorological Institute, prepared on the basis of radiometer (AVHRR, AATSR and AMSR-E) and spectroradiometer (SEVIRI and MODIS) measurements. The spatial resolution of the model grid with respect to the horizontal variables amounted to 0.0625° × 0.03125°. The results of the numerical experiments are presented. This study was supported by the Russian Foundation for Basic Research (projects 13-01-00753 and 14-01-31195) and by project 14-11-00609 of the Russian Science Foundation. References: [1] E.I. Parmuzin, V.I. Agoshkov, Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No. 1, 69-94. [2] N.B. Zakharova, V.I. Agoshkov, E.I. Parmuzin, The new method of ARGO buoys system observation data interpolation. Russ. J. Numer. Anal. Math. Modelling (2013) 28, No. 1. [3] V.B. Zalesny, A.V. Gusev, S.Yu. Chernobay, R. Aps, R. Tamsalu, P. Kujala, J. Rytkönen, The Baltic Sea circulation modelling and assessment of marine pollution. Russ. J. Numer. Anal. Math. Modelling (2014) 29, No. 2, 129-138.
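    The variational principle behind such assimilation is the minimization of a cost function penalizing departures from the background state and from the observations. The sketch below is a toy static (3D-Var-like) version with random placeholder matrices; it is not the Baltic Sea model or its four-dimensional formulation.

```python
import numpy as np
from scipy.optimize import minimize

n, p = 20, 8
rng = np.random.default_rng(3)
xb = rng.standard_normal(n)                  # background state (placeholder)
H = np.eye(n)[:p]                            # observe the first p state variables
y = H @ xb + 0.1 * rng.standard_normal(p)    # synthetic observations
Binv = np.eye(n)                             # inverse background error covariance
Rinv = np.eye(p) / 0.01                      # inverse observation error covariance

def cost(x):
    """J(x) = (x-xb)^T B^-1 (x-xb) + (Hx-y)^T R^-1 (Hx-y)"""
    db, do = x - xb, H @ x - y
    return db @ Binv @ db + do @ Rinv @ do

xa = minimize(cost, xb).x                    # analysis: the minimizer of J
print(np.round(xa[:p] - y, 3))               # analysis is pulled toward the data
```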

  12. Numerical Investigation of the Middle Atlantic Bight Shelfbreak Frontal Circulation Using a High-Resolution Ocean Hindcast Model

    DTIC Science & Technology

    2010-05-01

    circulation from December 2003 to June 2008. The model is driven by tidal harmonics, realistic atmospheric forcing, and dynamically consistent initial and open ... important element of the regional circulation (He and Wilkin 2006). We applied the method of Mellor and Yamada (1982) to compute vertical turbulent ... shelfbreak ROMS hindcast ran continuously from December 2003 through January 2008. Initial conditions were taken from the MABGOM ROMS simulation on 1

  13. Analysis of High Spatial, Temporal, and Directional Resolution Recordings of Biological Sounds in the Southern California Bight

    DTIC Science & Technology

    2013-09-30

    transiting whales in the Southern California Bight, b) the use of passive underwater acoustic techniques for improved habitat assessment in biologically ... sensitive areas and improved ecosystem modeling, and c) the application of the physics of excitable media to numerical modeling of biological choruses ... was on the potential impact of man-made sounds on the calling behavior of transiting humpback whales in the Southern California Bight. The main

  14. Understanding heat and fluid flow in linear GTA welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharia, T.; David, S.A.; Vitek, J.M.

    1992-01-01

    A transient heat flow and fluid flow model was used to predict the development of gas tungsten arc (GTA) weld pools in 1.5 mm thick AISI 304 SS. The welding parameters were chosen so as to correspond to an earlier experimental study which produced high-resolution surface temperature maps. The motivation of the present study was to verify the predictive capability of the computational model. Comparison of the numerical predictions and experimental observations indicates good agreement.

  15. Understanding heat and fluid flow in linear GTA welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharia, T.; David, S.A.; Vitek, J.M.

    1992-12-31

    A transient heat flow and fluid flow model was used to predict the development of gas tungsten arc (GTA) weld pools in 1.5 mm thick AISI 304 SS. The welding parameters were chosen so as to correspond to an earlier experimental study which produced high-resolution surface temperature maps. The motivation of the present study was to verify the predictive capability of the computational model. Comparison of the numerical predictions and experimental observations indicates good agreement.

  16. A combined Eulerian-volume of fraction-Lagrangian method for atomization simulation

    NASA Technical Reports Server (NTRS)

    Seung, S. P.; Chen, C. P.; Ziebarth, John P.

    1994-01-01

    The tracking of free surfaces between the liquid and gas phases, and the analysis of the interfacial phenomena between the two during the atomization and breakup of a liquid fuel jet, are modeled. Numerical modeling of liquid-jet atomization requires the resolution of different conservation equations. Detailed formulation and validation are presented for the confined dam-break problem, the water surface problem, the single droplet problem, a jet breakup problem, and the liquid column instability problem.

  17. Numerical Solutions for the CAWAPI Configuration on Structured Grids at NASA LaRC, United States. Chapter 7

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa A.; Abdol-Hamid, Khaled S.; Massey, Steven J.

    2009-01-01

    In this chapter numerical simulations of the flow around the F-16XL are performed as a contribution to the Cranked Arrow Wing Aerodynamic Project International (CAWAPI) using the PAB3D CFD code. Two turbulence models are used in the calculations: a standard k-epsilon model, and the Shih-Zhu-Lumley (SZL) algebraic stress model. Seven flight conditions are simulated for the flow around the F-16XL, where the free-stream Mach number varies from 0.242 to 0.97 and the angle of attack ranges from 0 deg to 20 deg. Computational results, surface static pressure, boundary layer velocity profiles, and skin friction are presented and compared with flight data. Numerical results are generally in good agreement with flight data, considering that only one grid resolution is utilized for the different flight conditions simulated in this study. The Algebraic Stress Model (ASM) results are closer to the flight data than the k-epsilon model results. The ASM predicted a stronger primary vortex; however, the origin of the vortex and its footprint are approximately the same as in the k-epsilon predictions.

  18. Random element method for numerical modeling of diffusional processes

    NASA Technical Reports Server (NTRS)

    Ghoniem, A. F.; Oppenheim, A. K.

    1982-01-01

    The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with the exact solutions, except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
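    A minimal sketch of the random-walk principle the method is built on: an ensemble of elements released at a point takes Gaussian steps of variance 2*nu*dt, and their histogram converges to the exact Gaussian solution of the 1D diffusion equation, up to the statistical error discussed above. The parameters are arbitrary.

```python
import numpy as np

nu, dt, nsteps, n_elem = 0.1, 0.01, 200, 20000   # diffusivity, step, steps, elements
rng = np.random.default_rng(0)

# All elements start at x = 0 (point release); each random step has
# variance 2*nu*dt, the stochastic analogue of the diffusion equation.
x = np.zeros(n_elem)
for _ in range(nsteps):
    x += rng.normal(0.0, np.sqrt(2.0 * nu * dt), n_elem)

t = nsteps * dt
bins = np.linspace(-3.0, 3.0, 61)
hist, edges = np.histogram(x, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = np.exp(-centers**2 / (4.0 * nu * t)) / np.sqrt(4.0 * np.pi * nu * t)
print("max abs error:", np.abs(hist - exact).max())   # shrinks as n_elem grows
```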

  19. Implicit LES using adaptive filtering

    NASA Astrophysics Data System (ADS)

    Sun, Guangrui; Domaradzki, Julian A.

    2018-04-01

    In implicit large eddy simulations (ILES) numerical dissipation prevents the buildup of small-scale energy in a manner similar to explicit subgrid-scale (SGS) models. If spectral methods are used, the numerical dissipation is negligible, but it can be introduced by applying a low-pass filter in physical space, resulting in an effective ILES. In the present work we provide a comprehensive analysis of the numerical dissipation produced by different filtering operations in a turbulent channel flow simulated using a non-dissipative, pseudo-spectral Navier-Stokes solver. The amount of numerical dissipation imparted by filtering can be easily adjusted by changing how often a filter is applied. We show that when the additional numerical dissipation is close to the subgrid-scale (SGS) dissipation of an explicit LES, the overall accuracy of ILES is also comparable, indicating that periodic filtering can replace explicit SGS models. A new method is proposed, which does not require any prior knowledge of a flow, to determine the filtering period adaptively. Once an optimal filtering period is found, the accuracy of ILES is significantly improved at low implementation complexity and computational cost. The method is general, performing well for different Reynolds numbers, grid resolutions, and filter shapes.
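    The basic mechanism can be illustrated with a 1D sketch: an exponential low-pass filter applied in Fourier space every few time steps removes energy only near the grid cutoff. The filter shape, order, and period below are illustrative, and the adaptive period selection proposed in the paper is not reproduced.

```python
import numpy as np

def spectral_filter(u, alpha=36.0, p=8):
    """Apply an exponential low-pass filter exp(-alpha*(k/kmax)^p) in Fourier
    space; a high order p keeps the filter close to 1 except near the grid
    cutoff, so the implied dissipation acts mainly on the smallest scales."""
    uhat = np.fft.rfft(u)
    k = np.arange(uhat.size)
    sigma = np.exp(-alpha * (k / k.max()) ** p)
    return np.fft.irfft(sigma * uhat, n=u.size)

# Illustrative use inside a time loop: filter only every `period` steps
period = 5
u = np.random.default_rng(1).standard_normal(256)
for step in range(1, 101):
    # ... advance u one step with a non-dissipative solver here ...
    if step % period == 0:
        u = spectral_filter(u)
```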

  20. A study of overflow simulations using MPAS-Ocean: Vertical grids, resolution, and viscosity

    NASA Astrophysics Data System (ADS)

    Reckinger, Shanon M.; Petersen, Mark R.; Reckinger, Scott J.

    2015-12-01

    MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are carried out using three of the vertical coordinate types available in MPAS-Ocean, including z-star with partial bottom cells, z-star with full cells, and sigma coordinates. The results are first benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model, which are used to define the base case for this work. A full parameter study is presented that looks at how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Lastly, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that travel further down the slope. Sigma coordinates are less sensitive to changes in resolution but are as sensitive to vertical viscosity as z-type coordinates.

  1. Numeric stratigraphic modeling: Testing sequence stratigraphic concepts using high resolution geologic examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.

    1996-08-01

    Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight to potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study, where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.

  2. Study of compressible turbulent flows in supersonic environment by large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Genin, Franklin

    The numerical resolution of turbulent flows in a high-speed environment is of fundamental importance but remains a very challenging problem. First, the capture of strong discontinuities, typical of high-speed flows, requires the use of shock-capturing schemes, which are not adapted to the resolution of turbulent structures due to their intrinsic dissipation. On the other hand, low-dissipation schemes are unable to resolve shock fronts and other sharp gradients without creating high-amplitude numerical oscillations. Second, the nature of turbulence in high-speed flows differs from its incompressible behavior, and, in the context of Large-Eddy Simulation, the subgrid closure must be adapted to the modeling of the effects of compressibility and shock waves on turbulent flows. The developments described in this thesis are two-fold. First, a state-of-the-art closure approach for LES is extended to model subgrid turbulence in compressible flows. The energy transfers due to compressible turbulence and the diffusion of turbulent kinetic energy by pressure fluctuations are assessed and integrated in the Localized Dynamic ksgs model. Second, a hybrid numerical scheme is developed for the resolution of the LES equations and of the model transport equation, which combines a central scheme for turbulent resolution with a shock-capturing method. A smoothness parameter is defined and used to switch from the base smooth solver to the upwind scheme in regions of discontinuities. It is shown that the developed hybrid methodology permits a capture of shock/turbulence interactions in direct simulations that agrees well with other reference simulations, and that the LES methodology effectively reproduces the turbulence evolution and physical phenomena involved in the interaction. This numerical approach is then employed to study a problem of practical importance in high-speed mixing. The interaction of two shock waves with a high-speed turbulent shear layer as a mixing augmentation technique is considered. It is shown that the levels of turbulence are increased through the interaction, and that the mixing is significantly improved in this flow configuration. However, the region of increased mixing is found to be localized close to the impact of the shocks, and the statistical levels of turbulence relax to their undisturbed values a short distance downstream of the interaction. The present developments are finally applied to a practical configuration relevant to scramjet injection. The normal injection of a sonic jet into a supersonic crossflow is considered numerically and compared to the results of an experimental study. Fair agreement in the statistics of the mean and fluctuating velocity fields is obtained. Furthermore, some of the instantaneous flow structures observed in experimental visualizations are identified in the present simulation. The dynamics of the interaction for the reference case, based on the experimental study, as well as for a case of higher freestream Mach number and a case of higher momentum ratio, are examined. The classical instantaneous vortical structures are identified, and their generation mechanisms, specific to supersonic flow, are highlighted. Furthermore, two vortical structures, recently revealed in low-speed jets in crossflow but never documented for high-speed flows, are identified during the flow evolution.

  3. Reconstruction of the Greenland ice sheet dynamics in a fully coupled Earth System Model

    NASA Astrophysics Data System (ADS)

    Rybak, Oleg; Volodin, Evgeny; Huybrechts, Philippe

    2016-04-01

    Earth system models (ESMs) are undoubtedly effective tools for studying climate dynamics. Incorporating evolving ice sheets into ESMs is a challenging task because the response times of the climate system and of ice sheets differ by several orders of magnitude. Besides, AO GCMs operate on spatial and temporal resolutions substantially differing from those of ice sheet models (ICMs). Therefore, the elaboration of an effective coupling methodology for an AO GCM and an ICM is the key problem of ESM construction and utilization. Several downscaling strategies of varying complexity now exist for data exchange between the modeled climate system and ice sheets. The application of a particular strategy depends on the research objectives. In our view, the optimum approach for model studies of significant environmental changes (e.g., glacial/interglacial transitions), when ice sheets undergo substantial evolution of geometry and volume, would be asynchronous coupling. The latter allows interactive simulation of the growth and decay of ice sheets under changing climatic conditions. The focus of the presentation is an overview of the coupling of the AO GCM INMCM32, developed at the Institute of Numerical Mathematics (Moscow, Russia), to the Greenland ice sheet model (GrISM, Vrije Universiteit Brussel, Belgium). To provide interactive coupling of INMCM32 (spatial resolution 5°×4°, 21 vertical layers and temporal resolution 6 min in the atmospheric block) and GrISM (spatial resolution 20×20 km, 51 vertical layers and 1 yr temporal resolution), we employ a special energy- and water-balance model (EWBM-G), which serves as a buffer providing effective data exchange between INMCM32 and GrISM. EWBM-G operates in a rectangular domain including Greenland. Daily means of the simulated climatic variables (surface air temperature and specific humidity) are transferred at the lateral boundaries of the domain, and others (sea-level air pressure, wind speed and total cloudiness) inside the domain, after spline interpolation. EWBM-G calculates the annual surface mass balance (SMB), further transferred as an external forcing to GrISM, and the freshwater flux, transferred to the oceanic block of INMCM32. After receiving the SMB, GrISM is integrated and returns the updated surface topography to INMCM32. The aim of the current research is to establish the equilibration time of the climate and the Greenland ice sheet in the transient coupled run and to elaborate an optimal methodology for performing numerical experiments simulating glacial/interglacial transitions.

  4. Preliminary validation of WRF model in two Arctic fjords, Hornsund and Porsanger

    NASA Astrophysics Data System (ADS)

    Aniskiewicz, Paulina; Stramska, Małgorzata

    2017-04-01

    Our research is focused on the development of an efficient modeling system for Arctic fjords. This tool should include high-resolution meteorological data derived using a downscaling approach. In this presentation we have focused on high-spatial-resolution modeling of the meteorological conditions in two Arctic fjords: Hornsund (H), located in the western part of the Svalbard archipelago, and Porsanger (P), located in the coastal waters of the Barents Sea. The atmospheric downscaling is based on the Weather Research and Forecasting model (WRF, www.wrf-model.org) with a polar stereographic projection. We have created two parent domains with grid point distances of about 3.2 km (P) and 3.0 km (H) and with nested domains (almost 5 times higher resolution than the parent domains). We tested the impact of the spatial resolution of the model on the derived meteorological quantities. For both fjords the input topography data resolution is 30 arc sec. To validate the results we have used meteorological data from the Norwegian Meteorological Institute for the stations Lakselv (L) and Honningsvåg (Ho), located in the inner and outer parts of the Porsanger fjord, as well as from a station in the outer part of the Hornsund fjord. We have estimated coefficients of determination (r2), statistical errors (St) and systematic errors (Sy) between measured and modelled air temperature and wind speed at each station. This approach will allow us to create high-resolution, spatially variable meteorological fields that will serve as forcing for numerical models of the fjords. We will investigate the role of different meteorological quantities (e.g., wind, solar insolation, precipitation) in hydrographic processes in the fjords. The project has been financed from the funds of the Leading National Research Centre (KNOW) received by the Centre for Polar Studies for the period 2014-2018. This work was also funded by the Norway Grants (NCBR contract No. 201985, project NORDFLUX). Partial support comes from the Institute of Oceanology (IO PAN).

  5. Mesoscale vortices in the Ligurian Sea and their effect on coastal upwelling processes

    NASA Astrophysics Data System (ADS)

    Casella, Elisa; Molcard, Anne; Provenzale, Antonello

    2011-10-01

    We study numerically the dynamics of intense anticyclonic eddies in the Ligurian Sea (NW Mediterranean Sea). To this end, we use the Regional Ocean Modeling System (ROMS) with a resolution of 3 km for a domain covering the whole Ligurian Sea, with an embedded child grid covering the northwestern part of the Ligurian Sea at a resolution of 1 km. The model is forced with daily boundary conditions obtained from the MFS dataset for the year 2006 at the open lateral boundaries. Surface heat and evapotranspiration fluxes are provided by the monthly climatological dataset COADS at 1/2° spatial resolution. For wind forcing, we consider two configurations. In the first setting, the model is forced by the COADS climatological monthly mean wind stresses; in the second configuration, the model is forced by the daily mean wind stresses provided by a mesoscale meteorological model for the area of interest in the year 2006. The latter setting shows the formation of intense anticyclonic eddy structures in the coastal area, generated by the variable winds and by the interaction of transient currents with bottom and coastal topography (in the NW part of the Ligurian Sea). Comparison of model output with satellite SST data shows definite agreement between numerical results and observations. Analysis of the simulation results over the whole year 2006 and of SST satellite images in 2006 and 2007 indicates that coastal anticyclonic eddies are of common occurrence in the Ligurian Sea, with several events per year, mainly concentrated in autumn and winter. The eddies are characterized by a complex pattern of intense vertical velocities and induce strong, long-lasting coastal upwelling events. For this reason, anticyclonic vortices in the coastal area can generate bursts of nutrient input into the euphotic layer and contribute to the fertilization of the Ligurian Sea, with potentially important effects on the dynamics of phyto- and zooplankton.

  6. The Canadian Hydrological Model (CHM): A multi-scale, variable-complexity hydrological model for cold regions

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2016-12-01

    There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multi-objective water prediction systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort and increased uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions where spatial variability is low and fine resolutions where required. Model uncertainty is reduced by lessening the number of computational elements required relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open-source libraries and high-performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period in a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.

  7. Numerical predictions of shock propagation through unreactive and reactive liquids with experimental validation

    NASA Astrophysics Data System (ADS)

    Stekovic, Svjetlana; Nissen, Erin; Bhowmick, Mithun; Stewart, Donald S.; Dlott, Dana D.

    2017-06-01

    The objective of this work is to numerically analyze shock behavior as it propagates through compressed, unreactive and reactive liquids, such as liquid water and liquid nitromethane. Parameters such as pressure and density are analyzed using the Mie-Gruneisen EOS, and each multi-material system is modeled using the ALE3D software. The motivation for this study is provided by high-resolution optical interferometer (PDV) and optical pyrometer measurements. In the experimental set-up, a liquid is placed between an Al 1100 plate and Pyrex BK-7 glass. A laser-driven Al 1100 flyer impacts the plate, causing the liquid to be highly compressed. The numerical model investigates the influence of the high-pressure, shock-compressed behavior of each liquid, the energy transfer, and the wave impedance at the interface of each material in contact. The numerical results using ALE3D will be validated by experimental data. This work aims to provide further understanding of shock-compressed behavior and how the shock influences phase transitions in each liquid.
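    As a hedged illustration of the equation of state mentioned above, the sketch below evaluates a Mie-Gruneisen pressure referenced to a linear Us-up Hugoniot, a common form for shocked liquids; the water-like parameters are approximate literature values and are not the ones used in the ALE3D model.

```python
# Approximate shock parameters for water (illustrative values only)
RHO0 = 998.0        # reference density [kg/m^3]
C0 = 1483.0         # bulk sound speed [m/s]
S = 1.75            # linear Us-up Hugoniot slope (Us = C0 + S*up)
GAMMA0 = 0.5        # Grueneisen coefficient, assumed to satisfy Gamma*rho = Gamma0*rho0

def mie_gruneisen_pressure(rho, e):
    """Pressure [Pa] from density [kg/m^3] and specific internal energy [J/kg],
    referenced to the principal Hugoniot (compression only, p0 ~ 0)."""
    x = 1.0 - RHO0 / rho                         # compression measure
    p_h = RHO0 * C0**2 * x / (1.0 - S * x)**2    # Hugoniot pressure
    e_h = p_h * x / (2.0 * RHO0)                 # Hugoniot specific energy
    return p_h + GAMMA0 * RHO0 * (e - e_h)       # off-Hugoniot correction

# Example: ~20% compression with a modest internal energy rise
print(mie_gruneisen_pressure(rho=1.2 * RHO0, e=2.0e4) / 1e9, "GPa")
```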

  8. Reconstruction of the Eemian climate using a fully coupled Earth system model

    NASA Astrophysics Data System (ADS)

    Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Huybrechts, Philippe

    2017-04-01

    The climate of the Last Interglacial (LIG), between ca. 130 and 115 kyr BP, is thought to be a good analogue for future climate warming. Though the driving mechanisms of the past and current climate evolution differ, analysis of the LIG climate may provide important insights for projections of future environmental changes. The spatial distribution and magnitude of surface air temperature and precipitation anomalies with respect to the present are not properly known. Sparse proxy data are mostly limited to the continental margins, internal areas of ice sheets and particular regions of the World Ocean. Combining mathematical modeling and indirect evidence can help to identify the driving mechanisms and feedbacks which formed the climatic conditions of the LIG. In order to reproduce the LIG climate, we carried out transient numerical experiments using a fully coupled Earth System Model (ESM) consisting of an AO GCM which includes descriptions of the biosphere and of atmospheric and oceanic chemistry (INMCM), developed at the Institute of Numerical Mathematics (Moscow, Russia), and models of the Greenland and Antarctic ice sheets (GrISM and AISM, Vrije Universiteit Brussel, Belgium). Though the newest version of INMCM has rather high spatial resolution, it cannot be used in long transient numerical experiments because of its high computational demand. Coupling of GrISM and AISM to the low-resolution version of INMCM is complicated by essential differences in the spatial and temporal scales of the cryospheric, atmospheric and oceanic components of the ESM (atmospheric block: 5°×4° spatial resolution, 21 vertical layers, 6 min temporal resolution; oceanic block: 2.5°×2°, 33 vertical layers; GrISM and AISM: 20×20 km, 51 vertical layers, 1 yr temporal resolution). We apply two different coupling strategies. AISM is incorporated into the ESM by resampling and interpolating the input fields of annually averaged surface air temperature and precipitation generated by INMCM. To provide interactive coupling of INMCM and GrISM, we employ a special energy- and water-balance model (EWBM-G), which serves as a buffer providing effective data exchange between the sub-models. EWBM-G operates in a rectangular domain including Greenland and calculates the annual surface mass balance (further transferred as an external forcing to GrISM) and the freshwater flux (transferred to the oceanic block of INMCM). Orbital parameters of the LIG were set with a 1 kyr step and further interpolated to 100 years. Assuming that concentrations of greenhouse gases during the LIG were not very different from preindustrial values, this potential forcing was neglected. The climatic block of the ESM was called every 100 model years to follow changes in the orbital forcing. AISM and GrISM were asynchronously coupled to the sub-models of the atmosphere and the ocean with a ratio of model years of 100 to 1, as sketched below. The obtained fields of deviations of surface air temperature from preindustrial values correspond in general to estimates made in earlier studies. The evaluated contribution of the Greenland ice sheet to global sea-level rise (approximately 2 m) supports the newest estimates based on model results and proxy data analysis.
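    The 100-to-1 asynchronous coupling referred to above can be shown schematically: each simulated climate year supplies a surface mass balance that drives many ice-sheet years, and the updated ice state is fed back. The toy loop below only illustrates this structure; the state variables and coefficients are arbitrary and unrelated to INMCM, GrISM or AISM.

```python
def couple_asynchronously(n_climate_years, ratio=100):
    """Schematic asynchronous coupling: every climate-model year produces a
    surface mass balance (SMB) that forces `ratio` years of ice-sheet
    evolution, and the updated ice volume is passed back to the climate side."""
    ice_volume = 1.0          # arbitrary nondimensional ice volume
    climate_temp = 0.0        # arbitrary climate state (e.g., a temperature anomaly)
    for year in range(n_climate_years):
        # climate step: responds to an orbital-like drift and to the ice feedback
        climate_temp = 0.9 * climate_temp + 0.1 * (0.01 * year - 0.05 * ice_volume)
        smb = -0.001 * climate_temp          # warmer climate -> more negative SMB
        for _ in range(ratio):               # ice sheet integrates `ratio` years
            ice_volume = max(ice_volume + smb, 0.0)
    return ice_volume, climate_temp

print(couple_asynchronously(50))
```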

  9. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable-resolution grids. Here, a series of idealized moist simulations is presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand whether the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding the resolution sensitivity of global atmospheric models.

  10. Model velocities assessment and HF radar data assimilation in the Ibiza Channel

    NASA Astrophysics Data System (ADS)

    Hernandez Lasheras, Jaime; Mourre, Baptiste; Reyes, Emma; Marmain, Julien; Orfila, Alejandro; Tintoré, Joaquin

    2017-04-01

    High Frequency Radar (HFR) provides continuous and high-resolution surface current measurements over wide coastal areas, enabling the observation of dynamic processes at the atmosphere-ocean interface, where substantial momentum and heat exchange, still not fully understood, takes place. Furthermore, HFR data provide critical information to improve numerical model predictions through data assimilation. However, the routine assimilation of HFR surface current data in operational models is still a challenge from both the methodological and computational points of view. Since 2012, SOCIB, the Balearic Islands Coastal Observing and Forecasting System, has operated two coastal HFR sites with the purpose of monitoring the surface currents of the Ibiza Channel (Western Mediterranean Sea). It is an area characterized by important meridional flow exchanges with significant impacts on ecosystems. The circulation in the Ibiza Channel results from the complex interaction of different water masses under strong topographic constraints. This makes the area very challenging from the point of view of numerical modeling. Indeed, models are generally found to represent erroneous flows across this section. In this work, we perform the first steps to evaluate the potential of HFR data to improve the modelled circulation in the Ibiza Channel area through data assimilation. A multi-model Ensemble Optimal Interpolation scheme has been coupled to the SOCIB Western Mediterranean Operational Model (WMOP) to assimilate multiplatform observations, including the HFR surface velocities. WMOP is a 2-km resolution configuration of the ROMS model using CMEMS numerical products as initial and boundary conditions and high-resolution surface forcing from the Spanish Meteorological Agency. To evaluate whether the model properly captures the main dynamical features observed in the Ibiza Channel (which is a prerequisite for successful data assimilation), a comparison of spatial empirical orthogonal function (EOF) patterns from HFR observations and from model results has been performed. Results show good agreement between the first two modes of variability of both data sets, which explain the north-south and east-west flows, respectively. The comparison with ADCP data in the HFR coverage area also shows good agreement with the main vertical modes of the model over the first 120 m. In our approach, model error covariances are estimated by sampling three long-run simulations of the WMOP system with different initial/boundary forcing and mixing parameters. Vertical correlations in the HFR coverage area are validated using ADCP measurements at the mooring. As expected, correlations decrease with depth both in the model and in the ADCP data. The agreement is found to vary with the season and the velocity component under consideration. The first results of multiplatform data assimilation experiments using this modelling setup, including HFR, SST, SSH and in situ profiles, will then be presented.
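
    The EOF comparison mentioned above can be illustrated with a minimal sketch: leading EOFs of one surface-current component obtained via a singular value decomposition of the space-time anomaly matrix. This is a generic, illustrative computation, not the SOCIB processing chain; array shapes and names are assumptions.

    ```python
    import numpy as np

    def leading_eofs(field, n_modes=2):
        """field: array (time, space) of one velocity component (gaps removed).
        Returns spatial EOF patterns, principal components and explained variance."""
        anomalies = field - field.mean(axis=0)              # remove the temporal mean
        u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
        variance_fraction = s**2 / np.sum(s**2)
        eofs = vt[:n_modes]                                  # spatial patterns
        pcs = u[:, :n_modes] * s[:n_modes]                   # expansion coefficients
        return eofs, pcs, variance_fraction[:n_modes]

    # Usage idea: compute EOFs separately from the gridded HFR currents and from
    # model currents sampled at the same points and times, then compare the
    # spatial patterns mode by mode (e.g., via pattern correlations).
    ```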

  11. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    NASA Astrophysics Data System (ADS)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte Carlo (MC) scheme for the transport phase. A common resolution strategy consists of a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (a monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to the time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.
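
    As a toy illustration of the time-dependent cross-section idea (not the authors' UMC solver), the sketch below depletes a single species along one particle history by integrating a scalar Bateman-type equation with a time-varying reaction rate, rather than freezing the medium over a splitting step. The cross-section function and parameter values are made up.

    ```python
    import numpy as np

    def deplete_along_flight(n0, sigma_of_t, phi, t0, t1, nsteps=200):
        """Solve dN/dt = -sigma(t) * phi * N via the time-integrated reaction rate."""
        t = np.linspace(t0, t1, nsteps + 1)
        rate = sigma_of_t(t) * phi
        integral = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))  # trapezoid rule
        return n0 * np.exp(-integral)

    # Hypothetical cross section that decreases as the medium evolves:
    n_final = deplete_along_flight(1.0, lambda t: 1.0 / (1.0 + t), phi=0.5,
                                   t0=0.0, t1=2.0)
    ```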

  12. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorses in climate applications, having attained a mature stage in their development, and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and is in particular highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
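
    The watermass bookkeeping at the heart of a Lee et al. (2002)-type estimate can be sketched very roughly as follows: track the volume of water denser than a set of isopycnals and interpret any systematic drift of those volumes, in a run without explicit interior mixing or surface fluxes, as a diapycnal (numerical) volume flux. This is a conceptual stand-in, not the GO5.0 diagnostic; variable shapes and names are assumptions.

    ```python
    import numpy as np

    def watermass_volume_drift(rho, cell_vol, rho_edges, dt):
        """rho, cell_vol: arrays (ntime, ncell) of density and cell volume.
        rho_edges: isopycnal values. Returns the mean rate of change (m^3/s)
        of the volume of water denser than each isopycnal."""
        ntime = rho.shape[0]
        vol = np.array([[cell_vol[t, rho[t] > r].sum() for r in rho_edges]
                        for t in range(ntime)])
        return (vol[-1] - vol[0]) / ((ntime - 1) * dt)

    # Dividing this diapycnal volume flux by the isopycnal area and combining it
    # with the local stratification yields an effective diffusivity that can be
    # compared with the explicit diffusivity prescribed in the model.
    ```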

  13. Conventional and modified Schwarzschild objective for EUV lithography: design relations

    NASA Astrophysics Data System (ADS)

    Bollanti, S.; di Lazzaro, P.; Flora, F.; Mezi, L.; Murra, D.; Torre, A.

    2006-12-01

    The design criteria of a Schwarzschild-type optical system are reviewed in relation to its use as an imaging system in an extreme ultraviolet lithography setup. Both the conventional and the modified reductor imaging configurations are considered, and the respective performances, as far as the geometrical resolution in the image plane is concerned, are compared. In this connection, a formal relation defining the modified configuration is elaborated, refining a rather naïve definition presented in an earlier work. The dependence of the geometrical resolution on the image-space numerical aperture for a given magnification is investigated in detail for both configurations. So, the advantages of the modified configuration with respect to the conventional one are clearly evidenced. The results of a semi-analytical procedure are compared with those obtained from a numerical simulation performed by an optical design program. The Schwarzschild objective based system under implementation at the ENEA Frascati Center within the context of the Italian FIRB project for EUV lithography has been used as a model. Best-fit functions accounting for the behaviour of the system parameters vs. the numerical aperture are reported; they can be a useful guide for the design of Schwarzschild objective type optical systems.

  14. Numerical simulation of incidence and sweep effects on delta wing vortex breakdown

    NASA Technical Reports Server (NTRS)

    Ekaterinaris, J. A.; Schiff, Lewis B.

    1994-01-01

    The structure of the vortical flowfield over delta wings at high angles of attack was investigated. Three-dimensional Navier-Stokes numerical simulations were carried out to predict the complex leeward-side flowfield characteristics, including leading-edge separation, secondary separation, and vortex breakdown. Flows over a 75- and a 63-deg sweep delta wing with sharp leading edges were investigated and compared with available experimental data. The effect of variation of the circumferential grid resolution in the vicinity of the wing leading edge on the accuracy of the solutions was addressed. Furthermore, the effect of turbulence modeling on the solutions was investigated. The effects of variation of angle of attack on the computed vortical flow structure for the 75-deg sweep delta wing were examined. At moderate angles of attack no vortex breakdown was observed. When a critical angle of attack was reached, bubble-type vortex breakdown was found. With further increase in angle of attack, a change from bubble-type breakdown to spiral-type vortex breakdown was predicted by the numerical solution. The effects of variation of sweep angle and freestream Mach number were addressed with the solutions on a 63-deg sweep delta wing.

  15. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that the LES data generally agree very well with the filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there must be room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.
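
    The basic objects entering the structure-function hierarchy discussed above can be computed in a few lines. The sketch below evaluates velocity structure functions S_p(r), i.e. the moments of order p of the velocity increments at separation r, from a one-dimensional, periodic velocity signal; it is illustrative only and does not include the SGS terms of the full balance.

    ```python
    import numpy as np

    def structure_functions(u, orders=(2, 4, 6), max_sep=None):
        """u: 1-D velocity samples on a uniform, periodic grid."""
        n = u.size
        max_sep = max_sep or n // 2
        separations = np.arange(1, max_sep)
        sf = {p: np.empty(separations.size) for p in orders}
        for i, r in enumerate(separations):
            du = np.roll(u, -r) - u                 # velocity increment at lag r
            for p in orders:
                sf[p][i] = np.mean(np.abs(du) ** p)
        return separations, sf
    ```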

  16. Polycrystalline magma behaviour in dykes: Insights from high-resolution numerical models

    NASA Astrophysics Data System (ADS)

    Yamato, Philippe; Duretz, Thibault; Tartèse, Romain; May, Dave

    2013-04-01

    The presence of a crystalline load in magmas modifies their effective rheology and thus their flow behaviour. In dykes, for instance, the presence of crystals denser than the melt reduces the ascent velocity and modifies the shape of the velocity profile from a Newtonian Poiseuille flow to a Bingham-type flow. Nevertheless, several issues remain poorly understood and need to be quantified: (1) What are the mechanisms controlling crystal segregation during magma ascent in dykes? (2) How does the transport of crystals within a melt depend on their concentration, geometry, size and density? (3) Do crystals evolve in isolation from each other or as clusters? (4) What is the influence of the inertia of the melt within the system? In this study, we present numerical models following the setup previously used in Yamato et al. (2012). Our model setup simulates an effective pressure gradient between the base and the top of a channel (representing a dyke) by pushing a rigid piston, perforated by a hole, into a magmatic mush comprising crystals and melt. The initial resolution of the models (401x1551 nodes) has been doubled in order to ensure that the smallest crystalline fractions are sufficiently well resolved. Results show that the melt phase can be squeezed out from a crystal-rich magma when subjected to a given range of pressure gradients, and that clustering of crystals might be an important parameter controlling their behaviour. This demonstrates that crystal-melt segregation in dykes during magma ascent constitutes a viable mechanism for magmatic differentiation of residual melts. These results also explain how isolated crystal clusters and melt pockets, with different chemistry, can be formed. In addition, we discuss the impact of taking inertia into account in our models. Reference: Yamato, P., Tartèse, R., Duretz, T., May, D.A., 2012. Numerical modelling of magma transport in dykes. Tectonophysics 526-529, 97-109.
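
    The contrast between the Newtonian Poiseuille profile and the Bingham (plug-flow) profile mentioned above can be written down analytically for a plane channel of half-width h under a pressure gradient G. The sketch below is a textbook illustration with made-up parameter values, not output of the paper's numerical model.

    ```python
    import numpy as np

    def poiseuille(y, G, mu, h):
        """Newtonian profile: u(y) = G/(2*mu) * (h^2 - y^2)."""
        return G / (2.0 * mu) * (h**2 - y**2)

    def bingham(y, G, mu, h, tau_y):
        """Bingham plastic: rigid plug where the shear stress G*|y| < tau_y."""
        y_plug = min(tau_y / G, h)                   # plug half-width
        sheared = G / (2.0 * mu) * (h**2 - y**2) - tau_y / mu * (h - np.abs(y))
        plug = G / (2.0 * mu) * (h**2 - y_plug**2) - tau_y / mu * (h - y_plug)
        return np.where(np.abs(y) >= y_plug, sheared, plug)

    y = np.linspace(-1.0, 1.0, 201)
    u_newtonian = poiseuille(y, G=1.0, mu=1.0, h=1.0)
    u_bingham = bingham(y, G=1.0, mu=1.0, h=1.0, tau_y=0.3)   # flattened plug profile
    ```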

  17. Simulations of NLC formation using a microphysical model driven by three-dimensional dynamics

    NASA Astrophysics Data System (ADS)

    Kirsch, Annekatrin; Becker, Erich; Rapp, Markus; Megner, Linda; Wilms, Henrike

    2014-05-01

    Noctilucent clouds (NLCs) are an optical phenomenon occurring in the polar summer mesopause region. These clouds have been known since the late 19th century. The current physical understanding of NLCs is based on numerous observational and theoretical studies, in recent years especially observations from satellites and by lidars from the ground. Theoretical studies based on numerical models that simulate NLCs together with the underlying microphysical processes are uncommon. To date, no three-dimensional numerical simulations of NLCs exist that take all relevant dynamical scales into account, i.e., from the planetary scale down to gravity waves and turbulence. Rather, modeling is usually restricted to certain flow regimes. In this study we make a more rigorous attempt and simulate NLC formation in the environment of the general circulation of the mesopause region by explicitly including gravity-wave motions. For this purpose we couple the Community Aerosol and Radiation Model for Atmospheres (CARMA) to gravity-wave-resolving dynamical fields simulated beforehand with the Kuehlungsborn Mechanistic Circulation Model (KMCM). In our case, the KMCM is run with a horizontal resolution of T120, which corresponds to a minimum horizontal wavelength of 350 km. This restriction causes the resolved gravity waves to be somewhat biased towards larger scales. The simulated general circulation is dynamically controlled by these waves in a self-consistent fashion and provides realistic temperatures and wind fields for July conditions. Assuming a water vapor mixing ratio profile in agreement with current observations results in reasonable supersaturations of up to 100. In a first step, CARMA is applied to a horizontal section covering the Northern Hemisphere. The vertical resolution is 120 levels ranging from 72 to 101 km. In this paper we present initial results of this coupled dynamical-microphysical model, focusing on the interaction of waves and turbulent diffusion with NLC microphysics.

  18. The circulation in the Levantine Basin as inferred from in-situ data and numerical modelling (1995-2013)

    NASA Astrophysics Data System (ADS)

    Zodiatis, George; Radhakrishnan, Hari; Lardner, Robin; Hayes, Daniel; Gertman, Isaac; Menna, Milena; Poulain, Pierre-Marie

    2014-05-01

    The general anticlockwise circulation along the coastline of the Eastern Mediterranean Levantine Basin was first proposed by Nielsen in 1912. Half a century later, the schematic of the circulation in the area was enriched with sub-basin flow structures. In the late 1980s, a more detailed picture of the circulation, composed of eddies, gyres and coastal-offshore jets, was defined during the POEM cruises. In 2005, Millot and Taupier-Letage used SST satellite imagery to argue for a simpler pattern similar to the one proposed almost a century ago. During the last decade, renewed in-situ multi-platform investigations under the framework of the CYBO, CYCLOPS, NEMED, GROOM, HaiSec and PERSEUS projects, as well as the development of operational ocean forecasts and hindcasts in the framework of the MFS, ECOOP, MERSEA and MyOcean projects, have made it possible to obtain an improved, higher spatial and temporal resolution picture of the circulation in the area. After some years of scientific dispute on the circulation pattern of the region, the new in-situ data sets and the operational numerical simulations confirm the relevant POEM results. The existing POM-based Cyprus Coastal Ocean Forecasting System (CYCOFOS), downscaling the MyOcean MFS, has been providing operational forecasts in the Eastern Mediterranean Levantine Basin region since early 2002. Recently, Radhakrishnan et al. (2012) parallelized the CYCOFOS hydrodynamic flow model using MPI to improve the accuracy of predictions while reducing the computational time. The parallel flow model is capable of modeling the Eastern Mediterranean Levantine Basin flow at a resolution of 500 m. The model was run in hindcast mode, during which the innovations were computed using historical data collected by gliders and during cruises. Then DD-OceanVar (D'Amore et al., 2013), a data assimilation tool based on 3DVAR developed by CMCC, was used to compute the temperature and salinity field corrections. Numerical modeling results after the data assimilation will be presented.

  19. Calibration and validation of a small-scale urban surface water flood event using crowdsourced images

    NASA Astrophysics Data System (ADS)

    Green, Daniel; Yu, Dapeng; Pattison, Ian

    2017-04-01

    Surface water flooding occurs when intense precipitation events overwhelm the drainage capacity of an area and excess overland flow is unable to infiltrate into the ground or drain via natural or artificial drainage channels, such as river channels, manholes or SuDS. In the UK, over 3 million properties are at risk from surface water flooding alone, accounting for approximately one third of the UK's flood risk. The risk of surface water flooding is projected to increase due to several factors, including population increases, land-use alterations and future climatic changes in precipitation resulting in an increased magnitude and frequency of intense precipitation events. Numerical inundation modelling is a well-established method of investigating surface water flood risk, allowing the researcher to gain a detailed understanding of the depth, velocity, discharge and extent of actual or hypothetical flood scenarios over a wide range of spatial scales. However, numerical models require calibration of key hydrological and hydraulic parameters (e.g. infiltration, evapotranspiration, drainage rate, roughness) to ensure model outputs adequately represent the flood event being studied. Furthermore, validation data such as crowdsourced images or spatially referenced flood depths collected during a flood event may provide a useful validation of inundation depth and extent for actual flood events. In this study, a simplified two-dimensional inertial-based flood inundation model requiring minimal pre-processing of data (FloodMap-HydroInundation) was used to model a short-duration, intense rainfall event (27.8 mm in 15 minutes) that occurred over the Loughborough University campus on the 28th June 2012. High-resolution (1 m horizontal, +/- 15 cm vertical) DEM data, rasterised Ordnance Survey topographic structures data and precipitation data recorded at the University weather station were used to conduct numerical modelling over the small (< 2 km2), contained urban catchment. To validate model outputs and allow a reconstruction of spatially referenced flood depth and extent during the flood event, crowdsourced images were obtained from social media (Twitter) and from individuals present during the flood event via the University noticeboards, as well as from dGPS flood depth data collected at one of the worst affected areas. An investigation into the sensitivity of key model parameters suggests that the numerical model code is highly sensitive to changes within the recommended range of roughness and infiltration values, as well as to changes in DEM and building mesh resolutions, but less sensitive to changes in evapotranspiration and drainage capacity parameters. The study also demonstrates the potential of using crowdsourced images to validate urban surface water flood models and inform parameterisation when calibrating numerical inundation models.

  20. Projected changes over western Canada using convection-permitting regional climate model and the pseudo-global warming method

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kurkute, S.; Chen, L.

    2017-12-01

    Results from General Circulation Models (GCMs) suggest more frequent and more severe extreme rain events in a climate warmer than the present. However, current GCMs cannot accurately simulate extreme rainfall events of short duration due to their coarse model resolutions and parameterizations. This limitation makes it difficult to provide the detailed quantitative information needed for the development of regional adaptation and mitigation strategies. Dynamical downscaling using nested Regional Climate Models (RCMs) is able to capture key regional and local climate processes at an affordable computational cost. Recent studies have demonstrated that downscaling GCM results with convection-permitting mesoscale models, for example via the pseudo-global warming (PGW) technique, can be a viable and economical approach for obtaining valuable climate change information on regional scales. We have conducted a regional climate 4-km Weather Research and Forecasting (WRF) model simulation with one domain covering the whole of western Canada, for a historical run (2000-2015) and a 15-year future run for 2100 and beyond with the PGW forcing. The 4-km resolution allows direct use of microphysics and resolves convection explicitly, thus providing convincing spatial detail. With this high-resolution simulation, we are able to study the convective mechanisms, specifically the controls on convection over the Prairies, the projected changes of rainfall regimes, and the shift of the convective mechanisms in a warming climate, which has never before been examined numerically at such a large scale with such high resolution.
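
    The pseudo-global warming perturbation referred to above amounts to adding a GCM-derived climate-change signal to the historical reanalysis forcing. A minimal sketch, with illustrative variable names rather than the actual WRF preprocessing interface:

    ```python
    def pgw_forcing(reanalysis_field, gcm_future_monthly_clim,
                    gcm_historical_monthly_clim, month):
        """All inputs are arrays of matching shape (e.g. level x lat x lon) for a
        given calendar month; the returned field drives the future simulation."""
        climate_change_delta = (gcm_future_monthly_clim[month]
                                - gcm_historical_monthly_clim[month])
        return reanalysis_field + climate_change_delta
    ```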

  1. Take Away Body Parts! An Investigation into the Use of 3D-Printed Anatomical Models in Undergraduate Anatomy Education

    ERIC Educational Resources Information Center

    Smith, Claire F.; Tollemache, Nicholas; Covill, Derek; Johnston, Malcolm

    2018-01-01

    Understanding the three-dimensional (3D) nature of the human form is imperative for effective medical practice and the emergence of 3D printing creates numerous opportunities to enhance aspects of medical and healthcare training. A recently deceased, un-embalmed donor was scanned through high-resolution computed tomography. The scan data underwent…

  2. Freshwater Export from the Arctic Ocean and its Downstream Effect on Labrador Sea Deep Convection in a High-Resolution Numerical Model

    DTIC Science & Technology

    2010-12-01

    Record excerpts (full abstract not available): "…Arctic has been observed in the northern Canadian Arctic Archipelago (Bourke and McLaren 1992). There, thick multiyear ice of Arctic origin encounters…" Cited reference: Bourke, R. H., and A. S. McLaren, 1992: Contour mapping of Arctic Basin ice draft and roughness parameters. J. Geophys…

  3. Rotordynamic Instability Problems in High-Performance Turbomachinery, 1988

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The continuing trend toward a unified view is supported with several developments in the design and manufacture of turbomachines with enhanced stability characteristics along with data and associated numerical/theoretical results. The intent is to provide a continuing impetus for an understanding and resolution of these problems. Topics addressed include: field experience, dampers, seals, impeller forces, bearings, and compressor and rotor modeling.

  4. Nanometric depth resolution from multi-focal images in microscopy.

    PubMed

    Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H

    2011-07-06

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.

  5. Nanometric depth resolution from multi-focal images in microscopy

    PubMed Central

    Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.

    2011-01-01

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948

  6. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    NASA Astrophysics Data System (ADS)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ˜4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ˜200∶1.

  7. Development of an EMC3-EIRENE Synthetic Imaging Diagnostic

    NASA Astrophysics Data System (ADS)

    Meyer, William; Allen, Steve; Samuell, Cameron; Lore, Jeremy

    2017-10-01

    2D and 3D flow measurements are critical for validating numerical codes such as EMC3-EIRENE. Toroidal symmetry assumptions preclude tomographic reconstruction of 3D flows from single camera views. In addition, the resolution of the grids utilized in numerical code models can easily surpass the resolution of physical camera diagnostic geometries. For these reasons we have developed a Synthetic Imaging Diagnostic capability for forward-projection comparisons of EMC3-EIRENE model solutions with the line-integrated images from the Doppler Coherence Imaging diagnostic on DIII-D. The forward projection matrix is 2.8 Mpixel by 6.4 Mcells for the non-axisymmetric case we present. For flow comparisons, both simple line-integral and field-aligned component matrices must be calculated. The calculation of these matrices is a massive, embarrassingly parallel problem and is performed with a custom dispatcher that allows processing platforms to join mid-problem as they become available, or drop out if resources are needed for higher-priority tasks. The matrices are handled using standard sparse matrix techniques. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Office of Fusion Energy Sciences. LLNL-ABS-734800.
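
    The forward projection itself reduces to a sparse matrix-vector product mapping cell emissivities to pixel line integrals. The sketch below shows that final step with a small random placeholder matrix standing in for the ray-traced geometry matrix (which, per the abstract, is 2.8 Mpixel by 6.4 Mcells in the real case).

    ```python
    import numpy as np
    from scipy import sparse

    # Placeholder geometry matrix: in practice each row would hold the path
    # lengths of one pixel's sight line through the grid cells it crosses.
    A = sparse.random(1000, 5000, density=1e-3, format="csr", random_state=0)
    emissivity = np.random.rand(A.shape[1])          # per-cell model emissivity
    synthetic_image = A @ emissivity                 # line-integrated pixel values
    ```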

  8. Design and analysis of a fast, two-mirror soft-x-ray microscope

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Wang, C.; Jiang, W.; Jin, L.; Hoover, R. B.

    1992-01-01

    During the past several years, a number of investigators have addressed the design, analysis, fabrication, and testing of spherical Schwarzschild microscopes for soft-x-ray applications using multilayer coatings. Some of these systems have demonstrated diffraction limited resolution for small numerical apertures. Rigorously aplanatic, two-aspherical mirror Head microscopes can provide near diffraction limited resolution for very large numerical apertures. The relationships between the numerical aperture, mirror radii and diameters, magnifications, and total system length for Schwarzschild microscope configurations are summarized. Also, an analysis of the characteristics of the Head-Schwarzschild surfaces will be reported. The numerical surface data predicted by the Head equations were fit by a variety of functions and analyzed by conventional optical design codes. Efforts have been made to determine whether current optical substrate and multilayer coating technologies will permit construction of a very fast Head microscope which can provide resolution approaching that of the wavelength of the incident radiation.

  9. Preliminary investigations into macroscopic attenuated total reflection-fourier transform infrared imaging of intact spherical domains: spatial resolution and image distortion.

    PubMed

    Everall, Neil J; Priestnall, Ian M; Clarke, Fiona; Jayes, Linda; Poulter, Graham; Coombs, David; George, Michael W

    2009-03-01

    This paper describes preliminary investigations into the spatial resolution of macro attenuated total reflection (ATR) Fourier transform infrared (FT-IR) imaging and the distortions that arise when imaging intact, convex domains, using spheres as an extreme example. The competing effects of shallow evanescent wave penetration and blurring due to finite spatial resolution meant that spheres within the range 20-140 microm all appeared to be approximately the same size ( approximately 30-35 microm) when imaged with a numerical aperture (NA) of approximately 0.2. A very simple model was developed that predicted this extreme insensitivity to particle size. On the basis of these studies, it is anticipated that ATR imaging at this NA will be insensitive to the size of intact highly convex objects. A higher numerical aperture device should give a better estimate of the size of small spheres, owing to superior spatial resolution, but large spheres should still appear undersized due to the shallow sampling depth. An estimate of the point spread function (PSF) was required in order to develop and apply the model. The PSF was measured by imaging a sharp interface; assuming an Airy profile, the PSF width (distance from central maximum to first minimum) was estimated to be approximately 20 and 30 microm for IR bands at 1600 and 1000 cm(-1), respectively. This work has two significant limitations. First, underestimation of domain size only arises when imaging intact convex objects; if surfaces are prepared that randomly and representatively section through domains, the images can be analyzed to calculate parameters such as domain size, area, and volume. Second, the model ignores reflection and refraction and assumes weak absorption; hence, the predicted intensity profiles are not expected to be accurate; they merely give a rough estimate of the apparent sphere size. Much further work is required to place the field of quantitative ATR-FT-IR imaging on a sound basis.

  10. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    USGS Publications Warehouse

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
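
    The Manning relation used (together with the continuity equation) by the hydraulic model described above is simple enough to state directly; the numbers below are illustrative, not from the study.

    ```python
    def manning_velocity(n, hydraulic_radius, slope):
        """Mean velocity (m/s) from Manning's equation: V = (1/n) R^(2/3) S^(1/2)."""
        return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

    # Example: a shallow distributary channel on an alluvial fan (made-up values)
    v = manning_velocity(n=0.035, hydraulic_radius=0.4, slope=0.01)   # ~1.55 m/s
    q = v * 0.4        # unit discharge (m^2/s) for a 0.4 m flow depth
    ```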

  11. Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix

    NASA Astrophysics Data System (ADS)

    Hagen, V. S.; Arntsen, B.; Raknes, E. B.

    2017-12-01

    Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which provides the foundation for quantifying the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signals. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach, the resolution and cross-coupling of the estimated parameter update can be found by computing the full Hessian matrix. The resolution of the estimated model parameters depends on the chosen parametrisation, the acquisition geometry, and the temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique, with an expansion to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian and is essential when evaluating the quality of estimated parameters, due to the strong influence of source-receiver geometry and frequency content. The investigation is carried out on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
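
    One common, matrix-free way to probe the Hessian is to apply it to chosen model perturbations. The authors use an adjoint-state expansion for the exact Hessian action; the sketch below shows a simpler finite-difference stand-in, with `gradient` a placeholder for an EFWI gradient routine.

    ```python
    import numpy as np

    def hessian_vector_product(gradient, m, dm, eps=1e-3):
        """Approximate H @ dm as (g(m + h*dm) - g(m)) / h with h ~ eps/|dm|."""
        h = eps / (np.linalg.norm(dm) + 1e-30)
        return (gradient(m + h * dm) - gradient(m)) / h

    # Check on a quadratic misfit 0.5 * m^T A m, whose gradient is A @ m and
    # whose Hessian is A: the product with e1 recovers the first column of A.
    A = np.array([[2.0, 0.3], [0.3, 1.0]])
    col0 = hessian_vector_product(lambda m: A @ m, m=np.zeros(2),
                                  dm=np.array([1.0, 0.0]))       # ~[2.0, 0.3]
    ```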

  12. Improving the Long-Term Stability of Atmospheric Surface Deformation Predictions by Mitigating the Effects of Orography Updates in Operational Weather Forecast Models

    NASA Astrophysics Data System (ADS)

    Dill, Robert; Bergmann-Wolf, Inga; Thomas, Maik; Dobslaw, Henryk

    2016-04-01

    The global numerical weather prediction model routinely operated at the European Centre for Medium-Range Weather Forecasts (ECMWF) is typically updated about twice a year to incorporate the most recent improvements in the numerical scheme, the physical model or the data assimilation procedures into the system, in order to steadily improve daily weather forecasting quality. Even though such changes frequently affect the long-term stability of meteorological quantities, data from the ECMWF deterministic model are often preferred over the alternatively available atmospheric re-analyses due to both the availability of the data in near real-time and the substantially higher spatial resolution. However, global surface pressure time-series, which are crucial for the interpretation of geodetic observables such as Earth rotation, surface deformation, and the Earth's gravity field, are particularly affected by changes in the surface orography of the model associated with every major change in horizontal resolution, as happened, e.g., in February 2006, January 2010, and May 2015 in the case of the ECMWF operational model. In this contribution, we present an algorithm to harmonize surface pressure time-series from the operational ECMWF model by projecting them onto a time-invariant reference topography under consideration of the time-variable atmospheric density structure. The effectiveness of the method will be assessed globally in terms of pressure anomalies. In addition, we will discuss the impact of the method on predictions of crustal deformations based on ECMWF input, which have recently been made available by GFZ Potsdam.
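
    The core of such a harmonization is a hydrostatic reduction of surface pressure from the model orography to a fixed reference topography. A simplified sketch using a layer-mean virtual temperature (a stand-in for the full density-structure-aware algorithm of the paper):

    ```python
    import numpy as np

    G0, RD = 9.80665, 287.058    # gravity (m s^-2), dry-air gas constant (J kg^-1 K^-1)

    def reduce_surface_pressure(p_s, z_model, z_ref, t_virt):
        """Project surface pressure p_s (Pa) from model orography z_model (m)
        onto reference height z_ref (m), assuming virtual temperature t_virt (K)."""
        return p_s * np.exp(-G0 * (z_ref - z_model) / (RD * t_virt))
    ```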

  13. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does allow the presence of gravity waves. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from fewer than 2,000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes poorly parameterized in climate models and numerical prediction models.

  14. High-resolution gravity field modeling using GRAIL mission data

    NASA Astrophysics Data System (ADS)

    Lemoine, F. G.; Goossens, S. J.; Sabaka, T. J.; Nicholas, J. B.; Mazarico, E.; Rowlands, D. D.; Neumann, G. A.; Loomis, B.; Chinn, D. S.; Smith, D. E.; Zuber, M. T.

    2015-12-01

    The Gravity Recovery and Interior Laboratory (GRAIL) spacecraft were designed to map the structure of the Moon through high-precision global gravity mapping. The mission consisted of two spacecraft with Ka-band inter-satellite tracking complemented by tracking from Earth. The mission had two phases: a primary mapping mission from March 1 until May 29, 2012 at an average altitude of 50 km, and an extended mission from August 30 until December 14, 2012, with an average altitude of 23 km before November 18, and 20 and 11 km after. High-resolution gravity field models using both these data sets have been estimated, with the current resolution being degree and order 1080 in spherical harmonics. Here, we focus on aspects of the analysis of the GRAIL data: we investigate eclipse modeling, the influence of empirical accelerations on the results, and we discuss the inversion of large-scale systems. In addition to global models we also estimated local gravity adjustments in areas of particular interest such as Mare Orientale, the south pole area, and the farside. We investigate the use of Ka-band Range Rate (KBRR) data versus numerical derivatives of KBRR data, and show that the latter have the capability to locally improve correlations with topography.

  15. Numerical analysis of base flowfield at high altitude for a four-engine clustered nozzle configuration

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1993-01-01

    The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a pressure based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second and fourth order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve agreed reasonably well with those of the measurement. Parametric study on the effect of grid resolution, turbulence model, inlet boundary condition and difference scheme on convective terms has been performed. The results showed that grid resolution and turbulence model are two primary factors that influence the accuracy of the base flowfield prediction.

  16. Normal modes of the shallow water system on the cubed sphere

    NASA Astrophysics Data System (ADS)

    Kang, H. G.; Cheong, H. B.; Lee, C. H.

    2017-12-01

    Spherical harmonics, expressed as the Rossby-Haurwitz waves, are the normal modes of the non-divergent barotropic model. Among the normal modes in a numerical model, the most unstable mode will contaminate the numerical results, and therefore the investigation of the normal modes for a given grid system and discretization method is important. The cubed-sphere grid, which consists of six identical faces, has been widely adopted in many atmospheric models. This grid system is non-orthogonal, so that the calculation of the normal modes is quite a challenging problem. In the present study, the normal modes of the shallow water system on the cubed sphere, discretized by the spectral element method employing the Gauss-Lobatto Lagrange interpolating polynomials as orthogonal basis functions, are investigated. The algebraic equations for the shallow water equations on the cubed sphere are derived, and the resulting large global matrix is constructed. The linear system representing the eigenvalue-eigenvector relations is solved using numerical libraries. The normal modes calculated for several horizontal resolutions and Lamb parameters will be discussed and compared to the normal modes from the spherical harmonics spectral method.
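
    Once the spectral-element discretization is assembled into a global matrix acting on the state vector (u, v, h), extracting normal modes is a sparse eigenvalue problem. The sketch below shows that final step with a small random placeholder operator instead of the actual cubed-sphere matrix.

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import eigs

    n = 300                                            # placeholder state dimension
    L = sparse_random(n, n, density=0.02, format="csr", random_state=0)
    eigenvalues, eigenvectors = eigs(L, k=10, which="LM")   # 10 largest-magnitude modes
    # Each eigenvector gives the spatial structure of a mode; modes with growing
    # amplitude are the ones that can contaminate the numerical solution.
    ```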

  17. MUSE: the Multi-Slit Solar Explorer

    NASA Astrophysics Data System (ADS)

    Tarbell, Theodore D.; De Pontieu, Bart

    2017-08-01

    The Multi-Slit Solar Explorer is a proposed Small Explorer mission for studying the dynamics of the corona and transition region using both conventional and novel spectral imaging techniques. The physical processes that heat the multi-million degree solar corona, accelerate the solar wind and drive solar activity (CMEs and flares) remain poorly known. A breakthrough in these areas can only come from radically innovative instrumentation and state-of-the-art numerical modeling and will lead to better understanding of space weather origins. MUSE’s multi-slit coronal spectroscopy will use a 100x improvement in spectral raster cadence to fill a crucial gap in our knowledge of Sun-Earth connections; it will reveal temperatures, velocities and non-thermal processes over a wide temperature range to diagnose physical processes that remain invisible to current or planned instruments. MUSE will contain two instruments: an EUV spectrograph (SG) and EUV context imager (CI). Both have similar spatial resolution and leverage extensive heritage from previous high-resolution instruments such as IRIS and the HiC rocket payload. The MUSE investigation will build on the success of IRIS by combining numerical modeling with a uniquely capable observatory: MUSE will obtain EUV spectra and images with the highest resolution in space (1/3 arcsec) and time (1-4 s) ever achieved for the transition region and corona, along 35 slits and a large context FOV simultaneously. The MUSE consortium includes LMSAL, SAO, Stanford, ARC, HAO, GSFC, MSFC, MSU, ITA Oslo and other institutions.

  18. The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2001-01-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and is much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two point correlation technique was used to quantify the turbulent structures. Two point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 1/2 D(sub j). Two point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to 0.71 U(sub j).
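
    The two-point diagnostics mentioned above are straightforward to compute from sampled velocity data. The sketch below assumes a fluctuating streamwise velocity stored as (time, x) on a uniform, x-periodic grid; it is illustrative only, not the thesis code.

    ```python
    import numpy as np

    def space_correlation(u, dx):
        """Two-point space correlation R(r); its area gives the integral length scale."""
        up = u - u.mean(axis=1, keepdims=True)
        nx = up.shape[1]
        R = np.array([np.mean(up * np.roll(up, -r, axis=1)) for r in range(nx // 2)])
        return dx * np.arange(nx // 2), R / R[0]

    def convection_velocity(u, dx, dt, sep):
        """Convection velocity from the peak lag of the space-time correlation
        between two probes `sep` grid points apart."""
        a = u[:, 0] - u[:, 0].mean()
        b = u[:, sep] - u[:, sep].mean()
        lags = np.arange(1, u.shape[0] // 2)
        corr = np.array([np.mean(a[:-lag] * b[lag:]) for lag in lags])
        return sep * dx / (dt * lags[np.argmax(corr)])
    ```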

  19. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris-flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (eg. hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolutions, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost, due to AMR, makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.

  20. Evolution of a double-front Rayleigh-Taylor system using a graphics-processing-unit-based high-resolution thermal lattice-Boltzmann model.

    PubMed

    Ripesi, P; Biferale, L; Schifano, S F; Tripiccione, R

    2014-04-01

    We study, at high resolution in a two-dimensional geometry, the turbulent evolution originating from a system with a double density interface subjected to a Rayleigh-Taylor instability, using a highly optimized thermal lattice-Boltzmann code for GPUs. The initial condition of our investigation, given by the superposition of three layers with three different densities, leads to the development of two Rayleigh-Taylor fronts that expand upward and downward and collide in the middle of the cell. Using high-resolution numerical data, we highlight the effects induced by the collision of the two turbulent fronts in the long-time asymptotic regime. We also provide details on the optimized lattice-Boltzmann code that we have run on a cluster of GPUs.

  1. Satellite observed thermodynamics during FGGE

    NASA Technical Reports Server (NTRS)

    Smith, W. L.

    1985-01-01

    During the First Global Atmospheric Research Program (GARP) Global Experiment (FGGE), determinations of temperature and moisture were made from TIROS-N and NOAA-6 satellite infrared and microwave sounding radiance measurements. The data were processed by two methods differing principally in their horizontal resolution. At the National Earth Satellite Service (NESS) in Washington, D.C., the data were produced operationally with a horizontal resolution of 250 km for inclusion in the FGGE Level IIb data sets for application to large-scale numerical analysis and prediction models. High horizontal resolution (75 km) sounding data sets were produced using man-machine interactive methods for the special observing periods of FGGE at the NASA/Goddard Space Flight Center and archived as supplementary Level IIb. The procedures used for sounding retrieval and the characteristics and quality of these thermodynamic observations are given.

  2. Uncertainties in estimates of mortality attributable to ambient PM2.5 in Europe

    NASA Astrophysics Data System (ADS)

    Kushta, Jonilda; Pozzer, Andrea; Lelieveld, Jos

    2018-06-01

    The assessment of health impacts associated with airborne particulate matter smaller than 2.5 μm in diameter (PM2.5) relies on aerosol concentrations derived either from monitoring networks, satellite observations, numerical models, or a combination thereof. When global chemistry-transport models are used for estimating PM2.5, their relatively coarse resolution has been suggested to lead to an underestimation of health impacts in densely populated and industrialized areas. In this study, the role of the spatial resolution and of the vertical layering of a regional air quality model, used to compute PM2.5 impacts on public health and mortality, is investigated. We utilize grid spacings of 100 km and 20 km to calculate annual mean PM2.5 concentrations over Europe, which are in turn applied to the estimation of premature mortality from cardiovascular and respiratory diseases. Using model results at a 100 km grid resolution yields about 535 000 annual premature deaths over the extended European domain (242 000 within the EU-28), while numbers approximately 2.4% higher are derived by using the 20 km resolution. Using the surface (i.e. lowest) layer of the model for PM2.5 yields about 0.6% higher mortality rates compared with PM2.5 averaged over the first 200 m above ground. Further, the calculation of relative risks (RR) from PM2.5 using 0.1 μg m-3 resolution bins, compared to the commonly used 1 μg m-3, is associated with a ±0.8% uncertainty in estimated deaths. We conclude that model uncertainties contribute a small part of the overall uncertainty expressed by the 95% confidence intervals, which are of the order of ±30% and mostly related to the RR calculations based on epidemiological data.
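
    The attributable-mortality arithmetic behind estimates of this kind follows a standard formula; the sketch below uses made-up numbers and does not reproduce the study's exposure-response functions or epidemiological inputs.

    ```python
    def attributable_deaths(baseline_deaths, relative_risk):
        """Deaths attributable to the exposure: AF = (RR - 1) / RR times baseline."""
        return baseline_deaths * (relative_risk - 1.0) / relative_risk

    # Hypothetical example: 100,000 baseline cardiovascular deaths and RR = 1.15
    # at the ambient PM2.5 level gives roughly 13,000 attributable deaths.
    d = attributable_deaths(1.0e5, 1.15)
    ```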

  3. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    NASA Astrophysics Data System (ADS)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

    A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional solvers.

  4. The Rise of Complexity in Flood Forecasting: Opportunities, Challenges and Tradeoffs

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Clark, M. P.; Nijssen, B.

    2017-12-01

    Operational flood forecasting is currently undergoing a major transformation. Most national flood forecasting services have relied for decades on lumped, highly calibrated conceptual hydrological models running on local office computing resources, providing deterministic streamflow predictions at gauged river locations that are important to stakeholders and emergency managers. A variety of recent technological advances now make it possible to run complex, high-to-hyper-resolution models for operational hydrologic prediction over large domains, and the US National Weather Service is now attempting to use hyper-resolution models to create new forecast services and products. Yet other `increased-complexity' forecasting strategies also exist that pursue different tradeoffs between model complexity (i.e., spatial resolution, physics) and streamflow forecast system objectives. There is currently a pressing need for a greater understanding in the hydrology community of the opportunities, challenges and tradeoffs associated with these different forecasting approaches, and for a greater participation by the hydrology community in evaluating, guiding and implementing these approaches. Intermediate-resolution forecast systems, for instance, use distributed land surface model (LSM) physics but retain the agility to deploy ensemble methods (including hydrologic data assimilation and hindcast-based post-processing). Fully coupled numerical weather prediction (NWP) systems, another example, use still coarser LSMs to produce ensemble streamflow predictions either at the model scale or after sub-grid scale runoff routing. Based on the direct experience of the authors and colleagues in research and operational forecasting, this presentation describes examples of different streamflow forecast paradigms, from the traditional to the recent hyper-resolution, to illustrate the range of choices facing forecast system developers. We also discuss the degree to which the strengths and weaknesses of each strategy map onto the requirements for different types of forecasting services (e.g., flash flooding, river flooding, seasonal water supply prediction).

  5. Generic magnetohydrodynamic model at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Honkonen, I. J.; Rastaetter, L.; Glocer, A.

    2016-12-01

    The Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center is a multi-agency partnership to enable, support and perform research and development for next-generation space science and space weather models. CCMC currently hosts nearly 100 numerical models and a cornerstone of this activity is the Runs on Request (RoR) system which allows anyone to request a model run and analyse/visualize the results via a web browser. CCMC is also active in the education community by organizing student research contests, heliophysics summer schools, and space weather forecaster training for students, government and industry representatives. Recently a generic magnetohydrodynamic (MHD) model was added to the CCMC RoR system which allows the study of a variety of fluid and plasma phenomena in one, two and three dimensions using a dynamic point-and-click web interface. For example, students can experiment with the physics of fundamental wave modes of hydrodynamic and MHD theory, the behavior of discontinuities and shocks, as well as instabilities such as Kelvin-Helmholtz. Students can also use the model to experiment with the numerical effects of models, i.e. how the process of discretizing a system of equations and solving them on a computer changes the solution. This can provide valuable background understanding, e.g. for space weather forecasters, on the effects of model resolution, numerical resistivity, etc. on the prediction.

  6. Evaluation of predicted diurnal cycle of precipitation after tests with convection and microphysics schemes in the Eta Model

    NASA Astrophysics Data System (ADS)

    Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.

    2012-04-01

    Physics parameterizations and the model vertical and horizontal resolutions, for example, can significantly contribute to the uncertainty in numerical weather predictions, especially in regions with complex topography. The objective of this study is to assess the influence of the model precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model. The model was run in hydrostatic mode at 3- and 5-km grid sizes; the vertical resolution was set to 50 layers, and the time steps to 6 and 10 s, respectively. The initial and boundary conditions were taken from the ERA-Interim reanalysis. Over the sea, the 0.25-deg sea surface temperature from NOAA was used. The model was set up to run at each resolution over Angra dos Reis, located in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 1 January 2010; each model simulation spanned 48 hours. In one set of runs the cumulus parameterization was switched off, so that the model precipitation was produced entirely by the cloud microphysics scheme; in the other set the model was run with weak cumulus convection. The results show that as the model horizontal resolution increases from 5 to 3 km, the spatial pattern of the precipitation hardly changes, although the maximum precipitation core increases in magnitude. Daily data from automatic stations were used to evaluate the runs and show that the diurnal cycles of temperature and precipitation were better simulated at 3 km when compared against observations. The model configuration without cumulus convection shows a small contraction of the precipitating area and an increase in the simulated maximum values. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher at weak and moderate precipitation rates.

  7. Numerical analysis of heat transfer in the exhaust gas flow in a diesel power generator

    NASA Astrophysics Data System (ADS)

    Brito, C. H. G.; Maia, C. B.; Sodré, J. R.

    2016-09-01

    This work presents a numerical study of heat transfer in the exhaust duct of a diesel power generator. The analysis was performed using two different approaches: the Finite Difference Method (FDM) and the Finite Volume Method (FVM), the latter by means of commercial software, ANSYS CFX®. In the FDM, the energy conservation equation was solved taking into account the estimated velocity profile for fully developed turbulent flow inside a tube and literature correlations for heat transfer. In the FVM, the mass, momentum and energy conservation equations were solved, together with transport equations for the turbulent quantities given by the k-ω SST model. In both methods, variable properties were considered for the exhaust gas, composed of six species: CO2, H2O, H2, O2, CO and N2. The entry conditions for the numerical simulations were given by available experimental data. The results were evaluated for the engine operating under loads of 0, 10, 20, and 37.5 kW. Mesh and convergence tests were performed to determine the numerical error and uncertainty of the simulations. The results showed a trend of increasing temperature gradient with increasing load. The general behaviour of the velocity and temperature profiles obtained by the two numerical models was similar, with some divergence arising from the assumptions made in the resolution of the models.
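    The FDM approach described above solves the energy equation along the duct with an assumed turbulent velocity profile and literature heat-transfer correlations, none of which are given in the record. Purely as an assumed sketch, the following marches a steady one-dimensional bulk energy balance for the gas temperature along the duct with a constant convective loss to the wall; the heat-transfer coefficient, geometry, properties and inlet state are placeholders, not values from the paper.

```python
import numpy as np

# Placeholder operating point and geometry (not from the paper)
m_dot = 0.05        # exhaust mass flow rate, kg/s
cp = 1100.0         # mean specific heat of the gas mixture, J/(kg K)
h = 35.0            # convective coefficient from a correlation, W/(m2 K)
D = 0.075           # duct diameter, m
L = 2.0             # duct length, m
T_wall = 400.0      # wall temperature, K
T_in = 800.0        # inlet gas temperature, K

n = 200
dx = L / n
P = np.pi * D       # wetted perimeter

# Steady 1D bulk energy balance: m_dot*cp*dT/dx = -h*P*(T - T_wall),
# integrated by simple forward (upwind) marching from the inlet.
T = np.empty(n + 1)
T[0] = T_in
for i in range(n):
    T[i + 1] = T[i] - h * P * dx / (m_dot * cp) * (T[i] - T_wall)

print(f"outlet gas temperature: {T[-1]:.1f} K")
```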

  8. Assessment of vulnerability in karst aquifers using a quantitative integrated numerical model: catchment characterization and high resolution monitoring - Application to semi-arid regions - Lebanon.

    NASA Astrophysics Data System (ADS)

    Doummar, Joanna; Aoun, Michel; Andari, Fouad

    2016-04-01

    Karst aquifers are highly heterogeneous and characterized by a duality of recharge (concentrated, fast versus diffuse, slow) and a duality of flow which directly influences groundwater flow and spring responses. Given this heterogeneity in flow and infiltration, karst aquifers do not always obey standard hydraulic laws. Therefore, the assessment of their vulnerability proves to be challenging. Studies have shown that the vulnerability of aquifers is highly governed by recharge to groundwater. On the other hand, specific parameters appear to play a major role in the spatial and temporal distribution of infiltration on a karst system, thus greatly influencing the discharge rates observed at a karst spring, and consequently the vulnerability of a spring. This heterogeneity can only be depicted using an integrated numerical model to quantify recharge spatially and assess the spatial and temporal vulnerability of a catchment to contamination. In the framework of a three-year PEER NSF/USAID funded project, the vulnerability of a karst catchment in Lebanon is assessed quantitatively using a numerical approach. The aim of the project is also to refine actual evapotranspiration rates and the spatial recharge distribution in a semi-arid environment. For this purpose, a monitoring network has been installed since July 2014 on two pilot karst catchments (drained by the Qachqouch Spring and the Assal Spring) to collect high resolution data to be used in an integrated catchment numerical model built with MIKE SHE (DHI), including climate, unsaturated zone, and saturated zone components. Catchment characterization essential for the model included geological mapping and a survey of karst features (e.g., dolines), as they contribute to fast flow. Tracer experiments were performed under different flow conditions (snow melt and low flow) to delineate the catchment area, reveal groundwater velocities and characterize the response to snowmelt events. An assessment of the spring response after precipitation events allowed the estimation of the fast infiltration component. A series of laboratory tests was performed to acquire physical values to be used as a benchmark for model parameterization, such as laboratory tests on soils for conductivity at saturation and grain size analysis. Time series used for input or calibration were collected and computed from continuous high resolution monitoring of climatic data, moisture variation in the soil, and discharge at the investigated spring. A similar modelling approach, previously used on a catchment site in Germany, is to be applied and validated on the two pilot karst catchments in Lebanon governed by semi-arid climatic conditions. References: Doummar, J., Sauter, M., Geyer, T., 2012. Simulation of flow processes in a large scale karst system with an integrated catchment model (Mike She) - Identification of relevant parameters influencing spring discharge. Journal of Hydrology, v. 426-427, p. 112-123. Jukić, D., and Denić-Jukić, V., 2009. Groundwater balance estimation in karst by using a conceptual rainfall-runoff model. Journal of Hydrology, v. 373, p. 302-315.

  9. Active Vertex Model for cell-resolution description of epithelial tissue mechanics

    PubMed Central

    Barton, Daniel L.; Henkes, Silke

    2017-01-01

    We introduce an Active Vertex Model (AVM) for cell-resolution studies of the mechanics of confluent epithelial tissues consisting of tens of thousands of cells, with a level of detail inaccessible to similar methods. The AVM combines the Vertex Model for confluent epithelial tissues with active matter dynamics. This introduces a natural description of the cell motion and accounts for motion patterns observed on multiple scales. Furthermore, cell contacts are generated dynamically from positions of cell centres. This not only enables efficient numerical implementation, but provides a natural description of the T1 transition events responsible for local tissue rearrangements. The AVM also includes cell alignment, cell-specific mechanical properties, cell growth, division and apoptosis. In addition, the AVM introduces a flexible, dynamically changing boundary of the epithelial sheet allowing for studies of phenomena such as the fingering instability or wound healing. We illustrate these capabilities with a number of case studies. PMID:28665934

  10. Active Vertex Model for cell-resolution description of epithelial tissue mechanics.

    PubMed

    Barton, Daniel L; Henkes, Silke; Weijer, Cornelis J; Sknepnek, Rastko

    2017-06-01

    We introduce an Active Vertex Model (AVM) for cell-resolution studies of the mechanics of confluent epithelial tissues consisting of tens of thousands of cells, with a level of detail inaccessible to similar methods. The AVM combines the Vertex Model for confluent epithelial tissues with active matter dynamics. This introduces a natural description of the cell motion and accounts for motion patterns observed on multiple scales. Furthermore, cell contacts are generated dynamically from positions of cell centres. This not only enables efficient numerical implementation, but provides a natural description of the T1 transition events responsible for local tissue rearrangements. The AVM also includes cell alignment, cell-specific mechanical properties, cell growth, division and apoptosis. In addition, the AVM introduces a flexible, dynamically changing boundary of the epithelial sheet allowing for studies of phenomena such as the fingering instability or wound healing. We illustrate these capabilities with a number of case studies.
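    Vertex models of the kind described in the Active Vertex Model records above typically assign each cell an energy that penalizes deviations of its area and perimeter from target values; the sketch below evaluates such an energy for one polygonal cell using the shoelace formula. The quadratic area/perimeter form and the moduli are generic vertex-model ingredients chosen for illustration, not the specific AVM formulation or its active dynamics.

```python
import numpy as np

def cell_energy(vertices, A0, P0, K_A=1.0, K_P=0.1):
    """Generic vertex-model energy of one cell:
    E = K_A/2 (A - A0)^2 + K_P/2 (P - P0)^2,
    with A from the shoelace formula and P the polygon perimeter."""
    x, y = vertices[:, 0], vertices[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    edges = np.roll(vertices, -1, axis=0) - vertices
    perimeter = np.linalg.norm(edges, axis=1).sum()
    return 0.5 * K_A * (area - A0) ** 2 + 0.5 * K_P * (perimeter - P0) ** 2

# regular hexagon of unit area as a test cell
theta = np.linspace(0, 2 * np.pi, 7)[:-1]
r = np.sqrt(2.0 / (3.0 * np.sqrt(3.0)))      # circumradius giving unit area
hexagon = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
print(cell_energy(hexagon, A0=1.0, P0=3.72))
```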

  11. Applying narrowband remote-sensing reflectance models to wideband data.

    PubMed

    Lee, Zhongping

    2009-06-10

    Remote sensing of coastal and inland waters requires sensors to have a high spatial resolution to cover the spatial variation of biogeochemical properties at fine scales. High spatial-resolution sensors, however, are usually equipped with spectral bands that are wide in bandwidth (50 nm or wider). In this study, based on numerical simulations of the hyperspectral remote-sensing reflectance of optically deep waters, and using Landsat band specifics as an example, the impact of a wide spectral channel on remote sensing is analyzed. It is found that simple adoption of a narrowband model may result in >20% underestimation of the calculated remote-sensing reflectance and, in the inversion, may result in >20% overestimation of the inverted absorption coefficients even under perfect conditions, although smaller (approximately 5%) uncertainties are found for more highly absorbing waters. These results provide a cautionary note on applying narrowband models to wideband data, but also a justification for doing so in turbid coastal waters.
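    A hedged sketch of the band-averaging issue discussed above: the value a wide band reports is a response-weighted average of the hyperspectral remote-sensing reflectance, and feeding it into a model built for the band-centre wavelength introduces an error. The Gaussian Rrs spectrum and boxcar spectral response below are synthetic placeholders, not the simulated data of the study.

```python
import numpy as np

wl = np.arange(400, 701, 1.0)                        # wavelength grid, nm

# Synthetic hyperspectral Rrs spectrum (sr^-1); placeholder shape only
rrs = 0.004 * np.exp(-0.5 * ((wl - 560.0) / 60.0) ** 2) + 0.0005

def band_average(wl, rrs, centre, width):
    """Response-weighted Rrs over a wide band (boxcar response, uniform grid)."""
    in_band = (wl >= centre - width / 2) & (wl <= centre + width / 2)
    return rrs[in_band].mean()

centre, width = 560.0, 80.0                          # Landsat-like wide band
wide = band_average(wl, rrs, centre, width)
narrow = np.interp(centre, wl, rrs)                  # narrowband (band-centre) value
print(f"wideband {wide:.5f}  narrowband {narrow:.5f}  "
      f"relative difference {100 * (wide - narrow) / narrow:.1f}%")
```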

  12. The need to consider temporal variability when modelling exchange at the sediment-water interface

    USGS Publications Warehouse

    Rosenberry, Donald O.

    2011-01-01

    Most conceptual or numerical models of flows and processes at the sediment-water interface assume steady-state conditions and do not consider temporal variability. The steady-state assumption is required because temporal variability, if quantified at all, is usually determined on a seasonal or inter-annual scale. In order to design models that can incorporate finer-scale temporal resolution we first need to measure variability at a finer scale. Automated seepage meters that can measure flow across the sediment-water interface with temporal resolution of seconds to minutes were used in a variety of settings to characterize seepage response to rainfall, wind, and evapotranspiration. Results indicate that instantaneous seepage fluxes can be much larger than values commonly reported in the literature, although seepage does not always respond to hydrological processes. Additional study is needed to understand the reasons for the wide range and types of responses to these hydrologic and atmospheric events.

  13. Wave Modelling - The State of the Art

    DTIC Science & Technology

    2007-09-27

    [Report documentation page residue; recoverable fields: title "Numerics and Resolution in Large Scale Wave Modelling"; authors Erick Rogers, Hendrik Tolman, Fabrice Ardhuin, Igor Lavrenov; sponsoring agency Office of Naval Research (ONR), 800 N. Quincy St., Arlington, VA 22217.]

  14. Asymptotic-Preserving methods and multiscale models for plasma physics

    NASA Astrophysics Data System (ADS)

    Degond, Pierre; Deluzet, Fabrice

    2017-05-01

    The purpose of the present paper is to provide an overview of Asymptotic-Preserving methods for multiscale plasma simulations by addressing three singular perturbation problems. First, the quasi-neutral limit of fluid and kinetic models is investigated in the framework of non-magnetized as well as magnetized plasmas. Second, the drift limit for fluid descriptions of thermal plasmas under large magnetic fields is addressed. Finally, the efficient numerical resolution of anisotropic elliptic or diffusion equations arising in magnetized plasma simulations is reviewed.

  15. An efficient spectral method for the simulation of dynamos in Cartesian geometry and its implementation on massively parallel computers

    NASA Astrophysics Data System (ADS)

    Stellmach, Stephan; Hansen, Ulrich

    2008-05-01

    Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linearly with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models in which the process of convection and magnetic field generation is only simulated for a small part of a planetary core in Cartesian geometry can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(10^4) processors. This allows for numerical simulations at rather extreme parameter values.

  16. Non-robust numerical simulations of analogue extension experiments

    NASA Astrophysics Data System (ADS)

    Naliboff, John; Buiter, Susanne

    2016-04-01

    Numerical and analogue models of lithospheric deformation provide significant insight into the tectonic processes that lead to specific structural and geophysical observations. As these two types of models contain distinct assumptions and tradeoffs, investigations drawing conclusions from both can reveal robust links between first-order processes and observations. Recent studies have focused on detailed comparisons between numerical and analogue experiments in both compressional and extensional tectonics, sometimes involving multiple lithospheric deformation codes and analogue setups. While such comparisons often show good agreement on first-order deformation styles, results frequently diverge on second-order structures, such as shear zone dip angles or spacing, and in certain cases even on first-order structures. Here, we present finite-element experiments that are designed to directly reproduce analogue "sandbox" extension experiments at the cm-scale. We use material properties and boundary conditions that are directly taken from analogue experiments and use a Drucker-Prager failure model to simulate shear zone formation in sand. We find that our numerical experiments are highly sensitive to numerous numerical parameters. For example, changes to the numerical resolution, velocity convergence parameters and elemental viscosity averaging commonly produce significant changes in first- and second-order structures accommodating deformation. The sensitivity of the numerical simulations to small parameter changes likely reflects a number of factors, including, but not limited to, high angles of internal friction assigned to sand, complex, unknown interactions between the brittle sand (used as an upper crust equivalent) and viscous silicone (lower crust), highly non-linear strain weakening processes and poor constraints on the cohesion of sand. Our numerical-analogue comparison is hampered by (a) an incomplete knowledge of the fine details of sand failure and sand properties, and (b) likely limitations to the use of a continuum Drucker-Prager model for representing shear zone formation in sand. In some cases our numerical experiments provide reasonable fits to first-order structures observed in the analogue experiments, but the numerical sensitivity to small parameter variations leads us to conclude that the numerical experiments are not robust.

  17. Examining the Impacts of High-Resolution Land Surface Initialization on Model Predictions of Convection in the Southeastern U.S.

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Santos, Pablo; Medlin, Jeffrey M.; Jedlovec, Gary J.

    2009-01-01

    One of the most challenging weather forecast problems in the southeastern U.S. is daily summertime pulse convection. During the summer, atmospheric flow and forcing are generally weak in this region; thus, convection typically initiates in response to local forcing along sea/lake breezes and other discontinuities, often related to horizontal gradients in surface heating rates. Numerical simulations of pulse convection usually have low skill, even in local predictions at high resolution, due to the inherent chaotic nature of these precipitation systems. Forecast errors can arise from assumptions within physics parameterizations, model resolution limitations, as well as uncertainties in both the initial state of the atmosphere and land surface variables such as soil moisture and temperature. For this study, it is hypothesized that high-resolution, consistent representations of surface properties such as soil moisture and temperature, ground fluxes, and vegetation are necessary to better simulate the interactions between the land surface and atmosphere, and ultimately improve predictions of local circulations and summertime pulse convection. The NASA Short-term Prediction Research and Transition (SPORT) Center has been conducting studies to examine the impacts of high-resolution land surface initialization data generated by offline simulations of the NASA Land Information System (LIS) on subsequent numerical forecasts using the Weather Research and Forecasting (WRF) model (Case et al. 2008, to appear in the Journal of Hydrometeorology). Case et al. present improvements to simulated sea breezes and surface verification statistics over Florida by initializing WRF with land surface variables from an offline LIS spin-up run, conducted on the exact WRF domain and resolution. The current project extends the previous work over Florida, focusing on selected case studies of typical pulse convection over the southeastern U.S., with an emphasis on improving local short-term WRF simulations over the Mobile, AL and Miami, FL NWS county warning areas. Future efforts may involve examining the impacts of assimilating remotely-sensed soil moisture data, and/or introducing weekly greenness vegetation fraction composites (as opposed to monthly climatologies) into offline NASA LIS runs. Based on positive impacts, the offline LIS runs could be transitioned into an operational mode, providing land surface initialization data to NWS forecast offices in real time.

  18. Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil

    NASA Astrophysics Data System (ADS)

    Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo

    2013-04-01

    The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997. The model has gone through upgrades over these years. In order to prepare the model for operational higher resolution forecasts, the model is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes two Brazilian cities, Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. In addition, the region suffers from frequent floods and landslides; therefore, accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high resolution simulations in this complex area. Verification of the model runs uses observations taken from the nuclear power plant and higher resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nest: it is driven by the Eta-15km, which in its turn is driven by ERA-Interim reanalyses. The series of experiments consists of replacing the surface layer stability function, adjusting cloud microphysics scheme parameters, and further increasing the vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds, which verified better against the tower wind data. Precipitation produced by the model was excessive in the region. Increasing the vertical resolution to 60 layers caused a further increase in precipitation production. This excessive precipitation was reduced by adjusting some parameters in the cloud microphysics scheme. Precipitation is still overestimated and further tests are necessary. The increase of horizontal resolution to 1 km required adjusting model diffusion parameters and refining the divergence calculations. The limited availability of observations in the region is a major constraint for a thorough evaluation.

  19. Numerical and Experimental Study on the Residual Stresses in the Nitrided Steel

    NASA Astrophysics Data System (ADS)

    Song, X.; Zhang, Zhi-Qian; Narayanaswamy, S.; Huang, Y. Z.; Zarinejad, M.

    2016-09-01

    In the present work, the residual stress distribution in a gas-nitrided AISI 4140 sample has been studied using finite element (FE) simulation. The nitrogen concentration profile is obtained from the diffusion-controlled compound layer growth model, and the nitrogen concentration controls the material volume change through phase transformation and lattice interstitials, which results in residual stresses. The model is validated with a residual stress measurement technique, the micro-ring-core method, which is applied to the nitriding process to obtain the residual stress profiles in both the compound and diffusion layers. The numerical and experimental results are in good agreement with each other; they both indicate significant stress variation in the compound layer, which was not captured in previous research works due to the resolution limit of the traditional methods.

  20. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  1. Inverse constraints for emission fluxes of atmospheric tracers estimated from concentration measurements and Lagrangian transport

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Patra, Prabir; Breivik, Knut

    2015-04-01

    Lagrangian transport models based on time series of Eulerian fields provide a computationally affordable way of achieving very high resolution for limited areas and time periods. This makes them especially suitable for the analysis of point-wise measurements of atmospheric tracers. We present an application illustrated with examples of greenhouse gases from anthropogenic emissions in urban areas and biogenic emissions in Japan and of pollutants in the Arctic. We assess the algorithmic complexity of the numerical implementation as well as the use of non-procedural techniques such as Object-Oriented programming. We discuss aspects related to the quantification of uncertainty from prior information in the presence of model error and a limited number of observations. The case of non-linear constraints is explored using direct numerical optimisation methods.

  2. Nonhydrostatic icosahedral atmospheric model (NICAM) for global cloud resolving simulations

    NASA Astrophysics Data System (ADS)

    Satoh, M.; Matsuno, T.; Tomita, H.; Miura, H.; Nasuno, T.; Iga, S.

    2008-03-01

    A new type of ultra-high resolution atmospheric global circulation model is developed. The new model is designed to perform "cloud resolving simulations" by directly calculating deep convection and meso-scale circulations, which play key roles not only in the tropical circulations but also in the global circulations of the atmosphere. Since cores of deep convection are a few km in horizontal size, they have not been directly resolved by existing atmospheric general circulation models (AGCMs). In order to drastically enhance horizontal resolution, a new framework of a global atmospheric model is required; we adopted nonhydrostatic governing equations and icosahedral grids for the new model, and call it the Nonhydrostatic ICosahedral Atmospheric Model (NICAM). In this article, we review the governing equations and numerical techniques employed, and present the results from the unique 3.5-km mesh global experiments (with O(10^9) computational nodes) using realistic topography and land/ocean surface thermal forcing. The results show realistic behaviors of multi-scale convective systems in the tropics, which have not been captured by AGCMs. We also discuss future perspectives on the role of the new model in next-generation atmospheric sciences.

  3. Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Larsson, Johan

    2013-01-01

    A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.

  4. The Dynamical Core Model Intercomparison Project (DCMIP-2016): Results of the Supercell Test Case

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Reed, K. A.; Jablonowski, C.; Ullrich, P. A.; Kent, J.; Lauritzen, P. H.; Nair, R. D.

    2016-12-01

    The 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016) assesses the modeling techniques for global climate and weather models and was recently held at the National Center for Atmospheric Research (NCAR) in conjunction with a two-week summer school. Over 12 different international modeling groups participated in DCMIP-2016 and focused on the evaluation of the newest non-hydrostatic dynamical core designs for future high-resolution weather and climate models. The paper highlights the results of the third DCMIP-2016 test case, which is an idealized supercell storm on a reduced-radius Earth. The supercell storm test permits the study of a non-hydrostatic moist flow field with strong vertical velocities and associated precipitation. This test assesses the behavior of global modeling systems at extremely high spatial resolution and is used in the development of next-generation numerical weather prediction capabilities. In this regime the effective grid spacing is very similar to the horizontal scale of convective plumes, emphasizing resolved non-hydrostatic dynamics. The supercell test case sheds light on the physics-dynamics interplay and highlights the impact of diffusion on model solutions.

  5. ESiWACE: A Center of Excellence for HPC applications to support cloud resolving earth system modelling

    NASA Astrophysics Data System (ADS)

    Biercamp, Joachim; Adamidis, Panagiotis; Neumann, Philipp

    2017-04-01

    With the exa-scale era approaching, the length and time scales used for climate research on the one hand and numerical weather prediction on the other hand blend into each other. The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) represents a European consortium comprising partners from climate, weather and HPC in their effort to address key scientific challenges that both communities have in common. A particular challenge is to reach global models with spatial resolutions that allow simulating convective clouds and small-scale ocean eddies. These simulations would produce better predictions of trends and provide much more fidelity in the representation of high-impact regional events. However, running such models in operational mode, i.e. with sufficient throughput in ensemble mode, will clearly require exa-scale computing and data handling capability. We will discuss the ESiWACE initiative and relate it to work-in-progress on high-resolution simulations in Europe. We present recent strong scalability measurements from ESiWACE to demonstrate current computability in weather and climate simulation. A special focus in this particular talk is on the Icosahedral Nonhydrostatic (ICON) model used for a comparison of high resolution regional and global simulations with high quality observation data. We demonstrate that close-to-optimal parallel efficiency can be achieved in strong scaling global resolution experiments on Mistral/DKRZ, e.g. 94% for 5 km resolution simulations using 36k cores. Based on our scalability and high-resolution experiments, we deduce and extrapolate future capabilities for ICON that are expected for weather and climate research at exascale.

  6. Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Vipiana, Francesca

    2017-04-01

    Boundary integral approaches applied to gravity field modelling have recently been developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of the H-matrices is to approximate the entire system matrix by splitting it into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows reducing memory requirements significantly while improving the efficiency. The poster presents our preliminary results of implementations of the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.
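    The memory saving described above comes from storing admissible (far-field) off-diagonal blocks in low-rank factorized form. The sketch below compresses one such block with a truncated SVD and compares the storage; the 1/r kernel, cluster geometry and tolerance are arbitrary placeholders, and a full H-matrix additionally needs the cluster tree and admissibility logic that are omitted here.

```python
import numpy as np

# Two well-separated point clusters and a smooth 1/r interaction kernel
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(300, 3))          # source cluster
y = rng.uniform(10.0, 11.0, size=(300, 3))        # far-away target cluster
block = 1.0 / np.linalg.norm(y[:, None, :] - x[None, :, :], axis=2)

# Truncated SVD: keep only the singular values above a relative tolerance
U, s, Vt = np.linalg.svd(block, full_matrices=False)
tol = 1e-6
k = int(np.sum(s > tol * s[0]))
A = U[:, :k] * s[:k]                              # m x k factor
B = Vt[:k, :]                                     # k x n factor

dense_storage = block.size
lowrank_storage = A.size + B.size
err = np.linalg.norm(A @ B - block) / np.linalg.norm(block)
print(f"rank {k}, storage {lowrank_storage}/{dense_storage}, rel. error {err:.1e}")
```

    For well-separated clusters the numerical rank k stays small, which is why the factorized representation needs far less memory than the dense block.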

  7. Development of ALARO-Climate regional climate model for a very high resolution

    NASA Astrophysics Data System (ADS)

    Skalak, Petr; Farda, Ales; Brozkova, Radmila; Masek, Jan

    2014-05-01

    ALARO-Climate is a new regional climate model (RCM) derived from the ALADIN LAM model family. It is based on the numerical weather prediction model ALARO and developed at the Czech Hydrometeorological Institute. The model is expected to be able to work in the so-called "grey zone" of physics (horizontal resolutions of 4-7 km) and at the same time retain its ability to be operated at resolutions between 20 and 50 km, which are typical of the contemporary generation of regional climate models. Here we present the main results of RCM ALARO-Climate simulations at 25 and 6.25 km resolutions over a longer time-scale (1961-1990). The model was driven by the ERA-40 re-analyses and run on an integration domain of ~2500 x 2500 km covering central Europe. The simulated model climate was compared with gridded observations of air temperature (mean, maximum, minimum) and precipitation from the E-OBS dataset version 8. Other simulated parameters (e.g., cloudiness, radiation or components of the water cycle) were compared to the ERA-40 re-analyses. The validation of the first ERA-40 simulation at both 25 km and 6.25 km resolutions revealed significant cold biases in all seasons and an overestimation of precipitation in the selected Central European target area (0°-30° eastern longitude; 40°-60° northern latitude). The differences between these simulations were small, thus revealing the robustness of the model's physical parameterization to the resolution change. A series of 25 km resolution simulations with several model adaptations was carried out to study their effect on the simulated properties of climate variables and thus possibly identify a source of major errors in the simulated climate. The current investigation suggests the main reason for the biases is related to the model physics. Acknowledgements: This study was performed within the frame of the project ALARO (project P209/11/2405 sponsored by the Czech Science Foundation) and the CzechGlobe Centre (CZ.1.05/1.1.00/02.0073). Partial support was also provided under the projects P209-11-0956 of the Czech Science Foundation and CZ.1.07/2.4.00/31.0056 (Operational Programme of Education for Competitiveness of the Ministry of Education, Youth and Sports of the Czech Republic).

  8. Star-disc interaction in galactic nuclei: orbits and rates of accreted stars

    NASA Astrophysics Data System (ADS)

    Kennedy, Gareth F.; Meiron, Yohai; Shukirgaliyev, Bekdaulet; Panamarev, Taras; Berczik, Peter; Just, Andreas; Spurzem, Rainer

    2016-07-01

    We examine the effect of an accretion disc on the orbits of stars in the central star cluster surrounding a central massive black hole by performing a suite of 39 high-accuracy direct N-body simulations using state-of-the-art software and accelerator hardware, with particle numbers up to 128k. The primary focus is on the accretion rate of stars by the black hole (equivalent to their tidal disruption rate for black holes in the small to medium mass range) and the eccentricity distribution of these stars. Our simulations vary not only the particle number, but also the disc model (two models examined), the spatial resolution at the centre (characterized by the numerical accretion radius) and the softening length. The large parameter range and physically realistic modelling allow us for the first time to confidently extrapolate these results to real galactic centres. While in a real galactic centre both particle number and accretion radius differ by a few orders of magnitude from our models, which are constrained by numerical capability, we find that the stellar accretion rate converges for models with N ≥ 32k. The eccentricity distribution of accreted stars, however, does not converge. We find that there are two competing effects at work when improving the resolution: larger particle number leads to a smaller fraction of stars accreted on nearly circular orbits, while higher spatial resolution increases this fraction. We scale our simulations to some nearby galaxies and find that the expected boost in stellar accretion (or tidal disruption, which could be observed as X-ray flares) in the presence of a gas disc is about a factor of 10. Even with this boost, the accretion of mass from stars is still a factor of ~100 slower than the accretion of gas from the disc. Thus, it seems accretion of stars is not a major contributor to black hole mass growth.

  9. A multi-component evaporation model for beam melting processes

    NASA Astrophysics Data System (ADS)

    Klassen, Alexander; Forster, Vera E.; Körner, Carolin

    2017-02-01

    In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.

  10. 3D SPH numerical simulation of the wave generated by the Vajont rockslide

    NASA Astrophysics Data System (ADS)

    Vacondio, R.; Mignosa, P.; Pagani, S.

    2013-09-01

    A 3D numerical simulation of the wave generated by the Vajont slide, one of the most destructive ever to have occurred, is presented in this paper. A meshless Lagrangian Smoothed Particle Hydrodynamics (SPH) technique was adopted to simulate the highly fragmented violent flow generated by the falling slide in the artificial reservoir. The speed-up achievable via General Purpose Graphics Processing Units (GP-GPU) made it possible to adopt an adequate resolution to describe the phenomenon. The comparison with the data available in the literature showed that the results of the numerical simulation satisfactorily reproduce the maximum run-up, as well as the water surface elevation in the residual lake after the event. Moreover, the 3D velocity field of the flow during the event and the discharge hydrograph of the water which overtopped the dam were obtained.

  11. A numerical and experimental comparison of human head phantoms for compliance testing of mobile telephone equipment.

    PubMed

    Christ, Andreas; Chavannes, Nicolas; Nikoloski, Neviana; Gerber, Hans-Ulrich; Poković, Katja; Kuster, Niels

    2005-02-01

    A new human head phantom has been proposed by CENELEC/IEEE, based on a large scale anthropometric survey. This phantom is compared to a homogeneous Generic Head Phantom and three high resolution anatomical head models with respect to specific absorption rate (SAR) assessment. The head phantoms are exposed to the radiation of a generic mobile phone (GMP) with different antenna types and a commercial mobile phone. The phones are placed in the standardized testing positions and operate at 900 and 1800 MHz. The average peak SAR is evaluated using both experimental (DASY3 near field scanner) and numerical (FDTD simulations) techniques. The numerical and experimental results compare well and confirm that the applied SAR assessment methods constitute a conservative approach.

  12. Real-time flight conflict detection and release based on Multi-Agent system

    NASA Astrophysics Data System (ADS)

    Zhang, Yifan; Zhang, Ming; Yu, Jue

    2018-01-01

    This paper defines two-aircraft, multi-aircraft and fleet conflict modes, and sets up a three-dimensional space-time conflict reservation on the basis of the safety interval and the conflict warning time. Real-time flight conflicts are detected using the predicted flight trajectories of the other aircraft in the same airspace, and resolution manoeuvres are put forward for the three modes respectively. When the flight conflict conditions are met, the conflict situation is determined and the corresponding conflict resolution procedure is entered, so that the target aircraft avoids the conflict autonomously and its flight safety is ensured. Lastly, the correctness of the model is verified through comparison with numerical simulations.
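    As a hedged illustration of the space-time reservation idea in this record, the sketch below checks whether two aircraft on straight-line predicted trajectories violate a horizontal safety interval within a warning horizon. The separation threshold, time step and trajectories are invented placeholders; the paper's three-dimensional reservation and Multi-Agent resolution logic are not reproduced.

```python
import numpy as np

def conflict_time(p1, v1, p2, v2, sep=9260.0, horizon=300.0, dt=1.0):
    """Return the first time (s) within the warning horizon at which the
    horizontal distance between two aircraft drops below `sep` (m),
    assuming constant-velocity predicted trajectories; None if no conflict."""
    for t in np.arange(0.0, horizon + dt, dt):
        d = np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t))
        if d < sep:
            return t
    return None

# two aircraft on crossing tracks (positions in m, velocities in m/s)
t = conflict_time(np.array([0.0, 0.0]), np.array([220.0, 0.0]),
                  np.array([60000.0, -20000.0]), np.array([-150.0, 100.0]))
print("no conflict" if t is None else f"first conflict at {t:.0f} s")
```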

  13. Generation of real-time mode high-resolution water vapor fields from GPS observations

    NASA Astrophysics Data System (ADS)

    Yu, Chen; Penna, Nigel T.; Li, Zhenhong

    2017-02-01

    Pointwise GPS measurements of tropospheric zenith total delay can be interpolated to provide high-resolution water vapor maps which may be used for correcting synthetic aperture radar images, for numerical weather prediction, and for correcting Network Real-time Kinematic GPS observations. Several previous studies have addressed the importance of the elevation dependency of water vapor, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. In this paper, we present an iterative tropospheric decomposition interpolation model that decouples the elevation and turbulent tropospheric delay components. For a 150 km × 150 km California study region, we estimate real-time mode zenith total delays at 41 GPS stations over 1 year by using the precise point positioning technique and demonstrate that the decoupled interpolation model generates improved high-resolution tropospheric delay maps compared with previous tropospheric turbulence- and elevation-dependent models. Cross validation of the GPS zenith total delays yields an RMS error of 4.6 mm with the decoupled interpolation model, compared with 8.4 mm with the previous model. On converting the GPS zenith wet delays to precipitable water vapor and interpolating to 1 km grid cells across the region, validations with the Moderate Resolution Imaging Spectroradiometer near-IR water vapor product show 1.7 mm RMS differences by using the decoupled model, compared with 2.0 mm for the previous interpolation model. Such results are obtained without differencing the tropospheric delays or water vapor estimates in time or space, while the errors are similar over flat and mountainous terrains, as well as for both inland and coastal areas.
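    A hedged sketch of the decoupling idea described above: fit an elevation-dependent trend to the station zenith delays first, then interpolate the remaining "turbulent" residuals horizontally (here with simple inverse-distance weighting). The exponential trend form, the assumed scale height, the IDW scheme and the synthetic station data are placeholders; the paper's iterative decomposition model is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 41
xy = rng.uniform(0, 150e3, size=(n, 2))            # station coordinates, m
h = rng.uniform(0, 2000.0, size=n)                 # station heights, m
ztd = 2.4 * np.exp(-h / 8000.0) + 0.005 * rng.standard_normal(n)   # synthetic ZTD, m

# 1) elevation-dependent trend: ZTD ~ a * exp(-h/H), fitted in log space
H = 8000.0                                          # assumed scale height, m
a = np.exp(np.mean(np.log(ztd) + h / H))
trend = a * np.exp(-h / H)
resid = ztd - trend                                 # "turbulent" component

# 2) horizontal interpolation of the residuals by inverse-distance weighting
def idw(p, pts, vals, power=2.0, eps=1.0):
    d = np.linalg.norm(pts - p, axis=1)
    w = 1.0 / (d + eps) ** power
    return np.sum(w * vals) / np.sum(w)

# predicted ZTD at an arbitrary new point (x, y) and height
p, hp = np.array([75e3, 75e3]), 500.0
ztd_new = a * np.exp(-hp / H) + idw(p, xy, resid)
print(f"interpolated ZTD: {ztd_new:.3f} m")
```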

  14. Subpixel target detection and enhancement in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Tiwari, K. C.; Arora, M.; Singh, D.

    2011-06-01

    Hyperspectral data, due to the higher information content afforded by their higher spectral resolution, are increasingly being used for various remote sensing applications, including information extraction at the subpixel level. There is, however, usually a lack of matching fine spatial resolution data, particularly for target detection applications. Thus, there always exists a tradeoff between the spectral and spatial resolutions due to considerations of the type of application, its cost and other associated analytical and computational complexities. Typically, whenever an object, whether manmade, natural or any ground cover class (called a target, endmember, component or class), is spectrally resolved but not spatially, mixed pixels result in the image. Thus, numerous disparate manmade and/or natural substances may occur inside such mixed pixels, giving rise to mixed-pixel classification or subpixel target detection problems. Various spectral unmixing models such as Linear Mixture Modeling (LMM) are in vogue to recover the components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel. It does not, however, provide the spatial distribution of these abundance fractions within a pixel. This limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-Euclidean-distance based super-resolution mapping method is presented that achieves subpixel target detection in hyperspectral images by adjusting the spatial distribution of abundance fractions within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
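    The linear mixture model mentioned above writes each mixed-pixel spectrum as a weighted sum of endmember spectra plus noise; a least-squares inversion recovers the abundance fractions, here followed by a simple clip-and-renormalize step rather than a formally constrained solver. The endmember matrix and pixel spectrum are synthetic placeholders, and the paper's inverse-Euclidean-distance allocation step is not reproduced.

```python
import numpy as np

# Synthetic endmember spectra (bands x endmembers) and a mixed pixel
rng = np.random.default_rng(2)
E = np.abs(rng.normal(size=(50, 3)))          # 50 bands, 3 endmembers (placeholder)
true_f = np.array([0.6, 0.3, 0.1])            # true abundance fractions
pixel = E @ true_f + 0.01 * rng.standard_normal(50)

# Unconstrained least-squares inversion of the linear mixture model
f, *_ = np.linalg.lstsq(E, pixel, rcond=None)

# Enforce the physical constraints approximately: non-negative, sum to one
f = np.clip(f, 0.0, None)
f /= f.sum()
print("estimated abundances:", np.round(f, 3))
```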

  15. Microsphere-assisted super-resolution imaging with enlarged numerical aperture by semi-immersion

    NASA Astrophysics Data System (ADS)

    Wang, Fengge; Yang, Songlin; Ma, Huifeng; Shen, Ping; Wei, Nan; Wang, Meng; Xia, Yang; Deng, Yun; Ye, Yong-Hong

    2018-01-01

    Microsphere-assisted imaging is an extraordinarily simple technique that can achieve optical super-resolution under white-light illumination. Here, we introduce a method to improve the resolution of a microsphere lens by increasing its numerical aperture. In our proposed structure, BaTiO3 glass (BTG) microsphere lenses are semi-immersed in an S1805 layer with a refractive index of 1.65, and the semi-immersed microspheres are then fully embedded in an elastomer with an index of 1.4. We experimentally demonstrate that this structure, in combination with a conventional optical microscope, can clearly resolve a two-dimensional 200-nm-diameter hexagonally close-packed (hcp) silica microsphere array. In contrast, the widely used structure in which BTG microsphere lenses are fully immersed in a liquid or elastomer cannot even resolve a 250-nm-diameter hcp silica microsphere array. The improvement in resolution through the proposed structure is due to an increase in the effective numerical aperture achieved by semi-immersing the BTG microsphere lenses in a high-refractive-index S1805 layer. Our results will inform the design of microsphere-based high-resolution imaging systems.

  16. An application of a two-equation model of turbulence to three-dimensional chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Lee, J.

    1994-01-01

    A numerical study of three dimensional chemically reacting and non-reacting flowfields is conducted using a two-equation model of turbulence. A generalized flow solver using an implicit Lower-Upper (LU) diagonal decomposition numerical technique and finite-rate chemistry has been coupled with a low-Reynolds number two-equation model of turbulence. This flow solver is then used to study chemically reacting turbulent supersonic flows inside combustors with synergetic fuel injectors. The reacting and non-reacting turbulent combustor solutions obtained are compared with zero-equation turbulence model solutions and with available experimental data. The hydrogen-air chemistry is modeled using a nine-species/eighteen reaction model. A low-Reynolds number k-epsilon model was used to model the effect of turbulence because, in general, the low-Reynolds number k-epsilon models are easier to implement numerically and are far more general than algebraic models. However, low-Reynolds number k-epsilon models require a much finer near-wall grid resolution than high-Reynolds number models to resolve accurately the near-wall physics. This is especially true in complex flowfields, where the stiff nature of the near-wall turbulence must be resolved. Therefore, the limitations imposed by the near-wall characteristics and compressible model corrections need to be evaluated further. The gradient-diffusion hypothesis is used to model the effects of turbulence on the mass diffusion process. The influence of this low-Reynolds number turbulence model on the reacting flowfield predictions was studied parametrically.

  17. The impact of resolution on the dynamics of the martian global atmosphere: Varying resolution studies with the MarsWRF GCM

    NASA Astrophysics Data System (ADS)

    Toigo, Anthony D.; Lee, Christopher; Newman, Claire E.; Richardson, Mark I.

    2012-09-01

    We investigate the sensitivity of the circulation and thermal structure of the martian atmosphere to numerical model resolution in a general circulation model (GCM) using the martian implementation (MarsWRF) of the planetWRF atmospheric model. We provide a description of the MarsWRF GCM and use it to study the global atmosphere at horizontal resolutions from 7.5° × 9° to 0.5° × 0.5°, encompassing the range from standard Mars GCMs to global mesoscale modeling. We find that while most of the gross-scale features of the circulation (the rough location of jets, the qualitative thermal structure, and the major large-scale features of the surface level winds) are insensitive to horizontal resolution over this range, several major features of the circulation are sensitive in detail. The northern winter polar circulation shows the greatest sensitivity, with a continuous transition from a smooth polar winter jet at low resolution to a distinct vertically “split” jet as resolution increases. The separation of the lower and middle atmosphere polar jet occurs at roughly 10 Pa, with the split jet structure developing in concert with the intensification of meridional jets at roughly 10 Pa and above 0.1 Pa. These meridional jets appear to represent the separation of lower and middle atmosphere mean overturning circulations (with the former being consistent with the usual concept of the “Hadley cell”). Further, the transition in polar jet structure is more sensitive to changes in zonal than meridional horizontal resolution, suggesting that representation of small-scale wave-mean flow interactions is more important than fine-scale representation of the meridional thermal gradient across the polar front. Increasing the horizontal resolution improves the match between the modeled thermal structure and the Mars Climate Sounder retrievals for northern winter high latitudes. While increased horizontal resolution also improves the simulation of the northern high latitudes at equinox, even the lowest model resolution considered here appears to do a good job for the southern winter and southern equinoctial pole (although in detail some discrepancies remain). These results suggest that studies of the northern winter jet (e.g., transient waves and cyclogenesis) will be more sensitive to global model resolution than those of the south (e.g., the confining dynamics of the southern polar vortex relevant to studies of argon transport). For surface winds, the major effect of increased horizontal resolution is in the superposition of circulations forced by local-scale topography upon the large-scale surface wind patterns. While passive predictions of dust lifting are generally insensitive to model horizontal resolution when no lifting threshold is considered, increasing the stress threshold produces significantly more lifting in higher resolution simulations with the generation of finer-scale, higher-stress winds due primarily to better-resolved topography. Considering the positive feedbacks expected for radiatively active dust lifting, we expect this bias to increase when such feedbacks are permitted.

  18. Evaluating the Impact of Spatial Resolution of Landsat Predictors on the Accuracy of Biomass Models for Large-area Estimation Across the Eastern USA

    NASA Astrophysics Data System (ADS)

    Deo, R. K.; Domke, G. M.; Russell, M.; Woodall, C. W.

    2017-12-01

    Landsat data have been widely used to support strategic forest inventory and management decisions despite the limited success of passive optical remote sensing for accurate estimation of aboveground biomass (AGB). The archive of publicly available Landsat data, available at 30-m spatial resolution since 1984, has been a valuable resource for cost-effective large-area estimation of AGB to inform national requirements such as the US national greenhouse gas inventory (NGHGI). In addition, other optical satellite data, such as MODIS imagery with wider spatial coverage and higher temporal resolution, are enriching the domain of spatial predictors for regional-scale mapping of AGB. Because NGHGIs require national-scale AGB information and there are tradeoffs between the prediction accuracy and operational efficiency of Landsat, this study evaluated the impact of various resolutions of Landsat predictors on the accuracy of regional AGB models across three sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We used recent national forest inventory (NFI) data with numerous Landsat-derived predictors at ten different spatial resolutions, ranging from 30 to 1000 m, to understand the optimal spatial resolution of the optical data for enhanced spatial inventory of AGB for NGHGI reporting. Ten generic spatial models at different spatial resolutions were developed for all sites, and large-area estimates were evaluated (i) at the county level against independent design-based estimates via the US NFI Evalidator tool and (ii) within a large number of strips (~1 km wide) for which AGB was predicted via LiDAR metrics at high spatial resolution. The county-level estimates from the Evalidator and the Landsat models were statistically equivalent and produced coefficients of determination (R2) above 0.85 that varied with site and resolution of the predictors. The mean and standard deviation of the county-level estimates followed increasing and decreasing trends, respectively, as model resolution decreased. The Landsat-based total AGB estimates within the strips did not differ significantly from the totals obtained using LiDAR metrics and agreed to within ±15 Mg/ha for each of the sites. We conclude that optical satellite data at resolutions up to 1000 m provide acceptable accuracy for the US NGHGI.

  19. Application of a numerical model for the planetary boundary layer to the vertical distribution of radon and its daughter products

    NASA Astrophysics Data System (ADS)

    Vinod Kumar, A.; Sitaraman, V.; Oza, R. B.; Krishnamoorthy, T. M.

    A one-dimensional numerical planetary boundary layer (PBL) model is developed and applied to study the vertical distribution of radon and its daughter products in the atmosphere. The meteorological model contains a parameterization of the vertical diffusion coefficient based on turbulent kinetic energy and energy dissipation (E-ε model). The realistic concentrations of radon and its daughter products obtained with the increased vertical resolution of the time-dependent PBL model are compared with steady-state model results and field observations. The ratio of the radon concentration at higher levels to that at the surface has been studied to examine the effects of atmospheric stability. The significant change in the vertical concentration profile due to decoupling of the upper portion of the boundary layer from the shallow lower stable layer is explained by the PBL model. The disequilibrium ratio of 214Bi/214Pb broadly agrees with the observed field values. The sharp decrease in the ratio during the transition from unstable to stable atmospheric conditions is also reproduced by the model.
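
    In E-ε closures of this type the vertical exchange coefficient is commonly diagnosed from the prognostic turbulent kinetic energy E and its dissipation rate ε; a schematic form with a typical constant (not necessarily the exact formulation of this model) is

        \[
        K_z \;=\; c_\mu\,\frac{E^2}{\varepsilon}, \qquad c_\mu \approx 0.09,
        \]

    with radon and its daughters then mixed vertically through a turbulent flux \(-K_z\,\partial C/\partial z\) in their transport equations.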

  20. Numerical Mantle Convection Models With a Flexible Thermodynamic Interface

    NASA Astrophysics Data System (ADS)

    van den Berg, A. P.; Jacobs, M. H.; de Jong, B. H.

    2001-12-01

    Accurate material properties are needed for deep mantle (P,T) conditions in order to predict the long-term behavior of convecting planetary mantles. The interpretation of seismological observations of the deep mantle in terms of mantle flow models also calls for a consistent thermodynamic description of the basic physical parameters. We have interfaced a compressible convection code, based on finite element methods and the anelastic liquid approximation, with a database containing a full thermodynamic description of mantle silicates (Ita and King, J. Geophys. Res., 99, 15,939-15,940, 1994). The model is based on high-resolution (P,T) tables of the relevant thermodynamic properties, containing typically 50 million (P,T) gridpoints to obtain a resolution in (P,T) space of 1 K and the equivalent of 1 km. The resulting model is completely flexible, such that numerical mantle convection experiments can be performed for any mantle composition for which the thermodynamic database is available. We present results of experiments for 2D Cartesian models using a database for magnesium-iron silicate in a pyrolitic composition (Stixrude and Bukowinski, Geophys. Monogr. Ser., 74, 131-142, 1993) and a recent thermodynamic model for magnesium silicate over the complete mantle (P,T) range (Jacobs and Oonk, Phys. Chem. Minerals, 269, in press, 2001). Preliminary results for the bulk sound velocity distribution, derived in a consistent way from the convection results and the thermodynamic database, show a `realistic' mantle profile with bulk sound velocity variations decreasing from several percent in the upper mantle to less than one percent in the deep lower mantle.
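
    Evaluating a property against such a precomputed (P,T) table amounts to locating the enclosing grid cell and interpolating. The sketch below shows a minimal bilinear lookup in Python; all names are illustrative, and the actual interface of the convection code is not described in the abstract.

        import numpy as np

        def lookup_property(table, P_axis, T_axis, P, T):
            """Bilinear interpolation of a thermodynamic property (e.g. density)
            stored on a regular (P, T) grid. Illustrative sketch only."""
            # indices of the lower-left corner of the enclosing cell
            i = np.clip(np.searchsorted(P_axis, P) - 1, 0, len(P_axis) - 2)
            j = np.clip(np.searchsorted(T_axis, T) - 1, 0, len(T_axis) - 2)
            # local cell coordinates in [0, 1]
            s = (P - P_axis[i]) / (P_axis[i + 1] - P_axis[i])
            t = (T - T_axis[j]) / (T_axis[j + 1] - T_axis[j])
            # bilinear blend of the four surrounding table values
            return ((1 - s) * (1 - t) * table[i, j]
                    + s * (1 - t) * table[i + 1, j]
                    + (1 - s) * t * table[i, j + 1]
                    + s * t * table[i + 1, j + 1])

    A call such as lookup_property(rho_table, P_axis, T_axis, 25e9, 1900.0) would return the interpolated density at 25 GPa and 1900 K; presumably such lookups are performed for density, expansivity, and specific heat throughout the flow solver, which is why the fine table spacing (1 K, an equivalent of 1 km) matters for the smoothness of the resulting buoyancy field.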

  1. Assimilation of GRACE Terrestrial Water Storage Data into a Land Surface Model: Results for the Mississippi River Basin

    NASA Technical Reports Server (NTRS)

    Zaitchik, Benjamin F.; Rodell, Matthew; Reichle, Rolf H.

    2007-01-01

    NASA's GRACE mission has the potential to be extremely valuable for water resources applications and global water cycle research. What makes GRACE unique among Earth Science satellite systems is that it is able to monitor variations in water stored in all forms, from snow and surface water to soil moisture to groundwater in the deepest aquifers. However, the space and time resolutions of GRACE observations are coarse. GRACE typically resolves water storage changes over regions the size of Nebraska on a monthly basis, while city-scale, daily observations would be more useful for water management, agriculture, and weather prediction. High resolution numerical (computer) hydrology models have been developed, which predict the fates of water and energy after they strike the land surface as precipitation and sunlight. These are similar to weather and climate forecast models, which simulate atmospheric processes. We integrated the GRACE observations into a hydrology model using an advanced technique called data assimilation. The results were new estimates of groundwater, soil moisture, and snow variations, which combined the veracity of GRACE with the high resolution of the model. We tested the technique over the Mississippi River basin, but it will be even more valuable in parts of the world which lack reliable data on water availability.
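
    The assimilation step can be illustrated with a generic ensemble Kalman analysis update. This is only a sketch of the idea: the actual GRACE assimilation handles monthly, region-averaged observations with a more elaborate ensemble scheme, and all names below are illustrative.

        import numpy as np

        def enkf_update(X, y, H, R, rng=np.random.default_rng()):
            """Generic (perturbed-observation) ensemble Kalman analysis step.
            X : (n_state, n_ens) ensemble of model states, e.g. water storage per layer
            y : (n_obs,)         observations, e.g. basin-mean storage anomalies
            H : (n_obs, n_state) observation operator mapping state to observed quantity
            R : (n_obs, n_obs)   observation-error covariance
            """
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
            HA = H @ A
            P_HT = A @ HA.T / (n_ens - 1)                  # cross-covariance P H^T
            S = HA @ HA.T / (n_ens - 1) + R                # innovation covariance
            K = P_HT @ np.linalg.inv(S)                    # Kalman gain
            # perturb the observations so the analysis ensemble keeps a realistic spread
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
            return X + K @ (Y - H @ X)                     # analysis ensemble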

  2. The vector radiative transfer numerical model of coupled ocean-atmosphere system using the matrix-operator method

    NASA Astrophysics Data System (ADS)

    Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu

    2005-10-01

    A numerical model of vector radiative transfer in the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, Fourier analysis is used to split the vector radiative transfer equation (VRTE) into a set of independent equations with the zenith angle as the only angular coordinate. Using Gaussian quadrature, the VRTE is then transformed into a matrix equation, which is solved with the adding-doubling method. The oceanic and atmospheric radiative transfer calculations are coupled in PCOART through the reflective and refractive properties of the ocean-atmosphere interface. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-Resolution Imaging Spectroradiometer) shows that PCOART is an exact numerical model and that its treatment of multiple scattering and polarization is correct. Validation against standard problems of radiative transfer in water further shows that PCOART can be used for underwater radiative transfer calculations. PCOART is therefore a useful tool for exact calculation of vector radiative transfer in the coupled ocean-atmosphere system, applicable to studies of the polarization properties of radiance throughout the system and to remote sensing of the atmosphere and ocean.
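
    In the adding-doubling step, the reflection and transmission operators of adjacent layers are combined; one common form of the adding equations (schematic, in standard notation rather than necessarily the paper's own) is

        \[
        \mathbf{R}_{12} = \mathbf{R}_1 + \mathbf{T}_1^{*}\,\mathbf{R}_2
        \left(\mathbf{I}-\mathbf{R}_1^{*}\mathbf{R}_2\right)^{-1}\mathbf{T}_1,
        \qquad
        \mathbf{T}_{12} = \mathbf{T}_2
        \left(\mathbf{I}-\mathbf{R}_1^{*}\mathbf{R}_2\right)^{-1}\mathbf{T}_1,
        \]

    where R and T are matrix operators over the Gaussian quadrature angles, starred quantities refer to illumination from below, and the matrix inverse accounts for repeated reflections between the two layers; doubling applies the same relations to two identical layers to build up optically thick media efficiently.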

  3. Finite-difference time-domain modelling of through-the-Earth radio signal propagation

    NASA Astrophysics Data System (ADS)

    Ralchenko, M.; Svilans, M.; Samson, C.; Roper, M.

    2015-12-01

    This research seeks to extend the knowledge of how a very low frequency (VLF) through-the-Earth (TTE) radio signal behaves as it propagates underground, by calculating and visualizing the strength of the electric and magnetic fields for an arbitrary geology through numeric modelling. To achieve this objective, a new software tool has been developed using the finite-difference time-domain method. This technique is particularly well suited to visualizing the distribution of electromagnetic fields in an arbitrary geology. The frequency range of TTE radio (400-9000 Hz) and geometrical scales involved (1 m resolution for domains a few hundred metres in size) involves processing a grid composed of millions of cells for thousands of time steps, which is computationally expensive. Graphics processing unit acceleration was used to reduce execution time from days and weeks, to minutes and hours. Results from the new modelling tool were compared to three cases for which an analytic solution is known. Two more case studies were done featuring complex geologic environments relevant to TTE communications that cannot be solved analytically. There was good agreement between numeric and analytic results. Deviations were likely caused by numeric artifacts from the model boundaries; however, in a TTE application in field conditions, the uncertainty in the conductivity of the various geologic formations will greatly outweigh these small numeric errors.
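
    The core of the finite-difference time-domain method is a leapfrog update of electric and magnetic fields on a staggered grid. The minimal one-dimensional, lossy-medium sketch below shows only that structure; the tool described above is three-dimensional and GPU-accelerated, and every parameter value here is a placeholder rather than a value from the study.

        import numpy as np

        eps0, mu0 = 8.854e-12, 4e-7 * np.pi
        nz, nt = 400, 2000
        dz = 1.0                          # 1 m cells, matching the resolution quoted above
        dt = dz / (2.0 * 3e8)             # comfortably inside the Courant stability limit
        sigma = np.full(nz, 1e-3)         # ground conductivity in S/m (placeholder)
        eps = np.full(nz, 10.0 * eps0)    # permittivity, relative value 10 (placeholder)

        Ex, Hy = np.zeros(nz), np.zeros(nz)
        ca = (1 - sigma * dt / (2 * eps)) / (1 + sigma * dt / (2 * eps))
        cb = (dt / (eps * dz)) / (1 + sigma * dt / (2 * eps))

        for n in range(nt):
            # magnetic-field half step
            Hy[:-1] += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
            # electric-field half step, including conduction loss in the ground
            Ex[1:] = ca[1:] * Ex[1:] + cb[1:] * (Hy[1:] - Hy[:-1])
            # soft source: a VLF sinusoid in the TTE band injected near one end
            Ex[5] += np.sin(2 * np.pi * 3000.0 * n * dt)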

  4. Bridging the scales in atmospheric composition simulations using a nudging technique

    NASA Astrophysics Data System (ADS)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all processes at all scales within the same numerical implementation is not feasible with limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain; run A) and fine (0.1°, Central Mediterranean domain; run B) horizontal resolution are performed, using the coarse resolution as boundary condition for the fine one. Another coarse-resolution run (run C) is then performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances of O3 and PM over the Po Valley and other selected areas are computed. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of runs A and B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.
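
    Schematically, the nudging adds a relaxation term to the coarse-grid tendency of each nudged species inside the target area (the relaxation time scale and the exact implementation in BOLCHEM are not specified in the abstract):

        \[
        \frac{\partial c}{\partial t} \;=\; F(c) \;+\; \frac{c_{\mathrm{HR}\rightarrow\mathrm{LR}} - c}{\tau}\,\chi_{\mathrm{Po\;Valley}},
        \]

    where F(c) collects the model's usual transport, chemistry, and emission tendencies, \(c_{\mathrm{HR}\rightarrow\mathrm{LR}}\) is the high-resolution field remapped onto the coarse grid, \(\tau\) is a relaxation time, and \(\chi\) restricts the term to the nudging region.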

  5. Improving Barotropic Tides by Two-way Nesting High and Low Resolution Domains

    NASA Astrophysics Data System (ADS)

    Jeon, C. H.; Buijsman, M. C.; Wallcraft, A. J.; Shriver, J. F.; Hogan, P. J.; Arbic, B. K.; Richman, J. G.

    2017-12-01

    In a realistically forced global ocean model, relatively large sea-surface-height root-mean-square (RMS) errors are observed in the North Atlantic near the Hudson Strait. These may be associated with large tidal resonances interacting with coastal bathymetry that are not correctly represented on a low-resolution grid. This issue can be overcome by using high-resolution grids, but at a high computational cost. In this paper we apply two-way nesting as an alternative solution. This approach applies high resolution to the area with large RMS errors and a lower resolution to the rest, and is expected to improve the tidal solution as well as reduce the computational cost. To minimize modification of the original source code of the ocean circulation model (HYCOM), we apply the coupler OASIS3-MCT. This coupler is used to exchange barotropic pressures and velocity fields between the parent and the child components through its APIs (Application Programming Interfaces). The developed two-way nesting framework has been validated with an idealized test case in which the parent and the child domains have identical grid resolutions. The result of the idealized case shows very small RMS errors between the child and parent solutions. We plan to show results for a case with realistic tidal forcing in which the resolution of the child grid is three times that of the parent grid. The numerical results of this realistic case are compared to TPXO data.

  6. Scalar excursions in large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matheou, Georgios; Dimotakis, Paul E.

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics provides support for the main hypothesis of the current study that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization where the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursions increase as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.
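
    The two excursion measures discussed above can be computed directly from a scalar field. The sketch below assumes a passive scalar whose boundary values normalize it to [0, 1] and a uniform grid (so cell counts equal volume fractions); the names and these assumptions are illustrative, not taken from the paper.

        import numpy as np

        def excursion_stats(c, c_min=0.0, c_max=1.0):
            """Maximum unphysical excursion beyond the admissible range and the
            volume fraction of cells lying outside it (uniform cells assumed)."""
            below = np.clip(c_min - c, 0.0, None)   # distance below the lower bound
            above = np.clip(c - c_max, 0.0, None)   # distance above the upper bound
            max_excursion = max(below.max(), above.max())
            volume_fraction = np.mean((c < c_min) | (c > c_max))
            return max_excursion, volume_fraction

        # example: a bounded field contaminated with small numerical over/undershoots
        c = np.random.rand(64, 64, 64) + 0.01 * np.random.randn(64, 64, 64)
        print(excursion_stats(c))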

  7. Scalar excursions in large-eddy simulations

    DOE PAGES

    Matheou, Georgios; Dimotakis, Paul E.

    2016-08-31

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics provides support for the main hypothesis of the current study that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization where the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursions increase as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.

  8. Graphical tools for TV weather presentation

    NASA Astrophysics Data System (ADS)

    Najman, M.

    2010-09-01

    Contemporary meteorology and its media presentation face, in my opinion, the following key tasks: - delivering the meteorological information to the end user/spectator in an understandable and modern fashion that follows the industry standard of video output (HD, 16:9); - besides weather icons, also showing the outputs of numerical weather prediction models, climatological data, satellite and radar images, and observed weather, as current as possible; - not compromising the accuracy of the presented data; - the ability to prepare and adjust the weather show according to the actual synoptic situation; - the ability to refocus and completely adjust the weather show to actual extreme weather events; - a ground-map resolution for the presented weather data of at least 20 m/pixel, to be able to follow the numerical weather prediction model resolution; - the ability to switch between different numerical weather prediction models each day, each show, or even in the middle of one weather show; - graphical weather software that is flexible and fast, with graphical changes implementable and airable within minutes before the show, or even live. These tasks are so demanding that the usual approach of custom graphics could not deal with them: it was not able to change the show every day, and the shows were static and identical day after day. Changing the content of the weather show daily was costly and most of the time impossible with the usual approach. The development in this area is fast, though, and there are several options for weather-predicting organisations, such as national meteorological offices and private meteorological companies, to solve this problem. What are the ways to solve it? What are the limitations and advantages of contemporary graphical tools for meteorologists? All these questions will be answered.

  9. Evaluation of the UnTRIM model for 3-D tidal circulation

    USGS Publications Warehouse

    Cheng, R.T.; Casulli, V.; ,

    2001-01-01

    A family of numerical models, known as the TRIM models, shares the same modeling philosophy for solving the shallow water equations. A characteristic analysis of the shallow water equations points out that the numerical instability is controlled by the gravity wave terms in the momentum equations and by the transport terms in the continuity equation. A semi-implicit finite-difference scheme has been formulated so that these terms and the vertical diffusion terms are treated implicitly and the remaining terms explicitly, to control the numerical stability; the computations are carried out over a uniform finite-difference computational mesh without invoking horizontal or vertical coordinate transformations. An unstructured-grid version of the TRIM model, UnTRIM (pronounced "you trim"), is introduced, which preserves these basic numerical properties and the modeling philosophy; only the computations are carried out over an unstructured orthogonal grid. The unstructured grid offers flexibility in representing complex study areas, so that fine grid resolution can be placed in regions of interest and coarse grids are used to cover the remaining domain. Thus, the computational effort is concentrated in areas of importance, and an overall computational saving can be achieved because the total number of grid points is dramatically reduced. To use this modeling approach, an unstructured grid mesh must be generated that properly reflects the properties of the domain of investigation. The new modeling flexibility in grid structure is accompanied by new challenges associated with grid generation. To take full advantage of this new model flexibility, model grid generation should be guided by insight into the physics of the problem, and the insight needed may require a higher degree of modeling skill.
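
    The semi-implicit idea can be sketched for a one-dimensional, depth-averaged analogue (an illustration of the general approach only, not the actual TRIM/UnTRIM discretization, which also treats vertical diffusion implicitly on a three-dimensional staggered grid):

        \[
        \frac{u_i^{\,n+1}-u_i^{\,n}}{\Delta t}
        = -\,g\,\frac{\eta_{i+1}^{\,n+1}-\eta_i^{\,n+1}}{\Delta x} + E_i^{\,n},
        \qquad
        \frac{\eta_i^{\,n+1}-\eta_i^{\,n}}{\Delta t}
        = -\,\frac{H_i u_i^{\,n+1}-H_{i-1}u_{i-1}^{\,n+1}}{\Delta x},
        \]

    where the free-surface gradient in the momentum equation and the transport term in the continuity equation are taken at the new time level, removing the gravity-wave time-step restriction, while \(E^{\,n}\) collects the explicitly treated terms; eliminating \(u^{\,n+1}\) gives a well-conditioned linear system for the new free-surface elevation.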

  10. Advanced Turbulence Modeling Concepts

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing

    2005-01-01

    The ZCET program at NASA Glenn Research Center studies hydrogen/air injection concepts for aircraft gas turbine engines that meet conventional gas turbine performance levels and provide low levels of harmful NOx emissions. A CFD study for the ZCET program has been successfully carried out. It uses the most recently enhanced National Combustion Code (NCC) to perform CFD simulations for two configurations of hydrogen fuel injectors (the GRC and Sandia injectors). The results can be used to assist experimental studies in providing quick-mixing, low-emission, and high-performance fuel injector designs. The work started with the configuration of the single-hole injector. The computational models were taken from the experimental designs. For example, the GRC single-hole injector consists of one air tube (0.78 inches long and 0.265 inches in diameter) and two hydrogen tubes (0.3 inches long and 0.0226 inches in diameter, opposed at 180 degrees). The hydrogen tubes are located 0.3 inches upstream from the exit of the air element (the inlet location for the combustor). For the simulation, the single-hole injector is connected to a combustor model (8.16 inches long and 0.5 inches in diameter). The inlet conditions for the air and hydrogen elements are defined according to the actual experimental designs. The two crossing hydrogen/air jets are simulated in detail in the injector. The cold flow, reacting flow, flame temperature, combustor pressure, and possible flashback phenomena are studied. Two grid resolutions of the numerical model have been adopted: the first computational grid contains 0.52 million elements, the second over 1.3 million elements. The CFD results show only about a 5% difference between the two grid resolutions; therefore, the result obtained on the 1.3-million-element grid can be considered a grid-independent numerical solution. The turbulence models built into NCC are consolidated and well tested. They can handle both coarse and fine grids near the wall, and they can model the effect of anisotropy of the turbulent stresses and the effect of swirling. Both the Magnussen chemical reaction model and the ILDM method were used in this study.

  11. The upwind control volume scheme for unstructured triangular grids

    NASA Technical Reports Server (NTRS)

    Giles, Michael; Anderson, W. Kyle; Roberts, Thomas W.

    1989-01-01

    A new algorithm for the numerical solution of the Euler equations is presented. This algorithm is particularly suited to the use of unstructured triangular meshes, allowing geometric flexibility. Solutions are second-order accurate in the steady state. Implementation of the algorithm requires minimal grid connectivity information, resulting in modest storage requirements, and should enhance the implementation of the scheme on massively parallel computers. A novel form of upwind differencing is developed, and is shown to yield sharp resolution of shocks. Two new artificial viscosity models are introduced that enhance the performance of the new scheme. Numerical results for transonic airfoil flows are presented, which demonstrate the performance of the algorithm.

  12. A general-purpose computer program for studying ultrasonic beam patterns generated with acoustic lenses

    NASA Technical Reports Server (NTRS)

    Roberti, Dino; Ludwig, Reinhold; Looft, Fred J.

    1988-01-01

    A 3-D computer model of a piston radiator with lenses for focusing and defocusing is presented. To achieve high-resolution imaging, the frequency of the transmitted and received ultrasound must be as high as 10 MHz. Current ultrasonic transducers produce an extremely narrow beam at these high frequencies and thus are not appropriate for imaging schemes such as synthetic-aperture focus techniques (SAFT). Consequently, a numerical analysis program has been developed to determine field intensity patterns that are radiated from ultrasonic transducers with lenses. Lens shapes are described and the field intensities are numerically predicted and compared with experimental results.
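
    Field patterns of a baffled piston source of the kind modeled here are classically obtained from the Rayleigh integral; the form below is the standard one, and treating the lens as an aperture phase term is an assumption of this sketch rather than a statement about the program's internals:

        \[
        p(\mathbf{r}) = \frac{j\,\omega\,\rho_0}{2\pi}
        \int_S \frac{v_n(\mathbf{r}_s)\, e^{-jk\,|\mathbf{r}-\mathbf{r}_s|}}{|\mathbf{r}-\mathbf{r}_s|}\, dS,
        \]

    where \(v_n\) is the normal velocity over the radiating face S, \(k=\omega/c\) is the wavenumber, and \(\rho_0\) the density of the medium; a focusing or defocusing lens can be represented by multiplying \(v_n\) by a position-dependent phase delay before the integral is evaluated numerically.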

  13. 3D Numerical Rift Modeling with Application to the East African Rift System

    NASA Astrophysics Data System (ADS)

    Glerum, A.; Brune, S.; Naliboff, J.

    2017-12-01

    As key components of plate tectonics, continental rifting and the formation of passive margins have been extensively studied with both analogue models and numerical techniques. Only recently, however, have technical advances enabled numerical investigations of rift evolution in three dimensions, which is required to include processes that cause rift-parallel variability, such as structural inheritance and oblique extension (Brune 2016). We use the massively parallel finite element code ASPECT (Kronbichler et al. 2012; Heister et al. 2017) to investigate rift evolution. ASPECT's adaptive mesh refinement enables us to focus resolution on the regions of interest (i.e. the rift center), while leaving other areas such as the asthenospheric mantle at coarse resolution, leading to kilometer-scale local mesh resolution in 3D. Furthermore, we implemented plastic and viscous strain weakening of the nonlinear viscoplastic rheology, which is required to develop asymmetric rift geometries (e.g. Huismans and Beaumont 2003). Additional plugins created for ASPECT allow us to specify initial temperature and composition conditions based on geophysical data (e.g. LITHO1.0, Pasyanos et al. 2014) or to prescribe more general along-strike variations in the initial strain seeding the rift. Employing the above functionality, we construct regional models of the East African Rift System (EARS), the world's largest currently active rift. As the EARS is characterized by both orthogonal and oblique rift sections, multi-phase extension histories, and both magmatic and amagmatic branches (e.g. Chorowicz 2005; Ebinger and Scholz 2011), it constitutes an extensive natural laboratory for our research into the 3D nature of continental rifting. References: Brune, S. (2016), in Plate boundaries and natural hazards, AGU Geophysical Monograph 219, J. C. Duarte and W. P. Schellart (Eds.). Chorowicz, J. (2005). J. Afr. Earth Sci., 43, 379-410. Ebinger, C. and Scholz, C. A. (2011), in Tectonics of Sedimentary Basins: Recent Advances, Wiley, C. Busby and A. Azor (Eds.). Heister et al. (2017). Geophys. J. Int., 210, 833-851. Huismans, R. S. and Beaumont, C. (2003). J. Geophys. Res., 108, B10, 2496. Kronbichler et al. (2012). Geophys. J. Int., 191, 12-29. Pasyanos et al. (2014). J. Geophys. Res., 119, 3, 2153-2173.
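
    Strain weakening of the kind mentioned above is commonly parameterized by reducing the plastic friction angle and cohesion (and, analogously, a viscous prefactor) linearly between two accumulated-strain thresholds; a typical form, not necessarily the exact parameterization used in these models, is

        \[
        \varphi(\varepsilon_p) = \varphi_0 - \left(\varphi_0 - \varphi_w\right)
        \min\!\left[1,\;\max\!\left(0,\;\frac{\varepsilon_p-\varepsilon_1}{\varepsilon_2-\varepsilon_1}\right)\right],
        \]

    where \(\varepsilon_p\) is the accumulated plastic strain, \(\varphi_0\) and \(\varphi_w\) are the initial and fully weakened friction angles, and the weakening is applied between strains \(\varepsilon_1\) and \(\varepsilon_2\); equivalent expressions govern the cohesion and the viscous strain weakening.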

  14. A two-dimensional kinematic dynamo model of the ionospheric magnetic field at Venus

    NASA Technical Reports Server (NTRS)

    Cravens, T. E.; Wu, D.; Shinagawa, H.

    1990-01-01

    The results of a high-resolution, two-dimensional, time-dependent, kinematic dynamo model of the ionospheric magnetic field of Venus are presented. Various one-dimensional models are considered and the two-dimensional model is then detailed. In this model, the two-dimensional magnetic induction equation, the magnetic diffusion-convection equation, is numerically solved using specified plasma velocities. Origins of the vertical velocity profile and of the horizontal velocities are discussed. It is argued that the basic features of the vertical magnetic field profile remain unaltered by horizontal flow effects and also that horizontal plasma flow can strongly affect the magnetic field for altitudes above 300 km.
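
    The magnetic diffusion-convection (induction) equation solved in such kinematic models, written here in its standard form for a prescribed plasma velocity v and, for simplicity, a spatially uniform magnetic diffusivity, is

        \[
        \frac{\partial \mathbf{B}}{\partial t}
        = \nabla \times \left(\mathbf{v} \times \mathbf{B}\right) + \eta\,\nabla^{2}\mathbf{B},
        \qquad \eta = \frac{1}{\mu_0\,\sigma},
        \]

    where the first term represents convection of the field by the ionospheric flow and the second its resistive diffusion; "kinematic" means that v is imposed rather than obtained from a momentum equation, which is the sense in which the abstract speaks of specified plasma velocities.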

  15. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
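
    The essence of the construction is to map each offshore condition in a long record onto the archived scenario(s) that best match it and read off the stored inshore response. The sketch below shows only the simplest nearest-scenario variant; the published method is probabilistic, with uncertainties derived from the statistical variability of the scenarios, and all names here are illustrative.

        import numpy as np

        def construct_timeseries(offshore_obs, scenario_params, scenario_inshore):
            """Nearest-scenario lookup (illustrative simplification).
            offshore_obs     : (n_times, n_params) offshore Hs, Tp, direction, ...
            scenario_params  : (n_scen, n_params)  offshore parameters of each archived run
            scenario_inshore : (n_scen,)           stored inshore quantity, e.g. Hs at a point
            """
            # normalize each parameter so that distances are comparable
            mu, sd = scenario_params.mean(axis=0), scenario_params.std(axis=0)
            S = (scenario_params - mu) / sd
            O = (offshore_obs - mu) / sd
            # index of the closest archived scenario for every observation time
            idx = np.argmin(((O[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1), axis=1)
            return scenario_inshore[idx]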

  16. Lagrangian predictability characteristics of an Ocean Model

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia

    2014-11-01

    The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. To this end, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of the first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the mesoscale velocity-field contributions to pair-particle dispersion that are smoothed out by finite-resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of the MFS alone (i.e., without the subgrid model) indicate exponential separation; (b) with the subgrid model added, modeled pair dispersion becomes very close to the observed data, indicating that the KLM is effective in filling the mesoscale energy gap present in the MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology presented here can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
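
    The FSLE used here is commonly defined from the mean time \(\langle\tau_r(\delta)\rangle\) needed for the separation of a trajectory pair to grow from a scale \(\delta\) to \(r\delta\), with a fixed amplification factor \(r>1\) (definitions differ slightly between authors, so this is the generic form rather than necessarily the paper's exact one):

        \[
        \lambda(\delta) = \frac{\ln r}{\langle \tau_r(\delta) \rangle}.
        \]

    Richardson-like dispersion appears as \(\lambda(\delta) \propto \delta^{-2/3}\), whereas exponential separation gives a scale-independent plateau, which is how the observed and simulated regimes in result (a) are distinguished.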

  17. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.
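
    As one concrete example of the conservation statements mentioned above, flow in the variably saturated subsurface is usually described by the mixed form of the Richards equation (standard notation; individual catchment models differ in the exact form and in the coupling terms):

        \[
        \frac{\partial \theta(\psi)}{\partial t}
        = \nabla \cdot \left[\, K(\psi)\, \nabla\!\left(\psi + z\right) \right] + q,
        \]

    where \(\theta\) is the volumetric water content, \(\psi\) the pressure head, \(K(\psi)\) the unsaturated hydraulic conductivity, \(z\) the vertical coordinate, and \(q\) a source/sink term; its strong nonlinearity is one reason the robust numerical techniques emphasized in the text are required.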

  18. Breaking Gravity Waves Over Large-Scale Topography

    NASA Astrophysics Data System (ADS)

    Doyle, J. D.; Shapiro, M. A.

    2002-12-01

    The importance of mountain waves is underscored by the numerous studies that document their impact on the atmospheric momentum balance, turbulence generation, and the creation of severe downslope winds. As stably stratified air is forced to rise over topography, large-amplitude internal gravity waves may be generated that propagate vertically, amplify, and break down in the upper troposphere and lower stratosphere. Many of the numerical studies reported in the literature have used two- and three-dimensional models with simple, idealized initial states to examine gravity wave breaking. In spite of the extensive previous work, many questions remain regarding gravity wave breaking in the real atmosphere. Outstanding issues that are potentially important include: turbulent mixing and wave overturning processes, mountain wave drag, downstream effects, and the mesoscale predictability of wave breaking. The current limit in our knowledge of gravity wave breaking can be partially attributed to a lack of observations. During the Fronts and Atlantic Storm-Track Experiment (FASTEX), a large-amplitude gravity wave was observed in the lee of Greenland on 29 January 1997. Observations collected during FASTEX presented a unique opportunity to study topographically forced gravity wave breaking and to assess the ability of high-resolution numerical models to predict the structure and evolution of such phenomena. Measurements from the NOAA G-4 research aircraft and high-resolution numerical simulations are used to study the evolution and dynamics of the large-amplitude gravity wave event that took place during FASTEX. Vertical cross-section analysis of dropwindsonde data, with 50-km horizontal spacing, indicates the presence of a large-amplitude breaking gravity wave that extends from above the 150-hPa level down to 500 hPa. Flight-level data indicate a horizontal shear of over 10⁻³ s⁻¹ across the breaking wave, with 25 K potential temperature perturbations. This breaking wave may have important implications for momentum flux parameterization in mesoscale models, stratosphere-troposphere exchange dynamics, as well as the dynamic sources and sinks of the ozone budget. Additionally, frequent breaking waves over Greenland are a known commercial and military aviation hazard. NRL's nonhydrostatic COAMPS™ model is used with four nested grids, with horizontal resolutions of 45 km, 15 km, 5 km, and 1.67 km and 65 vertical levels, to simulate the gravity wave event. The model simulation captures the temporal evolution and horizontal structure of the wave; however, it underestimates the vertical amplitude of the wave. The simulation suggests that the breaking wave may be triggered as a consequence of vertically propagating internal gravity waves emanating from katabatic flow near the extreme slopes of eastern Greenland. Additionally, a number of simulations that make use of a horizontally homogeneous initial state and both idealized and actual Greenland topography are performed. These simulations highlight the sensitivity of gravity wave amplification and breaking to planetary rotation, the slope of the Greenland topography, the representation of turbulent mixing, and surface processes.

  19. Hydrodynamic Simulations of Giant Impacts

    NASA Astrophysics Data System (ADS)

    Reinhardt, Christian; Stadel, Joachim

    2013-07-01

    We studied the basic numerical aspects of giant impacts using Smoothed Particle Hydrodynamics (SPH), which has been used in most of the prior studies conducted in this area (e.g., Benz, Canup). Our main goal was to modify the massively parallel, multi-stepping code GASOLINE, widely used in cosmological simulations, so that it can properly simulate the behavior of condensed materials such as granite or iron using the Tillotson equation of state. GASOLINE has been used to simulate hundreds of millions of particles with ideal-gas physics, so using several million particles in condensed-material simulations seems possible. In order to focus our attention on the numerical aspects of the problem, we neglected the internal structure of the protoplanets and modelled them as homogeneous (isothermal) granite spheres. For the energy balance we considered only PdV work and shock heating of the material during the impact (neglecting cooling of the material). Starting at a low resolution of 2048 particles for the target and the impactor, we ran several simulations for different impact parameters and impact velocities and successfully reproduced the main features of the pioneering work of Benz from 1986. The impact sends a shock wave through both bodies, heating the target and disrupting the remaining impactor. As in prior simulations, material is ejected from the collision; how much, and whether it leaves the system or survives in orbit for a longer time, depends on the initial conditions but also on the resolution. Increasing the resolution (to 1.2x10⁶ particles) results in a much clearer shock wave and deformation of the bodies during the impact, and in a more compact and detailed arm-like structure of the ejected material. Currently we are investigating some numerical issues we encountered and are implementing differentiated models, moving one step closer to more realistic protoplanets in such giant impact simulations.
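
    In standard SPH of the kind used here, the fluid is discretized into particles whose density and acceleration follow from kernel-weighted sums over neighbours (textbook form; GASOLINE's exact symmetrization and artificial-viscosity switches may differ):

        \[
        \rho_i = \sum_j m_j\, W\!\left(|\mathbf{r}_i - \mathbf{r}_j|,\, h\right),
        \qquad
        \frac{d\mathbf{v}_i}{dt} = -\sum_j m_j
        \left(\frac{P_i}{\rho_i^{2}} + \frac{P_j}{\rho_j^{2}} + \Pi_{ij}\right)\nabla_i W_{ij},
        \]

    where W is the smoothing kernel with length h, \(\Pi_{ij}\) is the artificial viscosity that captures shocks, and the pressures \(P_i\) are evaluated from the Tillotson equation of state for the condensed material.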

  20. Exploring the nearshore marine wind profile from field measurements and numerical hindcast

    NASA Astrophysics Data System (ADS)

    del Jesus, F.; Menendez, M.; Guanche, R.; Losada, I.

    2012-12-01

    Wind power is the predominant offshore renewable energy resource. In recent years, offshore wind farms have become a technically feasible source of electrical power. The economic feasibility of offshore wind farms depends on the quality of the offshore wind conditions compared to that of onshore sites: installation and maintenance costs must be balanced against more hours and a higher quality of the available resource. European offshore wind development has revealed that the optimum offshore sites are those at a limited distance from the coast with a high available resource. Due to the growth in the height of the turbines and the complexity of the coast, with interactions between inland wind/coastal orography and ocean winds, there is a need for field measurements and for validation of numerical models to understand the marine wind profile near the coast. Moreover, recent studies have pointed out that the logarithmic law describing the vertical wind profile presents limitations. The aim of this work is to characterize the nearshore vertical wind profile in the medium atmosphere boundary layer. Instrumental observations analyzed in this work come from the Idermar project (www.Idermar.es). Three floating masts deployed at different locations on the Cantabrian coast provide wind measurements from heights of 20 to 90 meters. Wind speed and direction are measured, as well as several meteorological variables at different heights of the profile. The shortest wind time series has over one year of data. A 20-year high-resolution atmospheric hindcast, using the WRF-ARW model and focusing on hourly offshore wind fields, is also analyzed. Two datasets have been evaluated: a European reanalysis with a ~15 km spatial resolution, and a hybrid downscaling of wind fields with a spatial resolution of one nautical mile over the northern coast of Spain. These numerical hindcasts have been validated against field measurement data. Several parameterizations of the vertical wind profile are evaluated and, based on this work, a particular parameterization of the wind profile is proposed.
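
    The logarithmic law referred to above, including the usual stability correction, reads (standard surface-layer form; the alternative profile parameterizations evaluated in this work are not specified in the abstract):

        \[
        U(z) = \frac{u_*}{\kappa}\left[\ln\!\left(\frac{z}{z_0}\right) - \psi_m\!\left(\frac{z}{L}\right)\right],
        \]

    with friction velocity \(u_*\), von Karman constant \(\kappa \approx 0.4\), roughness length \(z_0\), Obukhov length \(L\), and stability function \(\psi_m\) (zero in neutral conditions); since it is strictly a surface-layer result, deviations may be expected at the 20-90 m measurement heights considered here.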
