Sample records for resolution simulations application

  1. Development of the GEOS-5 Atmospheric General Circulation Model: Evolution from MERRA to MERRA2.

    NASA Technical Reports Server (NTRS)

    Molod, Andrea; Takacs, Lawrence; Suarez, Max; Bacmeister, Julio

    2014-01-01

    The Modern-Era Retrospective Analysis for Research and Applications-2 (MERRA2) version of the GEOS-5 (Goddard Earth Observing System Model - 5) Atmospheric General Circulation Model (AGCM) is currently in use in the NASA Global Modeling and Assimilation Office (GMAO) at a wide range of resolutions for a variety of applications. Details of the changes in parameterizations subsequent to the version in the original MERRA reanalysis are presented here. Results of a series of atmosphere-only sensitivity studies are shown to demonstrate changes in simulated climate associated with specific changes in physical parameterizations, and the impact of the newly implemented resolution-aware behavior on simulations at different resolutions is demonstrated. The GEOS-5 AGCM presented here is the model used as part of the GMAO's MERRA2 reanalysis, the global mesoscale "nature run", the real-time numerical weather prediction system, and for atmosphere-only, coupled ocean-atmosphere and coupled atmosphere-chemistry simulations. The seasonal mean climate of the MERRA2 version of the GEOS-5 AGCM represents a substantial improvement over the simulated climate of the MERRA version at all resolutions and for all applications. Fundamental improvements in simulated climate are associated with the increased re-evaporation of frozen precipitation and cloud condensate, resulting in a wetter atmosphere. Improvements in simulated climate are also shown to be attributable to changes in the background gravity wave drag, and to upgrades in the relationship between the ocean surface stress and the ocean roughness. The series of "resolution-aware" parameters related to the moist physics was shown to result in improvements at higher resolutions and to produce AGCM simulations that exhibit seamless behavior across different resolutions and applications.

  2. Simulating the x-ray image contrast to setup techniques with desired flaw detectability

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2015-04-01

    The paper provides simulation data from the author's previous work on developing a model for estimating detectability of crack-like flaws in radiography. The methodology is developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  3. UWB Tracking Algorithms: AOA and TDOA

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David; Arndt, D.; Ngo, P.; Gross, J.; Refford, Melinda

    2006-01-01

    Ultra-Wideband (UWB) tracking prototype systems are currently under development at NASA Johnson Space Center for various applications in space exploration. For long range applications, a two-cluster Angle of Arrival (AOA) tracking method is employed for implementation of the tracking system; for close-in applications, a Time Difference of Arrival (TDOA) positioning methodology is exploited. Both AOA and TDOA are chosen to utilize the achievable fine time resolution of UWB signals. This talk presents a brief introduction to AOA and TDOA methodologies. The theoretical analysis of these two algorithms reveals how the relevant parameters affect the tracking resolution. For the AOA algorithm, simulations show that a tracking resolution less than 0.5% of the range can be achieved with the current achievable time resolution of UWB signals. For the TDOA algorithm used in close-in applications, simulations show that a high (sub-inch) tracking resolution is achieved with a chosen tracking baseline configuration. The analytical and simulated results provide insightful guidance for the UWB tracking system design.

  4. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    NASA Astrophysics Data System (ADS)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains at varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
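
    The skill metric cited in this record, the Nash-Sutcliffe efficiency, compares simulated values against observations relative to the observed mean. A minimal sketch of how such a score is computed is given below; the arrays are hypothetical stand-ins, not the CaMa-Flood results.

      import numpy as np

      def nash_sutcliffe(simulated, observed):
          # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2);
          # 1.0 is a perfect match, 0.0 is no better than the observed mean.
          simulated = np.asarray(simulated, dtype=float)
          observed = np.asarray(observed, dtype=float)
          return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

      # Hypothetical daily water levels from a coarse and a fine run against one gauge.
      obs = np.array([2.1, 2.4, 3.0, 3.8, 4.1, 3.9, 3.2])
      sim_coarse = np.array([2.0, 2.3, 2.9, 3.6, 4.0, 3.8, 3.1])
      sim_fine = np.array([2.3, 2.8, 3.6, 4.6, 4.9, 4.5, 3.7])
      print(nash_sutcliffe(sim_coarse, obs), nash_sutcliffe(sim_fine, obs))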

  5. Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2015-01-01

    The paper provides simulation data from the author's previous work on developing a model for estimating detectability of crack-like flaws in radiography. The methodology is being developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  6. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency, Active and Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Anagnostou, Emmanouil N.; Olson, William S.; Starr, David OC. (Technical Monitor)

    2002-01-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated using both simulated and airborne data. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) satellite multi-frequency active and passive observations. These observations are characterized by various spatial and sampling resolutions. This makes the retrieval problem mathematically more difficult and ill-determined because the quality of information decreases with decreasing resolution. A model that, given reflectivity profiles and a small set of parameters (including the cloud water content, the drop size distribution intercept, and a variable describing the frozen hydrometeor properties), simulates high-resolution brightness temperatures is used. The high-resolution simulated brightness temperatures are convolved to the real sensor resolution. An optimal estimation procedure is used to minimize the differences between simulated and observed brightness temperatures. The retrieval technique is investigated using cloud model synthetic and airborne data from the Fourth Convection And Moisture Experiment. Simulated high-resolution brightness temperatures and reflectivities, along with the airborne observations, are convolved to the resolution of the TRMM instruments, and retrievals are performed and analyzed relative to the reference data used in the observation synthesis. An illustration of the possible use of the technique in satellite rainfall estimation is presented through an application to TRMM data. The study suggests improvements in combined active and passive retrievals even when the instruments' resolutions are significantly different. Future work needs to better quantify the retrieval performance, especially in connection with satellite applications, and the uncertainty of the models used in retrieval.
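
    The optimal estimation step mentioned in this record is not spelled out in the abstract; in its generic form (assumed here, not quoted from the paper), the retrieved state x minimizes a cost function of the kind

      J(\mathbf{x}) = [\mathbf{y} - F(\mathbf{x})]^{\mathsf T}\, \mathbf{S}_{\varepsilon}^{-1}\, [\mathbf{y} - F(\mathbf{x})] + (\mathbf{x} - \mathbf{x}_a)^{\mathsf T}\, \mathbf{S}_a^{-1}\, (\mathbf{x} - \mathbf{x}_a)

    where y holds the observed brightness temperatures, F(x) the simulated brightness temperatures convolved to the sensor resolution, x_a an a priori state, and S_ε, S_a the observation and a priori error covariance matrices.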

  7. A new method to assess the added value of high-resolution regional climate simulations: application to the EURO-CORDEX dataset

    NASA Astrophysics Data System (ADS)

    Soares, P. M. M.; Cardoso, R. M.

    2017-12-01

    Regional climate models (RCMs) are used at increasingly fine resolutions in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacing approaching the convective-permitting regimes are representative examples. The climate runs are computationally very demanding and do not always show improvements. These depend on the region, variable and object of study. The gains or losses associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different resolution RCM simulations, are known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer resolution simulations, in comparison to their forcing data or coarser resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between diverse resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons for precipitation. When both RCM resolutions are directly compared, the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia. When looking at the extreme precipitation PDF tail, the improvement at the higher resolution is generally greater than at the lower resolution across seasons and regions. For temperature, the added value is smaller. Acknowledgments: The authors wish to acknowledge the SOLAR (PTDC/GEOMET/7078/2014) and FCT UID/GEO/50019/2013 (Instituto Dom Luiz) projects.
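
    A minimal sketch of a PDF-matching skill score and an added-value ratio built from it is given below; the binning, the synthetic samples, and the exact normalization are assumptions for illustration, not taken from the paper.

      import numpy as np

      def pdf_skill(model, obs, bins):
          # Common area of the two empirical PDFs (1.0 = identical distributions).
          pm, _ = np.histogram(model, bins=bins, density=True)
          po, _ = np.histogram(obs, bins=bins, density=True)
          return float(np.sum(np.minimum(pm, po) * np.diff(bins)))

      def distribution_added_value(hi_res, lo_res, obs, bins):
          # Relative gain of the high-resolution run over the low-resolution run,
          # both scored against the same observations (positive = added value).
          s_hi, s_lo = pdf_skill(hi_res, obs, bins), pdf_skill(lo_res, obs, bins)
          return (s_hi - s_lo) / s_lo

      # Hypothetical daily precipitation samples scored against the same observations.
      rng = np.random.default_rng(1)
      obs = rng.gamma(2.0, 2.0, size=5000)
      lo = rng.gamma(2.0, 2.4, size=5000)        # coarse run, too wet
      hi = rng.gamma(2.0, 2.1, size=5000)        # fine run, closer to obs
      edges = np.linspace(0.0, 40.0, 81)
      print(distribution_added_value(hi, lo, obs, edges))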

  8. A comparative study via Monte Carlo simulation of new inorganic scintillator Cs2HfCl6 for applications in nuclear medicine, security and defense, and astrophysics

    NASA Astrophysics Data System (ADS)

    Chen, Henry; Raby, Paul

    2016-09-01

    Cs2HfCl6 (CHC) is one of the most promising recently discovered inorganic single-crystal scintillators: it has high light output, is non-hygroscopic, exhibits no self-activity, and offers an energy resolution significantly better than NaI(Tl), even approaching that of LaBr3, yet can also potentially be produced at a much lower cost than LaBr3. This study uses Monte Carlo simulation to examine the potential offered by this new scintillator. CHC's detector performance is compared via simulation with that of four typical existing scintillators of the same size and with the same PMT readout. Two halide scintillators (NaI(Tl) and LaBr3) and two oxide scintillators (GSO and LSO) were used in this simulation to compare their 122 keV and 511 keV gamma responses with those of CHC, with both spectroscopy and imaging applications in mind. Initial simulation results are very promising and consistent with reported experimental measurements. Besides detector energy resolution, image-quality parameters commonly used to characterize imaging detectors in nuclear medicine, such as the light response function (LRF), which parallels spatial resolution, and simulated position spectra are also presented and discussed.

  9. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full physics configurations to evaluate variable mesh simulations against uniform high and uniform low resolution simulations at resolutions down to 15km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large scale condensation, as expected from the closure methods. The resulting full physics model produces a climate comparable to the global low resolution mesh and similar high frequency statistics in the high resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high resolution results, making variable-resolution meshes an effective tool for regional climate studies; these configurations are available in CESM2.

  10. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
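
    The general idea in this record, emulating an expensive high-resolution model from a perturbed-parameter ensemble dominated by cheap low-resolution runs, can be sketched as follows; the feature layout (parameters plus the low-resolution output) and the synthetic data are illustrative assumptions, not the authors' setup.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      # Hypothetical perturbed-parameter ensemble: each row is one parameter setting;
      # y_lo and y_hi stand in for low- and high-resolution model outputs
      # (e.g. global-mean top-of-atmosphere energy flux).
      params = rng.uniform(size=(200, 5))
      y_lo = params @ np.array([1.0, -0.5, 0.2, 0.0, 0.3]) + 0.05 * rng.normal(size=200)
      y_hi = 1.1 * y_lo + 0.1 * params[:, 0] + 0.02 * rng.normal(size=200)

      # Only a small subset of members is actually run at high resolution.
      hi_idx = rng.choice(200, size=20, replace=False)
      features = np.column_stack([params, y_lo])   # parameters + cheap low-res output

      forest = RandomForestRegressor(n_estimators=200, random_state=0)
      forest.fit(features[hi_idx], y_hi[hi_idx])
      print(forest.predict(features[:5]))          # emulated high-resolution output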

  11. Effect of elevation resolution on evapotranspiration simulations using MODFLOW.

    PubMed

    Kambhammettu, B V N P; Schmid, Wolfgang; King, James P; Creel, Bobby J

    2012-01-01

    Surface elevations represented in MODFLOW head-dependent packages are usually derived from digital elevation models (DEMs) that are available at much higher resolution. Conventional grid refinement techniques for simulating the model at DEM resolution increase computational time and input file size, and in many cases are not feasible for regional applications. This research aims at utilizing the increasingly available high resolution DEMs for effective simulation of evapotranspiration (ET) in MODFLOW as an alternative to grid refinement techniques. The source code of the evapotranspiration package is modified to account for the effect of variability in elevation data on ET estimates for a fixed MODFLOW grid resolution and for different DEM resolutions. Piezometric head at each DEM cell location is corrected by considering the gradient along row and column directions. Applicability of the research is tested for the lower Rio Grande (LRG) Basin in southern New Mexico. The DEM at 10 m resolution is aggregated to resampled DEM grid resolutions which are integer multiples of the MODFLOW grid resolution. Cumulative outflows and ET rates are compared at different coarse resolution grids. The analysis concludes that variability in depth-to-groundwater within the MODFLOW cell is a major contributing parameter to ET outflows in shallow groundwater regions. DEM aggregation methods for the LRG Basin have resulted in decreased volumetric outflow due to the formation of a smoothing error, which lowered the position of the water table to a level below the extinction depth.
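
    The head correction described in this record amounts to a first-order extrapolation from the cell center to each DEM cell; a generic form (symbols assumed here, not taken from the paper) is

      h_{\mathrm{DEM}}(x,y) \approx h_{\mathrm{cell}} + \frac{\partial h}{\partial x}(x - x_c) + \frac{\partial h}{\partial y}(y - y_c), \qquad d(x,y) = z_{\mathrm{DEM}}(x,y) - h_{\mathrm{DEM}}(x,y)

    where h_cell is the simulated head at the MODFLOW cell center (x_c, y_c), the gradients are estimated along the row and column directions from neighboring cells, z_DEM is the DEM elevation, and d is the depth to groundwater entering the ET calculation.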

  12. Validation of a Monte Carlo simulation of the Inveon PET scanner using GATE

    NASA Astrophysics Data System (ADS)

    Lu, Lijun; Zhang, Houjin; Bian, Zhaoying; Ma, Jianhua; Feng, Qiangjin; Chen, Wufan

    2016-08-01

    The purpose of this study is to validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation toolkit for modeling the performance characteristics of the Siemens Inveon small-animal PET system. The simulation results were validated against experimental/published data in accordance with the NEMA NU-4 2008 protocol for standardized evaluation of spatial resolution, sensitivity, scatter fraction (SF) and noise equivalent counting rate (NECR) of a preclinical PET system. An agreement of less than 18% was obtained between the radial, tangential and axial spatial resolutions of the simulated and experimental results. The simulated peak NECR of the mouse-size phantom agreed with the experimental result, while for the rat-size phantom the simulated value was higher than the experimental result. The simulated and experimental SFs of the mouse- and rat-size phantoms both agreed to within 2%. These results demonstrate the feasibility of our GATE model to accurately simulate, within certain limits, all major performance characteristics of the Inveon PET system.

  13. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T.; Reynolds, Jr., Paul F.; Emanuel, William R.

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  14. Application of Geostatistical Simulation to Enhance Satellite Image Products

    NASA Technical Reports Server (NTRS)

    Hlavka, Christine A.; Dungan, Jennifer L.; Thirulanambi, Rajkumar; Roy, David

    2004-01-01

    With the deployment of Earth Observing System (EOS) satellites that provide daily, global imagery, there is increasing interest in defining the limitations of the data and derived products due to their coarse spatial resolution. Much of the detail, i.e. small fragments and notches in boundaries, is lost with coarse resolution imagery such as the EOS MODerate-Resolution Imaging Spectroradiometer (MODIS) data. Higher spatial resolution data such as the EOS Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Landsat and airborne sensor imagery provide more detailed information but are less frequently available. There is, however, both theoretical and analytical evidence that burn scars and other fragmented types of land covers form self-similar or self-affine patterns, that is, patterns that look similar when viewed at widely differing spatial scales. Therefore small features of the patterns should be predictable, at least in a statistical sense, with knowledge about the large features. Recent developments in fractal modeling for characterizing the spatial distribution of undiscovered petroleum deposits are thus applicable to generating simulations of finer resolution satellite image products. We will present example EOS products, analysis to investigate self-similarity, and simulation results.

  15. NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R825792)

    EPA Science Inventory

    General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...

  16. NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R827169)

    EPA Science Inventory

    General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...

  17. A fast mass spring model solver for high-resolution elastic objects

    NASA Astrophysics Data System (ADS)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells as cages through the mean value coordinate method to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which can make the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has great potential for applications in computer animation.
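
    The substitution described in this record, replacing the prefactored direct solve of the fast mass-spring global step with a conjugate gradient iteration, can be sketched as follows; the tridiagonal system is a stand-in for the real global-step matrix, which would come from the mesh and spring weights.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg, splu

      # Stand-in for the symmetric positive definite global-step matrix
      # (mass matrix plus h^2-weighted spring stiffness terms).
      n = 3000
      A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      # Direct factorization: cheap per solve, but costly to build for
      # high-resolution meshes (the role Cholesky plays in the original solver).
      x_direct = splu(A).solve(b)

      # Conjugate gradient: no factorization, matrix-free friendly, and easy to
      # parallelize (e.g. on a GPU), which is the swap the abstract describes.
      x_cg, info = cg(A, b)
      print(info, np.max(np.abs(x_cg - x_direct)))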

  18. The implementation of sea ice model on a regional high-resolution scale

    NASA Astrophysics Data System (ADS)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data and access to high-performance computing clusters has provided the capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with high resolution, were used to estimate the sensitivity of the model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) agreed with the simulation for the year 2010-2011.

  19. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, Kimberly A.

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  20. Simulation study of a high performance brain PET system with dodecahedral geometry.

    PubMed

    Tao, Weijie; Chen, Gaoyu; Weng, Fenghua; Zan, Yunlong; Zhao, Zhixiang; Peng, Qiyu; Xu, Jianfeng; Huang, Qiu

    2018-05-25

    In brain imaging, a spherical PET system achieves the highest sensitivity in terms of solid-angle coverage; however, it is not practical to build. In this work we designed an alternative sphere-like scanner, the dodecahedral scanner, which offers high imaging sensitivity and is feasible to manufacture. We simulated this system and compared its performance with a few other dedicated brain PET systems. Monte Carlo simulations were conducted to generate data for the dedicated brain PET system with the dodecahedral geometry (11 regular pentagonal detectors). The data were then reconstructed using the in-house developed software with the fully three-dimensional maximum-likelihood expectation maximization (3D-MLEM) algorithm. Results show that the proposed system has a high sensitivity distribution over the whole field of view (FOV). With a depth-of-interaction (DOI) resolution around 6.67 mm, the proposed system achieves a spatial resolution of 1.98 mm. Our simulation study also shows that the proposed system improves the image contrast and reduces noise compared with a few other dedicated brain PET systems. Finally, simulations with the Hoffman phantom show the potential of the proposed system in clinical applications. In conclusion, the proposed dodecahedral PET system has potential for widespread application in high-sensitivity, high-resolution PET imaging, allowing the injected dose to be lowered.
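
    The 3D-MLEM reconstruction named in this record uses the standard expectation-maximization update; written in the usual notation (assumed here, not quoted from the paper), each iteration updates the activity λ_j of voxel j from the measured coincidence counts y_i and the system matrix a_ij:

      \lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, \lambda_{j'}^{(k)}}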

  1. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.

  2. KINETIC ENERGY FROM SUPERNOVA FEEDBACK IN HIGH-RESOLUTION GALAXY SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Christine M.; Bryan, Greg L.; Ostriker, Jeremiah P.

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells’ mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (∼10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10⁹ M⊙ dwarf halo. We find that in high-density media (≳50 cm⁻³) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.

  3. A method for generating high resolution satellite image time series

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications. But it still is a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, much R&D effort has been ongoing for years and has led to some successes, roughly in two aspects: one includes super-resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and generate good visual effects but hardly preserve spectral signatures and thus offer limited analytical value; on the other hand, time interpolation is a straightforward way to increase temporal frequency, but in fact it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution data. Our method starts with a pair of high and low resolution data sets, and then a spatial registration is done by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data, and then projected onto the high resolution data plane and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for applications such as agriculture, forestry, land use investigation, and the environment.

  4. A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.

    2000-01-01

    The development of and results obtained with a variable-resolution stretched-grid GCM for the regional climate simulation mode are presented. The global variable-resolution stretched grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought. The straightforward (with no updates) two-month simulation is performed with 60 km regional resolution. The major drought fields, patterns, and characteristics, such as the time-averaged 500 hPa heights, precipitation, and the low-level jet over the drought area, appear to be close to the verifying analyses for the stretched-grid simulation. In other words, the stretched-grid GCM provides an efficient downscaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation extended to one year. The stretched-grid GCM, developed and tested in a simulation mode, is a viable tool for regional and subregional climate studies and applications.

  5. Assessment of the effects of horizontal grid resolution on long-term air quality trends using coupled WRF-CMAQ simulations

    EPA Pesticide Factsheets

    The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirement, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. The National Exposure Research Laboratory’s Atmospheric Modeling Division (AMAD) conducts research in support of EPA’s mission to protect human health and the environment.

  6. Comments on "Adaptive resolution simulation in equilibrium and beyond" by H. Wang and A. Agarwal

    NASA Astrophysics Data System (ADS)

    Klein, R.

    2015-09-01

    Wang and Agarwal (Eur. Phys. J. Special Topics, this issue, 2015, doi: 10.1140/epjst/e2015-02411-2) discuss variants of Adaptive Resolution Molecular Dynamics Simulations (AdResS), and their applications. Here we comment on their report, addressing scaling properties of the method, artificial forcings implemented to ensure constant density across the full simulation despite changing thermodynamic properties of the simulated media, the possible relation between an AdResS system on the one hand and a phase transition phenomenon on the other, and peculiarities of the SPC/E water model.

  7. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI

    DOE PAGES

    Baum, K. G.; Menezes, G.; Helguera, M.

    2011-01-01

    Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³ voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.

  8. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI.

    PubMed

    Baum, K G; Menezes, G; Helguera, M

    2011-01-01

    Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³ voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.
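
    Bloch equation-based simulators such as SIMRI, described in the two records above, integrate the Bloch equation for each isochromat of the object; in its standard form (stated here for context, not reproduced from the papers) the magnetization M evolves as

      \frac{d\mathbf{M}}{dt} = \gamma\, \mathbf{M} \times \mathbf{B} - \frac{M_x\,\hat{\mathbf{x}} + M_y\,\hat{\mathbf{y}}}{T_2} - \frac{(M_z - M_0)\,\hat{\mathbf{z}}}{T_1}

    where B is the total applied field (static, gradient, and RF), T1 and T2 are the relaxation times, and M0 is the equilibrium magnetization.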

  9. Microdome-grooved Gd(2)O(2)S:Tb scintillator for flexible and high resolution digital radiography.

    PubMed

    Jung, Phill Gu; Lee, Chi Hoon; Bae, Kong Myeong; Lee, Jae Min; Lee, Sang Min; Lim, Chang Hwy; Yun, Seungman; Kim, Ho Kyung; Ko, Jong Soo

    2010-07-05

    A flexible microdome-grooved Gd(2)O(2)S:Tb scintillator is simulated, fabricated, and characterized for digital radiography applications. According to Monte Carlo simulation results, the dome-grooved structure has a high spatial resolution, which is verified by X-ray image performance of the scintillator. The proposed scintillator has lower X-ray sensitivity than a nonstructured scintillator but almost two times higher spatial resolution at high spatial frequency. Through evaluation of the X-ray performance of the fabricated scintillators, we confirm that the microdome-grooved scintillator can be applied to next-generation flexible digital radiography systems requiring high spatial resolution.

  10. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer

    NASA Astrophysics Data System (ADS)

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-01

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.

  11. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer.

    PubMed

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-10

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.

  12. The GEOS-5 Atmospheric General Circulation Model: Mean Climate and Development from MERRA to Fortuna

    NASA Technical Reports Server (NTRS)

    Molod, Andrea; Takacs, Lawrence; Suarez, Max; Bacmeister, Julio; Song, In-Sun; Eichmann, Andrew

    2012-01-01

    This report is a documentation of the Fortuna version of the GEOS-5 Atmospheric General Circulation Model (AGCM). The GEOS-5 AGCM is currently in use in the NASA Global Modeling and Assimilation Office (GMAO) for simulations at a wide range of resolutions, in atmosphere only, coupled ocean-atmosphere, and data assimilation modes. The focus here is on the development subsequent to the version that was used as part of NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). We present here the results of a series of 30-year atmosphere-only simulations at different resolutions, with focus on the behavior of the 1-degree resolution simulation. The details of the changes in parameterizations subsequent to the MERRA model version are outlined, and results of a series of 30-year, atmosphere-only climate simulations at 2-degree resolution are shown to demonstrate changes in simulated climate associated with specific changes in parameterizations. The GEOS-5 AGCM presented here is the model used for the GMAO's atmosphere-only and coupled CMIP-5 simulations.

  13. Resolution requirements for aero-optical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mani, Ali; Wang Meng; Moin, Parviz

    2008-11-10

    Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation, and therefore, a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of aero-optical computation for an optical beam passing through the turbulent wake of flow over a cylinder.
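
    The quantities tied together in this record are, in the standard aero-optics formulation (assumed here, not taken from the report), the optical path length through the refractive index field and the far-field irradiance loss it implies:

      \mathrm{OPL}(x,y) = \int n(x,y,z)\, dz, \qquad \mathrm{OPD} = \mathrm{OPL} - \langle \mathrm{OPL} \rangle, \qquad \frac{I}{I_0} \approx \exp\!\left[-\left(\frac{2\pi\, \mathrm{OPD}_{\mathrm{rms}}}{\lambda}\right)^{2}\right]

    so an error in the resolved index field propagates into an optical path length (OPL) error and, through the large-aperture (Maréchal-type) approximation, into an error in the computed on-axis irradiance I/I0.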

  14. Review of ultraresolution (10-100 megapixel) visualization systems built by tiling commercial display components

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Haralson, David G.; Simpson, Matthew A.; Longo, Sam J.

    2002-08-01

    Ultra-resolution visualization systems are achieved by the technique of tiling many direct-view or projection displays. During the past few years, several such systems have been built from commercial electronics components (displays, computers, image generators, networks, communication links, and software). Civil applications driving this development have independently determined that they require images at 10-100 megapixel (Mpx) resolution to enable state-of-the-art research, engineering, design, stock exchanges, flight simulators, business information and enterprise control centers, education, art and entertainment. Military applications also press the art of the possible to improve the productivity of warfighters and lower the cost of providing for the national defense. The environment in some 80% of defense applications can be addressed by ruggedization of commercial components. This paper reviews the status of ultra-resolution systems based on commercial components and describes a vision for their integration into advanced yet affordable military command centers, simulator/trainers, and, eventually, crew stations in air, land, sea and space systems.

  15. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
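
    Spectral nudging as described in this record relaxes only the large scales of the simulated fields toward the driving analyses; a generic tendency equation (notation assumed here, not taken from the paper) is

      \frac{\partial \psi}{\partial t} = \mathcal{M}(\psi) - \frac{1}{\tau}\, \mathcal{L}\!\left(\psi - \psi_{\mathrm{drv}}\right)

    where M(ψ) is the model tendency, L is a low-pass spectral filter that retains only wavelengths longer than the chosen nudging length scale, ψ_drv is the driving analysis field, and τ is the relaxation time scale; the vertical profiles and temporal relaxations investigated in the study correspond to making τ and L height- and time-dependent.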

  16. Regional Climate Simulation and Data Assimilation with Variable-Resolution GCMs

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.

    2002-01-01

    Variable resolution GCMs using a global stretched grid (SG) with enhanced regional resolution over one or multiple areas of interest represent a viable new approach to regional climate/climate change and data assimilation studies and applications. The multiple areas of interest, at least one within each global quadrant, include the major global mountains and major global monsoonal circulations over North America, South America, India-China, and Australia. They also can include the polar domains, and the European and African regions. The SG-approach provides an efficient regional downscaling to mesoscales, and it is an ideal tool for representing consistent interactions of global/large and regional/meso scales while preserving the high quality of global circulation. Basically, the SG-GCM simulations are no different from those of the traditional uniform-grid GCM simulations besides using a variable-resolution grid. Several existing SG-GCMs developed by major centers and groups are briefly described. The major discussion is based on the GEOS (Goddard Earth Observing System) SG-GCM regional climate simulations.

  17. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. This time-consumption problem affects applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km² watershed on one CPU with a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  18. Data for Figures and Tables in Journal Article Assessment of the Effects of Horizontal Grid Resolution on Long-Term Air Quality Trends using Coupled WRF-CMAQ Simulations, doi:10.1016/j.atmosenv.2016.02.036

    EPA Pesticide Factsheets

    The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirement, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. This dataset is associated with the following publication

  19. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    NASA Technical Reports Server (NTRS)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
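
    A minimal sketch of the Jacobi-preconditioned conjugate gradient iteration proposed in the abstract, in dense NumPy form. It omits the surface domain decomposition, the reduced system, and any FPGA parallelism; the small tridiagonal test system is purely illustrative.

```python
# Minimal Jacobi-preconditioned conjugate gradient sketch (dense NumPy version).
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner: inverse diagonal
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: a small symmetric positive definite, stiffness-like system.
n = 200
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)
x = jacobi_pcg(A, b)
print(np.linalg.norm(A @ x - b))         # residual norm, should be ~1e-8 or smaller
```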

  20. Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector

    NASA Astrophysics Data System (ADS)

    Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2014-02-01

    A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array, with a 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate, with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source of 250 μm in diameter was imaged by this system. Experiments show that the image resolution of single-pixel photopeak events was 590 μm FWHM while the image resolution of double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation for the same PET system in the Geant4 Application for Tomographic Emission (GATE). We defined LSO detectors as a scanner ring and 350 μm pixelated CZT detectors as an insert ring. GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of MC-simulated data (which did not factor in positron range and acolinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of experimental data, MC-simulated data, and theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. The interpolation algorithm for charge-sharing events was also investigated. The PET image reconstructed using the interpolation algorithm shows improved image resolution compared with the image reconstructed without it.

  1. Fusing Unmanned Aerial Vehicle Imagery with High Resolution Hydrologic Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Pierini, N.; Schreiner-McGraw, A.; Anderson, C.; Saripalli, S.; Rango, A.

    2013-12-01

    After decades of development and applications, high resolution hydrologic models are now common tools in research and increasingly used in practice. More recently, high resolution imagery from unmanned aerial vehicles (UAVs) that provides information on land surface properties has become available for civilian applications. Fusing the two approaches promises to significantly advance the state-of-the-art in terms of hydrologic modeling capabilities. This combination will also challenge assumptions on model processes, parameterizations and scale as land surface characteristics (~0.1 to 1 m) may now surpass traditional model resolutions (~10 to 100 m). Ultimately, predictions from high resolution hydrologic models need to be consistent with the observational data that can be collected from UAVs. This talk will describe our efforts to develop, utilize and test the impact of UAV-derived topographic and vegetation fields on the simulation of two small watersheds in the Sonoran and Chihuahuan Deserts at the Santa Rita Experimental Range (Green Valley, AZ) and the Jornada Experimental Range (Las Cruces, NM). High resolution digital terrain models, image orthomosaics and vegetation species classification were obtained from a fixed wing airplane and a rotary wing helicopter, and compared to coarser analyses and products, including Light Detection and Ranging (LiDAR). We focus the discussion on the relative improvements achieved with UAV-derived fields in terms of terrain-hydrologic-vegetation analyses and summer season simulations using the TIN-based Real-time Integrated Basin Simulator (tRIBS) model. Model simulations are evaluated at each site with respect to a high-resolution sensor network consisting of six rain gauges, forty soil moisture and temperature profiles, four channel runoff flumes, a cosmic-ray soil moisture sensor and an eddy covariance tower over multiple summer periods. We also discuss prospects for the fusion of high resolution models with novel observations from UAVs, including synthetic aperture radar and multispectral imagery.

  2. Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis

    PubMed Central

    Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2017-01-01

    Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066

  3. APPLYING THE PATUXENT LANDSCAPE UNIT MODEL TO HUMAN DOMINATED ECOSYSTEMS: THE CASE OF AGRICULTURE. (R827169)

    EPA Science Inventory

    Non-spatial dynamics are core to landscape simulations. Unit models simulate system interactions aggregated within one space unit of resolution used within a spatial model. For unit models to be applicable to spatial simulations they have to be formulated in a general enough w...

  4. Simulating the directional, spectral and textural properties of a large-scale scene at high resolution using a MODIS BRDF product

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan; Goodenough, Adam A.; Schott, John R.

    2016-10-01

    Many remote sensing applications rely on simulated scenes to perform complex interaction and sensitivity studies that are not possible with real-world scenes. These applications include the development and validation of new and existing algorithms, understanding of the sensor's performance prior to launch, and trade studies to determine ideal sensor configurations. The accuracy of these applications is dependent on the realism of the modeled scenes and sensors. The Digital Image and Remote Sensing Image Generation (DIRSIG) tool has been used extensively to model the complex spectral and spatial texture variation expected in large city-scale scenes and natural biomes. In the past, material properties that were used to represent targets in the simulated scenes were often assumed to be Lambertian in the absence of hand-measured directional data. However, this assumption presents a limitation for new algorithms that need to recognize the anisotropic behavior of targets. We have developed a new method to model and simulate large-scale high-resolution terrestrial scenes by combining bi-directional reflectance distribution function (BRDF) products from Moderate Resolution Imaging Spectroradiometer (MODIS) data, high spatial resolution data, and hyperspectral data. The high spatial resolution data is used to separate materials and add textural variations to the scene, and the directional hemispherical reflectance from the hyperspectral data is used to adjust the magnitude of the MODIS BRDF. In this method, the shape of the BRDF is preserved since it changes very slowly, but its magnitude is varied based on the high resolution texture and hyperspectral data. In addition to the MODIS derived BRDF, target/class specific BRDF values or functions can also be applied to features of specific interest. The purpose of this paper is to discuss the techniques and the methodology used to model a forest region at a high resolution. The simulated scenes using this method for varying view angles show the expected variations in the reflectance due to the BRDF effects of the Harvard forest. The effectiveness of this technique to simulate real sensor data is evaluated by comparing the simulated data with the Landsat 8 Operational Land Imager (OLI) data over the Harvard forest. Regions of interest were selected from the simulated and the real data for different targets and their Top-of-Atmosphere (TOA) radiances were compared. After adjusting for scaling correction due to the difference in atmospheric conditions between the simulated and the real data, the TOA radiance is found to agree within 5 % in the NIR band and 10 % in the visible bands for forest targets under similar illumination conditions. The technique presented in this paper can be extended for other biomes (e.g. desert regions and agricultural regions) by using the appropriate geographic regions. Since the entire scene is constructed in a simulated environment, parameters such as BRDF or its effects can be analyzed for general or target specific algorithm improvements. Also, the modeling and simulation techniques can be used as a baseline for the development and comparison of new sensor designs and to investigate the operational and environmental factors that affect the sensor constellations such as Sentinel and Landsat missions.
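
    A hedged sketch of the magnitude-rescaling idea described above: keep the angular shape of a MODIS-derived BRDF but scale it so its hemispherical integral matches a per-pixel target reflectance derived from the texture and hyperspectral data. The function name, quadrature weights, and numbers are illustrative assumptions, not the DIRSIG implementation.

```python
# Shape-preserving magnitude adjustment of a BRDF, as an illustrative sketch.
import numpy as np

def rescale_brdf(brdf_samples, weights, target_dhr):
    """brdf_samples: BRDF values on a set of outgoing directions;
    weights: quadrature weights (cos(theta) * solid angle) for those directions;
    target_dhr: directional-hemispherical reflectance desired for this pixel."""
    current_dhr = np.sum(brdf_samples * weights)   # hemispherical integral of the BRDF
    scale = target_dhr / current_dhr               # shape preserved, magnitude adjusted
    return scale * brdf_samples

# Toy usage: 10 outgoing directions, pixel-level target reflectance 0.32.
brdf = np.array([0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11, 0.12, 0.13, 0.14])
w = np.full(10, np.pi / 10.0)    # crude equal-weight quadrature over the hemisphere
print(rescale_brdf(brdf, w, 0.32))
```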

  5. Low-resolution simulations of vesicle suspensions in 2D

    NASA Astrophysics Data System (ADS)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions, and correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.
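
    A simplified sketch of the area-correction element of the LRCA: rescale a discretized 2D membrane about its centroid so its enclosed area returns to a reference value. The actual scheme also corrects arc-length and works with spectral representations of the membrane; everything here is an illustrative reduction.

```python
# Area correction of a closed 2D membrane curve via isotropic rescaling
# about its centroid (shoelace area). Sketch only; arc-length correction
# and the spectral machinery of the LRCA are omitted.
import numpy as np

def correct_area(x, y, target_area):
    # Shoelace formula for the enclosed area of the polygon (x, y).
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    cx, cy = x.mean(), y.mean()
    s = np.sqrt(target_area / area)          # isotropic scale factor about the centroid
    return cx + s * (x - cx), cy + s * (y - cy)

# Toy usage: a slightly shrunken circle pushed back to area pi.
theta = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
x, y = 0.98 * np.cos(theta), 0.98 * np.sin(theta)
x2, y2 = correct_area(x, y, np.pi)
print(0.5 * abs(np.dot(x2, np.roll(y2, -1)) - np.dot(y2, np.roll(x2, -1))))  # ~= pi
```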

  6. A multimodel intercomparison of resolution effects on precipitation: simulations and theory

    NASA Astrophysics Data System (ADS)

    Rauscher, Sara A.; O'Brien, Travis A.; Piani, Claudio; Coppola, Erika; Giorgi, Filippo; Collins, William D.; Lawston, Patricia M.

    2016-10-01

    An ensemble of six pairs of RCM experiments performed at 25 and 50 km for the period 1961-2000 over a large European domain is examined in order to evaluate the effects of resolution on the simulation of daily precipitation statistics. Application of the non-parametric two-sample Kolmogorov-Smirnov test, which tests for differences in the location and shape of the probability distributions of two samples, shows that the distribution of daily precipitation differs between the pairs of simulations over most land areas in both summer and winter, with the strongest signal over southern Europe. Two-dimensional histograms reveal that precipitation intensity increases with resolution over almost the entire domain in both winter and summer. In addition, the 25 km simulations have more dry days than the 50 km simulations. The increase in dry days with resolution is indicative of an improvement in model performance at higher resolution, while the more intense precipitation exceeds observed values. The systematic increase in precipitation extremes with resolution across all models suggests that this response is fundamental to model formulation. Simple theoretical arguments suggest that fluid continuity, combined with the emergent scaling properties of the horizontal wind field, results in an increase in resolved vertical transport as grid spacing decreases. This increase in resolution-dependent vertical mass flux then drives an intensification of convergence and resolvable-scale precipitation as grid spacing decreases. This theoretical result could help explain the increasingly, and often anomalously, large stratiform contribution to total rainfall observed with increasing resolution in many regional and global models.
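
    The theoretical argument can be made concrete with a back-of-the-envelope scaling, sketched below under the assumption that horizontal velocity increments across a grid cell follow a power law; the paper's own derivation may differ in detail.

```latex
% Hedged sketch of the scaling argument. Assume horizontal velocity increments
% across one grid cell follow a power law:
\[
  \delta u(\Delta x) \sim \Delta x^{\gamma}, \qquad 0 < \gamma < 1
  \quad (\gamma \approx 1/3 \text{ for a } k^{-5/3} \text{ kinetic energy spectrum}).
\]
% Mass continuity ties resolved vertical motion to horizontal divergence:
\[
  \frac{\partial w}{\partial z} = -\nabla_h \cdot \mathbf{u}_h
  \sim \frac{\delta u(\Delta x)}{\Delta x} \sim \Delta x^{\gamma - 1},
\]
% which grows as \Delta x decreases, so the resolved vertical mass flux,
% convergence, and resolvable-scale precipitation all intensify at finer grids.
```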

  7. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, and it raises several challenges, including the effect of the model's spatial resolution on model performance and accuracy. To address the resolution effect, the distributed hydrological model (the Liuxihe model) was built at resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m, and 200 m × 200 m, with the aim of identifying the most suitable resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. The terrain data (digital elevation model, DEM), soil type, and land use type are freely downloaded from the web. The model parameters are optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists in physically derived model parameters. Model resolutions from 200 m × 200 m to 1000 m × 1000 m are tested for modeling Liujiang River basin floods with the Liuxihe model. The best spatial resolution for flood simulation and forecasting is 200 m × 200 m, and model performance and accuracy degrade as the spatial resolution is coarsened. At 1000 m × 1000 m resolution the flood simulation and forecasting results are the worst, and the river channel network derived at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold resolution for modeling Liujiang River basin floods is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
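
    A minimal sketch of a standard particle swarm optimization loop of the kind used to calibrate the Liuxihe parameters. The study uses an improved PSO variant; the objective function below is a hypothetical placeholder for a flood-simulation skill score.

```python
# Standard PSO sketch; `skill_objective` stands in for running the hydrological
# model and scoring the simulated hydrograph (e.g. negative Nash-Sutcliffe).
import numpy as np

def skill_objective(params):
    # Placeholder: distance from an arbitrary "true" parameter vector.
    return float(np.sum((params - 0.5) ** 2))

def pso(obj, dim, n_particles=20, iters=100, lo=0.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions (parameter sets)
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.array([obj(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g

print(pso(skill_objective, dim=5))   # converges toward the placeholder optimum
```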

  8. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.

  9. Effects of whispering gallery mode in microsphere super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Song; Deng, Yongbo; Zhou, Wenchao; Yu, Muxin; Urbach, H. P.; Wu, Yihui

    2017-09-01

    Whispering Gallery modes have been observed in microscopic glass spheres or toruses and have many applications. In this paper, possible approaches to enhance the imaging resolution by Whispering Gallery modes are discussed, including evanescent wave coupling, and transformation and illumination by Whispering Gallery modes. It is shown that the high-order scattering modes play the dominant role in the reconstructed virtual image when the Whispering Gallery modes exist. Furthermore, we find that high image resolution of electric dipoles can be achieved when out-of-phase components exist under illumination by Whispering Gallery modes. These simulation results could contribute to the knowledge of microsphere-assisted super-resolution imaging and its potential applications.

  10. Regional-scale integration of hydrological and geophysical data using Bayesian sequential simulation: application to field data

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Gloaguen, Erwan; Holliger, Klaus

    2013-04-01

    Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches to the regional scale still represents a major challenge, yet is critically important for the development of groundwater flow and contaminant transport models. To address this issue, we have developed a regional-scale hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure. The objective is to simulate the regional-scale distribution of a hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, our approach first involves linking the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. We present the application of this methodology to a pertinent field scenario, where we consider collocated high-resolution measurements of the electrical conductivity, measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, estimated from EM flowmeter and slug test measurements, in combination with low-resolution exhaustive electrical conductivity estimates obtained from dipole-dipole ERT measurements.

  11. Simulation of the Atmospheric Boundary Layer for Wind Energy Applications

    NASA Astrophysics Data System (ADS)

    Marjanovic, Nikola

    Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale while local forcing manifests itself on a much smaller scale and could result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components, respectively. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data, for a wide range of meteorological applications across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically and locally-driven weather events. The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observation data. Improvement to simulation results from the use of more computationally expensive high resolution simulations is only found for the complex terrain simulation during the locally-driven event. Physical parameters, such as soil moisture, have a large effect on locally-forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating large-scale, energetically important eddies from smaller scales with the application of a spatial filter. LES allows higher fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area. 
Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk resulting from the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m) as it does not distribute the forces from the blades over annular elements, but applies them along lines representing individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization from two field campaigns that measured the inflow and near-wake regions of a single turbine. The data-sets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both GAD and GAL simulations (saving on computational cost), while the GAL parameterization enables resolution of near wake physics (such as vorticity shedding and wake expansion) for high fidelity applications. (Abstract shortened by ProQuest.).
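
    A hedged sketch of the actuator-disk idea: a total rotor thrust derived from a thrust coefficient is distributed over annular rings of the disk and applied as a drag on the resolved flow. This is not the WRF GAD implementation, which computes lift and drag along the blades from airfoil data and rotates them through annular elements; the uniform-loading assumption, names, and numbers are illustrative.

```python
# Uniformly loaded actuator disk: distribute C_T-based thrust over annular rings.
import numpy as np

def annular_thrust(u_inf, rotor_radius, c_t=0.8, rho=1.225, n_rings=20):
    """Return per-ring thrust [N] for a uniformly loaded actuator disk."""
    r_edges = np.linspace(0.0, rotor_radius, n_rings + 1)
    ring_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    # Uniform disk loading: thrust per unit area = 0.5 * rho * U^2 * C_T.
    return 0.5 * rho * u_inf ** 2 * c_t * ring_area

thrust = annular_thrust(u_inf=8.0, rotor_radius=40.0)
print(thrust.sum())   # total rotor thrust, applied as a drag force on the flow
```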

  12. HIGH-RESOLUTION L(Y)SO DETECTORS USING PMT-QUADRANT-SHARING FOR HUMAN & ANIMAL PET CAMERAS

    PubMed Central

    Ramirez, Rocio A.; Liu, Shitao; Liu, Jiguo; Zhang, Yuxuan; Kim, Soonseok; Baghaei, Hossain; Li, Hongdi; Wang, Yu; Wong, Wai-Hoi

    2009-01-01

    We developed high resolution L(Y)SO detectors for human and animal PET applications using Photomultiplier-quadrant-sharing (PQS) technology. The crystal sizes were 1.27 × 1.27 × 10 mm3 for the animal PQS-blocks and 3.25 × 3.25 × 20 mm3 for human ones. Polymer mirror film patterns (PMR) were placed between crystals as reflectors. The blocks were assembled together using optical grease and wrapped by Teflon tape. The blocks were coupled to regular round PMTs of 19/51 mm in PQS configuration. List-mode data of a Ga-68 source (511 keV) were acquired with our high yield pileup-event recovery (HYPER) electronics and data acquisition software. The high voltage bias was 1100 V. Crystal decoding maps and individual crystal energy resolutions were extracted from the data. To investigate the potential imaging resolution of the PET cameras with these blocks, we used the GATE (Geant4 Application for Tomographic Emission) simulation package. GATE is a GEANT4-based software toolkit for realistic simulation of PET and SPECT systems. The packing fractions of these blocks were found to be 95.6% and 98.2%. From the decoding maps, all 196 and 225 crystals were clearly identified. The average energy resolutions were 14.0% and 15.6%. For small animal PET systems, the detector ring diameter was 16.5 cm with an axial field of view (AFOV) of 11.8 cm. The simulation data suggests that a reconstructed radial (tangential) spatial resolution of 1.24 (1.25) mm near the center is potentially achievable. For the whole-body human PET systems, the detector ring diameter was 86 cm. The simulation data suggests that a reconstructed radial (tangential) spatial resolution of 3.09 (3.38) mm near the center is potentially achievable. From this study we can conclude that the PQS design could achieve high spatial resolutions and excellent energy resolutions on human and animal PET systems with substantially lower production costs and inexpensive readout devices. PMID:19946463

  13. Scaling a Convection-Resolving RCM to Near-Global Scales

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Fuhrer, O.; Chadha, T.; Kwasniewski, G.; Hoefler, T.; Lapillonne, X.; Lüthi, D.; Osuna, C.; Schar, C.; Schulthess, T. C.; Vogt, H.

    2017-12-01

    In recent years, the first decade-long kilometer-scale resolution RCM simulations have been performed on continental-scale computational domains. However, the size of the planet Earth is still an order of magnitude larger, and thus the computational implications of performing global climate simulations at this resolution are challenging. We explore the gap between the currently established RCM simulations and global simulations by scaling the GPU-accelerated version of the COSMO model to a near-global computational domain. To this end, the evolution of an idealized moist baroclinic wave has been simulated over the course of 10 days with a grid spacing as fine as 930 m. The computational mesh employs 36,000 × 16,001 × 60 grid points and covers 98.4% of the planet's surface. The code shows perfect weak scaling up to 4,888 nodes of the Piz Daint supercomputer and yields 0.043 simulated years per day (SYPD), which is approximately one seventh of the 0.2-0.3 SYPD required to conduct AMIP-type simulations. However, at half the resolution (1.9 km) we observed 0.23 SYPD. Besides formation of frontal precipitating systems containing embedded explicitly-resolved convective motions, the simulations reveal a secondary instability that leads to cut-off warm-core cyclonic vortices in the cyclone's core, once the grid spacing is refined to the kilometer scale. The explicit representation of embedded moist convection and the representation of the previously unresolved instabilities exhibit a physically different behavior in comparison to coarser-resolution simulations. The study demonstrates that global climate simulations using kilometer-scale resolution are imminent and serves as a baseline benchmark for global climate model applications and future exascale supercomputing systems.

  14. Curved crystal x-ray optics for monochromatic imaging with a clinical source.

    PubMed

    Bingölbali, Ayhan; MacDonald, C A

    2009-04-01

    Monochromatic x-ray imaging has been shown to increase contrast and reduce dose relative to conventional broadband imaging. However, clinical sources with very narrow energy bandwidth tend to have limited intensity and field of view. In this study, focused fan beam monochromatic radiation was obtained using doubly curved monochromator crystals. While these optics have been in use for microanalysis at synchrotron facilities for some time, this work is the first investigation of the potential application of curved crystal optics to clinical sources for medical imaging. The optics could be used with a variety of clinical sources for monochromatic slot scan imaging. The intensity was assessed and the resolution of the focused beam was measured using a knife-edge technique. A simulation model was developed and comparisons to the measured resolution were performed to verify the accuracy of the simulation to predict resolution for different conventional sources. A simple geometrical calculation was also developed. The measured, simulated, and calculated resolutions agreed well. Adequate resolution and intensity for mammography were predicted for appropriate source/optic combinations.
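
    A minimal sketch of the knife-edge analysis mentioned above: the measured edge spread function is differentiated to obtain the line spread function, and the resolution is quoted as its full width at half maximum. The synthetic edge profile is only for illustration.

```python
# Knife-edge analysis: ESF -> LSF (derivative) -> FWHM.
import numpy as np

def fwhm_from_edge(position_mm, esf):
    lsf = np.gradient(esf, position_mm)          # LSF = d(ESF)/dx
    lsf = np.abs(lsf) / np.abs(lsf).max()
    above = np.where(lsf >= 0.5)[0]              # samples above half maximum
    return position_mm[above[-1]] - position_mm[above[0]]

# Toy ESF: a smooth edge with roughly 0.1 mm of blur.
x = np.linspace(-1.0, 1.0, 2001)
esf = 0.5 * (1.0 + np.tanh(x / 0.05))
print(fwhm_from_edge(x, esf))                    # FWHM of the blur, in mm
```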

  15. Exploring a Variable-Resolution Approach for Simulating Regional Climate in the Rocky Mountain Region Using the VR-CESM

    NASA Astrophysics Data System (ADS)

    Wu, Chenglai; Liu, Xiaohong; Lin, Zhaohui; Rhoades, Alan M.; Ullrich, Paul A.; Zarzycki, Colin M.; Lu, Zheng; Rahimi-Esfarjani, Stefan R.

    2017-10-01

    The reliability of climate simulations and projections, particularly in the regions with complex terrains, is greatly limited by the model resolution. In this study we evaluate the variable-resolution Community Earth System Model (VR-CESM) with a high-resolution (0.125°) refinement over the Rocky Mountain region. The VR-CESM results are compared with observations, as well as CESM simulation at a quasi-uniform 1° resolution (UNIF) and Canadian Regional Climate Model version 5 (CRCM5) simulation at a 0.11° resolution. We find that VR-CESM is effective at capturing the observed spatial patterns of temperature, precipitation, and snowpack in the Rocky Mountains with the performance comparable to CRCM5, while UNIF is unable to do so. VR-CESM and CRCM5 simulate better the seasonal variations of precipitation than UNIF, although VR-CESM still overestimates winter precipitation whereas CRCM5 and UNIF underestimate it. All simulations distribute more winter precipitation along the windward (west) flanks of mountain ridges with the greatest overestimation in VR-CESM. VR-CESM simulates much greater snow water equivalent peaks than CRCM5 and UNIF, although the peaks are still 10-40% less than observations. Moreover, the frequency of heavy precipitation events (daily precipitation ≥ 25 mm) in VR-CESM and CRCM5 is comparable to observations, whereas the same events in UNIF are an order of magnitude less frequent. In addition, VR-CESM captures the observed occurrence frequency and seasonal variation of rain-on-snow days and performs better than UNIF and CRCM5. These results demonstrate the VR-CESM's capability in regional climate modeling over the mountainous regions and its promising applications for climate change studies.

  16. ESiWACE: A Center of Excellence for HPC applications to support cloud resolving earth system modelling

    NASA Astrophysics Data System (ADS)

    Biercamp, Joachim; Adamidis, Panagiotis; Neumann, Philipp

    2017-04-01

    With the exa-scale era approaching, length and time scales used for climate research on one hand and numerical weather prediction on the other hand blend into each other. The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) represents a European consortium comprising partners from climate, weather and HPC in their effort to address key scientific challenges that both communities have in common. A particular challenge is to reach global models with spatial resolutions that allow simulating convective clouds and small-scale ocean eddies. These simulations would produce better predictions of trends and provide much more fidelity in the representation of high-impact regional events. However, running such models in operational mode, i.e., with sufficient throughput in ensemble mode, will clearly require exa-scale computing and data handling capability. We will discuss the ESiWACE initiative and relate it to work-in-progress on high-resolution simulations in Europe. We present recent strong scalability measurements from ESiWACE to demonstrate current computability in weather and climate simulation. A special focus in this particular talk is on the Icosahedral Nonhydrostatic (ICON) model used for a comparison of high resolution regional and global simulations with high quality observation data. We demonstrate that close-to-optimal parallel efficiency can be achieved in strong scaling global resolution experiments, e.g. 94% for 5 km resolution simulations using 36k cores on Mistral/DKRZ. Based on our scalability and high-resolution experiments, we deduce and extrapolate future capabilities for ICON that are expected for weather and climate research at exascale.
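
    For reference, one common definition of strong-scaling parallel efficiency is sketched below; the 94% figure quoted above is consistent with a metric of this form, although the exact baseline used in the study is not stated in the abstract.

```latex
% One common definition of strong-scaling parallel efficiency:
\[
  E(N) \;=\; \frac{N_{\mathrm{ref}}\, T(N_{\mathrm{ref}})}{N\, T(N)},
\]
% where T(N) is the time to solution on N cores and N_ref is the smallest core
% count used as the baseline; E(N) = 1 corresponds to ideal speedup.
```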

  17. A multimodel intercomparison of resolution effects on precipitation: simulations and theory

    DOE PAGES

    Rauscher, Sara A.; O'Brien, Travis A.; Piani, Claudio; ...

    2016-02-27

    An ensemble of six pairs of RCM experiments performed at 25 and 50 km for the period 1961–2000 over a large European domain is examined in order to evaluate the effects of resolution on the simulation of daily precipitation statistics. Application of the non-parametric two-sample Kolmogorov–Smirnov test, which tests for differences in the location and shape of the probability distributions of two samples, shows that the distribution of daily precipitation differs between the pairs of simulations over most land areas in both summer and winter, with the strongest signal over southern Europe. Two-dimensional histograms reveal that precipitation intensity increases with resolution over almost the entire domain in both winter and summer. In addition, the 25 km simulations have more dry days than the 50 km simulations. The increase in dry days with resolution is indicative of an improvement in model performance at higher resolution, while the more intense precipitation exceeds observed values. The systematic increase in precipitation extremes with resolution across all models suggests that this response is fundamental to model formulation. Simple theoretical arguments suggest that fluid continuity, combined with the emergent scaling properties of the horizontal wind field, results in an increase in resolved vertical transport as grid spacing decreases. This increase in resolution-dependent vertical mass flux then drives an intensification of convergence and resolvable-scale precipitation as grid spacing decreases. In conclusion, this theoretical result could help explain the increasingly, and often anomalously, large stratiform contribution to total rainfall observed with increasing resolution in many regional and global models.

  18. Observations and predictability of gap winds in a steep, narrow, fire-prone canyon in central Idaho, USA

    NASA Astrophysics Data System (ADS)

    Wagenbrenner, N. S.; Forthofer, J.; Gibson, C.; Lamb, B. K.

    2017-12-01

    Frequent strong gap winds were measured in a deep, steep, wildfire-prone river canyon of central Idaho, USA during July-September 2013. Analysis of archived surface pressure data indicates that the gap wind events were driven by regional scale surface pressure gradients. The events always occurred between 0400 and 1200 LT and typically lasted 3-4 hours. The timing makes these events particularly hazardous for wildland firefighting applications since the morning is typically a period of reduced fire activity and unsuspecting firefighters could be easily endangered by the onset of strong downcanyon winds. The gap wind events were not explicitly forecast by operational numerical weather prediction (NWP) models due to the small spatial scale of the canyon (approximately 1-2 km wide) compared to the horizontal resolution of operational NWP models (3 km or greater). Custom WRF simulations initialized with NARR data were run at 1 km horizontal resolution to assess whether higher resolution NWP could accurately simulate the observed gap winds. Here, we show that the 1 km WRF simulations captured many of the observed gap wind events, although the strength of the events was underpredicted. We also present evidence from these WRF simulations which suggests that the Salmon River Canyon is near the threshold of WRF-resolvable terrain features when the standard WRF coordinate system and discretization schemes are used. Finally, we show that the strength of the gap wind events can be predicted reasonably well as a function of the surface pressure gradient across the gap, which could be useful in the absence of high-resolution NWP. These are important findings for wildland firefighting applications in narrow gaps where routine forecasts may not provide warning for wind effects induced by high-resolution terrain features.
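
    The pressure-gradient dependence can be illustrated with the classical Bernoulli-type gap-flow estimate sketched below; this is a common first-order model, not the empirical relation actually fitted in the study.

```latex
% Illustrative Bernoulli-type gap-flow estimate (a standard first-order model):
\[
  U_{\mathrm{gap}} \;\approx\; \sqrt{\frac{2\,\Delta p}{\rho}},
\]
% where \Delta p is the along-gap surface pressure difference and \rho the air
% density; e.g. \Delta p = 200\ \mathrm{Pa} and \rho = 1.1\ \mathrm{kg\,m^{-3}}
% give U_{\mathrm{gap}} \approx 19\ \mathrm{m\,s^{-1}}.
```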

  19. Development of a high resolution voxelised head phantom for medical physics applications.

    PubMed

    Giacometti, V; Guatelli, S; Bazalova-Carter, M; Rosenfeld, A B; Schulte, R W

    2017-01-01

    Computational anthropomorphic phantoms have become an important investigation tool for medical imaging and dosimetry for radiotherapy and radiation protection. The development of computational phantoms with realistic anatomical features contributes significantly to the development of novel methods in medical physics. For many applications, it is desirable that such computational phantoms have a real-world physical counterpart in order to verify the obtained results. In this work, we report the development of a voxelised phantom, the HIGH_RES_HEAD, modelling a paediatric head based on the commercial phantom 715-HN (CIRS). HIGH_RES_HEAD is unique for its anatomical details and high spatial resolution (0.18 × 0.18 mm² pixel size). The development of such a phantom was required to investigate the performance of a new proton computed tomography (pCT) system, in terms of detector technology and image reconstruction algorithms. The HIGH_RES_HEAD was used in an ad-hoc Geant4 simulation modelling the pCT system. The simulation application was previously validated with respect to experimental results. When compared to a standard spatial resolution voxelised phantom of the same paediatric head, it was shown that in pCT reconstruction studies, the use of the HIGH_RES_HEAD translates into a reduction from 2% to 0.7% of the average relative stopping power difference between experimental and simulated results, thus improving the overall quality of the head phantom simulation. The HIGH_RES_HEAD can also be used for other medical physics applications such as treatment planning studies. A second version of the voxelised phantom was created that contains a prototypic base of skull tumour and surrounding organs at risk. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. The planetary hydraulics analysis based on a multi-resolution stereo DTMs and LISFLOOD-FP model: Case study in Mars

    NASA Astrophysics Data System (ADS)

    Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.

    2013-12-01

    Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to those on Earth, it became clear that some solid planets and satellites once had water flows or pseudo hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activities, the valleys and channels on the martian surface were investigated by a number of remote sensing and in situ measurements. Among all available data sets, the stereo DTMs and ortho images from various orbital sensors, such as the High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, and so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct high accuracy simulations together with DTMs at resolutions from 150 m down to 1.2 m over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2, and the Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior compared with other studies using a single DTM source for hydraulics analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace the rough routes of water flows over extensive target areas. Then, refinements through hydraulics simulations with CTX DTMs (~12-18 m resolution) and HiRISE DTMs (~1-4 m resolution) were conducted by employing the output of the HRSC simulations as initial conditions. Thus even limited coverage by high and very high resolution stereo DTMs enabled high precision hydraulics analysis for reconstructing a whole fluvial event. In this manner, useful information for identifying the characteristics of martian fluvial activities, such as water depth over time, flow direction, and travel time, was successfully retrieved for each target tributary. Together with these outputs of the hydraulics analysis, the local roughness and photogrammetric control of the stereo DTMs appeared to be crucial elements for accurate fluvial simulation. The potential of this study should be further explored for application to other extraterrestrial bodies where fluvial activity once existed, as well as to the other major martian channels and valleys.
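
    One common way to gravity-scale Manning's n is sketched below as an assumption; the abstract states that n was scaled for martian gravity but does not give the exact formula used.

```latex
% Manning's equation for flow velocity:
\[
  V \;=\; \frac{1}{n}\, R^{2/3} S^{1/2},
\]
% and relating n to a dimensionless friction factor through the Chezy
% coefficient gives n \propto g^{-1/2} for fixed roughness, so one plausible
% gravity scaling is
\[
  n_{\mathrm{Mars}} \;=\; n_{\mathrm{Earth}}\sqrt{\frac{g_{\mathrm{Earth}}}{g_{\mathrm{Mars}}}}
  \;=\; n_{\mathrm{Earth}}\sqrt{\frac{9.81}{3.71}} \;\approx\; 1.63\, n_{\mathrm{Earth}}.
\]
```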

  1. : “Developing Regional Modeling Techniques Applicable for Simulating Future Climate Conditions in the Carolinas”

    EPA Science Inventory

    Global climate models (GCMs) are currently used to obtain information about future changes in the large-scale climate. However, such simulations are typically done at coarse spatial resolutions, with model grid boxes on the order of 100 km on a horizontal side. Therefore, techniq...

  2. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    USDA-ARS?s Scientific Manuscript database

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  3. Nested high-resolution large-eddy simulations in WRF to support wind power

    NASA Astrophysics Data System (ADS)

    Mirocha, J.; Kirkil, G.; Kosovic, B.; Lundquist, J. K.

    2009-12-01

    The WRF model’s grid nesting capability provides a potentially powerful framework for simulating flow over a wide range of scales. One such application is computation of realistic inflow boundary conditions for large eddy simulations (LES) by nesting LES domains within mesoscale domains. While nesting has been widely and successfully applied at GCM to mesoscale resolutions, the WRF model’s nesting behavior at the high-resolution (Δx < 1000 m) end of the spectrum is less well understood. Nesting LES within mesoscale domains can significantly improve turbulent flow prediction at the scale of a wind park, providing a basis for superior site characterization, or for improved simulation of turbulent inflows encountered by turbines. We investigate WRF’s grid nesting capability at high mesh resolutions using nested mesoscale and large-eddy simulations. We examine the spatial scales required for flow structures to equilibrate to the finer mesh as flow enters a nest, and how the process depends on several parameters, including grid resolution, turbulence subfilter stress models, relaxation zones at nest interfaces, flow velocities, surface roughnesses, terrain complexity and atmospheric stability. Guidance on appropriate domain sizes and turbulence models for LES in light of these results is provided. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-416482.

  4. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.

  5. A high resolution on-chip delay sensor with low supply-voltage sensitivity for high-performance electronic systems.

    PubMed

    Sheng, Duo; Lai, Hsiu-Fan; Chan, Sheng-Min; Hong, Min-Rong

    2015-02-13

    An all-digital on-chip delay sensor (OCDS) circuit with high delay-measurement resolution and low supply-voltage sensitivity for efficient detection and diagnosis in high-performance electronic system applications is presented. Based on the proposed delay measurement scheme, the quantization resolution of the proposed OCDS can be reduced to several picoseconds. Additionally, the proposed cascade-stage delay measurement circuit can enhance immunity to supply-voltage variations of the delay measurement resolution without extra self-biasing or calibration circuits. Simulation results show that the delay measurement resolution can be improved to 1.2 ps; the average delay resolution variation is 0.55% with supply-voltage variations of ±10%. Moreover, the proposed delay sensor can be implemented in an all-digital manner, making it very suitable for high-performance electronic system applications as well as system-level integration.

  6. Phase-field modeling of liquids splitting between separating surfaces and its application to high-resolution roll-based printing technologies

    NASA Astrophysics Data System (ADS)

    Hizir, F. E.; Hardt, D. E.

    2017-05-01

    An in-depth understanding of the liquid transport in roll-based printing systems is essential for advancing the roll-based printing technology and enhancing the performance of the printed products. In this study, phase-field simulations are performed to characterize the liquid transport in roll-based printing systems, and the phase-field method is shown to be an effective tool to simulate the liquid transport. In the phase-field simulations, the liquid transport through the ink transfer rollers is approximated as the stretching and splitting of liquid bridges with pinned or moving contact lines between vertically separating surfaces. First, the effect of the phase-field parameters and the mesh characteristics on the simulation results is examined. The simulation results show that a sharp interface limit is approached as the capillary width decreases while keeping the mobility proportional to the capillary width squared. Close to the sharp interface limit, the mobility changes over a specified range are observed to have no significant influence on the simulation results. Next, the ink transfer from the cells on the surface of an ink-metering roller to the surface of stamp features is simulated. Under negligible inertial effects and in the absence of gravity, the amount of liquid ink transferred from an axisymmetric cell with low surface wettability to a stamp with high surface wettability is found to increase as the cell sidewall steepness and the cell surface wettability decrease and the stamp surface wettability and the capillary number increase. Strategies for improving the resolution and quality of roll-based printing are derived based on an analysis of the simulation results. The application of novel materials that contain cells with irregular surface topography to stamp inking in high-resolution roll-based printing is assessed.
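
    A hedged sketch of a Cahn-Hilliard-type phase-field formulation of the kind commonly used for two-phase flow, with the mobility tied to the square of the capillary width as described above; it is not necessarily the exact formulation used in the paper.

```latex
% Cahn--Hilliard-type phase-field sketch for two-phase flow. The phase
% variable \phi is advected and relaxed by a chemical potential \mu:
\[
  \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
    = \nabla\cdot\!\left(M\,\nabla\mu\right), \qquad
  \mu = \frac{\lambda}{\varepsilon^{2}}\left(\phi^{3}-\phi\right)
        - \lambda\,\nabla^{2}\phi,
\]
% where \varepsilon is the capillary (interface) width, \lambda the mixing
% energy density, and M the mobility. Taking M = \chi\,\varepsilon^{2} and
% letting \varepsilon \to 0 corresponds to the approach toward the
% sharp-interface limit reported in the abstract.
```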

  7. Spatial and temporal variability of clouds and precipitation over Germany: multiscale simulations across the "gray zone"

    NASA Astrophysics Data System (ADS)

    Barthlott, C.; Hoose, C.

    2015-11-01

    This paper assesses the resolution dependence of clouds and precipitation over Germany by numerical simulations with the COnsortium for Small-scale MOdeling (COSMO) model. Six intensive observation periods of the HOPE (HD(CP)2 Observational Prototype Experiment) measurement campaign conducted in spring 2013 and one summer day of the same year are simulated. By means of a series of grid-refinement resolution tests (horizontal grid spacings of 2.8 km, 1 km, 500 m, and 250 m), the applicability of the COSMO model to represent real weather events in the gray zone, i.e., the scale ranging between the mesoscale limit (no turbulence resolved) and the large-eddy simulation limit (energy-containing turbulence resolved), is tested. To the authors' knowledge, this paper presents the first non-idealized COSMO simulations in the peer-reviewed literature at the 250-500 m scale. It is found that the kinetic energy spectra derived from model output show the expected -5/3 slope, as well as a dependency on model resolution, and that the effective resolution lies between 6 and 7 times the nominal resolution. Although the representation of a number of processes is enhanced with resolution (e.g., boundary-layer thermals, low-level convergence zones, gravity waves), their influence on the temporal evolution of precipitation is rather weak. However, rain intensities vary with resolution, leading to differences in the total rain amount of up to +48 %. Furthermore, the location of rain is similar for the springtime cases with moderate and strong synoptic forcing, whereas significant differences are obtained for the summertime case with air mass convection. Domain-averaged liquid water paths and cloud condensate profiles are used to analyze the temporal and spatial variability of the simulated clouds. Finally, probability density functions of convection-related parameters are analyzed to investigate their dependence on model resolution and their impact on cloud formation and subsequent precipitation.
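
    A minimal sketch of the kinetic-energy-spectrum diagnostic used to assess effective resolution: compute the power spectrum of a wind transect from model output and fit its log-log slope over an intermediate wavenumber band (expected near -5/3). The synthetic transect and band limits are placeholders, not HOPE data.

```python
# 1D kinetic-energy-spectrum diagnostic with a log-log slope fit.
import numpy as np

def spectrum_slope(u, dx, kmin_idx=5, kmax_idx=100):
    u = u - u.mean()
    spec = np.abs(np.fft.rfft(u)) ** 2           # 1D power spectrum
    k = np.fft.rfftfreq(u.size, d=dx)
    sl = slice(kmin_idx, kmax_idx)
    slope, _ = np.polyfit(np.log(k[sl]), np.log(spec[sl]), 1)
    return slope

# Synthetic transect with an imposed k^{-5/3} spectrum, for illustration only.
rng = np.random.default_rng(1)
n, dx = 4096, 250.0
k = np.fft.rfftfreq(n, d=dx)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)                  # energy ~ amp^2 ~ k^{-5/3}
phases = np.exp(2j * np.pi * rng.random(k.size))
u = np.fft.irfft(amp * phases, n=n)
print(spectrum_slope(u, dx))                     # approximately -5/3
```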

  8. Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain

    NASA Astrophysics Data System (ADS)

    Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.

    2018-04-01

    High resolution satellites with longer focal lengths and larger apertures have been widely used in recent years for georeferencing observed scenes. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform (including attitude and position information), the time system, and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for the geolocation accuracy is put forward by introducing a new index, the angle between the camera and the star tracker. The model is validated by rigorously simulating the geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to geolocation accuracy estimation before satellite launch.

  9. Design, development, and application of LANDIS-II, a spatial landscape simulation model with flexible temporal and spatial resolution

    Treesearch

    Robert M. Scheller; James B. Domingo; Brian R. Sturtevant; Jeremy S. Williams; Arnold Rudy; Eric J. Gustafson; David J. Mladenoff

    2007-01-01

    We introduce LANDIS-II, a landscape model designed to simulate forest succession and disturbances. LANDIS-II builds upon and preserves the functionality of previous LANDIS forest landscape simulation models. LANDIS-II is distinguished by the inclusion of variable time steps for different ecological processes; our use of a rigorous development and testing process used...

  10. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
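
    Synthetic-aperture radiometry, as described above, samples the scene's spatial-frequency (visibility) plane sparsely and reconstructs an image from those samples. The sketch below shows only the most basic version of that idea, placing measured visibilities onto a Fourier grid and inverting; the nearest-neighbour gridding and the absence of any deconvolution step are simplifications relative to the algorithms described in the paper, and all names are illustrative.

      import numpy as np

      def dirty_image_from_visibilities(uv_idx, vis, grid_shape):
          """Form a basic ('dirty') image from sparse spatial-frequency samples.

          uv_idx     : (n_samples, 2) integer indices of sampled spatial
                       frequencies on an FFT grid (already wrapped so they can
                       be used directly as array indices)
          vis        : (n_samples,) complex visibility measurements
          grid_shape : (ny, nx) size of the spatial-frequency grid
          """
          grid = np.zeros(grid_shape, dtype=complex)
          for (iu, iv), value in zip(uv_idx, vis):
              grid[iu, iv] += value            # nearest-neighbour gridding
          # inverse FFT of the sparsely filled frequency plane gives the dirty image
          return np.real(np.fft.ifft2(grid))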

  11. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonksi, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2001-01-01

    Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design on providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account spectral responses of both the synthesized, broad, multispectral bands and the utilized, narrow, hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data to the presented simulations.
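
    The spectral synthesis step described above amounts to weighting each narrow hyperspectral band by the broad multispectral band's relative spectral response and normalizing, leaving the spatial sampling untouched. A minimal sketch of that weighting is given below, under the assumption that the hyperspectral cube and the response curve are sampled at the same centre wavelengths; the array and function names are illustrative, not the ART software interface.

      import numpy as np

      def synthesize_band(hyper_cube, wavelengths, response):
          """Synthesize one broad multispectral band from a hyperspectral cube.

          hyper_cube  : (rows, cols, n_bands) radiance cube
          wavelengths : (n_bands,) centre wavelengths of the narrow bands
          response    : callable giving the broad band's relative spectral
                       response at a wavelength (assumed normalized to peak 1)
          """
          w = np.array([response(lam) for lam in wavelengths])
          w = w / w.sum()                      # normalize the band weights
          # weighted sum over the spectral axis; spatial sampling is unchanged
          return np.tensordot(hyper_cube, w, axes=([2], [0]))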

  12. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three sets of boundary data and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations of summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
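
    The adaptive surrogate-based calibration loop sketched below captures the general idea behind this class of method: evaluate the expensive model on an initial design, fit a cheap surrogate to the skill metric, optimize the surrogate to propose the next parameter set, run the model there, and refit. This is a generic illustration, not the ASMO code; the radial-basis-function surrogate, the error-function name, and the sample counts are assumptions.

      import numpy as np
      from scipy.interpolate import RBFInterpolator
      from scipy.stats import qmc

      def asmo_like_calibration(run_model, bounds, n_init=20, n_iter=100, seed=0):
          """Generic adaptive surrogate-model optimization loop (illustrative).

          run_model : expensive function mapping a parameter vector to a scalar
                      error metric (e.g., precipitation RMSE against observations)
          bounds    : (n_params, 2) array of lower/upper parameter limits
          """
          rng = np.random.default_rng(seed)
          bounds = np.asarray(bounds, dtype=float)
          dim = len(bounds)
          # initial space-filling design (Latin hypercube) plus expensive evaluations
          X = qmc.scale(qmc.LatinHypercube(d=dim, seed=seed).random(n_init),
                        bounds[:, 0], bounds[:, 1])
          y = np.array([run_model(x) for x in X])
          for _ in range(n_iter):
              surrogate = RBFInterpolator(X, y, smoothing=1e-6)   # cheap emulator
              # propose the candidate that minimizes the surrogate prediction
              cand = qmc.scale(rng.random((2000, dim)), bounds[:, 0], bounds[:, 1])
              x_new = cand[np.argmin(surrogate(cand))]
              X = np.vstack([X, x_new])
              y = np.append(y, run_model(x_new))                  # one expensive run
          return X[np.argmin(y)], float(y.min())

    The key property illustrated here is that almost all of the search happens on the surrogate, so the number of expensive model runs stays in the low hundreds, consistent with the 127 samples reported above.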

  13. Research and applications: Artificial intelligence

    NASA Technical Reports Server (NTRS)

    Chaitin, L. J.; Duda, R. O.; Johanson, P. A.; Raphael, B.; Rosen, C. A.; Yates, R. A.

    1970-01-01

    The program is reported for developing techniques in artificial intelligence and their application to the control of mobile automatons for carrying out tasks autonomously. Visual scene analysis, short-term problem solving, and long-term problem solving are discussed along with the PDP-15 simulator, LISP-FORTRAN-MACRO interface, resolution strategies, and cost effectiveness.

  14. Elucidating the impact of micro-scale heterogeneous bacterial distribution on biodegradation

    NASA Astrophysics Data System (ADS)

    Schmidt, Susanne I.; Kreft, Jan-Ulrich; Mackay, Rae; Picioreanu, Cristian; Thullner, Martin

    2018-06-01

    Groundwater microorganisms hardly ever cover the solid matrix uniformly; instead, they form micro-scale colonies. To which extent such colony formation limits the bioavailability and biodegradation of a substrate is poorly understood. We used a high-resolution numerical model of a single pore channel inhabited by bacterial colonies to simulate the transport and biodegradation of organic substrates. These high-resolution 2D simulation results were compared to 1D simulations that were based on effective rate laws for bioavailability-limited biodegradation. We (i) quantified the observed bioavailability limitations and (ii) evaluated the applicability of previously established effective rate concepts if microorganisms are heterogeneously distributed. Effective bioavailability reductions of up to more than one order of magnitude were observed, showing that the micro-scale aggregation of bacterial cells into colonies can severely restrict the bioavailability of a substrate and reduce in situ degradation rates. Effective rate laws proved applicable for upscaling when using the introduced effective colony sizes.

  15. Tropical Cyclone Activity in the High-Resolution Community Earth System Model and the Impact of Ocean Coupling

    NASA Astrophysics Data System (ADS)

    Li, Hui; Sriver, Ryan L.

    2018-01-01

    High-resolution Atmosphere General Circulation Models (AGCMs) are capable of directly simulating realistic tropical cyclone (TC) statistics, providing a promising approach for TC-climate studies. Active air-sea coupling in a coupled model framework is essential to capturing TC-ocean interactions, which can influence TC-climate connections on interannual to decadal time scales. Here we investigate how the choices of ocean coupling can affect the directly simulated TCs using high-resolution configurations of the Community Earth System Model (CESM). We performed a suite of high-resolution, multidecadal, global-scale CESM simulations in which the atmosphere (˜0.25° grid spacing) is configured with three different levels of ocean coupling: prescribed climatological sea surface temperature (SST) (ATM), mixed layer ocean (SLAB), and dynamic ocean (CPL). We find that different levels of ocean coupling can influence simulated TC frequency, geographical distributions, and storm intensity. ATM simulates more storms and higher overall storm intensity than the coupled simulations. It also simulates higher TC track density over the eastern Pacific and the North Atlantic, while TC tracks are relatively sparse within CPL and SLAB for these regions. Storm intensification and the maximum wind speed are sensitive to the representations of local surface flux feedbacks in different coupling configurations. Key differences in storm number and distribution can be attributed to variations in the modeled large-scale climate mean state and variability that arise from the combined effect of intrinsic model biases and air-sea interactions. Results help to improve our understanding about the representation of TCs in high-resolution coupled Earth system models, with important implications for TC-climate applications.

  16. Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation

    NASA Astrophysics Data System (ADS)

    Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong

    2018-02-01

    Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens into the GATE MC toolkit to improve both the sensitivity and spatial resolution for optical imaging simulation. The lens implemented into GATE was validated against the ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could improve the image quality of bioluminescence and fluorescence significantly as compared with pinhole optics.

  17. High-Efficiency High-Resolution Global Model Developments at the NASA Goddard Data Assimilation Office

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)

    2002-01-01

    The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modern monotonicity-preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions. I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.

  19. Large-Eddy Simulations of Atmospheric Flows Over Complex Terrain Using the Immersed-Boundary Method in the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Ma, Yulong; Liu, Heping

    2017-12-01

    Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.

  20. A Petascale Non-Hydrostatic Atmospheric Dynamical Core in the HOMME Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tufo, Henry

    The High-Order Method Modeling Environment (HOMME) is a framework for building scalable, conservative atmospheric models for climate simulation and general atmospheric-modeling applications. Its spatial discretizations are based on Spectral-Element (SE) and Discontinuous Galerkin (DG) methods. These are local methods employing high-order accurate spectral basis functions that have been shown to perform well on massively parallel supercomputers at any resolution and to scale particularly well at high resolutions. HOMME provides the framework upon which the CAM-SE community atmosphere model dynamical core is constructed. In its current incarnation, CAM-SE employs the hydrostatic primitive equations (PE) of motion, which limits its resolution to simulations coarser than about 0.1° per grid cell. The primary objective of this project is to remove this resolution limitation by providing HOMME with the capabilities needed to build nonhydrostatic models that solve the compressible Euler/Navier-Stokes equations.

  1. Theoretical analysis of microring resonator-based biosensor with high resolution and free of temperature influence

    NASA Astrophysics Data System (ADS)

    Jian, Aoqun; Zou, Lu; Tang, Haiquan; Duan, Qianqian; Ji, Jianlong; Zhang, Qianwu; Zhang, Xuming; Sang, Shengbo

    2017-06-01

    Thermal effects are an unavoidable issue in ultrahigh-precision refractive index (RI) measurement. A biosensor with a parallel-coupled dual-microring resonator configuration is proposed to achieve high resolution and measurement free of thermal effects. Based on the coupled-resonator-induced transparency effect, the design and principle of the biosensor are introduced in detail, and the performance of the sensor is deduced by simulations. Compared to a biosensor based on a single-ring configuration, the designed biosensor has a 10-fold increased Q value according to the simulation results, so the sensor is expected to achieve a particularly high resolution. In addition, the output signal of the mathematical model of the proposed sensor can eliminate the thermal influence by adopting an algorithm. This work is expected to have great application potential in areas requiring high-resolution RI measurement, such as biomedical discoveries, virus screening, and drinking water safety.

  2. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, the versatility of the model dynamics, numerical formulation, and physical parameterizations to function across a range of length scales over the region of interest was also explored. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolutions (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of atmospheric circulation from 2.5-degree resolution NCEP reanalysis at 6-hourly intervals, with SSTs from the NCEP reanalysis data used as lower boundary forcing. The CCAM 60 km resolution simulation was downscaled to 8 km (Schmidt factor 24.75), and the 8 km simulation was then downscaled to 1 km (Schmidt factor 200) over an area of approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high resolution model runs was to obtain a deeper understanding of linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  3. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  4. LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation

    NASA Astrophysics Data System (ADS)

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2015-01-01

    Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which can have significant implications in preclinical and clinical ROI imaging applications.
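
    The restoration step described above is, at its core, a non-negative least-squares inversion of the modeled transfer matrix mapping high-resolution LORs to the collimator-encoded measurements, followed by a standard MLEM image reconstruction. A minimal sketch of those two steps is given below; the matrix names and sizes are illustrative, and the system matrix here stands in for the full detector-plus-collimator model described in the paper.

      import numpy as np
      from scipy.optimize import nnls

      def restore_high_res_lors(T, y):
          """Restore high-resolution LOR counts from collimator-encoded data.

          T : (n_encoded, n_highres) transfer matrix relating high-resolution
              LORs to the collimator-encoded measurements
          y : (n_encoded,) measured counts for one sequence of collimator settings
          """
          x, _ = nnls(T, y)          # non-negative least squares
          return x

      def mlem(A, p, n_iter=50):
          """Basic MLEM reconstruction from restored high-resolution LORs.

          A : (n_lors, n_voxels) system matrix; p : (n_lors,) LOR counts
          """
          x = np.ones(A.shape[1])
          sens = A.sum(axis=0)                           # sensitivity image
          for _ in range(n_iter):
              proj = A @ x                               # forward projection
              ratio = np.where(proj > 0, p / np.maximum(proj, 1e-12), 0.0)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x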

  5. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at a high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10^2 m and minute scales in a fast and computationally efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in flow response; and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.

  6. Materials characterisation by angle-resolved scanning transmission electron microscopy.

    PubMed

    Müller-Caspary, Knut; Oppermann, Oliver; Grieb, Tim; Krause, Florian F; Rosenauer, Andreas; Schowalter, Marco; Mehrtens, Thorsten; Beyer, Andreas; Volz, Kerstin; Potapov, Pavel

    2016-11-16

    Solid-state properties such as strain or chemical composition often leave characteristic fingerprints in the angular dependence of electron scattering. Scanning transmission electron microscopy (STEM) is dedicated to probing scattered intensity with atomic resolution, but it drastically lacks angular resolution. Here we report both a setup to exploit the explicit angular dependence of scattered intensity and applications of angle-resolved STEM to semiconductor nanostructures. Our method is applied to measure nitrogen content and specimen thickness in a GaN(x)As(1-x) layer independently at atomic resolution by evaluating two dedicated angular intervals. We demonstrate contrast formation due to strain and composition in a Si-based metal-oxide semiconductor field effect transistor (MOSFET) with Ge(x)Si(1-x) stressors as a function of the angles used for imaging. To shed light on the validity of current theoretical approaches, this data is compared with theory, namely the Rutherford approach and contemporary multislice simulations. Inconsistency is found for the Rutherford model in the whole angular range of 16-255 mrad. In contrast, the multislice simulations are applicable for angles larger than 35 mrad, whereas a significant mismatch is observed at lower angles. This limitation of established simulations is discussed particularly on the basis of inelastic scattering.

  7. A Coupled Surface Nudging Scheme for use in Retrospective ...

    EPA Pesticide Factsheets

    A surface analysis nudging scheme coupling atmospheric and land surface thermodynamic parameters has been implemented into WRF v3.8 (latest version) for use with retrospective weather and climate simulations, as well as for applications in air quality, hydrology, and ecosystem modeling. This scheme is known as the flux-adjusting surface data assimilation system (FASDAS), developed by Alapaty et al. (2008). The scheme provides continuous adjustments for soil moisture and temperature (via indirect nudging) and for surface air temperature and water vapor mixing ratio (via direct nudging). The simultaneous application of indirect and direct nudging maintains greater consistency between the soil temperature–moisture and the atmospheric surface layer mass-field variables. The new method, FASDAS, consistently improved the accuracy of the model simulations at weather prediction scales for different horizontal grid resolutions, as well as for high resolution regional climate predictions. This new capability has been released in WRF Version 3.8 as option grid_sfdda = 2. It increases the accuracy of atmospheric inputs for use in air quality, hydrology, and ecosystem modeling research, improving the accuracy of the respective end-point research outcomes. IMPACT: A new method, FASDAS, was implemented into the WRF model to consistently improve the accuracy of the model simulations at weather prediction scales for different horizontal grid resolutions, as well as for high resolution regional climate predictions.

  8. Image formation analysis and high resolution image reconstruction for plenoptic imaging systems.

    PubMed

    Shroff, Sapna A; Berkner, Kathrin

    2013-04-01

    Plenoptic imaging systems are often used for applications like refocusing, multimodal imaging, and multiview imaging. However, their resolution is limited to the number of lenslets. In this paper we investigate paraxial, incoherent, plenoptic image formation, and develop a method to recover some of the resolution for the case of a two-dimensional (2D) in-focus object. This enables the recovery of a conventional-resolution, 2D image from the data captured in a plenoptic system. We show simulation results for a plenoptic system with a known response and Gaussian sensor noise.

  9. High-resolution digital holography with the aid of coherent diffraction imaging.

    PubMed

    Jiang, Zhilong; Veetil, Suhas P; Cheng, Jun; Liu, Cheng; Wang, Ling; Zhu, Jianqiang

    2015-08-10

    Images reconstructed in ordinary digital holography cannot deliver the resolution achievable with photographic materials, making the technique less attractive for many interesting applications. A method is proposed to enhance the resolution of digital holography in all directions by placing a random phase plate between the specimen and the electronic camera and then using an iterative approach for the reconstruction. With this method, the resolution is improved remarkably compared with ordinary digital holography. The theoretical analysis is supported by numerical simulation. The feasibility of the method is also studied experimentally.

  10. Sensitivity to grid resolution in the ability of a chemical transport model to simulate observed oxidant chemistry under high-isoprene conditions

    NASA Astrophysics Data System (ADS)

    Yu, Karen; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Miller, Christopher C.; Travis, Katherine R.; Zhu, Lei; Yantosca, Robert M.; Sulprizio, Melissa P.; Cohen, Ron C.; Dibb, Jack E.; Fried, Alan; Mikoviny, Tomas; Ryerson, Thomas B.; Wennberg, Paul O.; Wisthaler, Armin

    2016-04-01

    Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO) or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2). We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25° × 0.3125°, 2° × 2.5°, 4° × 5°) to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25° × 0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25° × 0.3125° resolution (54 %) than at coarser resolution (59 %). The cumulative probability distribution functions (CDFs) of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes) are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy) changing little across model resolutions. Model concentrations in the lower free troposphere are also insensitive to grid resolution. The overall low sensitivity of modeled concentrations to grid resolution implies that coarse resolution is adequate when modeling continental boundary layer chemistry for global applications.

  11. Effects of DEM source and resolution on WEPP hydrologic and erosion simulation: A case study of two forest watersheds in northern Idaho

    Treesearch

    J. X. Zhang; J. Q. Wu; K. Chang; W. J. Elliot; S. Dun

    2009-01-01

    The recent modification of the Water Erosion Prediction Project (WEPP) model has improved its applicability to hydrology and erosion modeling in forest watersheds. To generate reliable topographic and hydrologic inputs for the WEPP model, carefully selecting digital elevation models (DEMs) with appropriate resolution and accuracy is essential because topography is a...

  12. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2014-08-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated to each grid. ARPS is initialized using the Global Forecasting System with 0.5°-resolution data from the National Center of Environmental Prediction, which is also used every 3 h as lateral boundary condition. Topographic shading is turned on and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and the wind speed fields, and clearly indicate that the use of high-resolution databases improves significantly our ability to predict the local atmospheric circulation.

  13. GATE Monte Carlo simulations for variations of an integrated PET/MR hybrid imaging system based on the Biograph mMR model

    NASA Astrophysics Data System (ADS)

    Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.

    2015-06-01

    A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was found to be less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as is to be expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast compared to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall accuracy error of less than 10% can now be used for further MC simulation applications such as development of hardware components as well as for testing of new PET/MR software algorithms, such as assessment of point-spread function-based reconstruction algorithms.

  14. Advances in simulation of wave interactions with extended MHD phenomena

    NASA Astrophysics Data System (ADS)

    Batchelor, D.; Abla, G.; D'Azevedo, E.; Bateman, G.; Bernholdt, D. E.; Berry, L.; Bonoli, P.; Bramley, R.; Breslau, J.; Chance, M.; Chen, J.; Choi, M.; Elwasif, W.; Foley, S.; Fu, G.; Harvey, R.; Jaeger, E.; Jardin, S.; Jenkins, T.; Keyes, D.; Klasky, S.; Kruger, S.; Ku, L.; Lynch, V.; McCune, D.; Ramos, J.; Schissel, D.; Schnack, D.; Wright, J.

    2009-07-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively-parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: 1) recent improvements to the IPS, 2) application of the IPS for very high resolution simulations of ITER scenarios, 3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and 4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized location for RF power deposition.

  15. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, Williama

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.
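
    The core-count arithmetic in the last sentence is a simple division of the CPU-core baseline by the per-core speedup; the short sketch below reproduces it for the speedup range quoted above. The only inputs are the 6-million-core baseline and the 15-40x range taken from the abstract; the 30x value that yields the 200,000-core figure is consistent with that range.

      # Required GPU-enabled cores given a CPU-core baseline and a per-core speedup.
      baseline_cpu_cores = 6_000_000      # estimate quoted for a 1-km global model

      for speedup in (15, 30, 40):        # 15-40x range quoted above
          gpu_cores = baseline_cpu_cores / speedup
          print(f"speedup {speedup:>2}x -> about {gpu_cores:,.0f} GPU-enabled cores")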

  16. Efficient Approaches for Propagating Hydrologic Forcing Uncertainty: High-Resolution Applications Over the Western United States

    NASA Astrophysics Data System (ADS)

    Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.

    2017-12-01

    NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.

  17. Modelling space-based integral-field spectrographs and their application to Type Ia supernova cosmology

    NASA Astrophysics Data System (ADS)

    Shukla, Hemant; Bonissent, Alain

    2017-04-01

    We present the parameterized simulation of an integral-field unit (IFU) slicer spectrograph and its applications in spectroscopic studies, namely for probing dark energy with Type Ia supernovae. The simulation suite is called the fast-slicer IFU simulator (FISim). The data flow of FISim realistically models the optics of the IFU along with the propagation effects, including cosmological, zodiacal, instrumentation, and detector effects. FISim simulates the spectrum extraction by computing the error matrix on the extracted spectrum. The applications for Type Ia supernova spectroscopy are used to establish the efficacy of the simulator in exploring the wider parameter space, in order to optimize the science and mission requirements. The input spectral models utilize observables such as the optical depth and velocity of the Si II absorption feature in the supernova spectrum as the measured parameters for various studies. Using FISim, we introduce a mechanism for preserving the complete state of a system, called the ∂p/∂f matrix, which allows for compression, reconstruction, and spectrum extraction; we introduce a novel and efficient method for spectrum extraction, called super-optimal spectrum extraction; and we conduct various studies such as the optimal point spread function, optimal resolution, and parameter estimation. We demonstrate that for space-based telescopes, the optimal resolution lies in the region near R ≈ 117 for read noise of 1 e- and 7 e-, using a 400 km s^-1 error threshold on the Si II velocity.

  18. Magnetic resonance electrical impedance tomography (MREIT): simulation study of J-substitution algorithm.

    PubMed

    Kwon, Ohin; Woo, Eung Je; Yoon, Jeong-Rock; Seo, Jin Keun

    2002-02-01

    We developed a new image reconstruction algorithm for magnetic resonance electrical impedance tomography (MREIT). MREIT is a new EIT imaging technique integrated into a magnetic resonance imaging (MRI) system. Based on the assumption that the internal current density distribution is obtained using the MRI technique, the new image reconstruction algorithm, called the J-substitution algorithm, produces cross-sectional static images of resistivity (or conductivity) distributions. Computer simulations show that the spatial resolution of the resistivity image is comparable to that of MRI. MREIT provides accurate high-resolution cross-sectional resistivity images, making resistivity values of various human tissues available for many biomedical applications.

  19. Characterization of fiber Bragg grating-based sensor array for high resolution manometry

    NASA Astrophysics Data System (ADS)

    Becker, Martin; Rothhardt, Manfred; Schröder, Kerstin; Voigt, Sebastian; Mehner, Jan; Teubner, Andreas; Lüpke, Thomas; Thieroff, Christoph; Krüger, Matthias; Chojetzki, Christoph; Bartelt, Hartmut

    2012-04-01

    The combination of fiber Bragg grating arrays integrated in a soft plastic tube is promising for high resolution manometry (HRM) where pressure measurements are done with high spatial resolution. The application as a medical device and in vivo experiments have to be anticipated by characterization with a measurement setup that simulates natural conditions. Good results are achieved with a pressure chamber which applies a well-defined pressure with a soft tubular membrane. It is shown that the proposed catheter design reaches accuracies down to 1 mbar and 1 cm.

  20. Impact of bias correction and downscaling through quantile mapping on simulated climate change signal: a case study over Central Italy

    NASA Astrophysics Data System (ADS)

    Sangelantoni, Lorenzo; Russo, Aniello; Gennaretti, Fabio

    2018-02-01

    Quantile mapping (QM) represents a common post-processing technique used to connect climate simulations to impact studies at different spatial scales. Depending on the simulation-observation spatial scale mismatch, QM can be used for two different applications. The first application uses only the bias correction component, establishing transfer functions between observations and simulations at similar spatial scales. The second application includes a statistical downscaling component when point-scale observations are considered. However, knowledge of alterations to the climate change signal (CCS) resulting from these two applications is limited. This study investigates QM impacts on the original temperature and precipitation CCSs when applied according to a bias correction only (BC-only) and a bias correction plus downscaling (BC + DS) application over reference stations in Central Italy. The BC-only application is used to adjust regional climate model (RCM) simulations having the same resolution as the observation grid. The QM BC + DS application adjusts the same simulations to point-wise observations. QM applications alter the CCS mainly for temperature. The BC-only application produces a CCS of the median that is 1 °C lower than the original (4.5 °C). The BC + DS application produces a CCS closer to the original, except for the summer 95th percentile, where a substantial amplification of the original CCS results. The impacts of the two applications are connected to the ratio between the observed and the simulated standard deviation (STD) of the calibration period. For precipitation, the original CCS is essentially preserved in both applications. Yet, the calibration-period STD ratio cannot predict the QM impact on the precipitation CCS when simulated STD and mean are similarly misrepresented.
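
    Quantile mapping itself boils down to building a transfer function from the simulated to the observed empirical distribution over the calibration period and then passing new simulated values through it. A minimal empirical-quantile sketch is given below; the interpolation-based transfer function and the variable names are illustrative and do not reproduce the specific QM configurations compared in the study.

      import numpy as np

      def quantile_mapping(sim_cal, obs_cal, sim_new, n_quantiles=100):
          """Empirical quantile mapping (illustrative).

          sim_cal : simulated values over the calibration period
          obs_cal : observed values over the calibration period (same variable)
          sim_new : simulated values to be corrected (e.g., a future scenario)
          """
          q = np.linspace(0.0, 1.0, n_quantiles)
          sim_q = np.quantile(sim_cal, q)          # simulated quantiles
          obs_q = np.quantile(obs_cal, q)          # observed quantiles
          # map each new simulated value through the empirical transfer function;
          # np.interp clamps values outside the calibration range to the edges
          return np.interp(sim_new, sim_q, obs_q)

    The clamping behaviour at the edges is one reason a corrected future climate change signal can differ from the original one, which is the effect the study above quantifies.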

  1. Satellite image time series simulation for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-11-01

    The performance of environmental monitoring heavily depends on the availability of consecutive observation data, and there is an increasing demand in the remote sensing community for satellite image data of sufficient resolution in both the spatial and temporal dimensions; these two requirements tend to conflict, and the trade-off between them is hard to tune. Multiple constellations could be a solution if cost were not a concern, so it remains interesting but very challenging to develop a method that can simultaneously improve both spatial and temporal detail. There have been research efforts to address the problem from various angles. One type of approach enhances the spatial resolution using techniques such as super resolution and pan-sharpening, which can produce good visual effects but mostly cannot preserve spectral signatures and therefore lose analytical value. Another type fills temporal gaps by time interpolation, which does not actually add informative content. In this paper we present a novel method to generate satellite images with higher spatial and temporal detail, which further enables satellite image time series simulation. Our method starts with a pair of high- and low-resolution data sets, and a spatial registration is performed by introducing an LDA model to map high- and low-resolution pixels to each other. Afterwards, temporal change information is captured through a comparison of the low resolution time series data; the temporal change is then projected onto the high resolution data plane and assigned to each high resolution pixel, with reference to predefined temporal change patterns for each type of ground object, to generate simulated high resolution data. A preliminary experiment shows that our method can simulate high resolution data with good accuracy. We consider the contribution of our method to be that it enables timely monitoring of temporal changes through analysis of low resolution image time series alone, so that the use of costly high resolution data can be reduced as much as possible; it thus presents an efficient, cost-effective solution for building an economically operational monitoring service for environment, agriculture, forest, land use investigation, and other applications.

  2. Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data

    NASA Astrophysics Data System (ADS)

    Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis

    2015-04-01

    Flooding is a significant threat that causes considerable disruption in many societies worldwide. Ongoing climate change further increases flood risk, which constitutes a substantial menace to many societies and their economies. Improvements in the spatial resolution and accuracy of topography and land use data obtained through remote sensing techniques can support integrated flood inundation simulations. In this work, hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) takes place. Satellite images of high resolution are elaborated. A very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5-m-resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction. Sophisticated classification algorithms are implemented to improve the accuracy of Land Use/Land Cover maps. In addition, soil maps are updated by means of radar satellite images. These high-resolution data are innovatively used to simulate and validate several historical flood events in Mediterranean watersheds that have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resource Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment (a sketch of the CN runoff relation is given below). The use of high resolution data provides detailed validation results of correspondingly high precision. Furthermore, the meteorological forecasting data, which are also combined with the simulation model results, enable the development of an integrated flood forecasting and early warning system tool capable of confronting or even preventing this imminent risk. The research reported in this paper was fully supported by the "ARISTEIA II" Action ("REINFORCE" program) of the "Operational Education and Life Long Learning programme" and is co-funded by the European Social Fund (ESF) and National Resources.
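
    The NRCS curve number relation referenced above computes direct runoff from event rainfall and a CN-dependent potential retention. A minimal sketch in millimetre units is given below; the conventional assumption of an initial abstraction equal to 0.2 times the retention is used, and the 25.4 factor converts the standard inch-based formulation.

      def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
          """Direct runoff depth from the NRCS (SCS) curve number method.

          p_mm     : event rainfall depth [mm]
          cn       : curve number (depends on land use/cover and soil group)
          ia_ratio : initial abstraction ratio; 0.2 is the conventional default
          """
          s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
          ia = ia_ratio * s                 # initial abstraction [mm]
          if p_mm <= ia:
              return 0.0                    # all rainfall abstracted, no runoff
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      # Example: 80 mm of rain on a catchment with CN = 75
      print(round(scs_cn_runoff(80.0, 75), 1), "mm of direct runoff")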

  3. Using force-based adaptive resolution simulations to calculate solvation free energies of amino acid sidechain analogues

    NASA Astrophysics Data System (ADS)

    Fiorentini, Raffaele; Kremer, Kurt; Potestio, Raffaello; Fogarty, Aoife C.

    2017-06-01

    The calculation of free energy differences is a crucial step in the characterization and understanding of the physical properties of biological molecules. In the development of efficient methods to compute these quantities, a promising strategy is that of employing a dual-resolution representation of the solvent, specifically using an accurate model in the proximity of a molecule of interest and a simplified description elsewhere. One such concurrent multi-resolution simulation method is the Adaptive Resolution Scheme (AdResS), in which particles smoothly change their resolution on-the-fly as they move between different subregions. Before using this approach in the context of free energy calculations, however, it is necessary to make sure that the dual-resolution treatment of the solvent does not cause undesired effects on the computed quantities. Here, we show how AdResS can be used to calculate solvation free energies of small polar solutes using Thermodynamic Integration (TI). We discuss how the potential-energy-based TI approach combines with the force-based AdResS methodology, in which no global Hamiltonian is defined. The AdResS free energy values agree with those calculated from fully atomistic simulations to within a fraction of kBT. This is true even for small atomistic regions whose size is on the order of the correlation length, or when the properties of the coarse-grained region are extremely different from those of the atomistic region. These accurate free energy calculations are possible because AdResS allows the sampling of solvation shell configurations which are equivalent to those of fully atomistic simulations. The results of the present work thus demonstrate the viability of the use of adaptive resolution simulation methods to perform free energy calculations and pave the way for large-scale applications where a substantial computational gain can be attained.
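
    Thermodynamic integration, as used above, estimates the solvation free energy by integrating the ensemble average of the derivative of the potential energy with respect to the coupling parameter lambda over a set of intermediate states. A minimal numerical sketch of that final quadrature step is given below; the lambda grid and the arrays of sampled derivatives are illustrative placeholders for output from the actual AdResS or fully atomistic simulations.

      import numpy as np

      def ti_free_energy(lambdas, dudl_samples):
          """Thermodynamic integration: Delta F = integral of <dU/dlambda> dlambda.

          lambdas      : (n_states,) coupling parameter values from 0 to 1
          dudl_samples : sequence of 1-D arrays with sampled dU/dlambda values
                         at each lambda state
          Returns the free energy difference and a simple standard-error estimate.
          """
          means = np.array([np.mean(s) for s in dudl_samples])
          sems = np.array([np.std(s, ddof=1) / np.sqrt(len(s)) for s in dudl_samples])
          # trapezoidal quadrature weights for a possibly non-uniform lambda grid
          w = np.gradient(lambdas).astype(float)
          w[0] /= 2.0
          w[-1] /= 2.0
          delta_f = float(np.sum(w * means))
          err = float(np.sqrt(np.sum((w * sems) ** 2)))
          return delta_f, err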

  4. Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation.

    PubMed

    Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong

    2018-02-01

    Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens into the GATE MC toolkit to improve both the sensitivity and spatial resolution for optical imaging simulation. The lens implemented into GATE was validated against the ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could improve the image quality of bioluminescence and fluorescence significantly as compared with pinhole optics. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
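    As a paraxial sanity check for such a lens model, the ray-transfer (ABCD) matrix of a thick biconvex lens can be assembled from two spherical refractions and a translation; the radii, thickness and refractive index below are illustrative and unrelated to the simulated system.

        import numpy as np

        def refraction(n1: float, n2: float, R: float) -> np.ndarray:
            """Paraxial refraction at a spherical surface of radius R
            (R > 0 when the centre of curvature lies beyond the surface)."""
            return np.array([[1.0, 0.0],
                             [(n1 - n2) / (n2 * R), n1 / n2]])

        def translation(d: float) -> np.ndarray:
            """Free propagation over a geometric distance d."""
            return np.array([[1.0, d],
                             [0.0, 1.0]])

        # Illustrative biconvex lens: |R| = 50 mm on both faces, 5 mm thick, n = 1.5, in air
        n_air, n_glass = 1.0, 1.5
        R1, R2, t = 50.0, -50.0, 5.0
        lens = refraction(n_glass, n_air, R2) @ translation(t) @ refraction(n_air, n_glass, R1)

        # Effective focal length from the system matrix element C: f = -1/C (exit medium is air)
        f = -1.0 / lens[1, 0]
        print(f"Effective focal length: {f:.1f} mm")   # ~50.8 mm, matching the lensmaker formula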

  5. Complex molecular assemblies at hand via interactive simulations.

    PubMed

    Delalande, Olivier; Férey, Nicolas; Grasseau, Gilles; Baaden, Marc

    2009-11-30

    Studying complex molecular assemblies interactively is becoming an increasingly appealing approach to molecular modeling. Here we focus on interactive molecular dynamics (IMD) as a textbook example for interactive simulation methods. Such simulations can be useful in exploring and generating hypotheses about the structural and mechanical aspects of biomolecular interactions. For the first time, we carry out low-resolution coarse-grain IMD simulations. Such simplified modeling methods currently appear to be more suitable for interactive experiments and represent a well-balanced compromise between an important gain in computational speed versus a moderate loss in modeling accuracy compared to higher resolution all-atom simulations. This is particularly useful for initial exploration and hypothesis development for rare molecular interaction events. We evaluate which applications are currently feasible using molecular assemblies from 1900 to over 300,000 particles. Three biochemical systems are discussed: the guanylate kinase (GK) enzyme, the outer membrane protease T and the soluble N-ethylmaleimide-sensitive factor attachment protein receptors complex involved in membrane fusion. We induce large conformational changes, carry out interactive docking experiments, probe lipid-protein interactions and are able to sense the mechanical properties of a molecular model. Furthermore, such interactive simulations facilitate exploration of modeling parameters for method improvement. For the purpose of these simulations, we have developed a freely available software library called MDDriver. It uses the IMD protocol from NAMD and facilitates the implementation and application of interactive simulations. With MDDriver it becomes very easy to render any particle-based molecular simulation engine interactive. Here we use its implementation in the Gromacs software as an example. Copyright 2009 Wiley Periodicals, Inc.

  6. The MassiveBlack-II simulation: The evolution of haloes and galaxies to z ~ 0

    DOE PAGES

    Khandai, Nishikanta; Di Matteo, Tiziana; Croft, Rupert; ...

    2015-04-24

    We investigate the properties and clustering of halos, galaxies and black holes to z = 0 in the high resolution hydrodynamical simulation MassiveBlack-II (MBII). MBII evolves a ΛCDM cosmology in a cubical comoving volume V_box = (100 Mpc/h)³. It is the highest resolution simulation of this size which includes a self-consistent model for star formation, black hole accretion and associated feedback. We provide a simulation browser web application which enables interactive search and tagging of the halos, subhalos and their properties and publicly release our galaxy catalogs to the scientific community. Our analysis of the halo mass function in MBII reveals that baryons have strong effects, with changes in the halo abundance of 20-35% below the knee of the mass function (M_halo < 10^13.2 M_⊙/h at z = 0) when compared to dark-matter-only simulations. We provide a fitting function for the halo MF out to redshift z = 11 and discuss its limitations.

  7. Assessment of temperature and precipitation over Mediterranean Area and Black Sea with non hydrostatic high resolution regional climate model

    NASA Astrophysics Data System (ADS)

    Mercogliano, P.; Montesarchio, M.; Zollo, A.; Bucchignani, E.

    2012-12-01

    In the framework of the Italian GEMINA Project (a program of expansion and development of the Euro-Mediterranean Center for Climate Change, CMCC), high resolution climate simulations have been performed, with the aim of furthering knowledge in the field of climate variability at regional scale, its causes and impacts. CMCC is a non-profit centre whose aims are the promotion and coordination of research and scientific activities in the field of climate change. In this work, we show results of numerical simulations performed over a very wide area (13W-46E; 29-56N), which includes all of the Mediterranean Sea, at a spatial resolution of 14 km, using the regional climate model COSMO-CLM. It is a non-hydrostatic model for the simulation of atmospheric processes, developed by the DWD (Germany) for weather forecast services; subsequently, the model has been updated by the CLM-Community in order to develop climate applications. It is the only documented numerical model system in Europe designed for spatial resolutions down to 1 km, with a range of applicability encompassing operational numerical weather prediction, regional climate modelling, the dispersion of trace gases and aerosols, and idealised studies; it is applicable in all regions of the world and can be driven by a wide range of available climate simulations from global climate and NWP models. Different reasons justify the development of a regional model: the first is the increasing number of works in the literature asserting that regional models also have the features to provide a more detailed description of climate extremes, which are often more important than mean values for natural and human systems. The second is that high resolution modelling shows adequate features to provide information for impact assessment studies. At CMCC, regional climate modelling is part of an integrated simulation system and has been used in different European and African projects to provide qualitative and quantitative evaluation of hydrogeological and public health risks. A simulation covering the period 1971-2000 and driven by ERA40 reanalysis has been performed, in order to assess the capability of the model to reproduce the present climate with "perfect boundary conditions". A comparison, in terms of 2-metre temperature and precipitation, with the E-OBS dataset will be shown and discussed, in order to analyze the capability to simulate the main features of the observed climate over a wide area at high spatial resolution. Then, a comparison between the results of COSMO-CLM driven by the global model CMCC-MED (whose atmospheric component is ECHAM5) and by ERA40 will be provided for a characterization of the errors induced by the global model. Finally, climate projections for the examined area for the XXI century, considering the RCP4.5 emission scenario, will be provided. In this work a special emphasis will be placed on the analysis of the capability to reproduce not only the average climate patterns but also extremes of the present and future climate, in terms of temperature, precipitation and wind.

  8. High resolution simulations of energy absorption in dynamically loaded cellular structures

    NASA Astrophysics Data System (ADS)

    Winter, R. E.; Cotton, M.; Harris, E. J.; Eakins, D. E.; McShane, G.

    2017-03-01

    Cellular materials have potential application as absorbers of energy generated by high velocity impact. CTH, a Sandia National Laboratories Code which allows very severe strains to be simulated, has been used to perform very high resolution simulations showing the dynamic crushing of a series of two-dimensional, stainless steel metal structures with varying architectures. The structures are positioned to provide a cushion between a solid stainless steel flyer plate with velocities ranging from 300 to 900 m/s, and an initially stationary stainless steel target. Each of the alternative architectures under consideration was formed by an array of identical cells each of which had a constant volume and a constant density. The resolution of the simulations was maximised by choosing a configuration in which one-dimensional conditions persisted for the full period over which the specimen densified, a condition which is most readily met by impacting high density specimens at high velocity. It was found that the total plastic flow and, therefore, the irreversible energy dissipated in the fully densified energy absorbing cell, increase (a) as the structure becomes more rodlike and less platelike and (b) as the impact velocity increases. Sequential CTH images of the deformation processes show that the flow of the cell material may be broadly divided into macroscopic flow perpendicular to the compression direction and jetting-type processes (microkinetic flow) which tend to predominate in rod and rodlike configurations and also tend to play an increasing role at increased strain rates. A very simple analysis of a configuration in which a solid flyer impacts a solid target provides a baseline against which to compare and explain features seen in the simulations. The work provides a basis for the development of energy absorbing structures for application in the 200-1000 m/s impact regime.

  9. The Forest Vegetation Simulator: A review of its structure, content, and applications

    Treesearch

    Nicholas L. Crookston; Gary E. Dixon

    2005-01-01

    The Forest Vegetation Simulator (FVS) is a distance-independent, individual-tree forest growth model widely used in the United States to support management decisionmaking. Stands are the basic projection unit, but the spatial scope can be many thousands of stands. The temporal scope is several hundred years at a resolution of 5-10 years. Projections start with a...

  10. McStas 1.1: a tool for building neutron Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.

    2000-03-01

    McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.

  11. The fusion of satellite and UAV data: simulation of high spatial resolution band

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery from satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a simulation of the panchromatic band from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
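    A minimal sketch of the two steps described above: simulating a panchromatic band as a weighted combination of the RGB channels, then injecting its spatial detail into upsampled multispectral bands with a simple component-substitution step (of which Gram-Schmidt pansharpening is a refinement). The weights are illustrative and would normally be tuned to the sensors involved.

        import numpy as np

        def simulate_pan(rgb: np.ndarray, weights=(0.3, 0.4, 0.3)) -> np.ndarray:
            """Simulate a panchromatic band as a weighted sum of the UAV RGB channels.
            rgb: float array of shape (rows, cols, 3)."""
            w = np.asarray(weights, dtype=float)
            return np.tensordot(rgb, w / w.sum(), axes=([2], [0]))

        def component_substitution(ms_upsampled: np.ndarray, pan: np.ndarray) -> np.ndarray:
            """Inject the difference between the high-resolution pan band and a crude
            intensity component into each upsampled multispectral band."""
            synthetic = ms_upsampled.mean(axis=2)                  # crude intensity component
            gain = ms_upsampled / (synthetic[..., None] + 1e-6)    # band-wise injection gains
            return ms_upsampled + gain * (pan - synthetic)[..., None]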

  12. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large 3D time-dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization software ParaView and Vapor, which allow us to read and handle such data volumes. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  13. Evaluation of a multi-scale WRF-CAM5 simulation during the 2010 East Asian Summer Monsoon

    DOE PAGES

    Campbell, Patrick; Zhang, Yang; Wang, Kai; ...

    2017-09-08

    The Weather Research and Forecasting model with Chemistry (WRF-Chem) with the physics package of the Community Atmosphere Model Version 5 (CAM5) has been applied at multiple scales over Eastern China (EC) and the Yangtze River Delta (YRD) to evaluate how increased horizontal resolution with physics designed for a coarser resolution climate model impacts aerosols and clouds, and the resulting precipitation characteristics and performance during the 2010 East Asian Summer Monsoon (EASM). Despite large underpredictions in surface aerosol concentrations and aerosol optical depth, there is good spatial agreement with surface observations of chemical predictions, and increasing spatial resolution tends to improve performance. Model bias and normalized root mean square values for precipitation predictions are relatively small, but there are significant differences when comparing modeled and observed probability density functions for precipitation in EC and YRD. Increasing model horizontal resolution tends to reduce model bias and error for precipitation predictions. The surface and column aerosol loading is maximized between about 32°N and 42°N in early to mid-May during the 2010 EASM, and then shifts north while decreasing in magnitude during July and August. Changing model resolution moderately changes the spatiotemporal relationships between aerosols, cloud properties, and precipitation during the EASM, thus demonstrating the importance of model grid resolution in simulating EASM circulation and rainfall patterns over EC and the YRD. In conclusion, results from this work demonstrate the capability and limitations in the aerosol, cloud, and precipitation representation of WRF-CAM5 for regional-scale applications down to relatively fine horizontal resolutions. Further WRF-CAM5 model development and application in this area is needed.

  14. High quality transmission Kikuchi diffraction analysis of deformed alloys - Case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokarski, Tomasz, E-mail: tokarski@agh.edu.pl

    Modern scanning electron microscopes (SEM) equipped with thermally assisted field emission guns (Schottky FEG) are capable of imaging with a resolution in the range of several nanometers or better. Simultaneously, a high electron beam current can be used, which enables fast chemical and crystallographic analysis with a higher resolution than is normally offered by SEM with a tungsten cathode. The resolution limit of EDS and EBSD analysis is currently related to materials physics, particularly to the electron-specimen interaction volume. The application of thin, electron-transparent specimens, instead of bulk samples, improves the resolution and allows for the detailed analysis of very fine microstructural features. Besides the typical imaging mode, it is possible to use a standard EBSD camera in such a configuration that only transmitted and scattered electrons are detected. This modern approach was successfully applied to various materials giving rise to significant resolution improvement, especially for light-element magnesium-based alloys. This paper presents an insight into the application of the transmission Kikuchi diffraction (TKD) technique applied to the most troublesome, heavily-deformed materials. In particular, the values of the highest possible acquisition rates for high resolution and high quality mapping were estimated within typical imaging conditions for stainless steel and a magnesium-yttrium alloy. - Highlights: •Monte Carlo simulations were used to simulate EBSD camera intensity for various measuring conditions. •Transmission Kikuchi diffraction parameters were evaluated for highly deformed, light and heavy element based alloys. •High quality maps with 20 nm spatial resolution were acquired for Mg and Fe based alloys. •High speed TKD measurements were performed at acquisition rates comparable to reflection EBSD.

  15. Evaluation of a multi-scale WRF-CAM5 simulation during the 2010 East Asian Summer Monsoon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Patrick; Zhang, Yang; Wang, Kai

    The Weather Research and Forecasting model with Chemistry (WRF-Chem) with the physics package of the Community Atmosphere Model Version 5 (CAM5) has been applied at multiple scales over Eastern China (EC) and the Yangtze River Delta (YRD) to evaluate how increased horizontal resolution with physics designed for a coarser resolution climate model impacts aerosols and clouds, and the resulting precipitation characteristics and performance during the 2010 East Asian Summer Monsoon (EASM). Despite large underpredictions in surface aerosol concentrations and aerosol optical depth, there is good spatial agreement with surface observations of chemical predictions, and increasing spatial resolution tends to improve performance. Model bias and normalized root mean square values for precipitation predictions are relatively small, but there are significant differences when comparing modeled and observed probability density functions for precipitation in EC and YRD. Increasing model horizontal resolution tends to reduce model bias and error for precipitation predictions. The surface and column aerosol loading is maximized between about 32°N and 42°N in early to mid-May during the 2010 EASM, and then shifts north while decreasing in magnitude during July and August. Changing model resolution moderately changes the spatiotemporal relationships between aerosols, cloud properties, and precipitation during the EASM, thus demonstrating the importance of model grid resolution in simulating EASM circulation and rainfall patterns over EC and the YRD. Results from this work demonstrate the capability and limitations in the aerosol, cloud, and precipitation representation of WRF-CAM5 for regional-scale applications down to relatively fine horizontal resolutions. Further WRF-CAM5 model development and application in this area is needed.

  16. Evaluation of a multi-scale WRF-CAM5 simulation during the 2010 East Asian Summer Monsoon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Patrick; Zhang, Yang; Wang, Kai

    The Weather Research and Forecasting model with Chemistry (WRF-Chem) with the physics package of the Community Atmosphere Model Version 5 (CAM5) has been applied at multiple scales over Eastern China (EC) and the Yangtze River Delta (YRD) to evaluate how increased horizontal resolution with physics designed for a coarser resolution climate model impacts aerosols and clouds, and the resulting precipitation characteristics and performance during the 2010 East Asian Summer Monsoon (EASM). Despite large underpredictions in surface aerosol concentrations and aerosol optical depth, there is good spatial agreement with surface observations of chemical predictions, and increasing spatial resolution tends to improve performance. Model bias and normalized root mean square values for precipitation predictions are relatively small, but there are significant differences when comparing modeled and observed probability density functions for precipitation in EC and YRD. Increasing model horizontal resolution tends to reduce model bias and error for precipitation predictions. The surface and column aerosol loading is maximized between about 32°N and 42°N in early to mid-May during the 2010 EASM, and then shifts north while decreasing in magnitude during July and August. Changing model resolution moderately changes the spatiotemporal relationships between aerosols, cloud properties, and precipitation during the EASM, thus demonstrating the importance of model grid resolution in simulating EASM circulation and rainfall patterns over EC and the YRD. In conclusion, results from this work demonstrate the capability and limitations in the aerosol, cloud, and precipitation representation of WRF-CAM5 for regional-scale applications down to relatively fine horizontal resolutions. Further WRF-CAM5 model development and application in this area is needed.

  17. Fast I/O for Massively Parallel Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew T.

    1996-01-01

    The two primary goals for this report were the design, construction and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work was performed on the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.

  18. The predictive power of SIMION/SDS simulation software for modeling ion mobility spectrometry instruments

    NASA Astrophysics Data System (ADS)

    Lai, Hanh; McJunkin, Timothy R.; Miller, Carla J.; Scott, Jill R.; Almirall, José R.

    2008-09-01

    The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program, in conjunction with SolidWorks® with COSMOSFloWorks® fluid dynamics software, to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene, 2,7-dinitrofluorene, and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: (1) a drift:carrier gas flow rate study assesses the ability of SIMION/SDS to correctly predict the ion drift times; (2) a drift gas composition study evaluates the accuracy in predicting the resolution; (3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air to within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.

  19. The Predictive Power of SIMION/SDS Simulation Software for Modeling Ion Mobility Spectrometry Instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanh Lai; Timothy R. McJunkin; Carla J. Miller

    2008-09-01

    The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program, in conjunction with SolidWorks® with COSMOSFloWorks® fluid dynamics software, to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: 1) a drift:carrier gas flow rate study assesses the ability of SIMION/SDS to correctly predict the ion drift times; 2) a drift gas composition study evaluates the accuracy in predicting the resolution; and 3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air to within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.

  20. Millimeter spatial resolution in vivo sodium MRI of the human eye at 7 T using a dedicated radiofrequency transceiver array.

    PubMed

    Wenz, Daniel; Kuehne, Andre; Huelnhagen, Till; Nagel, Armin M; Waiczies, Helmar; Weinberger, Oliver; Oezerdem, Celal; Stachs, Oliver; Langner, Soenke; Seeliger, Erdmann; Flemming, Bert; Hodge, Russell; Niendorf, Thoralf

    2018-08-01

    The aim of this study was to achieve millimeter spatial resolution sodium in vivo MRI of the human eye at 7 T using a dedicated six-channel transceiver array. We present a detailed description of the radiofrequency coil design, along with electromagnetic field and specific absorption ratio simulations, data validation, and in vivo application. Electromagnetic field and specific absorption ratio simulations were performed. Transmit field uniformity was optimized by using a multi-objective genetic algorithm. Transmit field mapping was conducted using a phase-sensitive method. An in vivo feasibility study was carried out with a 3-dimensional density-adapted projection reconstruction imaging technique. The measured transmit field distribution agrees well with the one obtained from simulations. The specific absorption ratio simulations confirm that the radiofrequency coil is safe for clinical use. Our radiofrequency coil is light and conforms to an average human head. High spatial resolution (nominal 1.4 and 1.0 mm isotropic) sodium in vivo images of the human eye were acquired within scan times suitable for clinical applications (∼10 min). The three most important eye compartments in the context of sodium physiology were clearly delineated in all of the images: the vitreous humor, the aqueous humor, and the lens. Our results provide encouragement for further clinical studies. The implications for research into eye diseases including ocular melanoma, cataract, and glaucoma are discussed. Magn Reson Med 80:672-684, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  1. Hybrid particle-continuum simulations coupling Brownian dynamics and local dynamic density functional theory.

    PubMed

    Qi, Shuanhu; Schmid, Friederike

    2017-11-08

    We present a multiscale hybrid particle-field scheme for the simulation of relaxation and diffusion behavior of soft condensed matter systems. It combines particle-based Brownian dynamics and field-based local dynamics in an adaptive sense such that particles can switch their level of resolution on the fly. The switching of resolution is controlled by a tuning function which can be chosen at will according to the geometry of the system. As an application, the hybrid scheme is used to study the kinetics of interfacial broadening of a polymer blend, and is validated by comparing the results to the predictions from pure Brownian dynamics and pure local dynamics calculations.
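    The switching between resolutions is typically governed by a smooth, space-dependent weighting function; below is a minimal sketch of one common choice (a cos² ramp between the particle-resolved and field regions), which is illustrative and not necessarily the tuning function used by the authors.

        import numpy as np

        def resolution_weight(x: np.ndarray, d_particle: float, d_hybrid: float) -> np.ndarray:
            """Smooth switching function lambda(x) in [0, 1]: 1 in the particle-resolved
            region (|x| < d_particle), 0 in the field/coarse region, cos^2 ramp in the
            hybrid region of width d_hybrid in between."""
            r = np.abs(x)
            return np.where(
                r < d_particle, 1.0,
                np.where(r > d_particle + d_hybrid, 0.0,
                         np.cos(0.5 * np.pi * (r - d_particle) / d_hybrid) ** 2))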

  2. Development of High-Resolution UV-VIS Diagnostics for Space Plasma Simulation

    NASA Astrophysics Data System (ADS)

    Taylor, Andrew; Batishchev, Oleg

    2012-10-01

    Non-invasive far-UV-VIS emission allows remote diagnostics of plasmas, which is particularly important for space applications. Accurate vacuum tank space plasma simulations require monochromators with high spectral resolution (better than 0.01 Å) to capture important details of atomic and ionic lines, such as Ly-alpha, etc. We are building a new system based on the previous work [1], and will discuss the development of a spectrometry system that combines a single-pass vacuum far-UV-NIR spectrometer and a tunable Fabry-Perot etalon. [1] O. Batishchev and J.L. Cambier, Experimental Study of the Mini-Helicon Thruster, Air Force Research Laboratory Report, AFRL-RZ-ED-TR-2009-0020, 2009.

  3. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, owing to their computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is affected by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  4. Probabilistic Downscaling of Remote Sensing Data with Applications for Multi-Scale Biogeochemical Flux Modeling.

    PubMed

    Stoy, Paul C; Quaife, Tristan

    2015-01-01

    Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from the estimate created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
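    A minimal sketch of Tikhonov-regularized downscaling under simple assumptions (a block-averaging observation operator and a discrete-Laplacian smoothness penalty); the operators and γ are illustrative, not the configuration used in the study.

        import numpy as np
        from scipy.sparse import identity, kron, diags, coo_matrix
        from scipy.sparse.linalg import spsolve

        def laplacian_2d(n: int):
            """Discrete 2D Laplacian on an n x n grid (5-point stencil)."""
            d1 = diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
            return kron(identity(n), d1) + kron(d1, identity(n))

        def block_average_operator(n: int, k: int):
            """Averaging operator mapping an n x n fine grid to an (n//k) x (n//k) coarse grid."""
            m = n // k
            rows, cols, vals = [], [], []
            for i in range(m):
                for j in range(m):
                    r = i * m + j
                    for di in range(k):
                        for dj in range(k):
                            rows.append(r)
                            cols.append((i * k + di) * n + (j * k + dj))
                            vals.append(1.0 / k ** 2)
            return coo_matrix((vals, (rows, cols)), shape=(m * m, n * n)).tocsr()

        def tikhonov_downscale(b_coarse: np.ndarray, n: int, k: int, gamma: float) -> np.ndarray:
            """Solve (A^T A + gamma L^T L) x = A^T b for the fine-scale field x."""
            A = block_average_operator(n, k)
            L = laplacian_2d(n)
            x = spsolve((A.T @ A + gamma * (L.T @ L)).tocsc(), A.T @ b_coarse.ravel())
            return x.reshape(n, n)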

  5. Probabilistic Downscaling of Remote Sensing Data with Applications for Multi-Scale Biogeochemical Flux Modeling

    PubMed Central

    Stoy, Paul C.; Quaife, Tristan

    2015-01-01

    Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from the estimate created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes. PMID:26067835

  6. A new framework for the analysis of continental-scale convection-resolving climate simulations

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.

    2017-12-01

    High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection by the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive and data storage becomes unbearably voluminous. Hence new approaches to perform analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers such as hybrid CPU-GPU architectures. For this the COSMO model has been adapted to run on such architectures [2]. We then alleviate the I/O bottleneck by employing a simulation data-virtualizer (SDaVi) that allows storage (space) to be traded off against computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses. All this is done transparently from the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is to say a model that produces identical results on different architectures to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures. [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014. [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev., 3393-3412, 2016. [3] S. Di Girolamo, P. Schmid, T. Schulthess, T. Hoefler. Virtualized Big Data: Reproducing Simulation Output on Demand. Submitted to the 23rd ACM Symposium on PPoPP 18, Vienna, Austria. [4] A. Arteaga, O. Fuhrer, T. Hoefler. Designing Bit-Reproducible Portable High-Performance Applications. IEEE 28th IPDPS, 2014.

  7. Accelerating cross-validation with total variation and its application to super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki

    2017-12-01

    We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
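    For context, a standard form of the penalized objective and the leave-one-out cross-validation error that such approximations target can be written, in generic notation that may differ from the paper's, as

        \hat{x} = \arg\min_x \; \tfrac{1}{2}\,\lVert y - A x \rVert_2^2
                  + \lambda_1 \lVert x \rVert_1
                  + \lambda_{\mathrm{TV}} \sum_{(i,j) \in \mathcal{E}} \lvert x_i - x_j \rvert ,
        \qquad
        \mathrm{CVE} = \frac{1}{M} \sum_{\mu=1}^{M} \bigl( y_\mu - a_\mu^{\top} \hat{x}^{\setminus \mu} \bigr)^2 ,

    where \hat{x}^{\setminus \mu} denotes the estimator refit with measurement \mu held out; the approximation avoids the M separate refits that a literal evaluation of the CVE would require.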

  8. Simulation of Anomalous Regional Climate Events with a Variable Resolution Stretched Grid GCM

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.

    1999-01-01

    The stretched-grid approach provides efficient down-scaling and consistent interactions between global and regional scales by using one variable-resolution model for the integrations. It is a workable alternative to the widely used nested-grid approach introduced over a decade ago as a pioneering step in regional climate modeling. A variable-resolution General Circulation Model (GCM) employing a stretched grid, with enhanced resolution over the US as the area of interest, is used for simulating two anomalous regional climate events, the US summer drought of 1988 and flood of 1993. A special mode of integration using a stretched-grid GCM and data assimilation system is developed that allows the nested-grid framework to be imitated. The mode is useful for inter-comparison purposes and for underlining the differences between these two approaches. The 1988 and 1993 integrations are performed for the two-month period starting from mid May. The regional resolution used in most of the experiments is 60 km. The major goal and the result of the study is obtaining efficient down-scaling over the area of interest. The monthly mean prognostic regional fields for the stretched-grid integrations are remarkably close to those of the verifying analyses. Simulated precipitation patterns are successfully verified against gauge precipitation observations. The impact of a finer 40 km regional resolution is investigated for the 1993 integration and an example of recovering subregional precipitation is presented. The obtained results show that the global variable-resolution stretched-grid approach is a viable candidate for regional and subregional climate studies and applications.

  9. Optimization of Collision Detection in Surgical Simulations

    NASA Astrophysics Data System (ADS)

    Custură-Crăciun, Dan; Cochior, Daniel; Neagu, Corneliu

    2014-11-01

    Just as flight and spacecraft simulators are already a standard, we expect that surgical simulators will soon become a standard in medical applications. A simulation's quality is strongly related to its image quality as well as its degree of realism. Increased quality requires increased resolution and rendering speed but, more importantly, a larger number of mathematical operations. To make this possible, we need not only more powerful computers but, above all, more optimization of the calculation process. A simulator executes one of its most complex sets of calculations each time it detects a contact between virtual objects; optimization of collision detection is therefore critical to the speed of a simulator and hence to its quality.
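    One widely used optimization of this kind is a cheap broad-phase cull with axis-aligned bounding boxes, so that the expensive exact (narrow-phase) contact tests run only on candidate pairs; the sketch below is a generic illustration, not the method of the paper.

        from dataclasses import dataclass
        from itertools import combinations

        @dataclass
        class AABB:
            lo: tuple  # (x, y, z) minimum corner
            hi: tuple  # (x, y, z) maximum corner

        def overlaps(a: AABB, b: AABB) -> bool:
            """Cheap axis-aligned bounding-box test used to cull object pairs before
            the expensive exact mesh-mesh intersection test."""
            return all(a.lo[k] <= b.hi[k] and b.lo[k] <= a.hi[k] for k in range(3))

        def broad_phase(objects: dict) -> list:
            """Return candidate colliding pairs; only these go on to narrow-phase tests."""
            return [(i, j) for (i, a), (j, b) in combinations(objects.items(), 2)
                    if overlaps(a, b)]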

  10. Application of the CloudSat and NEXRAD Radars Toward Improvements in High Resolution Operational Forecasts

    NASA Technical Reports Server (NTRS)

    Molthan, A. L.; Haynes, J. A.; Case, J. L.; Jedlovec, G. L.; Lapenta, W. M.

    2008-01-01

    As computational power increases, operational forecast models are performing simulations with higher spatial resolution, allowing for the transition from sub-grid scale cloud parameterizations to an explicit forecast of cloud characteristics and precipitation through the use of single- or multi-moment bulk water microphysics schemes. Investments in space-borne and terrestrial remote sensing have produced the NASA CloudSat Cloud Profiling Radar and the NOAA National Weather Service NEXRAD system, each providing observations related to the bulk properties of clouds and precipitation through measurements of reflectivity. CloudSat and NEXRAD system radars observed light to moderate snowfall in association with a cold-season, midlatitude cyclone traversing the Central United States in February 2007. Such weather systems are responsible for widespread cloud cover and various types of precipitation, are of economic consequence, and pose a challenge to operational forecasters. This event is simulated with the Weather Research and Forecasting (WRF) Model, utilizing the NASA Goddard Cumulus Ensemble microphysics scheme. Comparisons are made between WRF-simulated and observed reflectivity available from the CloudSat and NEXRAD systems. The application of CloudSat reflectivity is made possible through the QuickBeam radiative transfer model, applied cautiously in light of single scattering characteristics and spherical target assumptions. Significant differences are noted within modeled and observed cloud profiles, based upon simulated reflectivity, and modifications to the single-moment scheme are tested through a supplemental WRF forecast that incorporates a temperature-dependent snow crystal size distribution.

  11. Improvement of the energy resolution of pixelated CdTe detectors for applications in 0νββ searches

    NASA Astrophysics Data System (ADS)

    Gleixner, T.; Anton, G.; Filipenko, M.; Seller, P.; Veale, M. C.; Wilson, M. D.; Zang, A.; Michel, T.

    2015-07-01

    Experiments trying to detect 0νββ are very challenging. Their requirements include a good energy resolution and a good detection efficiency. With current fine pixelated CdTe detectors there is a trade-off between the energy resolution and the detection efficiency, which limits their performance. It will be shown with simulations that this problem can be largely negated by analysing the cathode signal, which increases the optimal sensor thickness. We will compare different types of fine pixelated CdTe detectors (Timepix, Dosepix, HEXITEC) from this point of view.

  12. The Effect of Rainfall Measurement Technique and Its Spatiotemporal Resolution on Discharge Predictions in the Netherlands

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Brauer, C.; Overeem, A.; Sassi, M.; Rios Gaona, M. F.

    2014-12-01

    Several rainfall measurement techniques are available for hydrological applications, each with its own spatial and temporal resolution. We investigated the effect of these spatiotemporal resolutions on discharge simulations in lowland catchments by forcing the recently developed Wageningen Lowland Runoff Simulator (WALRUS) with rainfall data from gauges, radars and microwave links. WALRUS is a rainfall-runoff model accounting for hydrological processes relevant to areas with shallow groundwater (e.g. groundwater-surface water feedback). Here, we used WALRUS for case studies in a freely draining lowland catchment and a polder with controlled water levels. We used rain gauge networks with automatic (hourly resolution but low spatial density) and manual gauges (high spatial density but daily resolution). Operational (real-time) and climatological (gauge-adjusted) C-band radar products and country-wide rainfall maps derived from microwave link data from a cellular telecommunication network were also used. Discharges simulated with these different inputs were compared to observations. We also investigated the effect of spatiotemporal resolution with a high-resolution X-band radar data set for catchments of different sizes. Uncertainty in rainfall forcing is a major source of uncertainty in discharge predictions, both with lumped and with distributed models. For lumped rainfall-runoff models, the main source of input uncertainty is associated with the way in which (effective) catchment-average rainfall is estimated. When catchments are divided into sub-catchments, rainfall spatial variability can become more important, especially during convective rainfall events, leading to spatially varying catchment wetness and spatially varying contributions of quick flow routes. Improving rainfall measurements and their spatiotemporal resolution can improve the performance of rainfall-runoff models, indicating their potential for reducing flood damage through real-time control.

  13. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2013-12-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation type data from the European Space Agency (ESA) GlobCover Project, and 30 arc-sec Leaf Area Index and Fraction of Absorbed Photosynthetically Active Radiation data from the ESA GlobCarbon Project. Simulations are carried out for the Metropolitan Area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated to each grid. ARPS is initialized using the Global Forecasting System with 0.5°-resolution data from the National Center of Environmental Prediction, which is also used every 3 h as lateral boundary condition. Topographic shading is turned on and two soil layers with depths of 0.01 and 1.0 m are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering the period from 6 to 7 September 2007 are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, topographic and land-use databases and grid resolution. Our comparisons show overall good agreement between simulated and observed data and also indicate that the low resolution of the 30 arc-sec soil database from United States Geological Survey, the soil moisture and skin temperature initial conditions assimilated from the GFS analyses and the synoptic forcing on the lateral boundaries of the finer grids may affect an adequate spatial description of the meteorological variables.

  14. Parameter uncertainty in simulations of extreme precipitation and attribution studies.

    NASA Astrophysics Data System (ADS)

    Timmermans, B.; Collins, W. D.; O'Brien, T. A.; Risser, M. D.

    2017-12-01

    The attribution of extreme weather events, such as heavy rainfall, to anthropogenic influence involves the analysis of their probability in simulations of climate. The climate models used, however, such as the Community Atmosphere Model (CAM), employ approximate physics that gives rise to "parameter uncertainty": uncertainty about the most accurate or optimal values of numerical parameters within the model. In particular, approximate parameterisations for convective processes are well known to be influential in the simulation of precipitation extremes. Towards examining the impact of this source of uncertainty on attribution studies, we investigate the importance of components of parameterisations relating to deep and shallow convection, and cloud and aerosol microphysics in CAM, through their associated tuning parameters. We hypothesise that as numerical resolution is increased, the change in the proportion of variance induced by perturbed parameters associated with the respective components is consistent with the decreasing applicability of the underlying hydrostatic assumptions. For example, the relative influence of deep convection should diminish as the resolution approaches that at which convection can be resolved numerically (~10 km). We quantify the relationship between the relative proportion of variance induced and numerical resolution by conducting computer experiments that examine precipitation extremes over the contiguous U.S. In order to mitigate the enormous computational burden of running ensembles of long climate simulations, we use variable-resolution CAM and employ both extreme value theory and surrogate modelling techniques ("emulators"). We discuss the implications of the relationship between parameterised convective processes and resolution both in the context of attribution studies and progression towards models that fully resolve convection.

  15. Development of a Compton camera for safeguards applications in a pyroprocessing facility

    NASA Astrophysics Data System (ADS)

    Park, Jin Hyung; Kim, Young Su; Kim, Chan Hyeong; Seo, Hee; Park, Se-Hwan; Kim, Ho-Dong

    2014-11-01

    The Compton camera has potential for localizing nuclear materials in a large pyroprocessing facility due to its unique Compton kinematics-based electronic collimation method. Our R&D group, KAERI, and Hanyang University have made an effort to develop a scintillation-detector-based large-area Compton camera for safeguards applications. In the present study, a series of Monte Carlo simulations was performed with Geant4 in order to examine the effect of the detector parameters and the feasibility of using a Compton camera to obtain an image of the nuclear material distribution. Based on the simulation study, experimental studies were performed to assess the possibility of Compton imaging in accordance with the type of the crystal. Two different types of Compton cameras were fabricated and tested, with a pixelated type of LYSO(Ce) and a monolithic type of NaI(Tl). The conclusions of this study, as design rules for a large-area Compton camera, can be summarized as follows: 1) the energy resolution, rather than the position resolution, of the component detector was the limiting factor for the imaging resolution; 2) the Compton imaging system needs to be placed as close as possible to the source location; and 3) both pixelated and monolithic types of crystals can be utilized; however, the monolithic types require a stochastic-method-based position-estimating algorithm for improving the position resolution.
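    For reference, the electronic collimation exploits the standard Compton-scattering relation between the deposited energies and the scattering angle (generic form, not specific to this instrument):

        \cos\theta = 1 - m_e c^2 \left( \frac{1}{E_2} - \frac{1}{E_1 + E_2} \right),

    where E_1 is the energy deposited in the scatter detector, E_2 the energy absorbed in the absorber detector, and E_1 + E_2 the incident photon energy; each detected coincidence then constrains the source to lie on a cone of half-angle θ about the scatter axis.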

  16. The Feasibility of Conformal Thermal Therapy with Transurethral Ultrasound Heating Applicators and MR Temperature Feedback

    NASA Astrophysics Data System (ADS)

    Choy, Vanessa; Tang, Kee; Wachsmuth, Jeff; Chopra, Rajiv; Bronskill, Michael

    2006-05-01

    Transurethral thermal therapy offers a minimally invasive alternative for the treatment of prostate diseases including benign prostatic hyperplasia (BPH) and prostate cancer. Accurate heating of a targeted region of the gland can be achieved through the use of a rotating directional heating source incorporating planar ultrasound transducers, and the implementation of active temperature feedback along the beam direction during heating provided by magnetic resonance (MR) thermometry. The performance of this control method under practical constraints on the thermal feedback (angular alignment, spatial resolution, update rate for temperature feedback (imaging time), and the presence of noise) achievable with a clinical 1.5 T MR scanner was investigated in simulations. As expected, the control algorithm was most sensitive to the presence of noise, with noticeable degradation in its performance above ±2°C of temperature uncertainty. With respect to temporal resolution, acceptable performance was achieved at update rates of 5 s or faster. The control algorithm was relatively insensitive to reduced spatial resolution due to the broad nature of the heating pattern produced by the heating applicator; this provides an opportunity to improve the signal-to-noise ratio (SNR). The overall simulation results confirm that existing clinical 1.5 T MR imagers are capable of providing adequate temperature feedback for transurethral thermal therapy without special pulse sequences or enhanced imaging hardware.

  17. The GMAO OSSE for Weather Analysis and Prediction Using the High-Resolution GEOS-5 Nature Run

    NASA Technical Reports Server (NTRS)

    Errico, Ronald; Prive, Nikki; Da Silva Carvalho, David

    2017-01-01

    Applications of OSSEs: (1) estimate the effects of proposed instruments (and their competing designs) on analysis skill by exploiting the simulated environment, and (2) evaluate present and proposed techniques for data assimilation by exploiting the known truth.

  18. A generic framework for internet-based interactive applications of high-resolution 3-D medical image data.

    PubMed

    Liu, Danzhou; Hua, Kien A; Sugaya, Kiminobu

    2008-09-01

    With the advances in medical imaging devices, large volumes of high-resolution 3-D medical image data have been produced. These high-resolution 3-D data are very large in size, and severely stress storage systems and networks. Most existing Internet-based 3-D medical image interactive applications therefore deal with only low- or medium-resolution image data. While it is possible to download the whole 3-D high-resolution image data from the server and perform the image visualization and analysis at the client site, such an alternative is infeasible when the high-resolution data are very large, and many users concurrently access the server. In this paper, we propose a novel framework for Internet-based interactive applications of high-resolution 3-D medical image data. Specifically, we first partition the whole 3-D data into buckets, remove the duplicate buckets, and then, compress each bucket separately. We also propose an index structure for these buckets to efficiently support typical queries such as 3-D slicer and region of interest, and only the relevant buckets are transmitted instead of the whole high-resolution 3-D medical image data. Furthermore, in order to better support concurrent accesses and to improve the average response time, we also propose techniques for efficient query processing, incremental transmission, and client sharing. Our experimental study in simulated and realistic environments indicates that the proposed framework can significantly reduce storage and communication requirements, and can enable real-time interaction with remote high-resolution 3-D medical image data for many concurrent users.
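
    A minimal sketch of the partition-deduplicate-compress idea described above is shown below; the bucket size, hash, and codec are illustrative choices, and the index here is a plain dictionary rather than the paper's index structure.

    ```python
    import hashlib
    import zlib
    import numpy as np

    def build_buckets(volume, size=32):
        """Split a 3-D array into size^3 buckets, storing each distinct bucket only once."""
        store, index = {}, {}
        nz, ny, nx = volume.shape
        for z in range(0, nz, size):
            for y in range(0, ny, size):
                for x in range(0, nx, size):
                    block = volume[z:z+size, y:y+size, x:x+size]
                    key = hashlib.sha1(block.tobytes()).hexdigest()
                    if key not in store:                  # duplicate buckets stored once
                        store[key] = zlib.compress(block.tobytes())
                    index[(z, y, x)] = key                # bucket index for slicer/ROI queries
        return store, index

    vol = np.zeros((64, 64, 64), dtype=np.uint8)          # mostly empty: many duplicate buckets
    vol[20:40, 20:40, 20:40] = 200
    store, index = build_buckets(vol)
    print(len(index), "buckets,", len(store), "unique")
    ```

    A region-of-interest query then needs to fetch and decompress only the buckets whose index keys intersect the requested region, instead of the whole volume.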

  19. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by the computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing, either to significantly improve their computational efficiency or to make kinematic wave routing computationally feasible for high-resolution modeling.
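
    For reference, a minimal sketch of a MacCormack predictor-corrector step for the 1-D kinematic wave equation dh/dt + dq/dx = r with q = alpha*h^m is given below. The periodic boundary handling (via np.roll), the rating parameters, and the rainfall rate are illustrative assumptions, not the DHSVM implementation.

    ```python
    import numpy as np

    def maccormack_step(h, dx, dt, rain, alpha=1.0, m=5.0 / 3.0):
        """One MacCormack step for dh/dt + d(alpha*h**m)/dx = rain (periodic boundaries)."""
        q = alpha * h**m
        # predictor: forward difference in space
        h_pred = np.maximum(h - dt / dx * (np.roll(q, -1) - q) + dt * rain, 0.0)
        q_pred = alpha * h_pred**m
        # corrector: backward difference on predicted fluxes, then average with h
        h_corr = h_pred - dt / dx * (q_pred - np.roll(q_pred, 1)) + dt * rain
        return np.maximum(0.5 * (h + h_corr), 0.0)

    h = np.full(100, 0.01)                      # initial flow depth (m), uniform
    for _ in range(200):
        h = maccormack_step(h, dx=1.0, dt=0.05, rain=1e-5)
    print(h.mean())
    ```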

  20. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow, as well as particular regions of interest such as harbors. Simulations of many different applications have only been made possible by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed at much lower computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented the relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
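
    As an illustration of the refinement decision at the heart of AMR, the sketch below flags cells for refinement where the sea-surface-elevation gradient exceeds a threshold. The field, grid spacing, and threshold are illustrative and do not reproduce GeoClaw's actual refinement criteria.

    ```python
    import numpy as np

    def flag_for_refinement(eta, dx, threshold=1e-3):
        """Flag cells whose sea-surface-elevation gradient magnitude exceeds a threshold."""
        gy, gx = np.gradient(eta, dx)
        return np.hypot(gx, gy) > threshold

    eta = np.zeros((100, 100))
    eta[40:60, 40:60] = 0.5            # a localized surge bump
    print(flag_for_refinement(eta, dx=100.0).sum(), "cells flagged")
    ```

    Cells that are flagged would then be covered by finer grid patches, while the rest of the domain stays at coarse resolution.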

  1. A Theoretical Study and Numerical Simulation of a Quasi-Distributed Sensor Based on the Low-Finesse Fabry-Perot Interferometer: Frequency-Division Multiplexing

    PubMed Central

    Guillen Bonilla, José Trinidad; Guillen Bonilla, Alex; Rodríguez Betancourtt, Verónica M.; Guillen Bonilla, Héctor; Casillas Zamora, Antonio

    2017-01-01

    The application of optical fiber sensors in scientific and industrial instrumentation is very attractive due to their numerous advantages. In civil engineering, for example, quasi-distributed sensors made with optical fiber are used for reliable strain and temperature measurements. Here, a quasi-distributed sensor in the frequency domain is discussed. The sensor consists of a series of low-finesse Fabry-Perot interferometers, where each Fabry-Perot interferometer acts as a local sensor. The Fabry-Perot interferometers are formed by pairs of identical low-reflectivity Bragg gratings imprinted in a single-mode fiber. All interferometer sensors have different cavity lengths, which produces frequency-division multiplexing. The optical signal is the superposition of all interference patterns, which can be decomposed using the Fourier transform. The frequency spectrum was analyzed and the sensor's properties were defined. Following that, a quasi-distributed sensor was numerically simulated. Our sensor simulation considers the sensor properties, signal processing, system noise, and instrumentation. The numerical results show the behavior of resolution vs. signal-to-noise ratio. From our results, the Fabry-Perot sensor offers both a high-resolution and a low-resolution measurement; both are obtainable because the Fourier Domain Phase Analysis (FDPA) algorithm performs two evaluations of the Bragg wavelength shift. PMID:28420083
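
    A minimal sketch of the multiplexing principle is given below: each low-finesse cavity contributes an approximately cosinusoidal interference pattern whose period in wavenumber is set by its cavity length, so the superposed signal separates into distinct peaks under a Fourier transform. The cavity lengths, wavelength grid, and amplitudes are illustrative, and this is not the FDPA algorithm itself.

    ```python
    import numpy as np

    wavelength = np.linspace(1540e-9, 1560e-9, 4096)           # wavelength grid (m)
    k = 2.0 * np.pi / wavelength                                # wavenumber (rad/m)
    cavity_lengths = [250e-6, 520e-6, 810e-6]                   # distinct cavities (m)

    # superposition of the low-finesse (two-beam) interference patterns
    signal = sum(0.1 * np.cos(2.0 * k * L) for L in cavity_lengths)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))

    # each cavity shows up as a separate peak whose position scales with its length
    print(np.argsort(spectrum)[-3:])
    ```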

  2. 3D noninvasive ultrasound Joule heat tomography based on acousto-electric effect using unipolar pulses: a simulation study

    PubMed Central

    Yang, Renhuan; Li, Xu; Song, Aiguo; He, Bin; Yan, Ruqiang

    2012-01-01

    The electrical properties of biological tissues are highly sensitive to their physiological and pathological status, so it is important to image the electrical properties of biological tissues. However, the spatial resolution of conventional electrical impedance tomography (EIT) is generally poor. Recently, hybrid imaging modalities combining electrical conductivity contrast with ultrasonic resolution based on the acousto-electric effect have attracted considerable attention. In this study, we propose a novel three-dimensional (3D) noninvasive ultrasound Joule heat tomography (UJHT) approach based on the acousto-electric effect using unipolar ultrasound pulses. As the Joule heat density distribution is highly dependent on the conductivity distribution, an accurate and high-resolution mapping of the Joule heat density distribution is expected to give important information that is closely related to the conductivity contrast. The advantages of the proposed ultrasound Joule heat tomography using unipolar pulses include its simple inverse solution, better performance than UJHT using common bipolar pulses, and its independence from any a priori knowledge of the conductivity distribution of the imaged object. Computer simulation results show that, using the proposed method, it is feasible to perform high-spatial-resolution Joule heat imaging in an inhomogeneous conductive medium. Application of this technique to tumor scanning is also investigated by a series of computer simulations. PMID:23123757

  3. A Theoretical Study and Numerical Simulation of a Quasi-Distributed Sensor Based on the Low-Finesse Fabry-Perot Interferometer: Frequency-Division Multiplexing.

    PubMed

    Guillen Bonilla, José Trinidad; Guillen Bonilla, Alex; Rodríguez Betancourtt, Verónica M; Guillen Bonilla, Héctor; Casillas Zamora, Antonio

    2017-04-14

    The application of optical fiber sensors in scientific and industrial instrumentation is very attractive due to their numerous advantages. In civil engineering, for example, quasi-distributed sensors made with optical fiber are used for reliable strain and temperature measurements. Here, a quasi-distributed sensor in the frequency domain is discussed. The sensor consists of a series of low-finesse Fabry-Perot interferometers, where each Fabry-Perot interferometer acts as a local sensor. The Fabry-Perot interferometers are formed by pairs of identical low-reflectivity Bragg gratings imprinted in a single-mode fiber. All interferometer sensors have different cavity lengths, which produces frequency-division multiplexing. The optical signal is the superposition of all interference patterns, which can be decomposed using the Fourier transform. The frequency spectrum was analyzed and the sensor's properties were defined. Following that, a quasi-distributed sensor was numerically simulated. Our sensor simulation considers the sensor properties, signal processing, system noise, and instrumentation. The numerical results show the behavior of resolution vs. signal-to-noise ratio. From our results, the Fabry-Perot sensor offers both a high-resolution and a low-resolution measurement; both are obtainable because the Fourier Domain Phase Analysis (FDPA) algorithm performs two evaluations of the Bragg wavelength shift.

  4. Molecular dynamics simulations of large macromolecular complexes.

    PubMed

    Perilla, Juan R; Goh, Boon Chong; Cassidy, C Keith; Liu, Bo; Bernardi, Rafael C; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus

    2015-04-01

    Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone.

  5. WaterWorld, a spatial hydrological model applied at scales from local to global: key challenges to local application

    NASA Astrophysics Data System (ADS)

    Burke, Sophia; Mulligan, Mark

    2017-04-01

    WaterWorld is a widely used spatial hydrological policy support system. The last user census indicates regular use by 1029 institutions across 141 countries. A key feature of WaterWorld since 2001 is that it comes pre-loaded with all of the required data for simulation anywhere in the world at 1 km or 1 ha resolution. This means that it can be easily used, without specialist technical ability, to examine baseline hydrology and the impacts of scenarios for change or management interventions to support policy formulation, hence its labelling as a policy support system. WaterWorld is parameterised by an extensive global gridded database of more than 600 variables, developed from many sources since 1998, the so-called simTerra database. All of these data are available globally at 1 km resolution and some variables (terrain, land cover, urban areas, water bodies) are available globally at 1 ha resolution. If users have access to better data than are pre-loaded, they can upload their own data. WaterWorld is generally applied at the national or basin scale at 1 km resolution, or locally (for areas of <10,000 km2) at 1 ha resolution, though continental (1 km resolution) and global (10 km resolution) applications are possible, so it is a model with local to global applications. WaterWorld requires some 140 maps to run, including monthly climate data, land cover and use, terrain, population, water bodies and more. Whilst publicly available terrain and land cover data are now well developed for local-scale application, climate and land use data remain a challenge, with most global products being available at 1 km or 10 km resolution or worse, which is rather coarse for local application. As part of the EartH2Observe project we have used WFDEI (WATCH Forcing Data methodology applied to ERA-Interim data) at 1 km resolution to provide an alternative input to WaterWorld's preloaded climate data. Here we examine the impacts of that on key hydrological outputs (water balance, water quality) and outline the remaining challenges of using datasets like these for local-scale application.

  6. Application of satellite estimates of rainfall distribution to simulate the potential for malaria transmission in Africa

    NASA Astrophysics Data System (ADS)

    Yamana, T. K.; Eltahir, E. A.

    2009-12-01

    The Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS) is a mechanistic model developed to assess malaria risk in areas where the disease is water-limited. This model relies on precipitation inputs as its primary forcing. Until now, applications of the model have used ground-based precipitation observations. However, rain gauge networks in the areas most affected by malaria are often sparse. The increasing availability of satellite-based rainfall estimates could greatly extend the range of the model. The minimum temporal resolution of precipitation data needed was determined to be one hour. The CPC Morphing technique (CMORPH) distributed by NOAA fits this criterion, as it provides 30-minute estimates at 8 km resolution. CMORPH data were compared to ground observations in four West African villages, and calibrated to reduce overestimation and false alarm biases. The calibrated CMORPH data were used to force HYDREMATS, producing outputs for mosquito populations, vectorial capacity and malaria transmission.

  7. Dynamical downscaling of wind fields for wind power applications

    NASA Astrophysics Data System (ADS)

    Mengelkamp, H.-T.; Huneke, S.; Geyer, J.

    2010-09-01

    Investments in wind power require information on the long-term mean wind potential and its temporal variations on daily to annual and decadal time scales. This information is rarely available at specific wind farm sites. Short-term on-site measurements are usually performed only over a 12-month period. These data have to be set into the long-term perspective through correlation to long-term consistent wind data sets. Preliminary wind information is often requested to select favourable wind sites over regional and country-wide scales. The lack of high-quality wind measurements at weather stations was the motivation to start high-resolution wind field simulations. The simulations are basically a refinement of global-scale reanalysis data by means of high-resolution simulations with an atmospheric mesoscale model using high-resolution terrain and land-use data. The 3-dimensional representation of the atmospheric state available every six hours at 2.5 degree resolution over the globe, known as NCAR/NCEP reanalysis data, forms the boundary conditions for continuous simulations with the non-hydrostatic atmospheric mesoscale model MM5. MM5 is nested in itself down to a horizontal resolution of 5 x 5 km². The simulation is performed for different European countries, covers the period 2000 to present, and is continuously updated. Model variables are stored every 10 minutes for various heights. We have analysed the wind field primarily. The wind data set is consistent in space and time and provides information on the regional distribution of the long-term mean wind potential, the temporal variability of the wind potential, the vertical variation of the wind potential, and the temperature and pressure distribution (air density). In the context of wind power these data are used • as an initial estimate of wind and energy potential • for the long-term correlation of wind measurements and turbine production data • to provide wind potential maps on a regional to country-wide scale • to provide input data sets for simulation models • to determine the spatial correlation of the wind field in portfolio calculations • to calculate the wind turbine energy loss during prescribed downtimes • to provide information on the temporal variations of the wind and wind turbine energy production. The time series of wind speed and wind direction are compared to measurements at offshore and onshore locations.

  8. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
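
    A minimal sketch of the idea, under illustrative assumptions, is shown below: a known aggregate population total is disaggregated to grid cells using dasymetric weights (e.g., derived from land cover), and Monte Carlo draws of the cell-level counts then serve as input distributions for downstream simulations. The weights, total, and multinomial allocation model are hypothetical choices, not the authors' framework.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    county_total = 10_000                                # known aggregate population
    weights = np.array([0.0, 0.1, 0.5, 0.3, 0.1])        # hypothetical dasymetric weights

    def sample_cell_counts(n_draws=1000):
        """Draw Monte Carlo realizations of cell-level counts consistent with the total."""
        # a multinomial allocation is one simple probabilistic dasymetric model
        return rng.multinomial(county_total, weights, size=n_draws)

    draws = sample_cell_counts()
    print(draws.mean(axis=0))        # expected counts per cell
    print(draws.std(axis=0))         # cell-level uncertainty fed to downstream simulations
    ```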

  9. Re-engineering the stereoscope for the 21st Century

    NASA Astrophysics Data System (ADS)

    Kollin, Joel S.; Hollander, Ari J.

    2007-02-01

    While discussing the current state of stereo head-mounted and 3D projection displays, the authors came to the realization that flat-panel LCD displays offer higher resolution than projection for stereo display at a low (and continually dropping) cost. More specifically, where head-mounted displays of moderate resolution and field-of-view cost tens of thousands of dollars, we can achieve an angular resolution approaching that of the human eye with a field-of-view (FOV) greater than 90° for less than $1500. For many immersive applications head tracking is unnecessary and sometimes even undesirable, and a low cost/high quality wide FOV display may significantly increase the application space for 3D display. After outlining the problem and potential of this solution we describe the initial construction of a simple Wheatstone stereoscope using 24" LCD displays and then show engineering improvements that increase the FOV and usability of the system. The applicability of a high-immersion, high-resolution display for art, entertainment, and simulation is presented along with a content production system that utilizes the capabilities of the system. We then discuss the potential use of the system for VR pain control therapy, treatment of post-traumatic stress disorders and other serious games applications.

  10. Simulation of Wind Profile Perturbations for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    2004-01-01

    Ideally, a statistically representative sample of measured high-resolution wind profiles with wavelengths as small as tens of meters is required in design studies to establish aerodynamic load indicator dispersions and vehicle control system capability. At most potential launch sites, high-resolution wind profiles may not exist. Representative samples of Rawinsonde wind profiles to altitudes of 30 km are more likely to be available from the extensive network of measurement sites established for routine sampling in support of weather observing and forecasting activity. Such a sample, large enough to be statistically representative of relatively large wavelength perturbations, would be inadequate for launch vehicle design assessments because the Rawinsonde system accurately measures wind perturbations with wavelengths no smaller than 2000 m (1000 m altitude increment). The Kennedy Space Center (KSC) Jimsphere wind profiles (150/month and seasonal 2 and 3.5-hr pairs) are the only adequate samples of high-resolution profiles (approx. 150 to 300 m effective resolution, but over-sampled at 25 m intervals) that have been used extensively for launch vehicle design assessments. Therefore, a simulation process has been developed for enhancement of measured low-resolution Rawinsonde profiles that would be applicable in preliminary launch vehicle design studies at launch sites other than KSC.

  11. Ambiguity resolution in systems using Omega for position location

    NASA Technical Reports Server (NTRS)

    Frenkel, G.; Gan, D. G.

    1974-01-01

    The lane ambiguity problem prevents the utilization of the Omega system for many applications such as locating buoys and balloons. The method of multiple lines of position introduced herein uses signals from four or more Omega stations for ambiguity resolution. The coordinates of the candidate points are determined first through the use of the Newton iterative procedure. Subsequently, a likelihood function is generated for each point, and the ambiguity is resolved by selecting the most likely point. The method was tested through simulation.

  12. Partial Ambiguity Resolution for Ground and Space-Based Applications in a GPS+Galileo scenario: A simulation study

    NASA Astrophysics Data System (ADS)

    Nardo, A.; Li, B.; Teunissen, P. J. G.

    2016-01-01

    Integer Ambiguity Resolution (IAR) is the key to fast and precise GNSS positioning. The proper diagnostic metric for successful IAR is provided by the ambiguity success rate, the probability of correct integer estimation. In this contribution we analyse the performance of different GPS+Galileo models in terms of the number of epochs needed to reach a pre-determined success rate, for various ground and space-based applications. The simulation-based controlled model environment enables us to gain insight into the factors contributing to the ambiguity resolution strength of the different GPS+Galileo models. Different scenarios of modernized GPS+Galileo are studied, encompassing the long-baseline ground case as well as the medium-dynamics case (airplane) and the space-based Low Earth Orbiter (LEO) case. In our analyses of these models the capabilities of partial ambiguity resolution (PAR) are demonstrated and compared to the limitations of full ambiguity resolution (FAR). The results show that PAR is generally a more efficient way than FAR to reduce the time needed to achieve centimetre-level positioning precision. For long single baselines, PAR can reduce the time needed to achieve such precision levels by fifty percent, while for multiple baselines it becomes even more effective, reaching reductions of up to eighty percent for four-station networks. For a LEO, the rapidly changing observation geometry does not even allow FAR, while PAR is then still possible for both dual- and triple-frequency scenarios. With the triple-frequency GPS+Galileo model the availability of precise positioning improves by fifteen percent with respect to the dual-frequency scenario.
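
    The success-rate criterion that separates PAR from FAR can be illustrated with the bootstrapped success-rate bound, P_s = prod_i [2*Phi(1/(2*sigma_i)) - 1], evaluated over the conditional standard deviations of the (decorrelated) ambiguities. The sketch below fixes only the best-determined subset whose cumulative success rate stays above a target; the standard deviations and target are illustrative, not taken from the paper.

    ```python
    import math

    def success_factor(sigma):
        """2*Phi(1/(2*sigma)) - 1 for one conditional std dev (in cycles)."""
        return math.erf(1.0 / (2.0 * sigma * math.sqrt(2.0)))

    def partial_subset(cond_sds, target=0.999):
        """Fix ambiguities in order of increasing conditional std dev while the
        cumulative (bootstrapped) success rate stays above the target."""
        subset, p = [], 1.0
        for s in sorted(cond_sds):
            if p * success_factor(s) < target:
                break
            subset.append(s)
            p *= success_factor(s)
        return subset, p

    sds = [0.05, 0.07, 0.10, 0.25, 0.40]          # illustrative conditional std devs
    subset, p_s = partial_subset(sds)
    print(len(subset), "of", len(sds), "ambiguities fixed, P_s =", round(p_s, 6))
    ```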

  13. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions in cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, to explore the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating a different cosmology. The resolution of the ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed-grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolution, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.

  14. Complexity, accuracy and practical applicability of different biogeochemical model versions

    NASA Astrophysics Data System (ADS)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might improve veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total, 11 biogeochemical model variants have been considered, based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15-year-old applications with relatively simple physics up to present state-of-the-art 3D models. With all applications the same year, 2003, has been simulated. During the model intercomparison it was noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination between different models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper, therefore, we have adopted the target diagram of Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill over the entire model domain and for the entire period of the simulations. Correctness in prediction of the mean and of the variability are separated, which enhances insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations. Nevertheless, whether the improvements are all genuine for the particular year cannot be judged due to the low sampling frequency of the traditional monitoring data at hand. Specifically, the overall results for chlorophyll-a are rather consistent throughout all models, but regionally recent models are better; resolution is crucial for the accuracy of transport and more important than the nature of the forcing of the transport; SPM strongly affects the biomass simulation and species composition, but even the most recent SPM results do not yet obtain a good overall score; coloured dissolved organic matter (CDOM) should be included in the calculation of the light regime; more complexity in the phytoplankton model improves the chlorophyll-a simulation, but the simulated species composition needs further improvement for some of the functional groups.
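
    For reference, the two coordinates of the target diagram of Jolliff et al. (2008) are the normalized bias and the signed, normalized unbiased root-mean-square difference, so the distance from the origin equals the total normalized RMSD. A minimal sketch with illustrative sample data is given below.

    ```python
    import numpy as np

    def target_coords(model, obs):
        """Return (x, y) = (signed normalized unbiased RMSD, normalized bias)."""
        bias = (model.mean() - obs.mean()) / obs.std()
        urmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2)) / obs.std()
        sign = np.sign(model.std() - obs.std())     # common convention: sign by std difference
        return sign * urmsd, bias                   # distance from origin = total RMSD / obs std

    obs = np.sin(np.linspace(0.0, 2.0 * np.pi, 100)) + 1.0      # illustrative "observations"
    model = 0.8 * obs + 0.1                                     # illustrative "model"
    print(target_coords(model, obs))
    ```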

  15. FASTPM: a new scheme for fast simulations of dark matter and haloes

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick

    2016-12-01

    We introduce FASTPM, a highly scalable approximate particle-mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well to a very large number of CPUs. In contrast to the Comoving-Lagrangian (COLA) approach, we do not need to split the force or track the 2LPT solution separately, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectrum; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time-stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, an Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) in computing time relative to an Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial-condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
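
    For orientation, the sketch below shows a plain kick-drift-kick particle-mesh style time step with periodic boundaries; FASTPM's contribution is to replace the standard kick and drift factors with modified ones that reproduce exact 1LPT (Zel'dovich) growth in the linear regime, which this toy step does not do. The particle set, force function, and step size are illustrative.

    ```python
    import numpy as np

    def kdk_step(x, v, acc, dt, box):
        """Plain kick-drift-kick update with periodic wrapping (not FASTPM's modified factors)."""
        v = v + 0.5 * dt * acc(x)        # half kick
        x = (x + dt * v) % box           # drift
        v = v + 0.5 * dt * acc(x)        # half kick
        return x, v

    rng = np.random.default_rng(1)
    positions = rng.uniform(0.0, 100.0, size=(1000, 3))
    velocities = np.zeros_like(positions)
    positions, velocities = kdk_step(positions, velocities,
                                     lambda p: np.zeros_like(p), dt=0.1, box=100.0)
    print(positions.shape, velocities.shape)
    ```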

  16. Low drift and high resolution miniature optical fiber combined pressure- and temperature sensor for cardio-vascular and urodynamic applications

    NASA Astrophysics Data System (ADS)

    Poeggel, Sven; Tosi, Daniele; Duraibabu, Dineshbabu; Sannino, Simone; Lupoli, Laura; Ippolito, Juliet; Fusco, Fernando; Mirone, Vincenzo; Leen, Gabriel; Lewis, Elfed

    2014-05-01

    The all-glass optical fibre pressure and temperature sensor (OFPTS) presented here is a combination of an extrinsic Fabry-Perot interferometer (EFPI) and a fiber Bragg grating (FBG), which allows simultaneous measurement of both pressure and temperature. Thermal effects experienced by the EFPI can be compensated by using the FBG. The sensor achieved a pressure measurement resolution of 0.1 mmHg with a frame rate of 100 Hz and a low drift rate of < 1 mmHg/hour. The sensor has been evaluated using a cardiovascular simulator and additionally has been evaluated in vivo in a urodynamics application under medical supervision.

  17. GATE Monte Carlo simulation of GE Discovery 600 and a uniformity phantom

    NASA Astrophysics Data System (ADS)

    Sheen, Heesoon; Im, Ki Chun; Choi, Yong; Shin, Hanback; Han, Youngyih; Chung, Kwangzoo; Cho, Junsang; Ahn, Sang Hee

    2014-12-01

    GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulations have been successful in emission tomography applications for precise modeling of various physical processes. Most previous studies on Monte Carlo simulations have only involved performance assessments using virtual phantoms. Although that allows the performance of simulated positron emission tomography (PET) to be evaluated, it does not reflect the reality of practical conditions. This restriction causes substantial drawbacks in GATE simulations of real situations. To overcome the described limitation and to provide a method that enables simulation research relevant to clinically important issues, we conducted a GATE simulation using real data from a scanner rather than a virtual phantom and evaluated the scanner's performance. For that purpose, the system and the geometry of a commercial GE PET/CT (computed tomography) scanner, the BGO-based Discovery 600 (D600), was modeled for the first time. The performance of the modeled PET system was evaluated by using the National Electrical Manufacturers Association (NEMA) NU 2-2007 protocols and the results were compared with those of the reference data. The sensitivity, scatter fraction, noise-equivalent count rate (NECR), and resolution were estimated by using the NEMA NU 2-2007 protocol. Sensitivities were 9.01 cps/kBq at 0 cm and 9.43 cps/kBq at 10 cm. The scatter fraction was 39.5%. The NECR peak was 89.7 kcps at 14.7 kBq/cc. Resolutions were 4.8 mm in the transaxial plane and 5.9 mm in the axial plane at 1 cm, and 6.2 mm in the transaxial plane and 6.4 mm in the axial plane at 10 cm. The resolutions exceeded the limit values provided by the manufacturer. The uniformity phantom was simulated using the CT and the PET data. The output data in ROOT format were converted and then reconstructed by using a C program and STIR (Software for Tomographic Image Reconstruction). The reconstructed images of the simulated uniformity phantom data had comparable quality, even though improvement in quality is still required. In conclusion, we have demonstrated a successful simulation of a PET system by using scanned data. In future studies, parameters that alter the imaging conditions, such as patient movement and physiological change, need to be studied.
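
    For reference, the NEMA count-rate figures quoted above combine true (T), scattered (S) and random (R) coincidence rates; a common form of the noise-equivalent count rate is NECR = T^2 / (T + S + kR), with k = 1 or 2 depending on how randoms are estimated, and the scatter fraction is S / (T + S). The sketch below uses illustrative count rates, not the paper's data.

    ```python
    def necr(trues, scatters, randoms, k=1.0):
        """Noise-equivalent count rate, NECR = T^2 / (T + S + k*R)."""
        total = trues + scatters + k * randoms
        return trues * trues / total if total > 0 else 0.0

    def scatter_fraction(trues, scatters):
        """Scatter fraction SF = S / (T + S)."""
        return scatters / (trues + scatters)

    # illustrative count rates (counts per second), not the paper's measurements
    print(necr(trues=120e3, scatters=78e3, randoms=40e3))
    print(scatter_fraction(trues=120e3, scatters=78e3))
    ```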

  18. Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging.

    PubMed

    Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul

    2011-12-19

    Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.

  19. Performance evaluation of a non-hydrostatic regional climate model over the Mediterranean/Black Sea area and climate projections for the XXI century

    NASA Astrophysics Data System (ADS)

    Mercogliano, Paola; Bucchignani, Edoardo; Montesarchio, Myriam; Zollo, Alessandra Lucia

    2013-04-01

    In the framework of Work Package 4 (Developing integrated tools for environmental assessment) of the PERSEUS Project, high-resolution climate simulations have been performed with the aim of furthering knowledge of climate variability at the regional scale, its causes and impacts. CMCC is a non-profit centre whose aims are the promotion and coordination of research and scientific activities in the field of climate change. In this work, we show results of numerical simulations performed over a very wide area (13W-46E; 29-56N) at a spatial resolution of 14 km, which includes the Mediterranean and Black Seas, using the regional climate model COSMO-CLM. It is a non-hydrostatic model for the simulation of atmospheric processes, developed by DWD (Germany) for weather forecast services; subsequently, the model has been updated by the CLM-Community in order to develop climate applications. It is the only documented numerical model system in Europe designed for spatial resolutions down to 1 km, with a range of applicability encompassing operational numerical weather prediction, regional climate modelling, the dispersion of trace gases and aerosols, and idealised studies, and it is applicable in all regions of the world for a wide range of available climate simulations from global climate and NWP models. Different reasons justify the development of a regional model: the first is the increasing number of works in the literature asserting that regional models also have the capability to provide a more detailed description of climate extremes, which are often more important than the mean values for natural and human systems. The second is that high-resolution modelling is well suited to provide information for impact assessment studies. At CMCC, regional climate modelling is part of an integrated simulation system and has been used in different European and African projects to provide qualitative and quantitative evaluation of hydrogeological and public health risks. A simulation covering the period 1971-2000 and driven by ERA40 reanalysis has been performed in order to assess the capability of the model to reproduce the present climate with "perfect boundary conditions". A comparison, in terms of 2-metre temperature and precipitation, with the EOBS dataset will be shown and discussed, in order to analyze the capability of simulating the main features of the observed climate over a wide area at high spatial resolution. Then, a comparison between the results of COSMO-CLM driven by the global model CMCC-MED (whose atmospheric component is ECHAM5) and by ERA40 will be provided for a characterization of the errors induced by the global model. Finally, climate projections over the examined area for the XXI century, considering the RCP4.5 emission scenario for the future, will be provided. In this work, special emphasis will be placed on the analysis of the capability to reproduce not only the average climate trend but also the extremes of the present and future climate, in terms of temperature, precipitation and wind.

  20. Exploring the impacts of physics and resolution on aqua-planet simulations from a nonhydrostatic global variable-resolution modeling framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chun; Leung, L. Ruby; Park, Sang-Hun

    Advances in computing resources are gradually moving regional and global numerical forecasting simulations towards sub-10 km resolution, but global high-resolution climate simulations remain a challenge. The non-hydrostatic Model for Prediction Across Scales (MPAS) provides a global framework to achieve very high resolution using regional mesh refinement. Previous studies using the hydrostatic version of MPAS (H-MPAS) with the physics parameterizations of the Community Atmosphere Model version 4 (CAM4) found notable resolution-dependent behaviors. This study revisits the resolution sensitivity using the non-hydrostatic version of MPAS (NH-MPAS) with both CAM4 and CAM5 physics. A series of aqua-planet simulations at global quasi-uniform resolutions ranging from 240 km to 30 km and global variable-resolution simulations with a regional mesh refinement of 30 km resolution over the tropics are analyzed, with a primary focus on the distinct characteristics of NH-MPAS in simulating precipitation, clouds, and large-scale circulation features compared to H-MPAS-CAM4. The resolution sensitivity of total precipitation and column-integrated moisture in NH-MPAS is smaller than that in H-MPAS-CAM4. This contributes importantly to the reduced resolution sensitivity of large-scale circulation features such as the inter-tropical convergence zone and Hadley circulation in NH-MPAS compared to H-MPAS. In addition, NH-MPAS shows almost no resolution sensitivity in the simulated westerly jet, in contrast to the obvious poleward shift in H-MPAS with increasing resolution, which is partly explained by differences in the hyperdiffusion coefficients used in the two models that influence wave activity. With the reduced resolution sensitivity, simulations in the refined region of the NH-MPAS global variable-resolution configuration exhibit zonally symmetric features that are more comparable to the quasi-uniform high-resolution simulations than those from H-MPAS, which displays zonal asymmetry in simulations inside the refined region. Overall, NH-MPAS with CAM5 physics shows less resolution sensitivity compared to CAM4. These results provide a reference for future studies to further explore the use of NH-MPAS for high-resolution climate simulations in idealized and realistic configurations.

  1. Generation of a frequency comb and applications thereof

    DOEpatents

    Hagmann, Mark J; Yarotski, Dmitry A

    2013-12-03

    An apparatus is described for generating a microwave frequency comb (MFC) in the DC tunneling current of a scanning tunneling microscope (STM) by fast optical rectification, caused by the nonlinearity of the DC current-voltage curve of the tunneling junction, of regularly spaced short pulses of optical radiation from a focused mode-locked ultrafast laser directed onto the tunneling junction. Application of the MFC to high-resolution dopant profiling in semiconductors is simulated. Application of the MFC to other measurements is described.

  2. Optimization Model for Web Based Multimodal Interactive Simulations.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web-based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation while satisfying application-specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. These data are utilized in conjunction with user-specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.

  3. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  4. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  5. On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Jan, A.; Painter, S. L.; Coon, E. T.

    2017-12-01

    The increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling at finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and their implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model, parameterized by small-scale spatial heterogeneities, for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter-scale simulations (which serve as a benchmark) and against hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to move efficiently to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science.

  6. Arctic storms simulated in atmospheric general circulation models under uniform high, uniform low, and variable resolutions

    NASA Astrophysics Data System (ADS)

    Roesler, E. L.; Bosler, P. A.; Taylor, M.

    2016-12-01

    The impact of strong extratropical storms on coastal communities is large, and the extent to which storms will change with a warming Arctic is unknown. Understanding storms in reanalysis and in climate models is important for future predictions. We know that the number of detected Arctic storms in reanalysis is sensitive to grid resolution. To understand Arctic storm sensitivity to resolution in climate models, we describe simulations designed to identify and compare Arctic storms at uniform low resolution (1 degree), at uniform high resolution (1/8 degree), and at variable resolution (1 degree to 1/8 degree). High-resolution simulations resolve more fine-scale structure and extremes, such as storms, in the atmosphere than a uniform low-resolution simulation. However, the computational cost of running a globally uniform high-resolution simulation is often prohibitive. The variable resolution tool in atmospheric general circulation models permits regional high-resolution solutions at a fraction of the computational cost. The storms are identified using the open-source search algorithm, Stride Search. The uniform high-resolution simulation has over 50% more storms than the uniform low-resolution and over 25% more storms than the variable resolution simulations. Storm statistics from each of the simulations are presented and compared with reanalysis. We propose variable resolution as a cost-effective means of investigating physics/dynamics coupling in the Arctic environment. Future work will include comparisons with observed storms to investigate tuning parameters for high resolution models. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2016-7402 A

  7. A Variable-Resolution Stretched-Grid General Circulation Model and Data Assimilation System with Multiple Areas of Interest: Studying the Anomalous Regional Climate Events of 1998

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence; Govindaraju, Ravi C.; Atlas, Robert (Technical Monitor)

    2002-01-01

    The new stretched-grid design with multiple (four) areas of interest, one in each global quadrant, is implemented into both a stretched-grid GCM (general circulation model) and a stretched-grid data assimilation system (DAS). The four areas of interest are: the U.S./Northern Mexico, the El Nino area/Central South America, India/China, and the Eastern Indian Ocean/Australia. Both the stretched-grid GCM and DAS annual (November 1997 through December 1998) integrations are performed with 50 km regional resolution. Efficient regional downscaling to mesoscales is obtained for each of the four areas of interest while the consistent interactions between regional and global scales and the high quality of the global circulation are preserved. This is the advantage of the stretched-grid approach. The global variable-resolution DAS incorporating the stretched-grid GCM has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The anomalous regional climate events of 1998 that occurred over the U.S., Mexico, South America, China, India, the African Sahel, and Australia are investigated in both simulation and data assimilation modes. The assimilated products are also used, along with gauge precipitation data, for validating the simulation results. The obtained results show that the stretched-grid GCM and DAS are capable of producing realistic, high-quality simulated and assimilated products at mesoscale resolution for regional climate studies and applications.

  8. Parallel detecting super-resolution microscopy using correlation based image restoration

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Zhu, Dazhao; Kuang, Cuifang; Liu, Xu

    2017-12-01

    A novel approach to image restoration is proposed in which knowledge of each detector's relative position in the detector array is no longer a necessity. We can identify each detector's relative location by extracting a certain area from one detector's image and scanning it across the other detectors' images. From this location, we can generate the point spread function (PSF) for each detector and perform deconvolution for image restoration. Equipped with this method, a microscope with an arbitrarily designed detector array can be easily constructed without concern for the exact relative locations of the detectors. The simulated and experimental results show a total improvement in resolution by a factor of 1.7 compared to conventional confocal fluorescence microscopy. Given its significant resolution enhancement and ease of application, this method should have potential for a wide range of applications in fluorescence microscopy based on parallel detection.
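
    As an illustration of the registration step described above, the sketch below estimates the circular shift between two detector images by phase correlation; one could equally scan a small template across the other images as in the text. The images and shift are synthetic, and this is not the authors' code.

    ```python
    import numpy as np

    def find_shift(reference, other):
        """Circular (row, col) shift that maps `reference` onto `other`, by phase correlation."""
        cross_power = np.fft.fft2(other) * np.conj(np.fft.fft2(reference))
        cross_power /= np.abs(cross_power) + 1e-12
        peak = np.unravel_index(np.argmax(np.fft.ifft2(cross_power).real), reference.shape)
        # wrap shifts into a signed range around zero
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, reference.shape))

    rng = np.random.default_rng(0)
    image = rng.normal(size=(64, 64))
    shifted = np.roll(np.roll(image, 3, axis=0), -2, axis=1)     # known displacement (3, -2)
    print(find_shift(image, shifted))                            # expected: (3, -2)
    ```

    The recovered offsets can then be used to place each detector's PSF before deconvolution.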

  9. On the application of hybrid meshes in hydraulic machinery CFD simulations

    NASA Astrophysics Data System (ADS)

    Schlipf, M.; Tismer, A.; Riedelbauch, S.

    2016-11-01

    The application of two different hybrid mesh types to the simulation of a Francis runner, for automated optimization processes without user input, is investigated. These mesh types are first applied to simplified test cases of reduced complexity, such as flow around NACA airfoils and rotating cascade flows as they occur in a turbomachine runner channel, to identify the particular mesh-resolution effects. The analysis includes the application of these different meshes to the geometries while keeping defined quality criteria and exploring the influences on the simulation results. All results are compared with reference values obtained from simulations with block-structured hexahedron meshes and the same numerical scheme. This avoids additional inaccuracies caused by other numerical and experimental measurement methods. The results show that a simulation with hybrid meshes built up from a block-structured domain of hexahedrons around the blade, combined with a tetrahedral far field in the channel, is sufficient to obtain results that are almost as accurate as those of the reference simulation. Furthermore, this method is robust enough for automated processes without user input and enables comparable meshes in size, distribution and quality for different similar geometries, as occur in optimization processes.

  10. Resolution of ab initio shapes determined from small-angle scattering.

    PubMed

    Tuukkanen, Anne T; Kleywegt, Gerard J; Svergun, Dmitri I

    2016-11-01

    Spatial resolution is an important characteristic of structural models, and the authors of structures determined by X-ray crystallography or electron cryo-microscopy always provide the resolution upon publication and deposition. Small-angle scattering of X-rays or neutrons (SAS) has recently become a mainstream structural method providing the overall three-dimensional structures of proteins, nucleic acids and complexes in solution. However, no quantitative resolution measure is available for SAS-derived models, which significantly hampers their validation and further use. Here, a method is derived for resolution assessment for ab initio shape reconstruction from scattering data. The inherent variability of the ab initio shapes is utilized and it is demonstrated how their average Fourier shell correlation function is related to the model resolution. The method is validated against simulated data for proteins with known high-resolution structures and its efficiency is demonstrated in applications to experimental data. It is proposed that henceforth the resolution be reported in publications and depositions of ab initio SAS models.
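
    A minimal sketch (assumed, not the authors' implementation) of how an average Fourier shell correlation between two reconstructions can be computed and converted to a resolution estimate at a chosen threshold; the cubic-volume assumption and the 0.5 threshold are illustrative choices.

        # Hypothetical sketch: Fourier shell correlation (FSC) between two cubic 3-D maps.
        import numpy as np

        def fourier_shell_correlation(vol1, vol2, voxel_size=1.0):
            n = vol1.shape[0]                       # assumes a cubic volume
            F1, F2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
            freqs = np.fft.fftfreq(n, d=voxel_size)
            kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
            kr = np.sqrt(kx**2 + ky**2 + kz**2)
            shell_index = np.round(kr * n * voxel_size).astype(int)
            nshell = n // 2
            fsc = np.zeros(nshell)
            for s in range(nshell):
                mask = shell_index == s
                num = np.sum(F1[mask] * np.conj(F2[mask])).real
                den = np.sqrt(np.sum(np.abs(F1[mask])**2) * np.sum(np.abs(F2[mask])**2))
                fsc[s] = num / den if den > 0 else 0.0
            shell_freq = np.arange(nshell) / (n * voxel_size)   # spatial frequency
            return shell_freq, fsc

        def resolution_at_threshold(shell_freq, fsc, threshold=0.5):
            """Resolution = 1/frequency at the first shell (beyond DC) with FSC below threshold."""
            for s in range(1, len(fsc)):
                if fsc[s] < threshold:
                    return 1.0 / shell_freq[s]
            return 1.0 / shell_freq[-1]             # never fell below threshold within Nyquist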

  11. Resolution of ab initio shapes determined from small-angle scattering

    PubMed Central

    Tuukkanen, Anne T.; Kleywegt, Gerard J.; Svergun, Dmitri I.

    2016-01-01

    Spatial resolution is an important characteristic of structural models, and the authors of structures determined by X-ray crystallography or electron cryo-microscopy always provide the resolution upon publication and deposition. Small-angle scattering of X-rays or neutrons (SAS) has recently become a mainstream structural method providing the overall three-dimensional structures of proteins, nucleic acids and complexes in solution. However, no quantitative resolution measure is available for SAS-derived models, which significantly hampers their validation and further use. Here, a method is derived for resolution assessment for ab initio shape reconstruction from scattering data. The inherent variability of the ab initio shapes is utilized and it is demonstrated how their average Fourier shell correlation function is related to the model resolution. The method is validated against simulated data for proteins with known high-resolution structures and its efficiency is demonstrated in applications to experimental data. It is proposed that henceforth the resolution be reported in publications and depositions of ab initio SAS models. PMID:27840683

  12. An advanced stochastic weather generator for simulating 2-D high-resolution climate variables

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2017-07-01

    A new stochastic weather generator, the Advanced WEather GENerator for a two-dimensional grid (AWE-GEN-2d), is presented. The model combines physical and stochastic approaches to simulate key meteorological variables at high spatial and temporal resolution: 2 km × 2 km and 5 min for precipitation and cloud cover and 100 m × 100 m and 1 h for near-surface air temperature, solar radiation, vapor pressure, atmospheric pressure, and near-surface wind. The model requires spatially distributed data for the calibration process, which can nowadays be obtained from remote sensing devices (weather radar and satellites), reanalysis data sets and ground stations. AWE-GEN-2d is parsimonious in terms of computational demand and is therefore particularly suitable for studies where exploring internal climatic variability at multiple spatial and temporal scales is fundamental. Applications of the model include models of environmental systems, such as hydrological and geomorphological models, where high-resolution spatial and temporal meteorological forcing is crucial. The weather generator was calibrated and validated for the Engelberg region, an area with complex topography in the Swiss Alps. Model tests show that the climate variables are generated by AWE-GEN-2d with a level of accuracy that is sufficient for many practical applications.

  13. Performance Evaluation of the COBRA GEM for the Application of the TPC

    NASA Astrophysics Data System (ADS)

    Terasaki, Kohei; Hamagaki, Hideki; Gunji, Taku; Yamaguchi, Yorito

    2014-09-01

    Suppression of ions drifting back from the avalanche region into the drift space (ion backflow, IBF) is key for a Time Projection Chamber (TPC), since IBF easily distorts the drift field. To suppress IBF, a gating-grid system is widely used in TPCs, but this limits the data-taking rate. The Gas Electron Multiplier (GEM) has advantages in IBF reduction and high-rate capability. By adopting GEMs, it is possible to run a TPC continuously under high-rate and high-multiplicity conditions. Motivated by the study of IBF reduction for RICH detectors with Thick COBRA, developed by F. A. Amero et al., we developed COBRA GEMs for application in a TPC. With a stacked configuration, the IBF reaches about 0.1-0.5%, a factor of 5-10 better than standard GEMs. However, the measured energy resolution with COBRA is 20% (σ), much worse than the resolution obtained with standard GEMs. Measurements of the long-term gain stability indicate that the gain of COBRA varies significantly due to a charging-up effect. Simulation studies based on Garfield++ are performed to understand quantitatively the reasons for the worse energy resolution and the gain instability. In this presentation, we will report the simulation studies together with the measured performance of the COBRA GEM.

  14. A high-resolution physically-based global flood hazard map

    NASA Astrophysics Data System (ADS)

    Kaheil, Y.; Begnudelli, L.; McCollum, J.

    2016-12-01

    We present the results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges, and a 2D hydrodynamic model to simulate inundation. The model is set up to allow large-scale flood hazard assessment through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model. HRR accounts for surface hydrology using a Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly-available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages, and thus significantly increases computational efficiency. For inundation, we implemented a computationally-efficient 2D finite-volume model with wetting/drying. The approach consists of simulating flooding along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, scaled up to given return levels, e.g. 100 years. The model is distributed such that each available processor takes the next simulation. Given an approximate cost criterion, the simulations are ordered from most demanding to least demanding to ensure that all processors finish almost simultaneously. Upon completing all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps depict flood depth and extent at different return periods. These maps, which are currently available at 3 arc-sec resolution (~90 m), can be made available at higher resolutions where high-resolution DEMs are available. The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.
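
    The scheduling idea (most-demanding simulations first, next job to the first idle worker) can be sketched in a few lines; the function and cost names below are hypothetical placeholders, not the authors' code.

        # Hypothetical sketch: longest-job-first dispatch so workers finish nearly together.
        from multiprocessing import Pool

        def run_inundation(job):
            """Placeholder for one 2D finite-volume inundation simulation of a river reach."""
            reach_id, estimated_cost = job
            # ... run the hydraulic model for this reach here ...
            return reach_id

        def run_all(jobs, n_workers=8):
            """jobs: list of (reach_id, estimated_cost); cost is the approximate criterion."""
            ordered = sorted(jobs, key=lambda j: j[1], reverse=True)   # most demanding first
            with Pool(n_workers) as pool:
                # imap_unordered hands the next queued job to whichever worker is free
                return list(pool.imap_unordered(run_inundation, ordered))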

  15. Some Psychometric and Design Implications of Game-Based Learning Analytics

    ERIC Educational Resources Information Center

    Gibson, David; Clarke-Midura, Jody

    2013-01-01

    The rise of digital game and simulation-based learning applications has led to new approaches in educational measurement that take account of patterns in time, high resolution paths of action, and clusters of virtual performance artifacts. The new approaches, which depart from traditional statistical analyses, include data mining, machine…

  16. Study of a high-resolution PET system using a Silicon detector probe

    NASA Astrophysics Data System (ADS)

    Brzeziński, K.; Oliver, J. F.; Gillam, J.; Rafecas, M.

    2014-10-01

    A high-resolution silicon detector probe, in coincidence with a conventional PET scanner, is expected to provide images of higher quality than those achievable using the scanner alone. Spatial resolution should improve due to the finer pixelization of the probe detector, while increased sensitivity in the probe vicinity is expected to decrease noise. A PET-probe prototype is being developed utilizing this principle. The system includes a probe consisting of ten layers of silicon detectors, each an 80 × 52 array of 1 × 1 × 1 mm3 pixels, to be operated in coincidence with a modern clinical PET scanner. Detailed simulation studies of this system have been performed to assess the effect of the additional probe information on the quality of the reconstructed images. A grid of point sources was simulated to study the contribution of the probe to the system resolution at different locations over the field of view (FOV). A resolution phantom was used to demonstrate the effect on image resolution for two probe positions. A homogeneous source distribution with hot and cold regions was used to demonstrate that the localized improvement in resolution does not come at the expense of the overall quality of the image. Since the improvement is constrained to an area close to the probe, breast imaging is proposed as a potential application for the novel geometry. To this end, a simplified breast phantom, adjacent to heart and torso compartments, was simulated and the effect of the probe on lesion detectability, through measurements of the local contrast recovery coefficient-to-noise ratio (CNR), was observed. The list-mode ML-EM algorithm was used for image reconstruction in all cases. As expected, the point spread function of the PET-probe system was found to be non-isotropic and to vary with position, offering improvement in specific regions. An increase in resolution by factors of up to 2 was observed in the region close to the probe. Images of the resolution phantom showed visible improvement in resolution when including the probe in the simulations. The image quality study demonstrated that contrast and spill-over ratio in other areas of the FOV were not sacrificed for this enhancement. The CNR study performed on the breast phantom indicates increased lesion detectability provided by the probe.

  17. Convergence behavior of idealized convection-resolving simulations of summertime deep moist convection over land

    NASA Astrophysics Data System (ADS)

    Panosetti, Davide; Schlemmer, Linda; Schär, Christoph

    2018-05-01

    Convection-resolving models (CRMs) can explicitly simulate deep convection and resolve interactions between convective updrafts. They are thus increasingly used in numerous weather and climate applications. However, the truncation of the continuous energy cascade at scales of O(1 km) poses a serious challenge, as in kilometer-scale simulations the size and properties of the simulated convective cells are often determined by the horizontal grid spacing (Δx). In this study, idealized simulations of deep moist convection over land are performed to assess the convergence behavior of a CRM at Δx = 8, 4, 2, 1 km and 500 m. Two types of convergence estimates are investigated: bulk convergence addressing domain-averaged and integrated variables related to the water and energy budgets, and structural convergence addressing the statistics and scales of individual clouds and updrafts. Results show that bulk convergence generally begins at Δx = 4 km, while structural convergence is not yet fully achieved at the kilometer scale, despite some evidence that the resolution sensitivity of updraft velocities and convective mass fluxes decreases at finer resolution. In particular, at finer grid spacings the maximum updraft velocity generally increases, and the size of the smallest clouds is mostly determined by Δx. A number of different experiments are conducted, and it is found that the presence of orography and environmental vertical wind shear yields more energetic structures at scales much larger than Δx, sometimes reducing the resolution sensitivity. Overall the results lend support to the use of kilometer-scale resolutions in CRMs, despite the inability of these models to fully resolve the associated cloud field.

  18. A Melting Layer Model for Passive/Active Microwave Remote Sensing Applications. Part 2; Simulation of TRMM Observations

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Bauer, Peter; Kummerow, Christian D.; Tao, Wei-Kuo

    2000-01-01

    The one-dimensional, steady-state melting layer model developed in Part I of this study is used to calculate both the microphysical and radiative properties of melting precipitation, based upon the computed concentrations of snow and graupel just above the freezing level at applicable horizontal gridpoints of 3-dimensional cloud resolving model simulations. The modified 3-dimensional distributions of precipitation properties serve as input to radiative transfer calculations of upwelling radiances and radar extinction/reflectivities at the TRMM Microwave Imager (TMI) and Precipitation Radar (PR) frequencies, respectively. At the resolution of the cloud resolving model grids (approx. 1 km), upwelling radiances generally increase if mixed-phase precipitation is included in the model atmosphere. The magnitude of the increase depends upon the optical thickness of the cloud and precipitation, as well as the scattering characteristics of ice-phase precipitation aloft. Over the set of cloud resolving model simulations utilized in this study, maximum radiance increases of 43, 28, 18, and 10 K are simulated at 10.65, 19.35, 37.0, and 85.5 GHz, respectively. The impact of melting on TMI-measured radiances is determined not only by the physics of the melting particles but also by the horizontal extent of the melting precipitation, since the lower-frequency channels have footprints that extend over tens of kilometers. At TMI resolution, the maximum radiance increases are 16, 15, 12, and 9 K at the same frequencies. Simulated PR extinction and reflectivities in the melting layer can increase dramatically if mixed-phase precipitation is included, a result consistent with previous studies. Maximum increases of 0.46 (-2 dB) in extinction optical depth and 5 dBZ in reflectivity are simulated based upon the set of cloud resolving model simulations.

  19. Simulations of Tornadoes, Tropical Cyclones, MJOs, and QBOs, using GFDL's multi-scale global climate modeling system

    NASA Astrophysics Data System (ADS)

    Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming

    2014-05-01

    A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at NOAA/Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are in the process of being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, previously thought impossible. We will present preliminary results covering a very wide spectrum of temporal-spatial scales, ranging from simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), and tropical cyclones (seasonal), to Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.

  20. Inference of turbulence parameters from a ROMS simulation using the k-ε closure scheme

    NASA Astrophysics Data System (ADS)

    Thyng, Kristen M.; Riley, James J.; Thomson, Jim

    2013-12-01

    Comparisons between high-resolution turbulence data from Admiralty Inlet, WA (USA), and a 65-meter horizontal grid resolution simulation using the hydrostatic ocean modelling code, Regional Ocean Modeling System (ROMS), show that the model's k-ε turbulence closure scheme performs reasonably well. Turbulent dissipation rates and Reynolds stresses agree within a factor of two, on average. Turbulent kinetic energy (TKE) also agrees within a factor of two, but only for motions within the observed inertial sub-range of frequencies (i.e., classic, approximately isotropic turbulence). TKE spectra from the observations indicate that there is significant energy at frequencies lower than the inertial sub-range; these scales are captured neither by the model closure scheme nor by the model grid resolution. To account for scales not present in the model, the inertial sub-range is extrapolated to lower frequencies and then integrated to obtain an inferred, diagnostic total TKE, with improved agreement with the observed total TKE. The realistic behavior of the dissipation rate and Reynolds stress, combined with the adjusted total TKE, implies that ROMS simulations can be used to understand and predict spatial and temporal variations in turbulence. The results suggest an application to the siting of tidal current turbines.
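
    The extrapolation-and-integration step can be written compactly; the sketch below (an assumed form, not the authors' code) fixes the inertial sub-range slope at f^(-5/3), extends it to lower frequencies and adds the integral of the extrapolated part to the resolved spectral energy.

        # Hypothetical sketch: infer a diagnostic total TKE from an extrapolated spectrum.
        import numpy as np

        def inferred_total_tke(freq, spectrum, f_subrange=(0.1, 1.0), f_low=0.01):
            """freq in Hz, spectrum in (m^2 s^-2)/Hz; f_subrange brackets the inertial sub-range."""
            in_sub = (freq >= f_subrange[0]) & (freq <= f_subrange[1])
            # Fit log S = log a - (5/3) log f with the slope held fixed at -5/3
            log_a = np.mean(np.log(spectrum[in_sub]) + (5.0 / 3.0) * np.log(freq[in_sub]))
            a = np.exp(log_a)
            f_ext = np.logspace(np.log10(f_low), np.log10(f_subrange[0]), 50)
            s_ext = a * f_ext ** (-5.0 / 3.0)                    # extrapolated sub-range
            resolved = np.trapz(spectrum[freq >= f_subrange[0]], freq[freq >= f_subrange[0]])
            missing = np.trapz(s_ext, f_ext)                     # energy at unresolved low frequencies
            return resolved + missing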

  1. Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios, including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental-scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments were carried out to determine the optimal nudging strategy, including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarities helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better than or comparable to the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and any time series generated from the downscaled fields do not suffer from discontinuities due to switching between consecutive forecasts.
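
    A minimal one-step sketch of spectral nudging (an assumed form, not the operational implementation): the simulated 2-D field is relaxed toward the coarse driving field only at wavelengths longer than a chosen cutoff, leaving the smaller, model-generated scales untouched. The grid spacing, cutoff and relaxation time below are illustrative values.

        # Hypothetical sketch: one spectral-nudging increment for a 2-D field.
        import numpy as np

        def spectral_nudge(field, driving, dx_km, cutoff_km=1000.0, tau_s=6 * 3600.0, dt_s=300.0):
            """Relax `field` toward `driving` at horizontal scales longer than cutoff_km."""
            ny, nx = field.shape
            kx = np.fft.fftfreq(nx, d=dx_km)        # cycles per km
            ky = np.fft.fftfreq(ny, d=dx_km)
            KX, KY = np.meshgrid(kx, ky)            # shapes (ny, nx)
            k = np.hypot(KX, KY)
            large_scale = k <= 1.0 / cutoff_km      # wavelengths >= cutoff (includes the mean)

            f_hat = np.fft.fft2(field)
            d_hat = np.fft.fft2(driving)
            increment = np.zeros_like(f_hat)
            increment[large_scale] = (d_hat[large_scale] - f_hat[large_scale]) * (dt_s / tau_s)
            return np.real(np.fft.ifft2(f_hat + increment))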

  2. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
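
    A minimal sketch (assumed, not the USGS implementation) of the two averaging operations named above: cell-block averaging of high-resolution elevations into coarse cells, and a simple cell-face average along the fine-grid column shared by neighbouring coarse cells, which is what lets narrow channels remain visible at the cell interfaces.

        # Hypothetical sketch: block and face averaging of a high-resolution DEM.
        import numpy as np

        def block_average(fine, factor):
            """Average each factor-by-factor block of the fine grid into one coarse cell."""
            ny, nx = fine.shape
            trimmed = fine[: ny - ny % factor, : nx - nx % factor]
            blocks = trimmed.reshape(trimmed.shape[0] // factor, factor,
                                     trimmed.shape[1] // factor, factor)
            return blocks.mean(axis=(1, 3))

        def face_average_east(fine, factor):
            """Average the fine-grid column on each coarse cell's eastern face."""
            ny, nx = fine.shape
            ncy, ncx = ny // factor, nx // factor
            faces = np.empty((ncy, ncx))
            for j in range(ncx):
                col = fine[: ncy * factor, (j + 1) * factor - 1]   # last fine column of block j
                faces[:, j] = col.reshape(ncy, factor).mean(axis=1)
            return faces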

  3. Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications

    PubMed Central

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2016-01-01

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 μm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 μm in x,y-plane and ~0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 μm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 μm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods. PMID:23079763

  4. Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications.

    PubMed

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2012-11-21

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 µm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 µm in x,y-plane and ∼0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 µm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 µm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods.

  5. Large-screen display technology assessment for military applications

    NASA Astrophysics Data System (ADS)

    Blaha, Richard J.

    1990-08-01

    Full-color, large screen display systems can enhance military applications that require group presentation, coordinated decisions, or interaction between decision makers. The technology already plays an important role in operations centers, simulation facilities, conference rooms, and training centers. Some applications display situational, status, or briefing information, while others portray instructional material for procedural training or depict realistic panoramic scenes that are used in simulators. While each specific application requires unique values of luminance, resolution, response time, reliability, and the video interface, suitable performance can be achieved with available commercial large screen displays. Advances in the technology of large screen displays are driven by the commercial applications because the military applications do not provide the significant market share enjoyed by high definition television (HDTV), entertainment, advertisement, training, and industrial applications. This paper reviews the status of full-color, large screen display technologies and includes the performance and cost metrics of available systems. For this discussion, performance data is based upon either measurements made by our personnel or extractions from vendors' data sheets.

  6. Simulations of Astrophysical Jets in Dense Environments

    NASA Astrophysics Data System (ADS)

    Krause, Martin; Gaibler, Volker; Camenzind, Max

    We have simulated the interaction of jets with a galactic wind at high resolution using the magnetohydrodynamics code NIRVANA on the NEC SX-6 at the HLRS. This setup may describe a typical situation for the starbursting radio galaxies of the early universe. The results show a clear resolution dependence in the expected way, but the formed clumps are denser than expected from linear extrapolation. We also report our recent progress in the adaptation of the magnetic part of NIRVANA to the SX-6. The code is now fully tuned to the machine and reached more than 3 Gflops. We plan to use this new code version to extend our study of magnetized jets down to very low jet densities. This should be especially applicable to the conditions in the young universe.

  7. Cloud-Based Tools to Support High-Resolution Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Jones, N.; Nelson, J.; Swain, N.; Christensen, S.

    2013-12-01

    The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include the massive amounts of spatial data that must be processed for each new scenario and the lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of the roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HT Condor, CKAN, and Python. This open-source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HT Condor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482.

  8. Effect of climate data on simulated carbon and nitrogen balances for Europe

    NASA Astrophysics Data System (ADS)

    Blanke, Jan Hendrik; Lindeskog, Mats; Lindström, Johan; Lehsten, Veiko

    2016-05-01

    In this study, we systematically assess the spatial variability in carbon and nitrogen balance simulations related to the choice of global circulation models (GCMs), representative concentration pathways (RCPs), spatial resolutions, and the downscaling methods used, as calculated with LPJ-GUESS. We employed a complete factorial design and performed 24 simulations for Europe with different climate input data sets and different combinations of these four factors. Our results reveal that the variability in simulated output in Europe is moderate, with 35.6%-93.5% of the total variability being common among all combinations of factors. The spatial resolution is the most important of the examined factors, explaining 1.5%-10.7% of the total variability, followed by GCMs (0.3%-7.6%), RCPs (0%-6.3%), and downscaling methods (0.1%-4.6%). The higher-order interaction effect, which captures nonlinear relations between the factors and random effects, is pronounced and accounts for 1.6%-45.8% of the total variability. The most distinct hot spots of variability include the mountain ranges in northern Scandinavia and the Alps, and the Iberian Peninsula. Based on our findings, we advise applying models such as LPJ-GUESS at a reasonably high spatial resolution that is supported by the model structure. There is no notable gain in simulations of ecosystem carbon and nitrogen stocks and fluxes from using regionally downscaled climate in preference to bias-corrected, bilinearly interpolated CMIP5 projections.
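
    The factorial attribution described above can be reproduced with a standard main-effects variance decomposition; the sketch below (an assumed analysis, not the authors' code) partitions the variance of one simulated quantity at a single grid cell into per-factor fractions plus a lumped interaction/residual term.

        # Hypothetical sketch: main-effect variance fractions for a full factorial design.
        import numpy as np

        def variance_fractions(values, levels):
            """values: dict mapping a tuple of factor levels (ordered as in `levels`) to a value.
            levels: dict mapping factor name -> list of its levels."""
            names = list(levels)
            all_vals = np.array(list(values.values()), dtype=float)
            grand = all_vals.mean()
            total_ss = np.sum((all_vals - grand) ** 2)

            fractions, explained = {}, 0.0
            for i, name in enumerate(names):
                ss = 0.0
                for lev in levels[name]:
                    group = [v for key, v in values.items() if key[i] == lev]
                    ss += len(group) * (np.mean(group) - grand) ** 2
                fractions[name] = ss / total_ss
                explained += ss
            fractions["interactions+residual"] = 1.0 - explained / total_ss
            return fractions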

  9. Real-world hydrologic assessment of a fully-distributed hydrological model in a parallel computing environment

    NASA Astrophysics Data System (ADS)

    Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.

    2011-10-01

    A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.

  10. Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki

    2018-05-01

    We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.

  11. Development of a Spot-Application Tool for Rapid, High-Resolution Simulation of Wave-Driven Nearshore Hydrodynamics

    DTIC Science & Technology

    2013-09-30

    flow models, such as Delft3D, with our developed Boussinesq-type model. The vision of this project is to develop an operational tool for the...situ measurements or large-scale wave models. This information will be used to drive the offshore wave boundary condition. • Execute the Boussinesq ...model to match with the Boussinesq-type theory would be one which can simulate sheared and stratified currents due to large-scale (non-wave) forcings

  12. Techniques and resources for storm-scale numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert

    1993-01-01

    The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.

  13. Resolution requirements for numerical simulations of transition

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Krist, Steven E.; Hussaini, M. Yousuff

    1989-01-01

    The resolution requirements for direct numerical simulations of transition to turbulence are investigated. A reliable resolution criterion is determined from the results of several detailed simulations of channel and boundary-layer transition.

  14. STOCK: Structure mapper and online coarse-graining kit for molecular simulations

    DOE PAGES

    Bevc, Staš; Junghans, Christoph; Praprotnik, Matej

    2015-03-15

    We present a web toolkit, STructure mapper and Online Coarse-graining Kit, for setting up coarse-grained molecular simulations. The kit consists of two tools: structure mapping and Boltzmann inversion tools. The aim of the first tool is to define a molecular mapping from high, e.g. all-atom, to low, i.e. coarse-grained, resolution. Using a graphical user interface it generates input files, which are compatible with standard coarse-graining packages, e.g. VOTCA and DL_CGMAP. Our second tool generates effective potentials for coarse-grained simulations preserving the structural properties, e.g. radial distribution functions, of the underlying higher resolution model. The required distribution functions can be provided by any simulation package. Simulations are performed on a local machine and only the distributions are uploaded to the server. The applicability of the toolkit is validated by mapping atomistic pentane and polyalanine molecules to a coarse-grained representation. Effective potentials are derived for systems of TIP3P (transferable intermolecular potential 3 point) water molecules and salt solution. The presented coarse-graining web toolkit is available at http://stock.cmm.ki.si.
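
    The Boltzmann inversion step performed by the second tool can be illustrated in a few lines; the sketch below (assumed, not the STOCK implementation) turns a radial distribution function g(r) into an effective pair potential U(r) = -kB T ln g(r), with an arbitrary choice of units.

        # Hypothetical sketch: direct Boltzmann inversion of a radial distribution function.
        import numpy as np

        KB = 0.0019872041  # Boltzmann constant in kcal/(mol K); unit choice is illustrative

        def boltzmann_invert(r, g_r, temperature=300.0):
            g = np.clip(g_r, 1e-8, None)     # avoid log(0) where g(r) vanishes
            u = -KB * temperature * np.log(g)
            u -= u[-1]                        # shift so the potential tends to zero at large r
            return r, u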

  15. Evaluation of High Resolution Rapid Refresh-Smoke (HRRR-Smoke) model products for a case study using surface PM2.5 observations

    NASA Astrophysics Data System (ADS)

    Deanes, L. N.; Ahmadov, R.; McKeen, S. A.; Manross, K.; Grell, G. A.; James, E.

    2016-12-01

    Wildfires are increasing in number and size in the western United States as climate change contributes to warmer and drier conditions in this region. These fires lead to poor air quality and diminished visibility. The High Resolution Rapid Refresh-Smoke modeling system (HRRR-Smoke) is designed to simulate fire emissions and smoke transport with high resolution. The model is based on the Weather Research and Forecasting model, coupled with chemistry (WRF-Chem) and uses fire detection data from the Visible Infrared and Imaging Radiometer Suite (VIIRS) satellite instrument to simulate wildfire emissions and their plume rise. HRRR-Smoke is used in both real-time applications and case studies. In this study, we evaluate the HRRR-Smoke for August 2015, during one of the worst wildfire seasons on record in the United States, by focusing on wildfires that occurred in the northwestern US. We compare HRRR-Smoke simulations with hourly fine particulate matter (PM2.5) observations from the Air Quality System (https://www.epa.gov/aqs) from multiple air quality monitoring sites in Washington state. PM2.5 data includes measurements from urban, suburban and remote sites in the state. We discuss the model performance in capturing large PM2.5 enhancements detected at surface sites due to wildfires. We present various statistical parameters to demonstrate HRRR-Smoke's performance in simulating surface PM2.5 levels.

  16. Microscale anthropogenic pollution modelling in a small tropical island during weak trade winds: Lagrangian particle dispersion simulations using real nested LES meteorological fields

    NASA Astrophysics Data System (ADS)

    Cécé, Raphaël; Bernard, Didier; Brioude, Jérome; Zahibo, Narcisse

    2016-08-01

    Tropical islands are characterized by thermal and orographic forcings which may generate microscale air mass circulations. The Lesser Antilles Arc includes small tropical islands (width lower than 50 km) where a total of one and a half million people live. Air quality over this region is affected by anthropogenic and volcanic emissions, or Saharan dust. To reduce risks to population health, the atmospheric dispersion of emitted pollutants must be predicted. In this study, the dispersion of anthropogenic nitrogen oxides (NOx) is numerically modelled over the densely populated area of the Guadeloupe archipelago under weak trade winds, during a typical case of severe pollution. The main goal is to analyze how microscale resolutions affect air pollution in a small tropical island. Three domain grid resolutions are selected: 1 km, 333 m and 111 m. The Weather Research and Forecasting model (WRF) is used to produce real nested microscale meteorological fields. The weather outputs then initialize the Lagrangian Particle Dispersion Model (FLEXPART). The forward simulations of a power plant plume showed a good ability to reproduce nocturnal peaks recorded by an urban air quality station. The increase in resolution resulted in an improvement of model sensitivity. The nesting to subkilometer grids helped to reduce an overestimation bias, mainly because the LES domains better simulate the turbulent motions governing nocturnal flows. For peaks observed at two air quality stations, the backward sensitivity outputs identified realistic sources of NOx in the area. The increase in resolution produced a sharper inverse plume with a more accurate source area. This study presents the first application of the FLEXPART-WRF model at microscale resolutions. Overall, the coupled WRF-LES-FLEXPART model is useful to simulate pollutant dispersion during a real case of calm wind regime over a complex terrain area. The forward and backward simulation results showed clearly that the subkilometer resolution of 333 m is necessary to reproduce realistic air pollution patterns in this case of short-range transport over complex terrain. More broadly, this work contributes to enriching the sparsely documented domain of real nested microscale air pollution modelling. This study, dealing with the determination of the proper grid resolution and turbulence scheme, is of significant interest to the near-source and complex-terrain air quality research community.

  17. Evaluating hourly rainfall characteristics over the U.S. Great Plains in dynamically downscaled climate model simulations using NASA-Unified WRF

    NASA Astrophysics Data System (ADS)

    Lee, Huikyo; Waliser, Duane E.; Ferraro, Robert; Iguchi, Takamichi; Peters-Lidard, Christa D.; Tian, Baijun; Loikith, Paul C.; Wright, Daniel B.

    2017-07-01

    Accurate simulation of extreme precipitation events remains a challenge in climate models. This study utilizes hourly precipitation data from ground stations and satellite instruments to evaluate rainfall characteristics simulated by the NASA-Unified Weather Research and Forecasting (NU-WRF) regional climate model at horizontal resolutions of 4, 12, and 24 km over the Great Plains of the United States. We also examined the sensitivity of the simulated precipitation to different spectral nudging approaches and the cumulus parameterizations. The rainfall characteristics in the observations and simulations were defined as an hourly diurnal cycle of precipitation and a joint probability distribution function (JPDF) between duration and peak intensity of precipitation events over the Great Plains in summer. We calculated a JPDF for each data set and the overlapping area between observed and simulated JPDFs to measure the similarity between the two JPDFs. Comparison of the diurnal precipitation cycles between observations and simulations does not reveal the added value of high-resolution simulations. However, the performance of NU-WRF simulations measured by the JPDF metric strongly depends on horizontal resolution. The simulation with the highest resolution of 4 km shows the best agreement with the observations in simulating duration and intensity of wet spells. Spectral nudging does not affect the JPDF significantly. The effect of cumulus parameterizations on the JPDFs is considerable but smaller than that of horizontal resolution. The simulations with lower resolutions of 12 and 24 km show reasonable agreement but only with the high-resolution observational data that are aggregated into coarse resolution and spatially averaged.
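
    The JPDF similarity metric described above can be expressed as the shared probability mass of two normalized two-dimensional histograms; the sketch below is an assumed form of that calculation, not the authors' code, and the bin edges are placeholders.

        # Hypothetical sketch: overlap between observed and simulated duration-intensity JPDFs.
        import numpy as np

        def jpdf(durations, intensities, dur_edges, int_edges):
            hist, _, _ = np.histogram2d(durations, intensities, bins=[dur_edges, int_edges])
            return hist / hist.sum()                      # normalize to a probability mass

        def jpdf_overlap(obs_events, sim_events, dur_edges, int_edges):
            """obs_events and sim_events are (durations, peak_intensities) tuples of arrays."""
            p_obs = jpdf(*obs_events, dur_edges, int_edges)
            p_sim = jpdf(*sim_events, dur_edges, int_edges)
            return np.minimum(p_obs, p_sim).sum()         # 1 = identical JPDFs, 0 = disjoint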

  18. smoothG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.

    2017-12-12

    smoothG is a collection of parallel C++ classes/functions that algebraically constructs reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG also provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations, in which graph Laplacians arise.
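
    For orientation, the operator at the heart of such graph-based reduced models is the weighted graph Laplacian; the sketch below (illustrative only, not smoothG's C++ API) assembles it from an edge list.

        # Hypothetical sketch: weighted graph Laplacian L = D - A from an edge list.
        import numpy as np

        def graph_laplacian(n_vertices, edges):
            """edges: iterable of (i, j, weight) with 0-based vertex indices."""
            L = np.zeros((n_vertices, n_vertices))
            for i, j, w in edges:
                L[i, i] += w
                L[j, j] += w
                L[i, j] -= w
                L[j, i] -= w
            return L

        # Example: a 4-vertex path graph with unit edge weights
        L = graph_laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])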

  19. The Art of a Deal: A Kyoto Protocol Simulation

    ERIC Educational Resources Information Center

    Cowlishaw, Richard; Hunter, Charles; Coy, Jason; Tessmer, Michael

    2007-01-01

    In this case study, groups of students represent countries as they negotiate an agreement to limit greenhouse-gas emissions. While initially developed for and used in an environmental-science course for first-year college students, the case could be applicable to other courses dealing with conflict resolution such as public policy, international…

  20. Western Lake Erie Basin: Soft-data-constrained, NHDPlus resolution watershed modeling and exploration of applicable conservation scenarios

    USDA-ARS?s Scientific Manuscript database

    Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relat...

  1. Computer simulation of protein—carbohydrate complexes: application to arabinose-binding protein and pea lectin

    NASA Astrophysics Data System (ADS)

    Rao, V. S. R.; Biswas, Margaret; Mukhopadhyay, Chaitali; Balaji, P. V.

    1989-03-01

    The CCEM method (Contact Criteria and Energy Minimisation) has been developed and applied to study protein-carbohydrate interactions. The method uses available X-ray data even on the native protein at low resolution (above 2.4 Å) to generate realistic models of a variety of proteins with various ligands. The two examples discussed in this paper are arabinose-binding protein (ABP) and pea lectin. The X-ray crystal structure data reported on the ABP-β-L-arabinose complex at 2.8, 2.4 and 1.7 Å resolution differ drastically in predicting the nature of the interactions between the protein and ligand. It is shown that, using the data at 2.4 Å resolution, the CCEM method generates complexes which are as good as those from the higher (1.7 Å) resolution data. The CCEM method predicts some of the important hydrogen bonds between the ligand and the protein which are missing in the interpretation of the X-ray data at 2.4 Å resolution. The theoretically predicted hydrogen bonds are in good agreement with those reported at 1.7 Å resolution. Pea lectin has been solved only in the native form at 3 Å resolution. Application of the CCEM method also enables us to generate complexes of pea lectin with methyl-α-D-glucopyranoside and methyl-2,3-dimethyl-α-D-glucopyranoside which explain well the available experimental data in solution.

  2. Hadronic energy resolution of a highly granular scintillator-steel hadron calorimeter using software compensation techniques

    NASA Astrophysics Data System (ADS)

    Adloff, C.; Blaha, J.; Blaising, J.-J.; Drancourt, C.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S. T.; Sosebee, M.; White, A. P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N. K.; Goto, T.; Mavromanolakis, G.; Thomson, M. A.; Ward, D. R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G. C.; Dyshkant, A.; Lima, J. G. R.; Zutshi, V.; Hostachy, J.-Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Göttlicher, P.; Günter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.-I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.-Ch; Shen, W.; Stamen, R.; Tadday, A.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G. W.; Kawagoe, K.; Dauncey, P. D.; Magnan, A.-M.; Wing, M.; Salvatore, F.; Calvo Alamillo, E.; Fouz, M.-C.; Puerta-Pelayo, J.; Balagura, V.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Dolgoshein, B.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Smirnov, S.; Kiesling, C.; Pfau, S.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Bonis, J.; Bouquet, B.; Callier, S.; Cornebise, P.; Doublet, Ph; Dulucq, F.; Faucci Giannelli, M.; Fleury, J.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch; Pöschl, R.; Raux, L.; Seguin-Moreau, N.; Wicek, F.; Anduze, M.; Boudry, V.; Brient, J.-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Sauer, J.; Weber, S.; Zeitnitz, C.

    2012-09-01

    The energy resolution of a highly granular 1 m3 analogue scintillator-steel hadronic calorimeter is studied using charged pions with energies from 10 GeV to 80 GeV at the CERN SPS. The energy resolution for single hadrons is determined to be approximately 58%/√E/GeV. This resolution is improved to approximately 45%/√E/GeV with software compensation techniques. These techniques take advantage of the event-by-event information about the substructure of hadronic showers which is provided by the imaging capabilities of the calorimeter. The energy reconstruction is improved either with corrections based on the local energy density or by applying a single correction factor to the event energy sum derived from a global measure of the shower energy density. The application of the compensation algorithms to Geant4 simulations yields resolution improvements comparable to those observed for real data.
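
    The local-energy-density variant of software compensation can be sketched as a per-hit re-weighting, with dense (electromagnetic-like) hits weighted down and sparse hits weighted up; the bin edges and weights below are placeholders that would in practice be tuned on simulation or test-beam data, and this is not the CALICE algorithm itself.

        # Hypothetical sketch: local software compensation via energy-density-dependent weights.
        import numpy as np

        DENSITY_EDGES = np.array([0.0, 0.5, 1.0, 2.0, 5.0, np.inf])   # hit energy density bins
        WEIGHTS = np.array([1.15, 1.05, 1.00, 0.90, 0.80])            # placeholder weights

        def compensated_energy(hit_energies, hit_densities):
            """Sum of hit energies, each re-weighted by its local energy-density bin."""
            bins = np.digitize(hit_densities, DENSITY_EDGES) - 1
            return np.sum(hit_energies * WEIGHTS[bins])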

  3. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. However, because of limits on the radiation power received, there will always be some trade-off between spatial and spectral resolution in the images captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between different materials.
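
    A wavelet-integrated fusion step of the kind described above can be sketched with a discrete wavelet decomposition: keep the approximation coefficients of the (upsampled) low-resolution spectral band and inject the detail coefficients of the co-registered panchromatic image. This is an assumed simplification of the authors' scheme, shown here with the PyWavelets package.

        # Hypothetical sketch: wavelet substitution fusion of a spectral band and a pan image.
        import pywt

        def wavelet_fuse(spectral_band, panchromatic, wavelet="db2", levels=2):
            """Both inputs are 2-D arrays already resampled to the same grid."""
            c_spec = pywt.wavedec2(spectral_band, wavelet, level=levels)
            c_pan = pywt.wavedec2(panchromatic, wavelet, level=levels)
            # approximation from the spectral band, details from the panchromatic image
            fused = [c_spec[0]] + list(c_pan[1:])
            return pywt.waverec2(fused, wavelet)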

  4. Simulating Virtual Terminal Area Weather Data Bases for Use in the Wake Vortex Avoidance System (Wake VAS) Prediction Algorithm

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2004-01-01

    During the research project, sounding datasets were generated for the regions surrounding 9 major airports, including Dallas, TX, Boston, MA, New York, NY, Chicago, IL, St. Louis, MO, Atlanta, GA, Miami, FL, San Francisco, CA, and Los Angeles, CA. The numerical simulation of winter and summer environments, during which no instrument flight rule impact was occurring at these 9 terminals, was performed using the most contemporary version of the Terminal Area PBL Prediction System (TAPPS) model, nested from 36 km to 6 km to 1 km horizontal resolution with very detailed vertical resolution in the planetary boundary layer. The soundings from the 1 km model were archived at 30 minute intervals for a 24 hour period, and the vertically dependent variables as well as derived quantities, i.e., 3-dimensional wind components, temperatures, pressures, mixing ratios, turbulence kinetic energy and eddy dissipation rates, were then interpolated to 5 m vertical resolution up to 1000 m above ground level. After partial validation against field experiment datasets for Dallas as well as larger-scale and much coarser resolution observations at the other 8 airports, these sounding datasets were sent to NASA for use in the Virtual Air Space and Modeling program. These datasets are applied to determine representative airport weather environments and to diagnose the response of simulated wake vortices to realistic atmospheric environments. The virtual datasets are based on large-scale observed atmospheric initial conditions that are dynamically interpolated in space and time. The 1 km nested-grid simulated datasets provide a very coarse and highly smoothed representation of airport-environment meteorological conditions. Details concerning the airport surface forcing are virtually absent from these simulated datasets, although the observed background atmospheric processes have been compared to the simulated fields, and the fields were found to accurately replicate the flows surrounding the airports where coarse verification data were available as well as where airport-scale datasets were available.

  5. Computational analysis of high resolution unsteady airloads for rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.

    1994-01-01

    The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.

  6. Using radiative transfer models to study the atmospheric water vapor content and to eliminate telluric lines from high-resolution optical spectra

    NASA Astrophysics Data System (ADS)

    Gardini, A.; Maíz Apellániz, J.; Pérez, E.; Quesada, J. A.; Funke, B.

    2013-05-01

    The Radiative Transfer Model (RTM) and the retrieval algorithm incorporated in the SCIATRAN 2.2 software package, developed at the Institute of Remote Sensing/Institute of Environmental Physics of Bremen University (Germany), make it possible to simulate, among other things, radiance/irradiance spectra in the 2400--24 000 Å range. In this work we present applications of the RTM to two case studies. In the first case, the RTM was used to simulate direct solar irradiance spectra with different water vapor amounts for the study of the water vapor content in the atmosphere above Sierra Nevada Observatory. Simulated spectra were compared with those measured with a spectrometer operating in the 8000--10 000 Å range. In the second case, the RTM was used to generate telluric model spectra in order to subtract the atmospheric contribution and correct high-resolution stellar spectra for atmospheric water vapor and oxygen lines. The results of both studies are discussed.
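
    A common way to apply such a telluric model spectrum, once it has been resampled to the instrument's wavelength grid and convolved to its resolution, is simply to divide it out of the observed spectrum. The sketch below assumes both spectra are already on a common grid; it is a generic illustration and is not specific to SCIATRAN or to the procedure used in this work.

```python
import numpy as np

def remove_telluric(wave, observed, telluric_model, floor=0.05):
    """Divide a telluric transmission model out of an observed spectrum.

    wave           : common wavelength grid (Angstrom), unused here but kept
                     for bookkeeping
    observed       : observed stellar flux on that grid
    telluric_model : modeled telluric transmission (0-1) on the same grid
    floor          : transmission below which pixels are masked rather than
                     divided, to avoid amplifying noise in saturated lines
    """
    return np.where(telluric_model > floor,
                    observed / telluric_model, np.nan)
```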

  7. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
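
    As an illustration of the statistical-learning step, the sketch below uses a plain Gaussian-process (kriging) regression with a squared-exponential kernel to estimate values at fine-patch boundary points from coarse-resolution samples. This is only a stand-in under simplifying assumptions; the paper's actual coKriging formulation, kernels, and buffer handling are not reproduced here.

```python
import numpy as np

def gp_estimate(x_coarse, y_coarse, x_boundary, length=0.2, noise=1e-6):
    """Kriging-style estimate of boundary values from coarse data (1-D sketch)."""
    def k(a, b):
        # squared-exponential covariance between two sets of 1-D locations
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_coarse, x_coarse) + noise * np.eye(len(x_coarse))
    weights = np.linalg.solve(K, y_coarse)
    return k(x_boundary, x_coarse) @ weights

# coarse samples of a smooth field, used to fill a gappy patch boundary
xc = np.linspace(0.0, 1.0, 6)
yc = np.sin(2 * np.pi * xc)
print(gp_estimate(xc, yc, np.array([0.25, 0.75])))
```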

  8. Retrieved Products from Simulated Hyperspectral Observations of a Hurricane

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

    2015-01-01

    This study demonstrates, via Observing System Simulation Experiments (OSSEs), the potential utility of flying high spatial resolution AIRS-class IR sounders on future LEO and GEO missions. The study simulates and analyzes radiances for 3 sounders with AIRS spectral and radiometric properties on different orbits and with different spatial resolutions: 1) a control run with 13 km AIRS spatial resolution at nadir in a LEO (Aqua) orbit; 2) a 2 km spatial resolution LEO sounder at nadir (ARIES); 3) a 5 km spatial resolution sounder in GEO orbit, with radiances simulated every 72 minutes.

  9. Evaluation of the dosimetric properties of a diode detector for small field proton radiosurgery.

    PubMed

    McAuley, Grant A; Teran, Anthony V; Slater, Jerry D; Slater, James M; Wroe, Andrew J

    2015-11-08

    The small fields and sharp gradients typically encountered in proton radiosurgery require high spatial resolution dosimetric measurements, especially for field diameters below 1-2 cm. Radiochromic film provides high resolution, but requires postprocessing and special handling. Promising alternatives are diode detectors with small sensitive volumes (SV) that are capable of high resolution and real-time dose acquisition. In this study we evaluated the PTW PR60020 proton dosimetry diode using radiation fields and beam energies relevant to radiosurgery applications. Energies of 127 and 157 MeV (9.7 to 15 cm range) and initial diameters of 8, 10, 12, and 20 mm were delivered using single-stage scattering and four modulations (0, 15, 30, and 60 mm) to a water tank in our treatment room. Depth dose and beam profile data were compared with the PTW Markus N23343 ionization chamber, EBT2 Gafchromic film, and Monte Carlo simulations. Transverse dose profiles were measured using the diode in "edge-on" orientation or EBT2 film. Diode response was linear with respect to dose, uniform with dose rate, and showed an orientation-dependent (i.e., beam parallel to, or perpendicular to, detector axis) response of less than 1%. Diode vs. Markus depth-dose profiles, as well as Markus relative dose ratio vs. simulated dose-weighted average lineal energy plots, suggest that any LET-dependent diode response is negligible from particle entrance up to the very distal portion of the SOBP for the energies tested. Finally, while not possible with the ionization chamber due to partial volume effects, accurate diode depth-dose measurements of 8, 10, and 12 mm diameter beams were obtained compared to Monte Carlo simulations. Because of the small SV that allows measurements without partial volume effects, the capability of submillimeter resolution (in edge-on orientation) that is crucial for small fields and high-dose gradients (e.g., penumbra, distal edge), and negligible LET dependence over nearly the full SOBP, the PTW proton diode proved to be a useful high-resolution, real-time metrology device for small proton field radiation measurements such as those encountered in radiosurgery applications.

  10. Piezo-based, high dynamic range, wide bandwidth steering system for optical applications

    NASA Astrophysics Data System (ADS)

    Karasikov, Nir; Peled, Gal; Yasinov, Roman; Feinstein, Alan

    2017-05-01

    Piezoelectric motors and actuators are characterized by direct drive, fast response, high positioning resolution and high mechanical power density. These properties are beneficial for optical devices such as gimbals, optical image stabilizers and mirror angular positioners. The range of applications includes sensor pointing systems, image stabilization, laser steering and more. This paper reports on the construction, properties and operation of three types of piezo-based building blocks for optical steering applications: a small gimbal and a two-axis OIS (Optical Image Stabilization) mechanism, both based on piezoelectric motors, and a flexure-assisted piezoelectric actuator for mirror angular positioning. The gimbal weighs less than 190 grams, has a wide angular span (solid angle of > 2π) and allows for 80 micro-radian stabilization at a stabilization frequency of up to 25 Hz. The OIS is an X-Y, closed-loop platform with a lateral positioning resolution better than 1 μm, a stabilization frequency of up to 25 Hz and a travel of +/-2 mm. It is used for laser steering or positioning of the image sensor, based on signals from a MEMS gyro sensor. The actuator mirror positioner is based on three piezoelectric actuation axes for tip/tilt (each providing a 50 μm motion range), has a positioning resolution of 10 nm and is capable of a 1000 Hz response. A combination of the gimbal with the mirror positioner or the OIS stage is explored by simulations, indicating a <10 micro-radian stabilization capability under substantial perturbation. Simulations and experimental results are presented for a combined device providing both a wide steering angle range and high bandwidth.

  11. Cloud properties inferred from 8-12 micron data

    NASA Technical Reports Server (NTRS)

    Strabala, Kathleen I.; Ackerman, Steven A.; Menzel, W. Paul

    1994-01-01

    A trispectral combination of observations in the 8-, 11-, and 12-micron bands is suggested for detecting cloud and cloud properties in the infrared. Atmospheric ice and water vapor absorption peak in opposite halves of the window region, so that positive 8-minus-11-micron brightness temperature differences indicate cloud, while near-zero or negative differences indicate clear regions. The absorption coefficient for water increases more between 11 and 12 microns than between 8 and 11 microns, while for ice the reverse is true. Cloud phase is determined from a scatter diagram of 8-minus-11-micron versus 11-minus-12-micron brightness temperature differences; ice cloud shows a slope greater than 1 and water cloud a slope less than 1. The trispectral brightness temperature method was tested on high-resolution interferometer data, resulting in clear-cloud and cloud-phase delineation. Simulations using differing 8-micron bandwidths revealed no significant degradation of cloud property detection. Thus, the 8-micron bandwidth for future satellites can be selected based on the requirements of other applications, such as surface characterization studies. Application of the technique to current polar-orbiting High-Resolution Infrared Sounder (HIRS)-Advanced Very High Resolution Radiometer (AVHRR) datasets is constrained by the nonuniformity of the cloud scenes sensed within the large HIRS field of view. Analysis of MAS (MODIS Airborne Simulator) high-spatial-resolution (500 m) data with all three 8-, 11-, and 12-micron bands revealed sharp delineation of differing cloud and background scenes, from which a simple automated threshold technique was developed. Cloud phase, clear sky, and qualitative differences in cloud emissivity and cloud height were identified in a case study segment from 24 November 1991, consistent with the scene. More rigorous techniques would allow further cloud parameter clarification. The opportunities for global cloud delineation with the Moderate-Resolution Imaging Spectrometer (MODIS) appear excellent. The spectral selection, the spatial resolution, and the global coverage are all well suited for significant advances.
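
    A minimal threshold sketch of the trispectral idea is shown below, assuming brightness temperatures (K) in the 8, 11, and 12 micron bands are already available as numpy arrays. The numerical thresholds are illustrative placeholders, not values derived in the study.

```python
import numpy as np

def classify_scene(bt8, bt11, bt12, cloud_thresh=0.5, slope_thresh=1.0):
    """Crude trispectral cloud / phase classification from brightness temperatures."""
    d8_11 = bt8 - bt11
    d11_12 = bt11 - bt12
    label = np.full(bt8.shape, "clear", dtype=object)
    cloudy = d8_11 > cloud_thresh          # positive 8-11 difference -> cloud
    # slope of the 8-11 vs 11-12 relation: >1 suggests ice, <=1 suggests water
    slope = np.divide(d8_11, d11_12, out=np.zeros_like(d8_11),
                      where=np.abs(d11_12) > 1e-6)
    label[cloudy & (slope > slope_thresh)] = "ice cloud"
    label[cloudy & (slope <= slope_thresh)] = "water cloud"
    return label
```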

  12. The influence of model spatial resolution on simulated ozone and fine particulate matter for Europe: implications for health impact assessments

    NASA Astrophysics Data System (ADS)

    Fenech, Sara; Doherty, Ruth M.; Heaviside, Clare; Vardoulakis, Sotiris; Macintyre, Helen L.; O'Connor, Fiona M.

    2018-04-01

    We examine the impact of model horizontal resolution on simulated concentrations of surface ozone (O3) and particulate matter less than 2.5 µm in diameter (PM2.5), and the associated health impacts over Europe, using the HadGEM3-UKCA chemistry-climate model to simulate pollutant concentrations at a coarse (˜ 140 km) and a finer (˜ 50 km) resolution. The attributable fraction (AF) of total mortality due to long-term exposure to warm season daily maximum 8 h running mean (MDA8) O3 and annual-average PM2.5 concentrations is then calculated for each European country using pollutant concentrations simulated at each resolution. Our results highlight a seasonal variation in simulated O3 and PM2.5 differences between the two model resolutions in Europe. Compared to the finer resolution results, simulated European O3 concentrations at the coarse resolution are higher on average in winter and spring (˜ 10 and ˜ 6 %, respectively). In contrast, simulated O3 concentrations at the coarse resolution are lower in summer and autumn (˜ -1 and ˜ -4 %, respectively). These differences may be partly explained by differences in nitrogen dioxide (NO2) concentrations simulated at the two resolutions. Compared to O3, we find the opposite seasonality in simulated PM2.5 differences between the two resolutions. In winter and spring, simulated PM2.5 concentrations are lower at the coarse compared to the finer resolution (˜ -8 and ˜ -6 %, respectively) but higher in summer and autumn (˜ 29 and ˜ 8 %, respectively). Differences in simulated PM2.5 are also largely related to differences in convective rainfall between the two resolutions in all seasons. These differences between the two resolutions exhibit clear spatial patterns for both pollutants that vary by season, and exert a strong influence on country-to-country variations in estimated AF for the two resolutions. Warm season MDA8 O3 levels are higher in most of southern Europe, but lower in areas of northern and eastern Europe, when simulated at the coarse resolution compared to the finer resolution. Annual-average PM2.5 concentrations are higher across most of northern and eastern Europe but lower over parts of southwest Europe at the coarse compared to the finer resolution. Across Europe, differences in the AF associated with long-term exposure to population-weighted MDA8 O3 range between -0.9 and +2.6 % (largest positive differences in southern Europe), while differences in the AF associated with long-term exposure to population-weighted annual mean PM2.5 range from -4.7 to +2.8 % (largest positive differences in eastern Europe) of the total mortality. Therefore this study, with its unique focus on Europe, demonstrates that health impact assessments calculated using modelled pollutant concentrations are sensitive to a change in model resolution by up to ˜ ±5 % of the total mortality across Europe.
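
    Health impact assessments of this kind typically convert a concentration-response coefficient into an attributable fraction using a log-linear relationship. The sketch below uses the common AF = 1 - exp(-beta * X) form with placeholder coefficients and concentrations; the exact concentration-response functions used in the study are not reproduced here.

```python
import math

def attributable_fraction(conc, beta):
    """Attributable fraction of mortality for a population-weighted pollutant
    concentration, assuming a log-linear concentration-response function."""
    return 1.0 - math.exp(-beta * conc)

# Illustrative only: beta (per ug m-3) and concentrations are placeholders.
af_coarse = attributable_fraction(12.0, 0.006)   # e.g. PM2.5 at coarse resolution
af_fine = attributable_fraction(10.5, 0.006)     # e.g. PM2.5 at finer resolution
print(f"difference in AF: {100 * (af_coarse - af_fine):.2f} % of total mortality")
```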

  13. Simulation, optimization and testing of a novel high spatial resolution X-ray imager based on Zinc Oxide nanowires in Anodic Aluminium Oxide membrane using Geant4

    NASA Astrophysics Data System (ADS)

    Esfandi, F.; Saramad, S.

    2015-07-01

    In this work, a new generation of scintillator-based X-ray imagers using ZnO nanowires in an Anodized Aluminum Oxide (AAO) nanoporous template is characterized. The optical response of ordered ZnO nanowire arrays in a porous AAO template under low energy X-ray illumination is simulated with the Geant4 Monte Carlo code and compared with experimental results. The results show that for 10 keV X-ray photons, by considering the light guiding properties of zinc oxide inside the AAO template and a suitable selection of detector thickness and pore diameter, a spatial resolution of less than one micrometer and a detection efficiency of 66% are achievable. This novel nano-scintillator detector can have many advantages for medical applications in the future.

  14. Thermophysical modelling for high-resolution digital terrain models

    NASA Astrophysics Data System (ADS)

    Pelivan, I.

    2018-07-01

    A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions necessary for reaching model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of the initial boundary conditions, thermal inertia, illumination conditions, and the resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of the model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows for assigning other facet-specific properties such as local thermal inertia. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavourable illumination conditions such as little-to-no sunlight for a period of time, as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environment, and lander shadowing. In combination with an appropriate roughness model, the method is very suitable for application to disc-integrated and disc-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.

  15. Thermophysical modeling for high-resolution digital terrain models

    NASA Astrophysics Data System (ADS)

    Pelivan, I.

    2018-04-01

    A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions necessary for reaching model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of the initial boundary conditions, thermal inertia, illumination conditions, and the resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of the model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows for assigning other facet-specific properties such as local thermal inertia. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavorable illumination conditions such as little to no sunlight for a period of time, as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environment and lander shadowing. In combination with an appropriate roughness model, the method is very suitable for application to disk-integrated and disk-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.
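
    As context for the kind of calculation being accelerated, the toy sketch below solves 1-D heat conduction beneath a single facet with an explicit scheme and a surface energy balance between absorbed insolation, thermal emission, and conducted heat, starting from an assumed uniform initial temperature. All parameter values are illustrative placeholders; this is not the paper's solver or boundary-condition scheme.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def facet_temperature(flux, dt, depth=0.5, nz=50, k=0.01, rho=500.0,
                      cp=600.0, eps=0.95, t_init=150.0):
    """Explicit 1-D conduction under a facet with a radiative surface boundary.

    flux : array of absorbed solar flux at the surface per time step (W m-2)
    Returns the surface temperature history (K).
    """
    dz = depth / nz
    alpha = k / (rho * cp)
    assert alpha * dt / dz**2 < 0.5, "explicit scheme stability limit"
    T = np.full(nz, t_init)
    surf = []
    for f in flux:
        # surface node: absorbed flux - thermal emission + conduction from below
        net = f - eps * SIGMA * T[0]**4 + k * (T[1] - T[0]) / dz
        Tn = T.copy()
        Tn[0] = T[0] + dt * net / (rho * cp * dz)
        Tn[1:-1] = T[1:-1] + alpha * dt * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        Tn[-1] = Tn[-2]          # insulated lower boundary
        T = Tn
        surf.append(T[0])
    return np.array(surf)
```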

  16. Sprocket- Chain Simulation: Modelling and Simulation of a Multi Physics problem by sequentially coupling MotionSolve and nanoFluidX

    NASA Astrophysics Data System (ADS)

    Jayanthi, Aditya; Coker, Christopher

    2016-11-01

    In the last decade, CFD simulations have transitioned from being used mainly to validate final designs to driving mainstream product development. However, there are still niche application areas, such as oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, so that one has to rely on experimental methods, which are expensive. In this paper a unique example of a sprocket-chain simulation is presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in application areas with complex geometry, which pose severe challenges to classical finite volume CFD methods due to complex moving geometries, moving meshes and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations, so the method can be used in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution using MotionSolve and nanoFluidX is presented. This abstract replaces DFD16-2016-000045.

  17. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so the neutron-induced photon signatures were preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a 252Cf neutron source on one side and a HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, volume-averaged photon flux within the detector, and high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. 
The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region and in the first and last energy bins. Some cross-section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude from the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agree to within approximately 5% of the MCNP-calculated flux for all materials considered. The agreement in the code-to-code comparison cases demonstrates a proof of concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over a pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25--500 times shorter than for MCNP, as shown in Table 1. This assumed a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target. Improved MCNP runtimes could be achieved with the addition of more advanced variance reduction techniques.
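
    The flux-weighted group collapse that underlies any multigroup library of this kind can be sketched as below; the group structure, weighting flux, and the separation of discrete photon lines from the continuum used in RADSAT-NG are not reproduced here.

```python
import numpy as np

def collapse_to_groups(energy, sigma, flux, group_edges):
    """Collapse a pointwise cross section to multigroup values by flux weighting:
    sigma_g = integral(sigma * flux) / integral(flux) over each group.

    Assumes `energy` is ascending and each group contains several energy points.
    """
    sigma_g = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        mask = (energy >= lo) & (energy < hi)
        w = np.trapz(flux[mask], energy[mask])          # group-integrated flux
        sigma_g.append(np.trapz(sigma[mask] * flux[mask], energy[mask]) / w)
    return np.array(sigma_g)
```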

  18. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
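
    For reference, the LSS idea as commonly stated in the literature replaces the ill-conditioned tangent equation with a constrained least-squares problem over the trajectory, with a time-dilation term eta; the statement below is that standard published form, paraphrased here rather than taken from this report.

```latex
\min_{v,\eta}\; \frac{1}{2}\int_{0}^{T}\!\left(\lVert v(t)\rVert^{2}
      + \alpha^{2}\,\eta(t)^{2}\right)dt
\quad\text{s.t.}\quad
\frac{dv}{dt}=\frac{\partial f}{\partial u}\,v+\frac{\partial f}{\partial s}
      +\eta\, f(u),
\qquad
\frac{d\bar{J}}{ds}\approx\frac{1}{T}\int_{0}^{T}\!\left(
      \frac{\partial J}{\partial u}\cdot v+\frac{\partial J}{\partial s}
      +\eta\,\bigl(J-\bar{J}\bigr)\right)dt .
```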

  19. NASA's Earth Science Use of Commercially Availiable Remote Sensing Datasets: Cover Image

    NASA Technical Reports Server (NTRS)

    Underwood, Lauren W.; Goward, Samuel N.; Fearon, Matthew G.; Fletcher, Rose; Garvin, Jim; Hurtt, George

    2008-01-01

    The cover image incorporates high resolution stereo pairs acquired from the DigitalGlobe(R) QuickBird sensor. It shows a digital elevation model of Meteor Crater, Arizona at approximately 1.3 meter point-spacing. Image analysts used the Leica Photogrammetry Suite to produce the DEM. The outside portion was computed from two QuickBird panchromatic scenes acquired October 2006, while an Optech laser scan dataset was used for the crater's interior elevations. The crater's terrain model and image drape were created in a NASA Constellation Program project focused on simulating lunar surface environments for prototyping and testing lunar surface mission analysis and planning tools. This work exemplifies NASA's Scientific Data Purchase legacy and commercial high resolution imagery applications, as scientists use commercial high resolution data to examine lunar analog Earth landscapes for advanced planning and trade studies for future lunar surface activities. Other applications include landscape dynamics related to volcanism, hydrologic events, climate change, and ice movement.

  20. Hydrological Applications of a High-Resolution Radar Precipitation Data Base for Sweden

    NASA Astrophysics Data System (ADS)

    Olsson, Jonas; Berg, Peter; Norin, Lars; Simonsson, Lennart

    2017-04-01

    There is an increasing need for high-resolution observations of precipitation at local, regional, national and even continental level. Urbanization and other environmental changes often make societies more vulnerable to intense short-duration rainfalls (cloudbursts) and their consequences in terms of e.g. flooding and landslides. Impact and forecasting models of these hazards put very high demands on the rainfall input in terms of both resolution and accuracy. Weather radar systems obviously have a great potential in this context, but also limitations with respect to e.g. conversion algorithms and various error sources that may have a significant impact on the subsequent hydrological modelling. In Sweden, the national weather radar network has been in operation for nearly three decades, but until recently the hydrological applications have been very limited. This is mainly because of difficulties in managing the different errors and biases in the radar precipitation product, which made it hard to demonstrate any distinct added value compared with gauge-based precipitation products. In recent years, however, in light of distinct progress in developing error correction procedures, substantial efforts have been made to develop a national gauge-adjusted radar precipitation product - HIPRAD (High-Resolution Precipitation from Gauge-Adjusted Weather Radar). In HIPRAD, the original radar precipitation data are scaled to match the monthly accumulations in a national grid (termed PTHBV) created by optimal interpolation of corrected daily gauge observations, with the intention to attain both a high spatio-temporal resolution and accurate long-term accumulations. At present, HIPRAD covers the period from 2000 to the present with a resolution of 15 min and 2×2 km². A key motivation behind the development of HIPRAD is the intention to increase the temporal resolution of the national flood forecasting system from 1 day to 1 hour. Whereas a daily time step is sufficient to describe the rainfall-runoff process in large, slow river basins, which traditionally have been the main focus of the national forecasting, an hourly time step (or preferably even shorter) is required to simulate the flow in fast-responding basins; fast-responding hydrological catchments require fine spatial and temporal resolution of the precipitation input data to provide realistic results. At the daily scale, the PTHBV product is used for model initialization prior to the forecasts, but with its daily resolution it is not applicable at the hourly scale. For this purpose, a real-time version of HIPRAD has been developed which is currently running operationally. HIPRAD is also being used for historical simulations with an hourly time step, which is important for e.g. water quality assessment. Finally, we will use HIPRAD to gain an improved knowledge of the short-duration precipitation climate in Sweden. Currently there are many open issues with respect to e.g. geographical differences, spatial correlations and areal extremes. Here we will show and discuss selected results from the ongoing development and validation of HIPRAD as well as its various applications for hydrological forecasting and risk assessment. Further, web resources containing radar-based observations and forecasts for hydrological applications will be demonstrated. Finally, some future research directions will be outlined.
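
    The gauge-adjustment principle behind HIPRAD, scaling each radar pixel's sub-daily values so that the monthly accumulation matches the gauge-based grid, can be sketched as below. The array shapes and names are illustrative, and the operational implementation is considerably more involved.

```python
import numpy as np

def gauge_adjust(radar_15min, gauge_monthly, eps=1e-3):
    """Scale 15-minute radar precipitation so monthly sums match a gauge grid.

    radar_15min   : array (time, y, x) of radar accumulations for one month (mm)
    gauge_monthly : array (y, x) of gauge-interpolated monthly totals (mm)
    """
    radar_monthly = radar_15min.sum(axis=0)
    # leave pixels with (near-)zero radar totals unscaled to avoid dividing by zero
    factor = np.where(radar_monthly > eps, gauge_monthly / radar_monthly, 1.0)
    return radar_15min * factor[None, :, :]
```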

  1. Practical algorithms for simulation and reconstruction of digital in-line holograms.

    PubMed

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2015-03-20

    Here we present practical methods for simulation and reconstruction of in-line digital holograms recorded with plane and spherical waves. The algorithms described here are applicable to holographic imaging of an object exhibiting absorption as well as phase-shifting properties. Optimal parameters, related to distances, sampling rate, and other factors for successful simulation and reconstruction of holograms are evaluated and criteria for the achievable resolution are worked out. Moreover, we show that the numerical procedures for the reconstruction of holograms recorded with plane and spherical waves are identical under certain conditions. Experimental examples of holograms and their reconstructions are also discussed.
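
    For the plane-wave case, a standard way to propagate a hologram back to the object plane is the angular-spectrum method. The sketch below follows that textbook approach under simplifying assumptions and is not a reproduction of the algorithms detailed in the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
    """Propagate a complex wave field by `distance` using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    mask = arg > 0                                   # keep propagating components only
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    transfer = np.where(mask, np.exp(1j * kz * distance), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Reconstruction idea: normalize the recorded intensity by the background image
# and back-propagate (negative distance) to focus on the object plane.
```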

  2. Finding the best resolution for the Kingman-Tajima coalescent: theory and applications.

    PubMed

    Sainudiin, Raazesh; Stadler, Tanja; Véber, Amandine

    2015-05-01

    Many summary statistics currently used in population genetics and in phylogenetics depend only on a rather coarse resolution of the underlying tree (the number of extant lineages, for example). Hence, for computational purposes, working directly on these resolutions appears to be much more efficient. However, this approach seems to have been overlooked in the past. In this paper, we describe six different resolutions of the Kingman-Tajima coalescent together with the corresponding Markov chains, which are essential for inference methods. Two of the resolutions are the well-known n-coalescent and the lineage death process due to Kingman. Two other resolutions were mentioned by Kingman and Tajima, but never explicitly formalized. Another two resolutions are novel, and complete the picture of a multi-resolution coalescent. For all of them, we provide the forward and backward transition probabilities, the probability of visiting a given state as well as the probability of a given realization of the full Markov chain. We also provide a description of the state-space that highlights the computational gain obtained by working with lower-resolution objects. Finally, we give several examples of summary statistics that depend on a coarser resolution of Kingman's coalescent, on which simulations are usually based.
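
    As a concrete example of the coarsest resolution, the block-counting (lineage death) process can be simulated directly, because the waiting time while k lineages remain is exponential with rate k(k-1)/2 in standard coalescent time units. The minimal sketch below is an illustration, not code from the paper.

```python
import random

def lineage_death_times(n):
    """Simulate the Kingman lineage-death process: waiting times while
    k = n, n-1, ..., 2 lineages remain (coalescent time units)."""
    times = []
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2.0          # pairwise coalescence rate
        times.append(random.expovariate(rate))
    return times

# Example: total tree height for a sample of 10 lineages
print(sum(lineage_death_times(10)))
```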

  3. The impact of vertical resolution in mesoscale model AROME forecasting of radiation fog

    NASA Astrophysics Data System (ADS)

    Philip, Alexandre; Bergot, Thierry; Bouteloup, Yves; Bouyssel, François

    2015-04-01

    Short-term forecasting of fog at airports has safety and economic impacts. Numerical simulations have been performed with the mesoscale model AROME (Application of Research to Operations at Mesoscale) (Seity et al. 2011). Three vertical resolutions (60, 90 and 156 levels) are used to show the impact of vertical resolution on the numerical forecasting of radiation fog. Observations at Roissy Charles de Gaulle airport are compared to simulations. Significant differences in the onset, evolution and dissipation of fog were found. The high resolution simulation is in better agreement with observations than a coarser one. The surface boundary layer and incoming long-wave radiation are better represented. A more realistic evolution of the liquid water content allows a better anticipation of low visibility procedures (ceiling < 60 m and/or visibility < 600 m). The radiation fog case study shows that a well defined vertical grid is necessary to better represent local phenomena. A statistical study over 6 months (October 2011 - March 2012) using different configurations was carried out; statistically, the results were the same as in the radiation fog case study. Seity Y., P. Brousseau, S. Malardel, G. Hello, P. Bénard, F. Bouttier, C. Lac, V. Masson, 2011: The AROME-France convective scale operational model. Mon. Wea. Rev., 139, 976-991.

  4. Evaluation of cool season precipitation event characteristics over the Northeast US in a suite of downscaled climate model hindcasts

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Waliser, Duane E.; Kim, Jinwon; Ferraro, Robert

    2017-08-01

    Cool season precipitation event characteristics are evaluated across a suite of downscaled climate models over the northeastern US. Downscaled hindcast simulations are produced by dynamically downscaling the Modern-Era Retrospective Analysis for Research and Applications version 2 (MERRA2) using the National Aeronautics and Space Administration (NASA)-Unified Weather Research and Forecasting (WRF) regional climate model (RCM) and the Goddard Earth Observing System Model, Version 5 (GEOS-5) global climate model. NU-WRF RCM simulations are produced at 24, 12, and 4-km horizontal resolutions using a range of spectral nudging schemes, while the MERRA2 global downscaled run is provided at 12.5-km. All model runs are evaluated using four metrics designed to capture key features of precipitation events: event frequency, event intensity, event total, and event duration. Overall, the downscaling approaches result in a reasonable representation of many of the key features of precipitation events over the region; however, considerable biases exist in the magnitude of each metric. Based on this evaluation there is no clear indication that higher resolution simulations produce more realistic results in general; however, many small-scale features such as orographic enhancement of precipitation are only captured at higher resolutions, suggesting some added value over coarser resolution. While the differences between simulations produced using nudging and no nudging are small, there is some improvement in model fidelity when nudging is introduced, especially at a cutoff wavelength of 600 km compared to 2000 km. Based on the results of this evaluation, dynamical regional downscaling using NU-WRF results in a more realistic representation of precipitation event climatology than the global downscaling of MERRA2 using GEOS-5.

  5. High-resolution precipitation data derived from dynamical downscaling using the WRF model for the Heihe River Basin, northwest China

    NASA Astrophysics Data System (ADS)

    Zhang, Xuezhen; Xiong, Zhe; Zheng, Jingyun; Ge, Quansheng

    2018-02-01

    The community of climate change impact assessment and adaptation research needs regional high-resolution (spatial) meteorological data. This study produced two downscaled precipitation datasets with spatial resolutions as high as 3 km by 3 km for the Heihe River Basin (HRB) from 2011 to 2014 using the Weather Research and Forecasting (WRF) model nested with Final Analysis (FNL) data from the National Centers for Environmental Prediction (NCEP) and ERA-Interim from the European Centre for Medium-Range Weather Forecasts (ECMWF) (hereafter referred to as FNLexp and ERAexp, respectively). Both downscaling simulations generally reproduced the observed spatial patterns of precipitation. However, users should keep in mind that the two downscaled datasets do not exactly match the observations. In comparison to the remote sensing-based estimates, the FNLexp produced a bias in the heavy precipitation centers. In comparison to the ground gauge-based measurements, for the warm season (May to September), the ERAexp produced more precipitation (root-mean-square error (RMSE) = 295.4 mm across the 43 sites) and more heavy rainfall days, while the FNLexp produced less precipitation (RMSE = 115.6 mm) and fewer heavy rainfall days. Both the ERAexp and FNLexp produced considerably more precipitation for the cold season (October to April), with RMSE values of 119.5 and 32.2 mm, respectively, and more heavy precipitation days. Along with simulating a higher number of heavy precipitation days, both the FNLexp and ERAexp also simulated stronger extreme precipitation. Sensitivity experiments show that the bias of these simulations is much more sensitive to microphysical parameterizations than to the spatial resolution of the topography data. For the HRB, application of the WSM3 scheme may improve the performance of the WRF model.

  6. A multi-purpose readout electronics for CdTe and CZT detectors for x-ray imaging applications

    NASA Astrophysics Data System (ADS)

    Yue, X. B.; Deng, Z.; Xing, Y. X.; Liu, Y. N.

    2017-09-01

    A multi-purpose readout electronics based on the DPLMS digital filter has been developed for CdTe and CZT detectors for X-ray imaging applications. Different filter coefficients can be synthesized, optimized either for high energy resolution at relatively low counting rates or for high-rate photon counting with reduced energy resolution. The effects of signal width constraints, sampling rate and sampling length were studied numerically by Monte Carlo simulation with simple CR-RC shaper input signals. The signal width constraint had a minor effect, and the ENC increased by only 6.5% when the signal width was shortened down to 2 τc. The required sampling rate and length depended on the characteristic time constants of both the input and output signals. For simple CR-RC input signals, the minimum number of filter coefficients was 12, with a 10% increase in ENC, when the output time constant was close to the input shaping time. A prototype readout electronics was developed for demonstration, using a previously designed analog front-end ASIC and a commercial ADC card. Two different DPLMS filters were successfully synthesized and applied for high resolution and high counting rate applications, respectively. The readout electronics was also tested with a linear array CdTe detector. The energy resolution of the Am-241 59.5 keV peak was measured to be 6.41% FWHM for the high resolution filter and 13.58% FWHM for the high counting rate filter with a 160 ns signal width constraint.

  7. Simulating the dispersion of NOx and CO2 in the city of Zurich at building resolving scale

    NASA Astrophysics Data System (ADS)

    Brunner, Dominik; Berchet, Antoine; Emmenegger, Lukas; Henne, Stephan; Müller, Michael

    2017-04-01

    Cities are emission hotspots for both greenhouse gases and air pollutants. They contribute about 70% of global greenhouse gas emissions and are home to a growing number of people potentially suffering from poor air quality in the urban environment. High-resolution atmospheric transport modelling of greenhouse gases and air pollutants at the city scale therefore has several important applications, such as air pollutant exposure assessment, air quality forecasting, or urban planning and management. When combined with observations, it also has the potential to quantify emissions and monitor their long-term trends, which is the main motivation for the deployment of urban greenhouse gas monitoring networks. We have developed a comprehensive atmospheric modelling system for the city of Zurich, Switzerland (about 600,000 inhabitants including suburbs), which is composed of the mesoscale model GRAMM, simulating the flow in a larger domain around Zurich at 100 m resolution, and the nested high-resolution model GRAL, simulating the flow and air pollutant dispersion in the city at building-resolving (5-10 m) scale. Based on an extremely detailed emission inventory provided by the municipality of Zurich, we have simulated two years of hourly NOx and CO2 concentration fields across the entire city. Here, we present a detailed evaluation of the simulations against a comprehensive network of continuous monitoring sites and passive samplers for NOx and analyze the sensitivity of the results to the temporal variability of the emissions. Furthermore, we present first simulations of CO2 and investigate the challenges associated with CO2 sources not covered by the inventory, such as human respiration and exchange fluxes with urban vegetation.

  8. Multi-year application of WRF-CAM5 over East Asia-Part I: Comprehensive evaluation and formation regimes of O 3 and PM 2.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jian; Zhang, Yang; Wang, Kai

    Accurate simulations of air quality and climate require robust model parameterizations on regional and global scales. The Weather Research and Forecasting model with Chemistry version 3.4.1 has been coupled with physics packages from the Community Atmosphere Model version 5 (CAM5) (WRF-CAM5) to assess the robustness of the CAM5 physics package for regional modeling at higher grid resolutions than the grid resolutions typically used in global modeling. In this two-part study, Part I describes the application and evaluation of WRF-CAM5 over East Asia at a horizontal resolution of 36 km for six years: 2001, 2005, 2006, 2008, 2010, and 2011. The simulations are evaluated comprehensively with a variety of datasets from surface networks, satellites, and aircraft. The results show that meteorology is relatively well simulated by WRF-CAM5. However, cloud variables are largely or moderately underpredicted, indicating uncertainties in the model treatments of the dynamics, thermodynamics, and microphysics of clouds/ice as well as aerosol-cloud interactions. For chemical predictions, the tropospheric column abundances of CO, NO2, and O3 are well simulated, but those of SO2 and HCHO are moderately overpredicted, and the column HCHO/NO2 indicator is underpredicted. Large biases exist in the surface concentrations of CO, NO2, and PM10 due to uncertainties in the emissions as well as in vertical mixing. The underpredictions of NO lead to insufficient O3 titration and thus O3 overpredictions. The model can generally reproduce the observed O3 and PM indicators. These indicators suggest controlling NOx emissions throughout the year, and VOC emissions in summer in big cities and in winter over the North China Plain, North/South Korea, and Japan, to reduce surface O3, and controlling SO2, NH3, and NOx throughout the year to reduce inorganic surface PM.

  9. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
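
    The operator-splitting structure under discussion can be sketched schematically as below, with a chemistry step that is an integer multiple of the transport step; the `advect` and `react` callables are placeholders, not GEOS-Chem routines.

```python
def run(state, t_end, dt_transport, dt_chem, advect, react):
    """Schematic operator splitting: transport every dt_transport seconds,
    chemistry every dt_chem seconds (dt_chem an integer multiple of dt_transport,
    both given as integer numbers of seconds)."""
    assert dt_chem % dt_transport == 0
    steps_per_chem = dt_chem // dt_transport
    t, n = 0, 0
    while t < t_end:
        state = advect(state, dt_transport)   # transport operator
        n += 1
        if n % steps_per_chem == 0:
            state = react(state, dt_chem)     # chemistry (and deposition) operator
        t += dt_transport
    return state
```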

  10. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2004-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
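
    The Bayesian compositing step can be illustrated with the usual Gaussian-error weighting of database entries by their radiance mismatch; the covariance handling, channel set, and database structure of the actual algorithm are not reproduced in this sketch.

```python
import numpy as np

def bayesian_retrieval(obs_tb, db_tb, db_rainrate, sigma=1.0):
    """Weight database profiles by exp(-0.5 * ||Tb_obs - Tb_sim||^2 / sigma^2)
    and return the weighted-mean surface rain rate.

    obs_tb      : observed brightness temperatures, shape (n_channels,)
    db_tb       : simulated brightness temperatures, shape (n_profiles, n_channels)
    db_rainrate : surface rain rate of each database profile, shape (n_profiles,)
    """
    misfit = np.sum((db_tb - obs_tb) ** 2, axis=1)
    weights = np.exp(-0.5 * misfit / sigma**2)
    return np.sum(weights * db_rainrate) / np.sum(weights)
```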

  11. A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis

    NASA Astrophysics Data System (ADS)

    Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.

    2006-12-01

    Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on the large scale circulation and tropical modes. The model was developed based on the NCAR Weather Research and Forecasting Model (WRF), configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited area domains over which WRF is typically applied, the channel mode alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel applied at 36 km horizontal resolution for 5 years from 1996 to 2000, with the large scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed to include two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection in the Maritime Continent. The simulations realistically captured the large-scale circulation including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.

  12. High-Performance Tiled WMS and KML Web Server

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.

    Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.

  14. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    NASA Astrophysics Data System (ADS)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid refinement approaches using global variable resolution model and nesting for high-resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.

  15. A reduced-order modeling approach to represent subgrid-scale hydrological dynamics for land-surface simulations: application in a polygonal tundra landscape

    DOE PAGES

    Pau, G. S. H.; Bisht, G.; Riley, W. J.

    2014-09-17

    Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10³) with very small relative approximation error (< 0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
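
    The proper-orthogonal-decomposition mapping idea can be sketched as follows: build a POD basis from fine-resolution training snapshots, learn a map from coarse snapshots to POD coefficients, and reconstruct a fine field from a new coarse solution. The least-squares map below is a simplified stand-in for the method's actual coefficient-mapping step, and all array layouts are assumptions for illustration.

```python
import numpy as np

def build_pod(fine_snapshots, n_modes):
    """POD basis from fine-resolution training snapshots (one snapshot per column)."""
    mean = fine_snapshots.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(fine_snapshots - mean, full_matrices=False)
    return mean, U[:, :n_modes]

def fit_coarse_to_coeff(coarse_snapshots, fine_snapshots, mean, basis):
    """Least-squares map from coarse snapshots to POD coefficients."""
    coeffs = basis.T @ (fine_snapshots - mean)             # (n_modes, n_snapshots)
    A, *_ = np.linalg.lstsq(coarse_snapshots.T, coeffs.T, rcond=None)
    return A                                               # (n_coarse, n_modes)

def reconstruct(coarse_field, A, mean, basis):
    """Reconstruct a fine-resolution field from a new coarse-resolution solution."""
    return mean.ravel() + basis @ (coarse_field @ A)
```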

  16. Introducing CGOLS: The Cholla Galactic Outflow Simulation Suite

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2018-06-01

    We present the Cholla Galactic OutfLow Simulations (CGOLS) suite, a set of extremely high resolution global simulations of isolated disk galaxies designed to clarify the nature of multiphase structure in galactic winds. Using the GPU-based code Cholla, we achieve unprecedented resolution in these simulations, modeling galaxies over a 20 kpc region at a constant resolution of 5 pc. The simulations include a feedback model designed to test the effects of different mass- and energy-loading factors on galactic outflows over kiloparsec scales. In addition to describing the simulation methodology in detail, we also present the results from an adiabatic simulation that tests the frequently adopted analytic galactic wind model of Chevalier & Clegg. Our results indicate that the Chevalier & Clegg model is a good fit to nuclear starburst winds in the nonradiative region of parameter space. Finally, we investigate the role of resolution and convergence in large-scale simulations of multiphase galactic winds. While our largest-scale simulations show convergence of observable features like soft X-ray emission, our tests demonstrate that simulations of this kind with resolutions greater than 10 pc are not yet converged, confirming the need for extreme resolution in order to study the structure of winds and their effects on the circumgalactic medium.

  17. Modeling of technical soil-erosion control measures and its impact on soil erosion off-site effects within urban areas

    NASA Astrophysics Data System (ADS)

    Dostal, Tomas; Devaty, Jan

    2013-04-01

    The paper presents results of surface runoff, soil erosion and sediment transport modeling using the Erosion 3D software, a physically based, event-oriented, fully distributed mathematical simulation model. Various methods to simulate technical soil-erosion conservation measures were tested, using alternative digital elevation models of different precision and resolution. Ditches and baulks were simulated by three different approaches: (i) by changing the land-cover parameters to increase infiltration and decrease flow velocity, (ii) by changing the land-cover parameters to completely infiltrate the surface runoff, and (iii) by adjusting the height of the digital elevation model by "burning in" the channels of the ditches. Results show the advantages and disadvantages of each approach and identify suitable methods for particular combinations of digital elevation model and simulation purpose. Furthermore, a set of simulations was carried out to model the situation before and after the application of technical soil-erosion conservation measures within a small catchment of 4 km². These simulations were focused on quantitative and qualitative assessment of the impact of technical soil-erosion control measures on soil erosion off-site effects within urban areas located downstream of intensively used agricultural fields. The scenarios were built upon a raster digital elevation model with a spatial resolution of 3 meters derived from LiDAR 5G vector point elevation data. Use of this high-resolution elevation model allowed the technical soil-erosion control measures to be simulated by direct terrain elevation adjustment. Structures within the settlements were also emulated by direct changes in the elevation of the terrain model: the buildings were lifted up to simulate the complicated flow behavior of surface runoff within urban areas, following the approach of Arévalo (2011) but focusing on the use of commonly available data without extensive detailed editing. Application of the technical soil-erosion control measures induced a strong change in the overall amount of eroded/deposited material as well as in the spatial erosion/deposition patterns within the settlement areas. Validation of the modeled scenarios against measured data was not possible, as no real runoff event was recorded in the target area, so conclusions were drawn by comparing the different modeled scenarios. Advantages and disadvantages of the approach used to simulate technical soil-erosion conservation measures are evaluated and discussed, as well as the impact of using high-resolution elevation data on the intensity and spatial distribution of soil erosion and deposition. The model proved able to show the detailed distribution of damage over the target urban area, which is highly sensitive to the off-site effects of surface runoff, soil erosion and sediment transport; it also showed high sensitivity to the input data, especially the DEM, which affects the surface runoff pattern and therefore the intensity of harmful effects. Acknowledgement: This paper has been supported by projects: Ministry of the Interior of the CR VG 20122015092, and project NAZV QI91C008 TPEO.
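
    The "burning in" of ditch channels mentioned in approach (iii) amounts to lowering the elevation of DEM cells along the ditch centrelines so that routed surface runoff follows the intended channels. A minimal sketch of that adjustment is given below; the grid, ditch depth and rasterised centreline are hypothetical, and a real application would rasterise surveyed ditch geometries instead.

```python
import numpy as np

def burn_channels(dem, channel_mask, depth=0.5):
    """Lower DEM cells flagged as channel centreline by a fixed depth (m).

    dem          : 2-D array of elevations (m)
    channel_mask : boolean 2-D array, True where a ditch runs
    depth        : how far to lower the terrain along the ditch
    """
    burned = dem.copy()
    burned[channel_mask] -= depth
    return burned

# Hypothetical 3 m resolution DEM with a ditch along the diagonal.
dem = np.linspace(100.0, 110.0, 100 * 100).reshape(100, 100)
mask = np.zeros_like(dem, dtype=bool)
rows = np.arange(100)
mask[rows, rows] = True

dem_burned = burn_channels(dem, mask, depth=0.4)
print("max lowering:", float((dem - dem_burned).max()), "m")
```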

  18. Towards a Fine-Resolution Global Coupled Climate System for Prediction on Decadal/Centennial Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Julie L.

    The over-arching goal of this project was to contribute to the realization of a fully coupled fine resolution Earth System Model simulation in which a weather-scale atmosphere is coupled to an ocean in which mesoscale eddies are largely resolved. Both a prototype fine-resolution fully coupled ESM simulation and a first-ever multi-decadal forced fine-resolution global coupled ocean/ice simulation were configured, tested, run, and analyzed as part of this grant. Science questions focused on the gains from the use of high horizontal resolution, particularly in the ocean and sea-ice, with respect to climatically important processes. Both these fine resolution coupled ocean/sea ice and fully-coupled simulations and precedent stand-alone eddy-resolving ocean and eddy-permitting coupled ocean/ice simulations were used to explore the high resolution regime. Overall, these studies showed that the presence of mesoscale eddies significantly impacted mixing processes and the global meridional overturning circulation in the ocean simulations. Fourteen refereed publications and a Ph.D. dissertation resulted from this grant.

  19. Sources and pathways of the upscale effects on the Southern Hemisphere jet in MPAS-CAM4 variable-resolution simulations

    DOE PAGES

    Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby; ...

    2016-10-22

    Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.

  20. Sources and pathways of the upscale effects on the Southern Hemisphere jet in MPAS-CAM4 variable-resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby

    Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.

  1. New methods and astrophysical applications of adaptive mesh fluid simulations

    NASA Astrophysics Data System (ADS)

    Wang, Peng

    The formation of stars, galaxies and supermassive black holes is among the most interesting unsolved problems in astrophysics. Those problems are highly nonlinear and involve enormous dynamical ranges. Thus numerical simulations with spatial adaptivity are crucial in understanding those processes. In this thesis, we discuss the development and application of adaptive mesh refinement (AMR) multi-physics fluid codes to simulate those nonlinear structure formation problems. To simulate the formation of star clusters, we have developed an AMR magnetohydrodynamics (MHD) code, coupled with radiative cooling. We have also developed novel algorithms for sink particle creation, accretion, merging and outflows, all of which are coupled with the fluid algorithms using operator splitting. With this code, we have been able to perform the first AMR-MHD simulation of star cluster formation for several dynamical times, including sink particle and protostellar outflow feedbacks. The results demonstrated that protostellar outflows can drive supersonic turbulence in dense clumps and explain the observed slow and inefficient star formation. We also suggest that the global collapse rate is the most important factor in controlling the massive star accretion rate. On the topic of galaxy formation, we discuss the results of three projects. In the first project, using cosmological AMR hydrodynamics simulations, we found that isolated massive stars still form in cosmic string wakes even though the mega-parsec scale structure has been perturbed significantly by the cosmic strings. In the second project, we calculated the dynamical heating rate in galaxy formation. We found that balancing our heating rate against the atomic cooling rate yields a critical halo mass which agrees with the result of numerical simulations. This demonstrates that the effect of dynamical heating should be incorporated into semi-analytical work in the future. In the third project, using our AMR-MHD code coupled with the radiative cooling module, we performed the first MHD simulations of disk galaxy formation. We find that the initial magnetic fields are quickly amplified to Milky-Way strength in a self-regulated way, with an amplification rate of roughly one e-folding per orbit. This suggests that Milky Way strength magnetic fields might be common in high redshift disk galaxies. We have also developed an AMR relativistic hydrodynamics code to simulate black hole relativistic jets. We discuss the coupling of the AMR framework with various relativistic solvers and conduct extensive algorithmic comparisons. Via various test problems, we emphasize the importance of resolution studies in relativistic flow simulations because extremely high resolution is required, especially when shear flows are present in the problem. Then we present the results of 3D simulations of supermassive black hole jet propagation and gamma ray burst jet breakout. Resolution studies of the two 3D jet simulations further highlight the need for high resolution to accurately compute relativistic flows. Finally, to push forward the kind of simulations described above, we need faster codes with more physics included. We describe an implementation of compressible inviscid fluid solvers with AMR on Graphics Processing Units (GPU) using NVIDIA's CUDA. We show that the class of high resolution shock capturing schemes can be mapped naturally onto this architecture. For both uniform and adaptive simulations, we achieve approximately 10 times faster execution on one Quadro FX 5600 GPU than on a single 3 GHz Intel core on the host computer. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is demonstrated by implementing a magnetohydrodynamic solver and comparing its performance to the pure hydrodynamic case.

  2. Dynamically downscaled climate simulations over North America: Methods, evaluation, and supporting documentation for users

    USGS Publications Warehouse

    Hostetler, S.W.; Alder, J.R.; Allan, A.M.

    2011-01-01

    We have completed an array of high-resolution simulations of present and future climate over Western North America (WNA) and Eastern North America (ENA) by dynamically downscaling global climate simulations using a regional climate model, RegCM3. The simulations are intended to provide long time series of internally consistent surface and atmospheric variables for use in climate-related research. In addition to providing high-resolution weather and climate data for the past, present, and future, we have developed an integrated data flow and methodology for processing, summarizing, viewing, and delivering the climate datasets to a wide range of potential users. Our simulations were run over 50- and 15-kilometer model grids in an attempt to capture more of the climatic detail associated with processes such as topographic forcing than can be captured by general circulation models (GCMs). The simulations were run using output from four GCMs. All simulations span the present (for example, 1968-1999), common periods of the future (2040-2069), and two simulations continuously cover 2010-2099. The trace gas concentrations in our simulations were the same as those of the GCMs: the IPCC 20th century time series for 1968-1999 and the A2 time series for simulations of the future. We demonstrate that RegCM3 is capable of producing present day annual and seasonal climatologies of air temperature and precipitation that are in good agreement with observations. Important features of the high-resolution climatology of temperature, precipitation, snow water equivalent (SWE), and soil moisture are consistently reproduced in all model runs over WNA and ENA. The simulations provide a potential range of future climate change for selected decades and display common patterns of the direction and magnitude of changes. As expected, there are some model to model differences that limit interpretability and give rise to uncertainties. Here, we provide background information about the GCMs and the RegCM3, a basic evaluation of the model output and examples of simulated future climate. We also provide information needed to access the web applications for visualizing and downloading the data, and give complete metadata that describe the variables in the datasets.

  3. The Role of Temporal Evolution in Modeling Atmospheric Emissions from Tropical Fires

    NASA Technical Reports Server (NTRS)

    Marlier, Miriam E.; Voulgarakis, Apostolos; Shindell, Drew T.; Faluvegi, Gregory S.; Henry, Candise L.; Randerson, James T.

    2014-01-01

    Fire emissions associated with tropical land use change and maintenance influence atmospheric composition, air quality, and climate. In this study, we explore the effects of representing fire emissions at daily versus monthly resolution in a global composition-climate model. We find that simulations of aerosols are impacted more by the temporal resolution of fire emissions than trace gases such as carbon monoxide or ozone. Daily-resolved datasets concentrate emissions from fire events over shorter time periods and allow them to more realistically interact with model meteorology, reducing how often emissions are concurrently released with precipitation events and in turn increasing peak aerosol concentrations. The magnitude of this effect varies across tropical ecosystem types, ranging from smaller changes in modeling the low intensity, frequent burning typical of savanna ecosystems to larger differences when modeling the short-term, intense fires that characterize deforestation events. The utility of modeling fire emissions at a daily resolution also depends on the application, such as modeling exceedances of particulate matter concentrations over air quality guidelines or simulating regional atmospheric heating patterns.

  4. Oceanography at coastal scales: Introduction to the special issue on results from the EU FP7 FIELD_AC project

    NASA Astrophysics Data System (ADS)

    Sánchez-Arcilla, Agustín; Wolf, Judith; Monbaliu, Jaak

    2014-09-01

    The high-resolution, coupled forecasting of wind, waves and currents in restricted coastal domains poses a number of important challenges, which limit the quality of predictions in the present state of the art. This paper presents the main results obtained for such coastal domains, with reference to a variety of modelling suites and observing networks for: a) Liverpool Bay; b) German Bight; c) Gulf of Venice; and d) the Catalan coast. All of these areas are restricted domains, where boundary effects play a significant role in the resulting inner dynamics. This contribution also addresses the themes of the other papers in this Special Issue, ranging from observations to simulations. Emphasis is placed upon the physics controlling such restricted areas. The text also deals with the transfer to end-users and other interested parties, since the requirements on resolution, accuracy and robustness must be linked to their applications. Finally, some remarks are included on the way forward for coastal oceanography and the synergetic combination of in-situ and remote measurements with high-resolution 3D simulations.

  5. Image synthesis for SAR system, calibration and processor design

    NASA Technical Reports Server (NTRS)

    Holtzman, J. C.; Abbott, J. L.; Kaupp, V. H.; Frost, V. S.

    1978-01-01

    The Point Scattering Method of simulating radar imagery rigorously models all aspects of the imaging radar phenomena. Its computational algorithms operate on a symbolic representation of the terrain test site to calculate such parameters as range, angle of incidence, resolution cell size, etc. Empirical backscatter data and elevation data are utilized to model the terrain. Additionally, the important geometrical/propagation effects such as shadow, foreshortening, layover, and local angle of incidence are rigorously treated. Applications of radar image simulation to a proposed calibrated SAR system are highlighted: soil moisture detection and vegetation discrimination.

  6. Evaluating runoff simulations from the Community Land Model 4.0 using observations from flux towers and a mountainous watershed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hongyi; Huang, Maoyi; Wigmosta, Mark S.

    2011-12-24

    Previous studies using the Community Land Model (CLM) focused on simulating land-atmosphere interactions and water balance at continental to global scales, with limited attention paid to its capability for hydrologic simulations at watershed or regional scales. This study evaluates the performance of CLM 4.0 (CLM4) for hydrologic simulations, and explores possible directions of improvement. Specifically, it is found that CLM4 tends to produce unrealistically large temporal variation of runoff for applications at a mountainous catchment in the Northwest United States where subsurface runoff is dominant, as well as at a few flux tower sites. We show that runoff simulations from CLM4 can be improved by: (1) increasing spatial resolution of the land surface representations; (2) calibrating parameter values; (3) replacing the subsurface formulation with a more general nonlinear function; (4) implementing the runoff generation schemes from the Variable Infiltration Capacity (VIC) model. This study also highlights the importance of evaluating both the energy and water fluxes in applications of land surface models across multiple scales.

  7. Application of fire and evacuation models in evaluation of fire safety in railway tunnels

    NASA Astrophysics Data System (ADS)

    Cábová, Kamila; Apeltauer, Tomáš; Okřinová, Petra; Wald, František

    2017-09-01

    The paper describes an application of numerical simulation of fire dynamics and of the evacuation of people in a tunnel. The Fire Dynamics Simulator software tool is used to simulate the temperature distribution and the development of smoke in a railway tunnel. Compared to the temperature curves usually used in the design stage, the results show that the numerical model gives a lower temperature for the hot smoke layer. Outputs of the numerical fire simulation also make it possible to improve models of the evacuation of people during tunnel fires. In the presented study, the calculated height of the smoke layer in the tunnel falls, within 10 min of fire ignition, below the 2.2 m level that is considered the maximum limit for safe evacuation. Simulating the evacuation process at a larger scale together with fire dynamics can provide very valuable information about important safety conditions, such as Available Safe Evacuation Time (ASET) versus Required Safe Evacuation Time (RSET). Using an example in the EXODUS software, the paper summarizes selected results of the evacuation model that a designer should keep in mind when preparing an evacuation plan.

  8. Exploring connectivity with large-scale Granger causality on resting-state functional MRI.

    PubMed

    DSouza, Adora M; Abidin, Anas Z; Leistritz, Lutz; Wismüller, Axel

    2017-08-01

    Large-scale Granger causality (lsGC) is a recently developed, resting-state functional MRI (fMRI) connectivity analysis approach that estimates multivariate voxel-resolution connectivity. Unlike most commonly used multivariate approaches, which establish coarse-resolution connectivity by aggregating voxel time-series to avoid an underdetermined problem, lsGC estimates voxel-resolution, fine-grained connectivity by incorporating an embedded dimension reduction. We investigate the application of lsGC to realistic fMRI simulations, modeling the smoothing of neuronal activity by the hemodynamic response function and repetition time (TR), and to empirical resting-state fMRI data. Subsequently, functional subnetworks are extracted from lsGC connectivity measures for both datasets and validated quantitatively. We also provide guidelines to select lsGC free parameters. Results indicate that lsGC reliably recovers underlying network structure with an area under the receiver operating characteristic curve (AUC) of 0.93 at TR=1.5s for a 10-min session of fMRI simulations. Furthermore, subnetworks of closely interacting modules are recovered from the aforementioned lsGC networks. Results on empirical resting-state fMRI data demonstrate recovery of visual and motor cortex in close agreement with spatial maps obtained from (i) a visuo-motor fMRI stimulation task-sequence (Accuracy=0.76) and (ii) independent component analysis (ICA) of resting-state fMRI (Accuracy=0.86). Compared with the conventional Granger causality approach (AUC=0.75), lsGC produces better network recovery on fMRI simulations. Furthermore, the conventional approach cannot recover functional subnetworks from empirical fMRI data, since quantifying voxel-resolution connectivity is not possible as a consequence of the underdetermined problem. Functional network recovery from fMRI data suggests that lsGC gives useful insight into connectivity patterns from resting-state fMRI at a multivariate voxel-resolution.
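
    Granger causality itself rests on a prediction-error comparison: a series x is said to Granger-cause y if the past of x improves the prediction of y beyond what y's own past provides. The sketch below illustrates the conventional pairwise version (the baseline the abstract compares lsGC against), not the large-scale lsGC extension with embedded dimension reduction; the lag order, toy data and log-variance-ratio statistic are illustrative assumptions.

```python
import numpy as np

def granger_pairwise(x, y, lag=2):
    """Return a variance-ratio statistic for 'x Granger-causes y'."""
    n = len(y)
    rows = range(lag, n)
    Y = np.array([y[t] for t in rows])
    # Restricted model: y's own past only; unrestricted: y's and x's past.
    R = np.array([[y[t - k] for k in range(1, lag + 1)] for t in rows])
    U = np.array([[*(y[t - k] for k in range(1, lag + 1)),
                   *(x[t - k] for k in range(1, lag + 1))] for t in rows])

    def rss(X):
        X1 = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        resid = Y - X1 @ beta
        return float(resid @ resid)

    return np.log(rss(R) / rss(U))        # > 0 means x's past helps predict y

# Toy example: x drives y with a one-step delay.
rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

print("x -> y influence:", round(granger_pairwise(x, y), 3))   # large
print("y -> x influence:", round(granger_pairwise(y, x), 3))   # near zero
```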

  9. High resolution micro-CT of low attenuating organic materials using large area photon-counting detector

    NASA Astrophysics Data System (ADS)

    Kumpová, I.; Vavřík, D.; Fíla, T.; Koudelka, P.; Jandejsek, I.; Jakůbek, J.; Kytýř, D.; Zlámal, P.; Vopálenský, M.; Gantar, A.

    2016-02-01

    To overcome certain limitations of contemporary materials used for bone tissue engineering, such as inflammatory response after implantation, a whole new class of materials based on polysaccharide compounds is being developed. Here, nanoparticulate bioactive glass reinforced gellan gum (GG-BAG) has recently been proposed for the production of bone scaffolds. This material offers promising biocompatibility properties, including bioactivity and biodegradability, with the possibility of producing scaffolds with directly controlled microgeometry. However, to utilize such a scaffold with application-optimized properties, large sets of complex numerical simulations using the real microgeometry of the material have to be carried out during the development process. Because GG-BAG is a material with intrinsically very low X-ray attenuation, its radiographic imaging, including tomographic scanning and reconstruction at the resolution required by numerical simulations, can be a very challenging task. In this paper, we present a study on X-ray imaging of GG-BAG samples. High-resolution volumetric images of investigated specimens were generated on the basis of micro-CT measurements using a large area flat-panel detector and a large area photon-counting detector. The photon-counting detector was composed of a 10 × 1 matrix of Timepix edgeless silicon pixelated detectors with tiling based on overlaying rows (i.e. assembled so that no gap is present between individual rows of detectors). We compare the results from both detectors with scanning electron microscopy on selected slices in the transversal plane. It has been shown that the photon-counting detector can provide approx. 3× better resolution of the details in low-attenuating materials than the integrating flat panel detectors. We demonstrate that employment of a large area photon-counting detector is a good choice for imaging low-attenuating materials with resolution sufficient for numerical simulations.

  10. A prototype small CdTe gamma camera for radioguided surgery and other imaging applications.

    PubMed

    Tsuchimochi, Makoto; Sakahara, Harumi; Hayama, Kazuhide; Funaki, Minoru; Ohno, Ryoichi; Shirahata, Takashi; Orskaug, Terje; Maehlum, Gunnar; Yoshioka, Koki; Nygard, Einar

    2003-12-01

    Gamma probes have been used for sentinel lymph node biopsy in melanoma and breast cancer. However, these probes can provide only radioactivity counts and variable pitch audio output based on the intensity of the detected radioactivity. We have developed a small semiconductor gamma camera (SSGC) that allows visualisation of the size, shape and location of the target tissues. This study is designed to characterise the performance of the SSGC for radioguided surgery of metastatic lesions and for other imaging applications amenable to the smaller format of this prototype imaging system. The detector head had 32 cadmium telluride semiconductor arrays with a total of 1,024 pixels, and with application-specific integrated circuits (ASICs) and a tungsten collimator. The entire assembly was encased in a lead housing measuring 152 mm × 166 mm × 65 mm. The effective visual field was 44.8 mm × 44.8 mm. The energy resolution and imaging aspects were tested. Two spherical 5-mm- and 15-mm-diameter technetium-99m radioactive sources that had activities of 0.15 MBq and 100 MBq, respectively, were used to simulate a sentinel lymph node and an injection site. The relative detectability of these foci by the new detector and a conventional scintillation camera was studied. The prototype was also examined in a variety of clinical applications. Energy resolution [full-width at half-maximum (FWHM)] for a single element at the centre of the field of view was 4.2% at 140 keV (99mTc), and the mean energy resolution of the CdTe detector arrays was approximately 7.8%. The spatial resolution, represented by FWHM, had a mean value of 1.56 +/- 0.05 mm. Simulated node foci could be visualised clearly by the SSGC using a 15-s acquisition time. In preliminary clinical tests, the SSGC successfully imaged diseases in a variety of tissues, including salivary and thyroid glands, temporomandibular joints and sentinel lymph nodes. The SSGC has significant potential for diagnosing diseases and facilitating subsequent radioguided surgery.

  11. Electric crosstalk impairs spatial resolution of multi-electrode arrays in retinal implants

    NASA Astrophysics Data System (ADS)

    Wilke, R. G. H.; Khalili Moghadam, G.; Lovell, N. H.; Suaning, G. J.; Dokos, S.

    2011-08-01

    Active multi-electrode arrays are used in vision prostheses, including optic nerve cuffs and cortical and retinal implants for stimulation of neural tissue. For retinal implants, arrays with up to 1500 electrodes are used in clinical trials. The ability to convey information with high spatial resolution is critical for these applications. To assess the extent to which spatial resolution is impaired by electric crosstalk, finite-element simulation of electric field distribution in a simplified passive tissue model of the retina is performed. The effects of electrode size, electrode spacing, distance to target cells, and electrode return configuration (monopolar, tripolar, hexagonal) on spatial resolution are investigated using a mathematical model of the electric field distribution. Results show that spatial resolution is impaired with increased distance from the electrode array to the target cells. This effect can be partly compensated by non-monopolar electrode configurations and larger electrode diameters, albeit at the expense of lower pixel densities due to larger covering areas by each stimulation electrode. In applications where multi-electrode arrays can be brought into close proximity to target cells, as presumably with epiretinal implants, smaller electrodes in monopolar configuration can provide the highest spatial resolution. However, if the implantation site is further from the target cells, as is the case in suprachoroidal approaches, hexagonally guarded electrode return configurations can convey higher spatial resolution. This paper was originally submitted for the special issue containing contributions from the Sixth Biennial Research Congress of The Eye and the Chip.

  12. Super-resolution fluorescence microscopy by stepwise optical saturation

    PubMed Central

    Zhang, Yide; Nallathamby, Prakash D.; Vigil, Genevieve D.; Khan, Aamir A.; Mason, Devon E.; Boerckel, Joel D.; Roeder, Ryan K.; Howard, Scott S.

    2018-01-01

    Super-resolution fluorescence microscopy is an important tool in biomedical research for its ability to discern features smaller than the diffraction limit. However, due to its difficult implementation and high cost, super-resolution microscopy is not feasible in many applications. In this paper, we propose and demonstrate a saturation-based super-resolution fluorescence microscopy technique that can be easily implemented and requires neither additional hardware nor complex post-processing. The method is based on the principle of stepwise optical saturation (SOS), where M steps of raw fluorescence images are linearly combined to generate an image with a √M-fold increase in resolution compared with conventional diffraction-limited images. For example, linearly combining (scaling and subtracting) two images obtained at regular powers extends the resolution by a factor of 1.4 beyond the diffraction limit. The resolution improvement in SOS microscopy is theoretically infinite but practically is limited by the signal-to-noise ratio. We perform simulations and experimentally demonstrate super-resolution microscopy with both one-photon (confocal) and multiphoton excitation fluorescence. We show that with the multiphoton modality, the SOS microscopy can provide super-resolution imaging deep in scattering samples. PMID:29675306
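
    The scale-and-subtract combination at the heart of SOS can be illustrated with a toy one-dimensional calculation. The sketch below assumes a weakly saturating fluorescence response F ≈ c1·I + c2·I², so that combining images taken at two excitation powers cancels the term linear in I and leaves an effective point spread function equal to the square of the original, roughly √2 narrower; the coefficients, powers and Gaussian PSF are purely illustrative, and the paper derives the combination weights for general M.

```python
import numpy as np

# 1-D toy demonstration of two-step SOS.
x = np.linspace(-2.0, 2.0, 2001)
sigma = 0.5
psf = np.exp(-x**2 / (2 * sigma**2))         # diffraction-limited excitation PSF

c1, c2 = 1.0, -0.05                          # assumed saturation coefficients
P1, P2 = 1.0, 2.0                            # two excitation powers

img1 = c1 * P1 * psf + c2 * (P1 * psf) ** 2  # image at power P1
img2 = c1 * P2 * psf + c2 * (P2 * psf) ** 2  # image at power P2

# Scale and subtract so the linear terms cancel: (P2/P1)*img1 - img2 ∝ psf**2
sos = (P2 / P1) * img1 - img2

def fwhm(profile):
    half = profile.max() / 2
    above = x[profile >= half]
    return above[-1] - above[0]

print("FWHM conventional:", round(fwhm(psf), 3))
print("FWHM SOS (2-step):", round(fwhm(sos / sos.max()), 3))   # ~1/sqrt(2) narrower
```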

  13. Towards Improving Satellite Tropospheric NO2 Retrieval Products: Impacts of the spatial resolution and lighting NOx production from the a priori chemical transport model

    NASA Astrophysics Data System (ADS)

    Smeltzer, C. D.; Wang, Y.; Zhao, C.; Boersma, F.

    2009-12-01

    Polar-orbiting satellite retrievals of tropospheric nitrogen dioxide (NO2) columns are important to a variety of scientific applications. These NO2 retrievals rely on a priori profiles from chemical transport models and on radiative transfer models to derive vertical columns (VCs) from slant column measurements. In this work, we compare the retrieval results using a priori profiles from a global model (TM4) and a higher resolution regional model (REAM) at the OMI overpass hour of 1330 local time, implementing the Dutch OMI NO2 (DOMINO) retrieval. We also compare the retrieval results using a priori profiles from REAM simulations with and without lightning NOx (NO + NO2) production. A priori model resolution and lightning NOx production are both found to have a large impact on satellite retrievals: shifting the NO2 vertical distribution seen by the radiative transfer model alters the satellite sensitivity to a particular observation. The retrieved tropospheric NO2 VCs may increase by 25-100% in urban regions and be reduced by 50% in rural regions if the a priori profiles from REAM simulations are used during the retrievals instead of the profiles from TM4 simulations. The a priori profiles with lightning NOx may result in a 25-50% reduction of the retrieved tropospheric NO2 VCs compared to the a priori profiles without lightning. As a first priority, a priori vertical NO2 profiles from a high-resolution chemical transport model, which can better simulate urban-rural NO2 gradients in the boundary layer and make use of observation-based parameterizations of lightning NOx production, should be implemented to obtain more accurate NO2 retrievals over the United States, where NOx source regions are spatially separated and lightning NOx production is significant. Then, given the a priori NO2 profile variability arising from lightning and model resolution, daylight observations from a geostationary satellite of sufficient resolution would be the next step towards producing a more complete NO2 data product. Both the corrected retrieval algorithm and the proposed next-generation geostationary satellite observations would thus improve emission inventories, better validate model simulations, and help optimize region-specific ozone control strategies.

  14. Application of optimal control theory to the design of broadband excitation pulses for high-resolution NMR.

    PubMed

    Skinner, Thomas E; Reiss, Timo O; Luy, Burkhard; Khaneja, Navin; Glaser, Steffen J

    2003-07-01

    Optimal control theory is considered as a methodology for pulse sequence design in NMR. It provides the flexibility for systematically imposing desirable constraints on spin system evolution and therefore has a wealth of applications. We have chosen an elementary example to illustrate the capabilities of the optimal control formalism: broadband, constant phase excitation which tolerates miscalibration of RF power and variations in RF homogeneity relevant for standard high-resolution probes. The chosen design criteria were transformation of I(z)-->I(x) over resonance offsets of +/- 20 kHz and RF variability of +/-5%, with a pulse length of 2 ms. Simulations of the resulting pulse transform I(z)-->0.995I(x) over the target ranges in resonance offset and RF variability. Acceptably uniform excitation is obtained over a much larger range of RF variability (approximately 45%) than the strict design limits. The pulse performs well in simulations that include homonuclear and heteronuclear J-couplings. Experimental spectra obtained from 100% 13C-labeled lysine show only minimal coupling effects, in excellent agreement with the simulations. By increasing pulse power and reducing pulse length, we demonstrate experimental excitation of 1H over +/-32 kHz, with phase variations in the spectra <8 degrees and peak amplitudes >93% of maximum. Further improvements in broadband excitation by optimized pulses (BEBOP) may be possible by applying more sophisticated implementations of the optimal control formalism.
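
    The kind of performance check quoted above (how completely a pulse takes I(z) to I(x) across resonance offset and RF miscalibration) can be reproduced with a small Bloch-rotation scan. The sketch below is a generic illustration rather than the BEBOP optimisation itself: the example pulse is a plain rectangular 90-degree pulse, and the offset and RF-scaling grids are arbitrary choices; such a simple pulse degrades off resonance, which is exactly what motivates the optimal-control design.

```python
import numpy as np

def rotate(M, axis, angle):
    """Rotate magnetization vector M about a unit axis by angle (Rodrigues)."""
    axis = axis / np.linalg.norm(axis)
    return (M * np.cos(angle)
            + np.cross(axis, M) * np.sin(angle)
            + axis * np.dot(axis, M) * (1 - np.cos(angle)))

def simulate(amp_hz, phase, dt, offset_hz, rf_scale):
    """Propagate Mz=1 through a shaped pulse; return the final magnetization."""
    M = np.array([0.0, 0.0, 1.0])
    for a, p in zip(amp_hz, phase):
        b = np.array([rf_scale * a * np.cos(p),
                      rf_scale * a * np.sin(p),
                      offset_hz])              # effective field in Hz
        beff = np.linalg.norm(b)
        if beff > 0:
            M = rotate(M, b, 2 * np.pi * beff * dt)
    return M

# Example pulse: rectangular 90-degree pulse, 10 kHz RF amplitude, 25 us long.
n, dt = 250, 1e-7
amp = np.full(n, 10e3)                # Hz
phase = np.full(n, np.pi / 2)         # y-phase, so the rotation takes Mz towards Mx

# Scan offsets and RF miscalibration, record the excited Mx component.
for off in (-20e3, 0.0, 20e3):              # Hz
    for scale in (0.95, 1.0, 1.05):         # +/- 5% RF variability
        Mx = simulate(amp, phase, dt, off, scale)[0]
        print(f"offset {off/1e3:+5.1f} kHz, RF x{scale:.2f}: Mx = {Mx:+.3f}")
```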

  15. Use of simulation to optimize the pinhole diameter and mask thickness for an x-ray backscatter imaging system

    NASA Astrophysics Data System (ADS)

    Vella, A.; Munoz, Andre; Healy, Matthew J. F.; Lane, David; Lockley, D.

    2017-08-01

    The PENELOPE Monte Carlo simulation code was used to determine the optimum thickness and aperture diameter of a pinhole mask for X-ray backscatter imaging in a security application. The mask material needs to be thick enough to absorb most X-rays, and the pinhole must be wide enough for sufficient field of view whilst narrow enough for sufficient image spatial resolution. The model consisted of a fixed geometry test object, various masks with and without pinholes, and a 1040 × 1340 pixel area detector inside a lead-lined camera housing. The photon energy distribution incident upon masks was flat up to selected energy limits. This artificial source was used to avoid the optimisation being specific to any particular X-ray source technology. The pixelated detector was modelled by digitising the surface area represented by the PENELOPE phase space file and integrating the energies of the photons impacting within each pixel; a MATLAB code was written for this. The image contrast, signal to background ratio, spatial resolution, and collimation effect were calculated at the simulated detector as a function of pinhole diameter and various thicknesses of mask made of tungsten, tungsten/epoxy composite or bismuth alloy. A process of elimination was applied to identify suitable masks for a viable X-ray backscattering security application.
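
    The detector digitisation step described above (mapping photons from a phase-space file onto a pixel grid and integrating their energies per pixel) can be sketched in a few lines. The sketch below uses NumPy rather than the MATLAB code mentioned in the abstract, and the phase-space columns, detector extent and pixel counts are assumptions for illustration only.

```python
import numpy as np

def pixelate(x_cm, y_cm, energy_ev, nx=1340, ny=1040, half_width_cm=2.0):
    """Integrate photon energies into detector pixels.

    x_cm, y_cm : impact coordinates of photons on the detector plane
    energy_ev  : energy carried by each photon
    returns    : (ny, nx) image of deposited energy per pixel
    """
    image, _, _ = np.histogram2d(
        y_cm, x_cm,
        bins=(ny, nx),
        range=[[-half_width_cm, half_width_cm], [-half_width_cm, half_width_cm]],
        weights=energy_ev,
    )
    return image

# Hypothetical phase-space sample: x, y positions and photon energies.
rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 0.5, n)
y = rng.normal(0.0, 0.5, n)
e = rng.uniform(10e3, 120e3, n)      # 10-120 keV photons

img = pixelate(x, y, e)
print("total deposited energy (eV):", img.sum())
```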

  16. The End-to-end Demonstrator for improved decision making in the water sector in Europe (EDgE)

    NASA Astrophysics Data System (ADS)

    Wood, Eric; Wanders, Niko; Pan, Ming; Sheffield, Justin; Samaniego, Luis; Thober, Stephan; Kumar, Rohinni; Prudhomme, Christel; Houghton-Carr, Helen

    2017-04-01

    High-resolution simulations of water resources from hydrological models are vital to supporting important climate services. Apart from a high level of detail, both spatially and temporally, it is important to provide simulations that consistently cover a range of timescales, from historical reanalysis to seasonal forecast and future projections. In the new EDgE project commissioned by the ECMWF (C3S) we try to fulfill these requirements. EDgE is a proof-of-concept project which combines climate data and state-of-the-art hydrological modelling to demonstrate a water-oriented information system implemented through a web application. EDgE is working with key European stakeholders representative of private and public sectors to jointly develop and tailor approaches and techniques. With these tools, stakeholders are assisted in using improved climate information in decision-making, and supported in the development of climate change adaptation and mitigation policies. Here, we present the first results of the EDgE modelling chain, which is divided into three main processes: 1) pre-processing and downscaling; 2) hydrological modelling; 3) post-processing. Consistent downscaling and bias corrections for historical simulations, seasonal forecasts and climate projections ensure that the results across scales are robust. The daily temporal resolution and 5km spatial resolution ensure locally relevant simulations. With the use of four hydrological models (PCR-GLOBWB, VIC, mHM, Noah-MP), uncertainty between models is properly addressed, while consistency is guaranteed by using identical input data for static land surface parameterizations. The forecast results are communicated to stakeholders via Sectoral Climate Impact Indicators (SCIIs) that have been created in collaboration with the end-user community of the EDgE project. The final product of this project is composed of 15 years of seasonal forecast and 10 climate change projections, all combined with four hydrological models. These unique high-resolution climate information simulations in the EDgE project provide an unprecedented information system for decision-making over Europe.

  17. Soft Tissue Structure Modelling for Use in Orthopaedic Applications and Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Audenaert, E. A.; Mahieu, P.; van Hoof, T.; Pattyn, C.

    2009-12-01

    We present our methodology for the three-dimensional anatomical and geometrical description of soft tissues, relevant for orthopaedic surgical applications and musculoskeletal biomechanics. The technique involves the segmentation and geometrical description of muscles and neurovascular structures from high-resolution computer tomography scanning for the reconstruction of generic anatomical models. These models can be used for quantitative interpretation of anatomical and biomechanical aspects of different soft tissue structures. This approach should allow the use of these data in other application fields, such as musculoskeletal modelling, simulations for radiation therapy, and databases for use in minimally invasive, navigated and robotic surgery.

  18. Improved spatial resolution in PET scanners using sampling techniques

    PubMed Central

    Surti, Suleman; Scheuermann, Ryan; Werner, Matthew E.; Karp, Joel S.

    2009-01-01

    Increased focus towards improved detector spatial resolution in PET has led to the use of smaller crystals in some form of light sharing detector design. In this work we evaluate two sampling techniques that can be applied during calibrations for pixelated detector designs in order to improve the reconstructed spatial resolution. The inter-crystal positioning technique utilizes sub-sampling in the crystal flood map to better sample the Compton scatter events in the detector. The Compton scatter rejection technique, on the other hand, rejects those events that are located further from individual crystal centers in the flood map. We performed Monte Carlo simulations followed by measurements on two whole-body scanners for point source data. The simulations and measurements were performed for scanners using scintillators with Zeff ranging from 46.9 to 63 for LaBr3 and LYSO, respectively. Our results show that near the center of the scanner, inter-crystal positioning technique leads to a gain of about 0.5-mm in reconstructed spatial resolution (FWHM) for both scanner designs. In a small animal LYSO scanner the resolution improves from 1.9-mm to 1.6-mm with the inter-crystal technique. The Compton scatter rejection technique shows higher gains in spatial resolution but at the cost of reduction in scanner sensitivity. The inter-crystal positioning technique represents a modest acquisition software modification for an improvement in spatial resolution, but at a cost of potentially longer data correction and reconstruction times. The Compton scatter rejection technique, while also requiring a modest acquisition software change with no increased data correction and reconstruction times, will be useful in applications where the scanner sensitivity is very high and larger improvements in spatial resolution are desirable. PMID:19779586
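
    The Compton scatter rejection technique reduces, during calibration, to a distance threshold in the crystal flood map: events positioned too far from the nearest crystal centre are discarded as likely inter-crystal scatter. The short sketch below illustrates that thresholding; the crystal-centre grid, event blur and rejection radius are hypothetical, and the complementary inter-crystal positioning technique would instead keep those events and assign them to a finer sub-sampled grid.

```python
import numpy as np

def reject_compton(events_xy, crystal_centers, max_dist=0.4):
    """Keep only events close to a crystal centre in the flood map.

    events_xy       : (N, 2) event positions from light-sharing positioning
    crystal_centers : (K, 2) calibrated crystal-centre positions
    max_dist        : rejection radius (same units as the flood map)
    """
    # Distance from every event to every crystal centre, take the minimum.
    d = np.linalg.norm(events_xy[:, None, :] - crystal_centers[None, :, :], axis=2)
    nearest = d.min(axis=1)
    keep = nearest <= max_dist
    return events_xy[keep], keep.mean()

# Hypothetical 4 x 4 crystal array on a unit pitch, with blurred event positions.
centers = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
rng = np.random.default_rng(2)
true_idx = rng.integers(0, len(centers), 5000)
events = centers[true_idx] + rng.normal(0.0, 0.3, (5000, 2))

kept, fraction = reject_compton(events, centers, max_dist=0.4)
print(f"kept {fraction:.1%} of events")     # the rest are rejected as likely scatter
```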

  19. A Portable Regional Weather and Climate Downscaling System Using GEOS-5, LIS-6, WRF, and the NASA Workflow Tool

    NASA Astrophysics Data System (ADS)

    Kemp, E. M.; Putman, W. M.; Gurganus, J.; Burns, R. W.; Damon, M. R.; McConaughy, G. R.; Seablom, M. S.; Wojcik, G. S.

    2009-12-01

    We present a regional downscaling system (RDS) suitable for high-resolution weather and climate simulations in multiple supercomputing environments. The RDS is built on the NASA Workflow Tool, a software framework for configuring, running, and managing computer models on multiple platforms with a graphical user interface. The Workflow Tool is used to run the NASA Goddard Earth Observing System Model Version 5 (GEOS-5), a global atmospheric-ocean model for weather and climate simulations down to 1/4 degree resolution; the NASA Land Information System Version 6 (LIS-6), a land surface modeling system that can simulate soil temperature and moisture profiles; and the Weather Research and Forecasting (WRF) community model, a limited-area atmospheric model for weather and climate simulations down to 1-km resolution. The Workflow Tool allows users to customize model settings to user needs; saves and organizes simulation experiments; distributes model runs across different computer clusters (e.g., the DISCOVER cluster at Goddard Space Flight Center, the Cray CX-1 Desktop Supercomputer, etc.); and handles all file transfers and network communications (e.g., scp connections). Together, the RDS is intended to aid researchers by making simulations as easy as possible to generate on the computer resources available. Initial conditions for LIS-6 and GEOS-5 are provided by Modern Era Retrospective-Analysis for Research and Applications (MERRA) reanalysis data stored on DISCOVER. The LIS-6 is first run for 2-4 years forced by MERRA atmospheric analyses, generating initial conditions for the WRF soil physics. GEOS-5 is then initialized from MERRA data and run for the period of interest. Large-scale atmospheric data, sea-surface temperatures, and sea ice coverage from GEOS-5 are used as boundary conditions for WRF, which is run for the same period of interest. Multiply nested grids are used for both LIS-6 and WRF, with the innermost grid run at a resolution sufficient for typical local weather features (terrain, convection, etc.). All model runs, restarts, and file transfers are coordinated by the Workflow Tool. Two use cases are being pursued. First, the RDS generates regional climate simulations down to 4-km for the Chesapeake Bay region, with WRF output provided as input to more specialized models (e.g., ocean/lake, hydrological, marine biology, and air pollution). This will allow assessment of climate impact on local interests (e.g., changes in Bay water levels and temperatures, inundation, fish kills, etc.). Second, the RDS generates high-resolution hurricane simulations in the tropical North Atlantic. This use case will support Observing System Simulation Experiments (OSSEs) of dynamically-targeted lidar observations as part of the NASA Sensor Web Simulator project. Sample results will be presented at the AGU Fall Meeting.

  20. Regional Climate Simulations over North America: Interaction of Local Processes with Improved Large-Scale Flow.

    NASA Astrophysics Data System (ADS)

    Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan

    2005-04-01

    The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or are due to dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected for circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries where the model was forced by the reanalysis.
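
    Spectral nudging of the kind applied here relaxes only the long wavelengths of a prognostic field towards the driving reanalysis while leaving the smaller scales free to develop. A minimal one-dimensional sketch is given below; the 2500 km cutoff follows the text, but the periodic domain, relaxation coefficient and synthetic fields are illustrative assumptions.

```python
import numpy as np

def spectral_nudge(field, target, dx_km, cutoff_km=2500.0, alpha=0.1):
    """Relax wavelengths longer than cutoff_km towards a target field.

    field, target : 1-D periodic model and reanalysis fields on the same grid
    dx_km         : grid spacing in km
    alpha         : nudging strength applied per call (0..1)
    """
    n = field.size
    k = np.fft.rfftfreq(n, d=dx_km)            # spatial frequency, cycles per km
    fh = np.fft.rfft(field)
    th = np.fft.rfft(target)
    large_scale = k < 1.0 / cutoff_km          # wavelengths longer than the cutoff
    fh[large_scale] += alpha * (th[large_scale] - fh[large_scale])
    return np.fft.irfft(fh, n=n)

# Hypothetical 1-D domain: the model drifts at large scales only.
n, dx = 512, 50.0                              # 512 points, 50 km spacing
x = np.arange(n) * dx
target = np.sin(2 * np.pi * x / 12800.0)                       # large-scale wave
model = 0.5 * np.sin(2 * np.pi * x / 12800.0) + 0.2 * np.sin(2 * np.pi * x / 400.0)

nudged = spectral_nudge(model, target, dx)
print("mean error before/after nudging:",
      round(np.abs(model - target).mean(), 3),
      round(np.abs(nudged - target).mean(), 3))
```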

  1. Improved image reconstruction of low-resolution multichannel phase contrast angiography

    PubMed Central

    P. Krishnan, Akshara; Joy, Ajin; Paul, Joseph Suresh

    2016-01-01

    In low-resolution phase contrast magnetic resonance angiography, the maximum intensity projected channel images will be blurred with consequent loss of vascular details. The channel images are enhanced using a stabilized deblurring filter, applied to each channel prior to combining the individual channel images. The stabilized deblurring is obtained by the addition of a nonlocal regularization term to the reverse heat equation, referred to as the nonlocally stabilized reverse diffusion filter. Unlike the reverse diffusion filter, which is highly unstable and blows up noise, nonlocal stabilization enhances intensity projected parallel images uniformly. Application to multichannel vessel enhancement is illustrated using both volunteer data and simulated multichannel angiograms. Robustness of the filter applied to volunteer datasets is shown using statistically validated improvement in flow quantification. Improved performance in terms of preserving vascular structures and phased array reconstruction in both simulated and real data is demonstrated using a structureness measure and contrast ratio. PMID:26835501

  2. The Impact of Solid Surface Features on Fluid-Fluid Interface Configuration

    NASA Astrophysics Data System (ADS)

    Araujo, J. B.; Brusseau, M. L. L.

    2017-12-01

    Pore-scale fluid processes in geological media are critical for a broad range of applications such as radioactive waste disposal, carbon sequestration, soil moisture distribution, subsurface pollution, land stability, and oil and gas recovery. The continued improvement of high-resolution image acquisition and processing has provided a means to test the usefulness of theoretical models developed to simulate pore-scale fluid processes, through the direct quantification of interfaces. High-resolution synchrotron X-ray microtomography is used in combination with advanced visualization tools to characterize fluid distributions in natural geologic media. The studies revealed the presence of fluid-fluid interfaces associated with macroscopic features on the surfaces of the solids, such as pits and crevices. These features and their respective fluid interfaces, which are not included in current theoretical or computational models, may have a significant impact on accurate simulation and understanding of multi-phase flow, energy, heat and mass transfer processes.

  3. Research Essay for the Goldwater Scholarship Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davison, Jacob August

    Oxygen is found in many natural and human-made structures and materials, including water, concrete, or any oxide. The severe lack of data on the cross section of 16O(n,α), a reaction that can be found in any material containing oxygen, is detrimental to a complete understanding of the natural or induced behavior of these materials [HYL16]. Additionally, study of this particular reaction and other neutron-induced reactions involving oxygen is useful in the design of naval light water reactors and applications in radio-biology [HYL16]. A detailed understanding of the 16O(n,α) reaction is vital to the safe and efficient study, design, and development of applications such as these. My consequent work at the Los Alamos National Laboratory (LANL), under the supervision of my mentor, Dr. Hye Young Lee, concerned an experiment to measure the reaction rate of 16O(n,α) with unprecedented precision, using a method of experimentation known as the ”forward propagating approach.” What separates this method from traditional experimentation is the use of computer simulations; in essence, this method entails the development of a computer-simulated experimental environment that behaves similarly to a corresponding physical experimental environment (the word ”similar” is used here to convey an equivalence in properties of materials, like geometry or density, and characteristics of certain nuclear processes between the simulated and physical environments). The simulated environment receives inputs, like detector resolution and efficiency, beam resolution, or theoretical calculations of cross sections, that are determined from physically measured results, and then outputs data that, provided the simulation was prepared and executed properly, closely resemble the results expected from physical execution of the experiment. By comparing data from the simulated experiment and the physical experiment, the relevant results can be constrained to achieve a high precision measurement. The goal of my mentor’s experiment (the experiment that I helped build and simulate) was to achieve a high precision measurement of the cross section of 16O(n,α) using the forward propagating approach technique.

  4. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    NASA Astrophysics Data System (ADS)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic sector following the resolution increase. More specifically, the well-known atmospheric blocking negative bias over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  5. A Large Scale Code Resolution Service Network in the Internet of Things

    PubMed Central

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-01-01

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios, a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207

  6. A large scale code resolution service network in the Internet of Things.

    PubMed

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-11-07

    In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios, a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT’s advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  7. Assessment of a high-resolution central scheme for the solution of the relativistic hydrodynamics equations

    NASA Astrophysics Data System (ADS)

    Lucas-Serrano, A.; Font, J. A.; Ibáñez, J. M.; Martí, J. M.

    2004-12-01

    We assess the suitability of a recent high-resolution central scheme developed by Kurganov and Tadmor for the solution of the relativistic hydrodynamic equations. The novelty of this approach lies in the absence of Riemann solvers in the solution procedure. The computations we present are performed in one and two spatial dimensions in Minkowski spacetime. Standard numerical experiments such as shock tubes and the relativistic flat-faced step test are performed. As an astrophysical application, the article includes two-dimensional simulations of the propagation of relativistic jets using both Cartesian and cylindrical coordinates. The simulations reported clearly show the capability of the numerical scheme to yield satisfactory results, with an accuracy comparable to that obtained by the so-called high-resolution shock-capturing schemes based upon Riemann solvers (Godunov-type schemes), even well inside the ultrarelativistic regime. Such a central scheme can be straightforwardly applied to hyperbolic systems of conservation laws for which the characteristic structure is not explicitly known, or in cases where a numerical computation of the exact solution of the Riemann problem is prohibitively expensive. Finally, we present comparisons with results obtained using various Godunov-type schemes as well as with those obtained using other high-resolution central schemes which have recently been reported in the literature.
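
    The Riemann-solver-free construction assessed here can be sketched for a scalar model problem. The snippet below applies a first-order local Lax-Friedrichs (Rusanov-type) central flux to the inviscid Burgers equation; it is only an illustration of the central-scheme idea, not the relativistic hydrodynamics solver of the paper, and the grid and final time are arbitrary choices.

      import numpy as np

      def flux(u):
          return 0.5 * u ** 2                              # Burgers flux f(u) = u^2 / 2

      def step(u, dx, dt):
          ul, ur = u[:-1], u[1:]
          a = np.maximum(np.abs(ul), np.abs(ur))           # local maximum wave speed
          f_iface = 0.5 * (flux(ul) + flux(ur)) - 0.5 * a * (ur - ul)
          unew = u.copy()
          unew[1:-1] -= dt / dx * (f_iface[1:] - f_iface[:-1])
          return unew

      nx = 400
      dx = 1.0 / nx
      x = (np.arange(nx) + 0.5) * dx
      u = np.where(x < 0.5, 1.0, 0.0)                      # shock-tube-like initial data
      t, t_end, cfl = 0.0, 0.3, 0.4
      while t < t_end:
          dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), t_end - t)
          u = step(u, dx, dt)
          t += dt
      # The shock should sit near x = 0.5 + 0.5 * t_end = 0.65.
      print("shock position ~", round(float(x[np.argmin(np.abs(u - 0.5))]), 3))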

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frenje, J. A.; Hilsabeck, T. J.; Wink, C. W.

    The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T_i), yield (Y_n), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate data with a time resolution of ~20 ps and energy resolution of ~100 keV for total neutron yields above ~10^16. Lastly, at lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ~20 ps.

  9. Improving Spectroscopic Performance of a Coplanar-Anode High-Pressure Xenon Gamma-Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Kiff, Scott Douglas; He, Zhong; Tepper, Gary C.

    2007-08-01

    High-pressure xenon (HPXe) gas is a desirable radiation detection medium for homeland security applications because of its good inherent room-temperature energy resolution, potential for large, efficient devices, and stability over a broad temperature range. Past work in HPXe has produced large-diameter gridded ionization chambers with energy resolution at 662 keV between 3.5 and 4% FWHM. However, one major limitation of these detectors is resolution degradation due to Frisch grid microphonics. A coplanar-anode HPXe detector has been developed as an alternative to gridded chambers. An investigation of this detector's energy resolution is reported in this submission. A simulation package is used to investigate the contributions of important physical processes to the measured photopeak broadening. Experimental data is presented for pure Xe and Xe + 0.2%H2 mixtures, including an analysis of interaction location effects on the energy spectrum.

  10. Manufacturing data analytics using a virtual factory representation.

    PubMed

    Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun

    2017-01-01

    Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach within the frameworks of Design Science Research Methodology and prototyping to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing via reduction of the development effort. Manufacturing simulation models are presented both as data analytics applications in their own right and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. The virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.

  11. Ensemble flood simulation for a small dam catchment in Japan using 10 and 2 km resolution nonhydrostatic model rainfalls

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo

    2016-08-01

    This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations are run at a conventional 10 km and a high-resolution 2 km spatial resolution. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km2) and applied with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m3 s-1, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with similar amplitude to observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall over the Kasahori dam catchment is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.
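
    The position-shifting step described above can be sketched as follows (illustrative only, not the authors' code; the grid, catchment mask, and shift range are assumed): each ensemble rainfall field is translated over a small range of offsets and the shift that maximizes the catchment-averaged cumulative rainfall is retained.

      import numpy as np

      def best_shift(rain, mask, max_shift=10):
          """rain: 2-D cumulative rainfall field; mask: boolean catchment mask."""
          best = (-np.inf, (0, 0))
          for dy in range(-max_shift, max_shift + 1):
              for dx in range(-max_shift, max_shift + 1):
                  shifted = np.roll(np.roll(rain, dy, axis=0), dx, axis=1)
                  mean_rain = shifted[mask].mean()
                  if mean_rain > best[0]:
                      best = (mean_rain, (dy, dx))
          return best

      rng = np.random.default_rng(1)
      rain = rng.gamma(2.0, 5.0, size=(40, 40))     # synthetic ensemble-member rainfall (mm)
      rain[10:15, 25:30] += 80.0                    # heavy-rain cell displaced from the catchment
      mask = np.zeros((40, 40), dtype=bool)
      mask[18:24, 18:24] = True                     # hypothetical dam catchment
      mean_rain, (dy, dx) = best_shift(rain, mask)
      print(f"best shift (dy, dx) = ({dy}, {dx}), catchment-mean rainfall = {mean_rain:.1f} mm")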

  12. Design and testing of a novel multi-stroke micropositioning system with variable resolutions.

    PubMed

    Xu, Qingsong

    2014-02-01

    Multi-stroke stages are demanded in micro-/nanopositioning applications which require smaller and larger motion strokes with fine and coarse resolutions, respectively. This paper presents the conceptual design of a novel multi-stroke, multi-resolution micropositioning stage driven by a single actuator for each working axis. It eliminates the issue of the interference among different drives, which resides in conventional multi-actuation stages. The stage is devised based on a fully compliant variable stiffness mechanism, which exhibits unequal stiffnesses in different strokes. Resistive strain sensors are employed to offer variable position resolutions in the different strokes. To quantify the design of the motion strokes and coarse/fine resolution ratio, analytical models are established. These models are verified through finite-element analysis simulations. A proof-of-concept prototype XY stage is designed, fabricated, and tested to demonstrate the feasibility of the presented ideas. Experimental results of static and dynamic testing validate the effectiveness of the proposed design.

  13. Streamflow simulation for continental-scale river basins

    NASA Astrophysics Data System (ADS)

    Nijssen, Bart; Lettenmaier, Dennis P.; Liang, Xu; Wetzel, Suzanne W.; Wood, Eric F.

    1997-04-01

    A grid network version of the two-layer variable infiltration capacity (VIC-2L) macroscale hydrologic model is described. VIC-2L is a hydrologically based soil-vegetation-atmosphere transfer scheme designed to represent the land surface in numerical weather prediction and climate models. The grid network scheme allows streamflow to be predicted for large continental rivers. Off-line (observed and estimated surface meteorological and radiative forcings) applications of the model to the Columbia River (1° latitude-longitude spatial resolution) and Delaware River (0.5° resolution) are described. The model performed quite well in both applications, reproducing the seasonal hydrograph and annual flow volumes to within a few percent. Difficulties in reproducing observed streamflow in the arid portion of the Snake River basin are attributed to groundwater-surface water interactions, which are not modeled by VIC-2L.
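
    A grid-network streamflow scheme of this kind can be caricatured with a chain of linear reservoirs, one per cell, each passing its outflow downstream toward the basin outlet. The sketch below is only that caricature, with assumed runoff values and a single recession parameter, not the VIC-2L routing scheme itself.

      import numpy as np

      def route_chain(runoff, k=0.3):
          """runoff: array (time, cell) in mm/day along a downstream-ordered chain.
          Each cell is a linear reservoir (outflow = k * storage) feeding the next."""
          n_t, n_cells = runoff.shape
          storage = np.zeros(n_cells)
          outlet = np.zeros(n_t)
          for t in range(n_t):
              inflow = 0.0
              for c in range(n_cells):
                  storage[c] += runoff[t, c] + inflow
                  outflow = k * storage[c]
                  storage[c] -= outflow
                  inflow = outflow
              outlet[t] = inflow
          return outlet

      rng = np.random.default_rng(2)
      runoff = rng.exponential(2.0, size=(90, 6))   # 90 days, 6 cells upstream -> downstream
      hydrograph = route_chain(runoff)
      print(f"peak outlet flow {hydrograph.max():.1f} mm/day on day {hydrograph.argmax()}")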

  14. Quantitative Segmentation of Fluorescence Microscopy Images of Heterogeneous Tissue: Application to the Detection of Residual Disease in Tumor Margins

    PubMed Central

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-01-01

    Purpose To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. Materials and Methods Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. Results Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. Conclusion The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue. PMID:23824589

  15. Quantitative Segmentation of Fluorescence Microscopy Images of Heterogeneous Tissue: Application to the Detection of Residual Disease in Tumor Margins.

    PubMed

    Mueller, Jenna L; Harmany, Zachary T; Mito, Jeffrey K; Kennedy, Stephanie A; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G; Willett, Rebecca M; Brown, J Quincy; Ramanujam, Nimmi

    2013-01-01

    To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue.

  16. Quantitative observation of tracer transport with high-resolution PET

    NASA Astrophysics Data System (ADS)

    Kulenkampff, Johannes; Gruendig, Marion; Zakhnini, Abdelhamid; Lippmann-Pipke, Johanna

    2016-04-01

    Transport processes in natural porous media are typically heterogeneous over various scales. This heterogeneity is caused by the complexity of pore geometry and molecular processes. Heterogeneous processes, like diffusive transport, conservative advective transport, mixing and reactive transport, can be observed and quantified with quantitative tomography of tracer transport patterns. Positron Emission Tomography (PET) is by far the most sensitive method and perfectly selective for positron-emitting radiotracers, therefore it is suited as a reference method for spatiotemporal tracer transport observations. The number of such PET applications is steadily increasing. However, many applications are afflicted by the low spatial resolution (3 - 5 mm) of the clinical scanners from cooperating nuclear medical departments. This resolution is low in relation to typical sample dimensions of 10 cm, which are restricted by the mass attenuation of the material. In contrast, our GeoPET method applies a high-resolution scanner with a resolution of 1 mm, which is the physical limit of the method and which is more appropriate for samples of the size of soil columns or drill cores. This higher resolution is achieved at the cost of a more elaborate image reconstruction procedure, especially considering the effects of Compton scatter. The result of the quantitative image reconstruction procedure is a suite of frames of the quantitative tracer distribution with adjustable frame rates from minutes to months. The voxel size has to be considered as the reference volume of the tracer concentration. This continuous variable includes contributions from structures far below the spatial resolution, as long as a detection threshold, in the picomolar range, is exceeded. Examples from a period of almost 10 years (Kulenkampff et al. 2008a, Kulenkampff et al. 2008b) of development and application of quantitative GeoPET process tomography are shown. These examples include different transport processes, like conservative flow, reactive transport, and diffusion (Kulenkampff et al. 2015). Such experimental data are complementary to the outcome of model simulations based upon structural μCT images. The PET data can be evaluated with respect to specific process parameters, like effective volume and flow velocity distribution. They can further serve as a basis for establishing intermediate-scale simulation models which directly incorporate the observed specific response functions, without requiring modeling on the pore scale at the highest possible spatial resolution. Kulenkampff, J., Gründig, M., Richter, M., Wolf, M., Dietzel, O.: First applications of a small-animal-PET scanner for process monitoring in rocks and soils. Geophysical Research Abstracts, Vol. 10, EGU2008-A-03727, 2008a. Kulenkampff, J., Gründig, M., Richter, M., and Enzmann, F.: Evaluation of positron emission tomography for visualisation of migration processes in geomaterials, Physics and Chemistry of the Earth, 33, 937-942, 2008b. Kulenkampff, J., Gruendig, M., Zakhnini, A., Gerasch, R., and Lippmann-Pipke, J.: Process tomography of diffusion with PET for evaluating anisotropy and heterogeneity, Clay Minerals, 2015.

  17. Regional model simulations of New Zealand climate

    NASA Astrophysics Data System (ADS)

    Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.

    1998-03-01

    Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.

  18. Proper Generalized Decomposition (PGD) for the numerical simulation of polycrystalline aggregates under cyclic loading

    NASA Astrophysics Data System (ADS)

    Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck

    2018-02-01

    The numerical modelling of the behaviour of materials at the microstructural scale has been greatly developed over the last two decades. Unfortunately, conventional solution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they do not remain quantitative due to the plasticity behaviour. This work presents the development of a numerical solver for the solution of Finite Element models of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first one consists in maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and the performance of a space-time separated representation, the simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes faster than the standard solver.

  19. A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. Xi; A. G. Weisenberger; H. Dong; Brian Kross; S. Lee; J. McKisson; Carl Zorn

    We developed a prototype high resolution, high efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array - 0.7 mm in crystal pitch and 10 mm in thickness - with four unpolished parallel sides. Flood images were obtained such that each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while individual crystal pixel elements utilizing the signal from both sides ranged from 23.3% to 27%. By applying a mutual-gain equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments while simulations indicate ~1 mm DOI resolution could possibly be achieved. The experimental DOI resolution can be further improved by obtaining revised detector supporting electronics with better energy resolutions. This study provides a detailed detector calibration and DOI response characterization of the dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a PET scanner in the future.
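
    The dual-ended DOI principle can be illustrated with a toy calibration: after mutual gain equalization, the asymmetry of the two photosensor signals varies nearly monotonically with depth and can be mapped to DOI by a simple fit. All numbers below (attenuation length, gains, light yield) are assumptions for illustration, not the measured parameters of this detector.

      import numpy as np

      rng = np.random.default_rng(3)
      crystal_len = 10.0                      # mm, thickness of the LYSO array above
      depth = rng.uniform(0.0, crystal_len, 5000)

      # Exponential light sharing toward the two ends plus photon-counting noise;
      # attenuation length, gains and light yield are assumed, illustrative values.
      att_len, gain_a, gain_b, n0 = 20.0, 1.0, 1.4, 2000.0
      sig_a = rng.poisson(gain_a * n0 * np.exp(-depth / att_len))
      sig_b = rng.poisson(gain_b * n0 * np.exp(-(crystal_len - depth) / att_len))

      sig_b_eq = sig_b * (gain_a / gain_b)              # mutual gain equalization
      asym = (sig_a - sig_b_eq) / (sig_a + sig_b_eq)    # DOI-sensitive asymmetry

      # Linear calibration asym -> depth; the residual spread is quoted as DOI resolution.
      slope, intercept = np.polyfit(asym, depth, 1)
      doi_est = slope * asym + intercept
      print(f"toy DOI resolution (FWHM): {2.355 * np.std(doi_est - depth):.2f} mm")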

  20. Ultracompact vibrometry measurement with nanometric accuracy using optical feedback

    NASA Astrophysics Data System (ADS)

    Jha, Ajit; Azcona, Francisco; Royo, Santiago

    2015-05-01

    The nonlinear dynamics of a semiconductor laser with optical feedback (OF) combined with direct current modulation of the laser is demonstrated to suffice for the measurement of subwavelength changes in the position of a vibrating object. So far, classical Optical Feedback Interferometry (OFI) has been used to measure the vibration of an object given that its amplitude is greater than half the wavelength of emission, with the resolution of the measurement limited to some tenths of the wavelength after processing. We present here a methodology which takes advantage of the combination of two different phenomena: continuous wave frequency modulation (CWFM), induced by direct modulation of the laser, and non-linear dynamics inside the laser cavity subject to optical self-injection (OSI). The methodology we propose shows how to detect vibration amplitudes smaller than half the emission wavelength with resolution well beyond λ/2, extending the typical performance of OFI setups to very small amplitudes. A detailed mathematical model and simulation results are presented to support the proposed methodology, showing its ability to perform such displacement measurements at frequencies in the MHz range, depending upon the modulation frequency. Such an approach makes the technique a suitable candidate for, among other applications, economical laser-based ultrasound measurements, with applications in nondestructive testing of materials (thickness, flaws, density, stresses). The simulation results for the proposed approach confirm these figures of merit: detection of vibration amplitudes below λ/2 with resolution in the nanometer range.

  1. Time resolution of the plastic scintillator strips with matrix photomultiplier readout for J-PET tomograph.

    PubMed

    Moskal, P; Rundel, O; Alfs, D; Bednarski, T; Białas, P; Czerwiński, E; Gajos, A; Giergiel, K; Gorgol, M; Jasińska, B; Kamińska, D; Kapłon, Ł; Korcyl, G; Kowalski, P; Kozik, T; Krzemień, W; Kubicz, E; Niedźwiecki, Sz; Pałka, M; Raczyński, L; Rudy, Z; Sharma, N G; Słomski, A; Silarski, M; Strzelecki, A; Wieczorek, A; Wiślicki, W; Witkowski, P; Zieliński, M; Zoń, N

    2016-03-07

    Recent tests of a single module of the Jagiellonian Positron Emission Tomography system (J-PET) consisting of 30 cm long plastic scintillator strips have proven its applicability for the detection of annihilation quanta (0.511 MeV) with a coincidence resolving time (CRT) of 0.266 ns. The achieved resolution is almost by a factor of two better with respect to the current TOF-PET detectors and it can still be improved since, as it is shown in this article, the intrinsic limit of time resolution for the determination of time of the interaction of 0.511 MeV gamma quanta in plastic scintillators is much lower. As the major point of the article, a method allowing to record timestamps of several photons, at two ends of the scintillator strip, by means of matrix of silicon photomultipliers (SiPM) is introduced. As a result of simulations, conducted with the number of SiPM varying from 4 to 42, it is shown that the improvement of timing resolution saturates with the growing number of photomultipliers, and that the 2 × 5 configuration at two ends allowing to read twenty timestamps, constitutes an optimal solution. The conducted simulations accounted for the emission time distribution, photon transport and absorption inside the scintillator, as well as quantum efficiency and transit time spread of photosensors, and were checked based on the experimental results. Application of the 2 × 5 matrix of SiPM allows for achieving the coincidence resolving time in positron emission tomography of ≈0.170 ns for 15 cm axial field-of-view (AFOV) and ≈0.365 ns for 100 cm AFOV. The results open perspectives for construction of a cost-effective TOF-PET scanner with significantly better TOF resolution and larger AFOV with respect to the current TOF-PET modalities.

  2. Time resolution of the plastic scintillator strips with matrix photomultiplier readout for J-PET tomograph

    NASA Astrophysics Data System (ADS)

    Moskal, P.; Rundel, O.; Alfs, D.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Giergiel, K.; Gorgol, M.; Jasińska, B.; Kamińska, D.; Kapłon, Ł.; Korcyl, G.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz; Pałka, M.; Raczyński, L.; Rudy, Z.; Sharma, N. G.; Słomski, A.; Silarski, M.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Witkowski, P.; Zieliński, M.; Zoń, N.

    2016-03-01

    Recent tests of a single module of the Jagiellonian Positron Emission Tomography system (J-PET) consisting of 30 cm long plastic scintillator strips have proven its applicability for the detection of annihilation quanta (0.511 MeV) with a coincidence resolving time (CRT) of 0.266 ns. The achieved resolution is almost by a factor of two better with respect to the current TOF-PET detectors and it can still be improved since, as it is shown in this article, the intrinsic limit of time resolution for the determination of time of the interaction of 0.511 MeV gamma quanta in plastic scintillators is much lower. As the major point of the article, a method allowing to record timestamps of several photons, at two ends of the scintillator strip, by means of matrix of silicon photomultipliers (SiPM) is introduced. As a result of simulations, conducted with the number of SiPM varying from 4 to 42, it is shown that the improvement of timing resolution saturates with the growing number of photomultipliers, and that the 2× 5 configuration at two ends allowing to read twenty timestamps, constitutes an optimal solution. The conducted simulations accounted for the emission time distribution, photon transport and absorption inside the scintillator, as well as quantum efficiency and transit time spread of photosensors, and were checked based on the experimental results. Application of the 2× 5 matrix of SiPM allows for achieving the coincidence resolving time in positron emission tomography of ≈ 0.170 ns for 15 cm axial field-of-view (AFOV) and ≈ 0.365 ns for 100 cm AFOV. The results open perspectives for construction of a cost-effective TOF-PET scanner with significantly better TOF resolution and larger AFOV with respect to the current TOF-PET modalities.
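
    The saturation of the timing gain with the number of photomultiplier timestamps can be reproduced qualitatively with a toy Monte Carlo: estimate the interaction time from the mean of the first n registered photon timestamps at each end and compute the resulting coincidence resolving time. The decay constant, photosensor jitter, and photon statistics below are assumed values, not the J-PET simulation inputs.

      import numpy as np

      rng = np.random.default_rng(4)

      def crt_fwhm(n_timestamps, n_events=5000, decay_ns=1.8,
                   tts_sigma_ns=0.15, n_detected=60):
          """CRT (FWHM) of the time difference between two identical detectors,
          each estimating the interaction time as the mean of its first
          n_timestamps registered photon arrival times."""
          estimates = np.empty((2, n_events))
          for d in range(2):
              emission = rng.exponential(decay_ns, size=(n_events, n_detected))
              jitter = rng.normal(0.0, tts_sigma_ns, size=(n_events, n_detected))
              stamps = np.sort(emission + jitter, axis=1)
              estimates[d] = stamps[:, :n_timestamps].mean(axis=1)
          return 2.355 * np.std(estimates[0] - estimates[1])

      for n in (1, 2, 5, 10, 20):
          print(f"n = {n:2d} timestamps per side -> CRT ~ {crt_fwhm(n):.3f} ns")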

  3. Resolution Enhancement In Ultrasonic Imaging By A Time-Varying Filter

    NASA Astrophysics Data System (ADS)

    Ching, N. H.; Rosenfeld, D.; Braun, M.

    1987-09-01

    The study reported here investigates the use of a time-varying filter to compensate for the spreading of ultrasonic pulses due to the frequency dependence of attenuation by tissues. The effect of this pulse spreading is to degrade progressively the axial resolution with increasing depth. The form of compensation required to correct for this effect is impossible to realize exactly. A novel time-varying filter utilizing a bank of bandpass filters is proposed as a realizable approximation of the required compensation. The performance of this filter is evaluated by means of a computer simulation. The limits of its application are discussed. Apart from improving the axial resolution, and hence the accuracy of axial measurements, the compensating filter could be used in implementing tissue characterization algorithms based on attenuation data.
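
    A realizable approximation of this kind can be sketched by passing the echo line through a small bank of bandpass filters and blending their outputs with weights that change along the record, shifting emphasis toward lower frequencies at larger depths. The band edges, filter order, and weighting profile below are assumptions, not the filter of the paper.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 40e6                                     # sampling rate (assumed)
      n = 4096
      rng = np.random.default_rng(5)
      rf_line = rng.normal(size=n)                  # stand-in for an ultrasound A-line

      bands = [(2e6, 4e6), (4e6, 6e6), (6e6, 8e6)]  # assumed band edges of the bank
      outputs = []
      for lo, hi in bands:
          b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          outputs.append(filtfilt(b, a, rf_line))
      outputs = np.array(outputs)                   # shape (n_bands, n_samples)

      # Depth-varying weights: emphasis moves from the highest band (shallow)
      # toward the lowest band (deep) along the record.
      depth_frac = np.linspace(0.0, 1.0, n)
      weights = np.vstack([depth_frac,              # low band, weighted late/deep
                           np.full(n, 0.5),         # mid band, constant weight
                           1.0 - depth_frac])       # high band, weighted early/shallow
      weights /= weights.sum(axis=0)
      compensated = (weights * outputs).sum(axis=0)
      print("per-band output RMS:", np.round(outputs.std(axis=1), 3))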

  4. Quantum sensing with arbitrary frequency resolution

    NASA Astrophysics Data System (ADS)

    Boss, J. M.; Cujia, K. S.; Zopes, J.; Degen, C. L.

    2017-05-01

    Quantum sensing takes advantage of well-controlled quantum systems for performing measurements with high sensitivity and precision. We have implemented a concept for quantum sensing with arbitrary frequency resolution, independent of the qubit probe and limited only by the stability of an external synchronization clock. Our concept makes use of quantum lock-in detection to continuously probe a signal of interest. Using the electronic spin of a single nitrogen-vacancy center in diamond, we demonstrate detection of oscillating magnetic fields with a frequency resolution of 70 microhertz over a megahertz bandwidth. The continuous sampling further guarantees an enhanced sensitivity, reaching a signal-to-noise ratio in excess of 10^4 for a 170-nanotesla test signal measured during a 1-hour interval. Our technique has applications in magnetic resonance spectroscopy, quantum simulation, and sensitive signal detection.
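
    The central point, that the frequency resolution is set by the total synchronized record length rather than by the probe itself, can be illustrated numerically: demodulate a long noisy record against a reference oscillator and fit the residual phase slope. The sampling rate, record length, and noise level below are arbitrary toy values, not the experimental parameters.

      import numpy as np

      rng = np.random.default_rng(6)
      f_lo = 1.0                        # demodulation (reference clock) frequency, Hz
      df_true = 70e-6                   # offset to recover, Hz (cf. the 70 uHz above)
      fs, t_total = 10.0, 3600.0        # 10 samples/s for one hour
      t = np.arange(0.0, t_total, 1.0 / fs)

      signal = np.cos(2 * np.pi * (f_lo + df_true) * t) + rng.normal(0.0, 1.0, t.size)
      iq = signal * np.exp(-2j * np.pi * f_lo * t)   # lock-in style demodulation

      # Average down to one point per second, then fit the residual phase slope;
      # the achievable frequency resolution scales with the total record length.
      blocks = iq.reshape(-1, int(fs)).mean(axis=1)
      phase = np.unwrap(np.angle(blocks))
      slope = np.polyfit(np.arange(blocks.size), phase, 1)[0]   # rad per second
      print(f"recovered offset: {slope / (2 * np.pi) * 1e6:.1f} uHz "
            f"(true {df_true * 1e6:.0f} uHz)")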

  5. Multi-pass transmission electron microscopy

    DOE PAGES

    Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.; ...

    2017-05-10

    Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.

  6. Integrated optics to improve resolution on multiple configuration

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei

    2015-04-01

    To reveal how structure can improve imaging resolution, further technical requirements are proposed for the function and development of multiple-configuration systems. To break through the diffraction limit, smart structures are recommended as the most efficient and economical method and are used to improve system performance, especially the signal-to-noise ratio and resolution. Integrated optics were considered in the selection and combined with a typical multiple configuration by means of simulation experiments. The methodology can change the traditional design concept and extend the application space. Our calculations using the multiple matrix transfer method, together with the correlative algorithm and full calculations, show the expected beam shaping through the system; in particular, the experimental results supporting this argument will be reported in the presentation.

  7. Scaling between reanalyses and high-resolution land-surface modelling in mountainous areas - enabling better application and testing of reanalyses in heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Gruber, S.; Fiddes, J.

    2013-12-01

    In mountainous topography, the difference in scale between atmospheric reanalyses (typically tens of kilometres) and relevant processes and phenomena near the Earth surface, such as permafrost or snow cover (meters to tens of meters), is most obvious. This contrast of scales is one of the major obstacles to using reanalysis data for the simulation of surface phenomena and to confronting reanalyses with independent observation. Using the example of modelling permafrost in mountain areas (though the approach is simple to generalise to other phenomena and heterogeneous environments), we present and test methods against measurements for (A) scaling atmospheric data from the reanalysis to the ground level and (B) smart sampling of the heterogeneous landscape in order to set up a lumped model simulation that represents the high-resolution land surface. TopoSCALE (Part A, see http://dx.doi.org/10.5194/gmdd-6-3381-2013) is a scheme which scales coarse-grid climate fields to fine-grid topography using pressure level data. In addition, it applies the necessary topographic corrections, e.g. for those variables required for the computation of radiation fields. This provides the necessary driving fields to the land-surface model (LSM). Tested against independent ground data, this scheme has been shown to improve the scaling and distribution of meteorological parameters in complex terrain, as compared to conventional methods, e.g. lapse rate based approaches. TopoSUB (Part B, see http://dx.doi.org/10.5194/gmd-5-1245-2012) is a surface pre-processor designed to sample a fine-grid domain (defined by a digital elevation model) along important topographical (or other) dimensions through a clustering scheme. This allows constructing a lumped model representing the main sources of fine-grid variability and applying a 1D LSM efficiently over large areas. Results can be processed to derive (i) summary statistics at coarse-scale re-analysis grid resolution, (ii) high-resolution data fields spatialized to, e.g., the fine-scale digital elevation model grid, or (iii) validation products only for locations at which measurements exist. The ability of TopoSUB to approximate results simulated by a 2D distributed numerical LSM with a factor of ~10,000 fewer computations is demonstrated by comparison of 2D and lumped simulations. Successful application of the combined scheme in the European Alps is reported and, based on its results, open issues for future research are outlined.
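
    The TopoSUB-style sampling idea (Part B above) can be sketched in a few lines: cluster fine-grid topographic predictors, run a lumped one-dimensional model once per cluster, and spatialize the cluster results back onto the full grid via the labels. The predictors, the tiny k-means, and the stand-in "land-surface model" below are illustrative assumptions only.

      import numpy as np

      def kmeans(x, k, n_iter=30, seed=0):
          """Tiny k-means (numpy only) so the sketch stays self-contained."""
          rng = np.random.default_rng(seed)
          centers = x[rng.choice(len(x), k, replace=False)]
          for _ in range(n_iter):
              labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = x[labels == j].mean(axis=0)
          return labels

      rng = np.random.default_rng(7)
      n_cells, k = 20000, 50
      predictors = np.column_stack([rng.uniform(1500, 3500, n_cells),   # elevation (m)
                                    rng.uniform(0, 45, n_cells),        # slope (deg)
                                    rng.uniform(0, 360, n_cells)])      # aspect (deg)
      scaled = (predictors - predictors.mean(0)) / predictors.std(0)
      labels = kmeans(scaled, k)

      def lumped_model(elev, slope, aspect):
          """Stand-in for the 1-D land-surface model run once per cluster."""
          return 15.0 - 0.0065 * elev - 0.02 * slope + 1.5 * np.cos(np.radians(aspect - 180.0))

      cluster_result = np.zeros(k)
      for j in np.unique(labels):
          elev, slope, aspect = predictors[labels == j].mean(axis=0)
          cluster_result[j] = lumped_model(elev, slope, aspect)
      full_grid = cluster_result[labels]        # 20,000 cells served by at most 50 model runs
      print(f"{np.unique(labels).size} lumped runs mapped onto {n_cells} grid cells")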

  8. Effects of Soil Data and Simulation Unit Resolution on Quantifying Changes of Soil Organic Carbon at Regional Scale with a Biogeochemical Process Model

    PubMed Central

    Zhang, Liming; Yu, Dongsheng; Shi, Xuezheng; Xu, Shengxiang; Xing, Shihe; Zhao, Yongcong

    2014-01-01

    Soil organic carbon (SOC) models have often been applied to regions with high heterogeneity but limited spatially differentiated soil information and simulation unit resolution. This study, carried out in the Tai-Lake region of China, defined the uncertainty derived from application of the DeNitrification-DeComposition (DNDC) biogeochemical model in an area with heterogeneous soil properties and different simulation units. Three different resolution soil attribute databases, a polygonal capture of mapping units at 1:50,000 (P5), a county-based database of 1:50,000 (C5) and a county-based database of 1:14,000,000 (C14), were used as inputs for regional DNDC simulation. The P5 and C5 databases were combined with the 1:50,000 digital soil map, which is the most detailed soil database for the Tai-Lake region. The C14 database was combined with the 1:14,000,000 digital soil map, which is a coarse database and is often used for modeling at a national or regional scale in China. The soil polygons of the P5 database and the county boundaries of the C5 and C14 databases were used as basic simulation units. The results show that from 1982 to 2000, total SOC change in the top layer (0–30 cm) of the 2.3 M ha of paddy soil in the Tai-Lake region was +1.48 Tg C, −3.99 Tg C and −15.38 Tg C based on the P5, C5 and C14 databases, respectively. Taking the total SOC change modeled with the P5 inputs as the baseline, which has the advantage of using a detailed, polygon-based soil dataset, the relative deviations of C5 and C14 were 368% and 1126%, respectively. The comparison illustrates that DNDC simulation is strongly influenced by choice of fundamental geographic resolution as well as input soil attribute detail. The results also indicate that improving the framework of DNDC is essential in creating accurate models of the soil carbon cycle. PMID:24523922
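
    The quoted relative deviations follow from comparing each estimate against the P5 baseline; a quick check of the arithmetic is shown below (the small differences from the published 368% and 1126% presumably reflect rounding of the reported Tg C totals).

      # Relative deviation with respect to the P5 baseline.
      baseline_p5 = 1.48                        # Tg C
      estimates = {"C5": -3.99, "C14": -15.38}  # Tg C
      for name, value in estimates.items():
          deviation = abs(value - baseline_p5) / abs(baseline_p5) * 100.0
          print(f"{name}: {deviation:.0f}% relative deviation")
      # Prints ~370% and ~1139%, close to the quoted 368% and 1126%.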

  9. Nonlocal and Mixed-Locality Multiscale Finite Element Methods

    DOE PAGES

    Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

    2018-03-27

    In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

  10. Nonlocal and Mixed-Locality Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

    In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

  11. Enhancing SMAP Soil Moisture Retrievals via Superresolution Techniques

    NASA Astrophysics Data System (ADS)

    Beale, K. D.; Ebtehaj, A. M.; Romberg, J. K.; Bras, R. L.

    2017-12-01

    Soil moisture is a key state variable that modulates land-atmosphere interactions and its high-resolution global scale estimates are essential for improved weather forecasting, drought prediction, crop management, and the safety of troop mobility. Currently, NASA's Soil Moisture Active/Passive (SMAP) satellite provides a global picture of soil moisture variability at a resolution of 36 km, which is prohibitive for some hydrologic applications. The goal of this research is to enhance the resolution of SMAP passive microwave retrievals by a factor of 2 to 4 using modern superresolution techniques that rely on the knowledge of high-resolution land surface models. In this work, we explore several super-resolution techniques including an empirical dictionary method, a learned dictionary method, and a three-layer convolutional neural network. Using a year of global high-resolution land surface model simulations as training set, we found that we are able to produce high-resolution soil moisture maps that outperform the original low-resolution observations both qualitatively and quantitatively. In particular, on a patch-by-patch basis we are able to produce estimates of high-resolution soil moisture maps that improve on the original low-resolution patches by on average 6% in terms of mean-squared error, and 14% in terms of the structural similarity index.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fuyu; Collins, William D.; Wehner, Michael F.

    High-resolution climate models have been shown to improve the statistics of tropical storms and hurricanes compared to low-resolution models. The impact of increasing horizontal resolution in the tropical storm simulation is investigated exclusively using a series of Atmospheric Global Climate Model (AGCM) runs with idealized aquaplanet steady-state boundary conditions and a fixed operational storm-tracking algorithm. The results show that increasing horizontal resolution helps to detect more hurricanes, simulate stronger extreme rainfall, and emulate better storm structures in the models. However, increasing model resolution does not necessarily produce stronger hurricanes in terms of maximum wind speed, minimum sea level pressure, and mean precipitation, as the increased number of storms simulated by high-resolution models is mainly associated with weaker storms. The spatial scale at which the analyses are conducted appears to exert a more important control on these meteorological statistics than the horizontal resolution of the model grid. When the simulations are analyzed on common low-resolution grids, the statistics of the hurricanes, particularly the hurricane counts, show reduced sensitivity to the horizontal grid resolution and signs of scale invariance.
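
    Analysing output "on common low-resolution grids" amounts to block-averaging the high-resolution fields before computing storm statistics; a minimal sketch with a synthetic wind field (all values assumed) shows why intensity metrics lose sensitivity to the native grid once they are evaluated on a shared coarse grid.

      import numpy as np

      def block_average(field, factor):
          """Average a 2-D field over non-overlapping factor x factor blocks."""
          ny, nx = field.shape
          assert ny % factor == 0 and nx % factor == 0
          return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

      rng = np.random.default_rng(8)
      wind_hi = rng.gamma(2.0, 8.0, size=(240, 240))   # synthetic high-resolution wind speed
      wind_hi[100:104, 100:104] += 40.0                # small, intense vortex core
      wind_lo = block_average(wind_hi, factor=8)       # e.g. 0.25 deg -> 2 deg common grid

      print(f"max wind on the native high-res grid : {wind_hi.max():.1f} m/s")
      print(f"max wind on the common coarse grid   : {wind_lo.max():.1f} m/s")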

  13. Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.

    2014-12-01

    This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and associated improvements in flood modeling. Satellite-retrieved precipitation has been considered as a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) on the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate propagation of bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
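
    One simple way to nudge satellite rain rates toward an NWP reference is quantile mapping, sketched below with synthetic data; this is only in the spirit of the adjustment described, and the actual error model of Zhang et al. (2013) is more involved.

      import numpy as np

      rng = np.random.default_rng(9)
      nwp_rain = rng.gamma(0.6, 12.0, 5000)            # NWP reference rain rates (mm/h)
      satellite_rain = np.clip(0.6 * nwp_rain + rng.normal(0, 1.0, 5000), 0.0, None)

      # Build a piecewise-linear map from satellite quantiles to NWP quantiles.
      q = np.linspace(0.01, 0.99, 99)
      sat_q, nwp_q = np.quantile(satellite_rain, q), np.quantile(nwp_rain, q)

      def adjust(rain):
          return np.interp(rain, sat_q, nwp_q)

      heavy = nwp_rain > np.quantile(nwp_rain, 0.95)   # heavy-rain subset
      before = (satellite_rain - nwp_rain)[heavy].mean()
      after = (adjust(satellite_rain) - nwp_rain)[heavy].mean()
      print(f"heavy-rain bias before: {before:+.2f} mm/h, after: {after:+.2f} mm/h")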

  14. Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2015-01-01

    The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale or atmosphere-ocean coupled global climate models. The former offers full physics and high spatial resolution, but it is computationally inefficient and typically lacks an interactive ocean, whereas the latter offers high computational efficiency and ocean-atmosphere coupling, but it lacks adequate spatial and temporal resolution to adequately resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb) as validated against both satellite and model analysis data when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (Outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).

  15. Leveraging simulation to evaluate system performance in presence of fixed pattern noise

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.

    2017-05-01

    Image simulation techniques, which map the effects of a notional, modeled sensor system onto an existing image, can be used to evaluate the image quality of camera systems prior to the development of prototype systems. In addition, image simulation or 'virtual prototyping' can be utilized to reduce the time and expense associated with conducting extensive field trials. In this paper we examine the development of a perception study designed to assess the performance of the NVESD imager performance metrics as a function of fixed pattern noise. This paper discusses the development of the model theory and the implementation and execution of the perception study. In addition, other applications of the image simulation component, including the evaluation of limiting resolution and other test targets, are presented.
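
    The sensor-effect mapping underlying such a study can be sketched as follows (an illustrative toy, not the NVESD model): fixed pattern noise is a per-pixel gain and offset map frozen across frames, so, unlike temporal noise, it does not average away over multiple frames.

      import numpy as np

      rng = np.random.default_rng(10)
      scene = np.tile(np.linspace(50, 200, 256), (256, 1))   # pristine ramp scene (counts)

      gain_fpn = rng.normal(1.0, 0.03, scene.shape)          # fixed per-pixel gain map
      offset_fpn = rng.normal(0.0, 2.0, scene.shape)         # fixed per-pixel offset map

      def simulate_frame(scene):
          temporal = rng.normal(0.0, 1.0, scene.shape)       # re-drawn every frame
          return gain_fpn * scene + offset_fpn + temporal

      frames = np.stack([simulate_frame(scene) for _ in range(16)])
      # Frame averaging suppresses temporal noise but leaves the fixed pattern intact.
      residual = frames.mean(axis=0) - scene
      print(f"residual std after a 16-frame average: {residual.std():.2f} counts")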

  16. Compact, self-contained enhanced-vision system (EVS) sensor simulator

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo

    2007-04-01

    We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.

  17. A high-resolution conceptual model for diffuse organic micropollutant loads in streams

    NASA Astrophysics Data System (ADS)

    Stamm, Christian; Honti, Mark; Ghielmetti, Nico

    2013-04-01

    The ecological state of surface waters has become the dominant aspect in water quality assessments. Toxicity is a key determinant of the ecological state, but organic micropollutants (OMP) are seldom monitored with the same spatial and temporal frequency as, for example, nutrients, mainly due to the demanding analytical methods and costs. However, diffuse transport pathways are at least equally complex for OMPs as for nutrients and there are still significant knowledge gaps. Moreover, concentrations of the different compounds would need to be known with fairly high temporal resolution because acute toxicity can be as important as the chronic one. Fully detailed mechanistic models of diffuse OMP loads require an immense set of site-specific knowledge and are rarely applicable for catchments lacking an exceptional monitoring coverage. Simple empirical methods are less demanding but usually work with more temporal aggregation, and therefore have limited ability to support the estimation of the ecological state. This study presents a simple conceptual model that aims to simulate the concentrations of selected organic micropollutants with daily resolution at 11 locations in the stream network of a small catchment (46 km2). The prerequisite is a known hydrological and meteorological background (daily discharge, precipitation and air temperature time series), a land use map and some historic measurements of the desired compounds. The model is conceptual in the sense that all important diffuse transport pathways are simulated separately, but each with a simple empirical process rate. Consequently, some site-specific observations are required to calibrate the model, but afterwards the model can be used for forecasting and scenario analysis as the calibrated process rates typically describe invariant properties of the catchment. We simulated 6 different OMPs from the categories of agricultural and urban pesticides and urban biocides. The application of agricultural pesticides was also simulated with the model using a heat-sum approach. Calibration was carried out with weekly aggregated samples covering the growing season in 2 years. The model could reproduce the observed OMP concentrations with varying success. Compounds that are less persistent in the environment and thus have a dominant temporal dynamics (pesticides with a short half-life) could in general be simulated better than the persistent ones. For the latter group the relatively stable available stock meant that there were no clear seasonal dynamics, which revealed that transport processes are quite uncertain even when daily rainfall is used as the main driver. Nevertheless the daily concentration distribution could still be simulated with higher accuracy than the individual peaks. Thus we can model the concentration-duration relationship at daily resolution in an acceptable way for each compound.
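
    The heat-sum rule used for the timing of agricultural pesticide application can be sketched with a degree-day accumulator; the synthetic temperature series, base temperature, and threshold below are assumptions, not the calibrated values of the study.

      import numpy as np

      rng = np.random.default_rng(11)
      days = np.arange(1, 366)
      daily_mean_temp = (10.0 + 12.0 * np.sin((days - 105) / 365.0 * 2 * np.pi)
                         + rng.normal(0.0, 2.0, days.size))   # synthetic air temperature (degC)

      base_temp, threshold = 5.0, 400.0        # degC and degC*day, assumed values
      degree_days = np.cumsum(np.clip(daily_mean_temp - base_temp, 0.0, None))
      application_day = int(days[np.argmax(degree_days >= threshold)])
      print(f"simulated pesticide application on day-of-year {application_day}")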

  18. Piezoelectric and optical setup to measure an electrical field: application to the longitudinal near-field generated by a tapered coax.

    PubMed

    Euphrasie, S; Vairac, P; Cretin, B; Lengaigne, G

    2008-03-01

    We propose a new setup to measure an electric field in one direction. This setup is made of a piezoelectric sintered lead zirconate titanate film and an optical interferometric probe. We used this setup to investigate how the shape of the extremity of a coaxial cable influences the longitudinal electrical near-field it generates. For this application, we designed our setup to have a spatial resolution of 100 microm in the direction of the electric field. Simulations and experiments are presented.

  19. The atmospheric boundary layer — advances in knowledge and application

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Hess, G. D.; Physick, W. L.; Bougeault, P.

    1996-02-01

    We summarise major activities and advances in boundary-layer knowledge in the 25 years since 1970, with emphasis on the application of this knowledge to surface and boundary-layer parametrisation schemes in numerical models of the atmosphere. Progress in three areas is discussed: (i) the mesoscale modelling of selected phenomena; (ii) numerical weather prediction; and (iii) climate simulations. Future trends are identified, including the incorporation into models of advanced cloud schemes and interactive canopy schemes, and the nesting of high resolution boundary-layer schemes in global climate models.

  20. Evaluation of the dosimetric properties of a diode detector for small field proton radiosurgery

    PubMed Central

    Teran, Anthony V.; Slater, Jerry D.; Slater, James M.; Wroe, Andrew J.

    2015-01-01

    The small fields and sharp gradients typically encountered in proton radiosurgery require high spatial resolution dosimetric measurements, especially below 1-2 cm diameters. Radiochromic film provides high resolution, but requires postprocessing and special handling. Promising alternatives are diode detectors with small sensitive volumes (SV) that are capable of high resolution and real-time dose acquisition. In this study we evaluated the PTW PR60020 proton dosimetry diode using radiation fields and beam energies relevant to radiosurgery applications. Energies of 127 and 157 MeV (9.7 to 15 cm range) and initial diameters of 8, 10, 12, and 20 mm were delivered using single-stage scattering and four modulations (0, 15, 30, and 60 mm) to a water tank in our treatment room. Depth dose and beam profile data were compared with PTW Markus N23343 ionization chamber, EBT2 Gafchromic film, and Monte Carlo simulations. Transverse dose profiles were measured using the diode in "edge-on" orientation or EBT2 film. Diode response was linear with respect to dose, uniform with dose rate, and showed an orientation-dependent (i.e., beam parallel to, or perpendicular to, detector axis) response of less than 1%. Diode vs. Markus depth-dose profiles, as well as Markus relative dose ratio vs. simulated dose-weighted average lineal energy plots, suggest that any LET-dependent diode response is negligible from particle entrance up to the very distal portion of the SOBP for the energies tested. Finally, while not possible with the ionization chamber due to partial volume effects, accurate diode depth-dose measurements of 8, 10, and 12 mm diameter beams were obtained compared to Monte Carlo simulations. Because of the small SV that allows measurements without partial volume effects and the capability of submillimeter resolution (in edge-on orientation) that is crucial for small fields and high-dose gradients (e.g., penumbra, distal edge), as well as negligible LET dependence over nearly the full SOBP, the PTW proton diode proved to be a useful high-resolution, real-time metrology device for small proton field radiation measurements such as would be encountered in radiosurgery applications. PACS numbers: 87.56.-v, 87.56.jf, 87.56.Fc PMID:26699554

  1. Trilateration-based reconstruction of ortho-positronium decays into three photons with the J-PET detector

    NASA Astrophysics Data System (ADS)

    Gajos, A.; Kamińska, D.; Czerwiński, E.; Alfs, D.; Bednarski, T.; Białas, P.; Głowacz, B.; Gorgol, M.; Jasińska, B.; Kapłon, Ł.; Korcyl, G.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Mohammed, M.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Słomski, A.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2016-05-01

    This work reports on a new reconstruction algorithm allowing us to reconstruct the decays of ortho-positronium atoms into three photons using the places and times of photons recorded in the detector. The method is based on trilateration and allows for a simultaneous reconstruction of both location and time of the decay. Results of resolution tests of the new reconstruction in the J-PET detector based on Monte Carlo simulations are presented, which yield a spatial resolution at the level of 2 cm (FWHM) for X and Y and at the level of 1 cm (FWHM) for Z available with the present resolution of J-PET after application of a kinematic fit. Prospects of employment of this method for studying angular correlations of photons in decays of polarized ortho-positronia for the needs of tests of CP and CPT discrete symmetries are also discussed. The new reconstruction method allows for discrimination of background from random three-photon coincidences as well as for application of a novel method for determination of the linear polarization of ortho-positronium atoms, which is also introduced in this work.
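
    A minimal numerical sketch of the trilateration idea, not the authors' algorithm: because the three annihilation photons of an o-Ps decay are coplanar with the decay point, the decay position and time can be sought in the plane of the three recorded hits by requiring that the distance to each hit equal the speed of light times that photon's time of flight. Function names, units, and the choice of least-squares solver are assumptions made for illustration.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 29.9792458  # speed of light [cm/ns]

    def reconstruct_ops_decay(hit_pos, hit_t):
        """Trilateration-style reconstruction of an o-Ps -> 3 gamma decay.
        hit_pos: (3, 3) photon interaction positions [cm];  hit_t: (3,) times [ns].
        The decay point is sought in the plane spanned by the three hits."""
        hit_pos = np.asarray(hit_pos, dtype=float)
        hit_t = np.asarray(hit_t, dtype=float)
        p0 = hit_pos[0]
        e1 = hit_pos[1] - p0
        e1 /= np.linalg.norm(e1)
        normal = np.cross(hit_pos[1] - p0, hit_pos[2] - p0)
        e2 = np.cross(normal, e1)
        e2 /= np.linalg.norm(e2)
        hits2d = np.array([[(h - p0) @ e1, (h - p0) @ e2] for h in hit_pos])

        def residuals(params):
            u, v, t0 = params
            d = np.linalg.norm(hits2d - np.array([u, v]), axis=1)
            return d - C * (hit_t - t0)          # distance = c * time of flight

        guess = [hits2d[:, 0].mean(), hits2d[:, 1].mean(), hit_t.min() - 1.0]
        u, v, t0 = least_squares(residuals, guess).x
        decay_point = p0 + u * e1 + v * e2       # back to 3-D coordinates
        return decay_point, t0
    ```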

  2. Performance of a miniature mechanically cooled HPGe gamma-spectrometer for space applications

    NASA Astrophysics Data System (ADS)

    Kondratjev, V.; Pchelintsev, A.; Jakovlevs, O.; Sokolov, A.; Gostilo, V.; Owens, A.

    2018-01-01

    We report on the development of a miniaturized HPGe gamma-spectrometer for space applications. The instrument is designed around a 158 cm3 intrinsically pure Ge crystal in the closed-end coaxial configuration, cooled by a Thales RM3 miniature Stirling cycle electric cooler. To compensate for the noise induced by the mechanical cooler, digital processing of the spectrometric signals with a low-frequency reject (LFR) filter is applied. The complete spectrometer assembly has a mass of 3.1 kg and consumes less than 10 W in operation. The spectrometer was tested under a number of operating conditions in a specially designed chamber that simulates the space environment. With the mechanical cooler switched off, FWHM energy resolutions of 1.5 keV and 2.2 keV were obtained at 122 keV and 1333 keV, respectively, at the nominal operating temperature of 90 K. When the cooler was switched on, the energy resolutions degraded to 2.5 keV and 4 keV, respectively. However, with the LFR filter applied, the resolutions improved significantly to 1.8 keV and 2.4 keV.

  3. Particle Number Dependence of the N-body Simulations of Moon Formation

    NASA Astrophysics Data System (ADS)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

    The formation of the Moon from the circumterrestrial disk has previously been investigated using N-body simulations with the number N of particles limited to 10^4 to 10^5. We develop an N-body simulation code on multiple Pezy-SC processors and deploy the Framework for Developing Particle Simulators to handle large numbers of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10^7, in which 1 particle corresponds to a 10 km sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10^5) and high-resolution simulations (N ≥ 10^6). Owing to this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  4. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  5. A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall

    NASA Astrophysics Data System (ADS)

    Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian

    2018-02-01

    Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). The orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observations. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
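
    The following is a generic discrete multiplicative random cascade sketch, not the Short-Term Ensemble Prediction System parameterization itself: a single coarse rainfall amount is repeatedly split 2x2 and multiplied by lognormal weights renormalized to conserve the mean, which is the basic mechanism behind cascade-based downscaling. The number of levels and the variability parameter are illustrative.

    ```python
    import numpy as np

    def multiplicative_cascade(coarse_rain, levels=4, sigma=0.5, rng=None):
        """Disaggregate one coarse rainfall depth onto a 2**levels square grid.
        At each level every cell is split 2x2 and multiplied by lognormal weights
        whose within-block mean is renormalized to 1, so the areal mean rainfall
        is conserved while small-scale variability is introduced."""
        rng = np.random.default_rng() if rng is None else rng
        field = np.array([[float(coarse_rain)]])
        for _ in range(levels):
            fine = np.kron(field, np.ones((2, 2)))            # copy each cell 2x2
            w = rng.lognormal(mean=0.0, sigma=sigma, size=fine.shape)
            block_mean = w.reshape(field.shape[0], 2, field.shape[1], 2).mean(axis=(1, 3))
            w /= np.kron(block_mean, np.ones((2, 2)))          # mean weight per block = 1
            field = fine * w
        return field

    # A 10 mm coarse amount disaggregated onto a 32 x 32 grid; the mean stays 10 mm
    fine_field = multiplicative_cascade(10.0, levels=5)
    print(fine_field.mean(), fine_field.max())
    ```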

  6. The inertial attitude augmentation for ambiguity resolution in SF/SE-GNSS attitude determination.

    PubMed

    Zhu, Jiancheng; Hu, Xiaoping; Zhang, Jingyu; Li, Tao; Wang, Jinling; Wu, Meiping

    2014-06-26

    The Unaided Single Frequency/Single Epoch Global Navigation Satellite System (SF/SE GNSS) model is the most challenging scenario for ambiguity resolution in the GNSS attitude determination application. To improve the performance of SF/SE-GNSS ambiguity resolution without excessive cost, the Micro-Electro-Mechanical System Inertial Measurement Unit (MEMS-IMU) is a proper choice for the auxiliary sensor that carries out the inertial attitude augmentation. Firstly, based on the SF/SE-GNSS compass model, the Inertial Derived Baseline Vector (IDBV) is defined to connect the MEMS-IMU attitude measurement with the SF/SE-GNSS ambiguity search space, and the mechanism of inertial attitude augmentation is revealed from the perspective of geometry. Then, through the quantitative description of model strength by Ambiguity Dilution of Precision (ADOP), two ADOPs are specified for the unaided SF/SE-GNSS compass model and its inertial attitude augmentation counterparts, respectively, and a sufficient condition is proposed for augmenting the SF/SE-GNSS model strength with inertial attitude measurement. Finally, in the framework of an integer aperture estimator with fixed failure rate, the performance of SF/SE-GNSS ambiguity resolution with inertial attitude augmentation is analyzed when the model strength is varying from strong to weak. The simulation results show that, in the SF/SE-GNSS attitude determination application, MEMS-IMU can satisfy the requirements of ambiguity resolution with inertial attitude augmentation.
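
    The ADOP used above to quantify model strength has a standard closed form: the 2n-th root of the determinant of the ambiguity variance-covariance matrix, expressed in cycles. The short sketch below computes it; the two covariance matrices are invented stand-ins for a strong and a weak model, not values from the study.

    ```python
    import numpy as np

    def adop(Q_a):
        """Ambiguity Dilution of Precision: det(Q_a)**(1/(2n)) in cycles, where
        Q_a is the n x n ambiguity variance-covariance matrix. Smaller ADOP
        indicates a stronger model and a higher expected success rate."""
        n = Q_a.shape[0]
        sign, logdet = np.linalg.slogdet(Q_a)      # stable determinant
        return np.exp(logdet / (2.0 * n))

    # Illustrative strong vs. weak models (diagonal covariances, in cycles**2)
    Q_strong = 0.01 * np.eye(3)
    Q_weak = 0.25 * np.eye(3)
    print(adop(Q_strong), adop(Q_weak))            # ~0.1 cycle vs ~0.5 cycle
    ```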

  7. The Inertial Attitude Augmentation for Ambiguity Resolution in SF/SE-GNSS Attitude Determination

    PubMed Central

    Zhu, Jiancheng; Hu, Xiaoping; Zhang, Jingyu; Li, Tao; Wang, Jinling; Wu, Meiping

    2014-01-01

    The Unaided Single Frequency/Single Epoch Global Navigation Satellite System (SF/SE GNSS) model is the most challenging scenario for ambiguity resolution in the GNSS attitude determination application. To improve the performance of SF/SE-GNSS ambiguity resolution without excessive cost, the Micro-Electro-Mechanical System Inertial Measurement Unit (MEMS-IMU) is a proper choice for the auxiliary sensor that carries out the inertial attitude augmentation. Firstly, based on the SF/SE-GNSS compass model, the Inertial Derived Baseline Vector (IDBV) is defined to connect the MEMS-IMU attitude measurement with the SF/SE-GNSS ambiguity search space, and the mechanism of inertial attitude augmentation is revealed from the perspective of geometry. Then, through the quantitative description of model strength by Ambiguity Dilution of Precision (ADOP), two ADOPs are specified for the unaided SF/SE-GNSS compass model and its inertial attitude augmentation counterparts, respectively, and a sufficient condition is proposed for augmenting the SF/SE-GNSS model strength with inertial attitude measurement. Finally, in the framework of an integer aperture estimator with fixed failure rate, the performance of SF/SE-GNSS ambiguity resolution with inertial attitude augmentation is analyzed when the model strength is varying from strong to weak. The simulation results show that, in the SF/SE-GNSS attitude determination application, MEMS-IMU can satisfy the requirements of ambiguity resolution with inertial attitude augmentation. PMID:24971472

  8. A Semi-Structured MODFLOW-USG Model to Evaluate Local Water Sources to Wells for Decision Support.

    PubMed

    Feinstein, Daniel T; Fienen, Michael N; Reeves, Howard W; Langevin, Christian D

    2016-07-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A "semi-structured" approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area; see Fienen et al. (2015a).

  9. A semi-structured MODFLOW-USG model to evaluate local water sources to wells for decision support

    USGS Publications Warehouse

    Feinstein, Daniel T.; Fienen, Michael N.; Reeves, Howard W.; Langevin, Christian D.

    2016-01-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A “semi-structured” approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area; see Fienen et al. (2015a).

  10. Atmospheric Moisture Budget and Spatial Resolution Dependence of Precipitation Extremes in Aquaplanet Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Qing; Leung, Lai-Yung R.; Rauscher, Sara

    This study investigates the resolution dependency of precipitation extremes in an aqua-planet framework. Strong resolution dependency of precipitation extremes is seen over both the tropics and extra-tropics, and the magnitude of this dependency also varies with dynamical cores. Moisture budget analyses based on aqua-planet simulations with the Community Atmosphere Model (CAM) using the Model for Prediction Across Scales (MPAS) and High Order Method Modeling Environment (HOMME) dynamical cores but the same physics parameterizations suggest that during precipitation extremes the moisture supply for surface precipitation is mainly derived from advective moisture convergence. The resolution dependency of precipitation extremes mainly originates from advective moisture transport in the vertical direction. At most vertical levels over the tropics and in the lower atmosphere over the subtropics, the vertical eddy transport of the mean moisture field dominates the contribution to precipitation extremes and its resolution dependency. Over the subtropics, the source of moisture, its associated energy, and the resolution dependency during extremes are dominated by the eddy transport of eddy moisture in the mid- and upper troposphere. With both MPAS and HOMME dynamical cores, the resolution dependency of the vertical advective moisture convergence is mainly explained by dynamical changes (related to vertical velocity, or omega), although the vertical gradients of moisture act like averaging kernels that determine the sensitivity of the overall resolution dependency to the changes in omega at different vertical levels. The natural reduction of variability with coarser resolution, represented by an areal data averaging (aggregation) effect, largely explains the resolution dependency in omega. The thermodynamic changes, which likely result from non-linear feedback in response to the large dynamical changes, are small compared to the overall changes in dynamics (omega). However, after excluding the data aggregation effect in omega, thermodynamic changes become relatively significant in offsetting the effect of dynamics, leading to reduced differences between the simulated and aggregated results. Compared to MPAS, the stronger vertical motion simulated with HOMME also results in a larger resolution dependency. Compared to the simulation at fine resolution, the vertical motion during extremes is insufficiently resolved/parameterized at the coarser resolution even after accounting for the natural reduction in variability with coarser resolution, and this is more distinct in the simulation with HOMME. To reduce uncertainties in simulated precipitation extremes, future development in cloud parameterizations must address their sensitivity to spatial resolution as well as dynamical cores.

  11. Evaluation of position-estimation methods applied to CZT-based photon-counting detectors for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J.

    2015-01-01

    Abstract. Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100  μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200  μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194  μm, with 2×2 binning during the acquisition giving an effective pixel size of 388  μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors. PMID:26158095

  12. Evaluation of position-estimation methods applied to CZT-based photon-counting detectors for dedicated breast CT.

    PubMed

    Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J

    2015-04-01

    Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors.
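
    As a rough illustration of quadrant-level position estimation from shared charge (not the actual readout ASIC logic or the maximum-likelihood estimator evaluated in the study), the sketch below classifies the interaction quadrant of the maximum pixel using a simple charge-weighted centroid over its 3x3 neighbourhood; the signal values are invented.

    ```python
    import numpy as np

    def quadrant_estimate(charge):
        """Estimate the interaction quadrant within the pixel that collected the
        most charge, using the charge shared with its neighbours.
        charge: (3, 3) array of signals centred on the maximum pixel."""
        total = charge.sum()
        ys, xs = np.mgrid[-1:2, -1:2]                 # neighbour offsets in pixel units
        cx = (xs * charge).sum() / total              # charge-weighted centroid
        cy = (ys * charge).sum() / total
        quadrant = (0 if cy < 0 else 1, 0 if cx < 0 else 1)   # (row, col) half of the pixel
        return cx, cy, quadrant

    # Most charge in the centre pixel, some sharing toward the upper-right neighbours
    sig = np.array([[0.00, 0.05, 0.02],
                    [0.00, 0.80, 0.10],
                    [0.00, 0.02, 0.01]])
    print(quadrant_estimate(sig))                     # centroid offset and quadrant (0, 1)
    ```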

  13. Reduced Complexity Modelling of Urban Floodplain Inundation

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Brasington, J.; Mihir, M.

    2004-12-01

    Significant recent advances in floodplain inundation modelling have been achieved by directly coupling 1D channel hydraulic models with a raster storage cell approximation for floodplain flows. The strengths of this reduced-complexity model structure derive from its explicit dependence on a digital elevation model (DEM) to parameterize flows through riparian areas, providing a computationally efficient algorithm to model heterogeneous floodplains. Previous applications of this framework have generally used mid-range grid scales (10^1-10^2 m), showing the capacity of the models to simulate long reaches (10^3-10^4 m). However, the increasing availability of precision DEMs derived from airborne laser altimetry (LIDAR) enables their use at very high spatial resolutions (10^0-10^1 m). This spatial scale offers the opportunity to incorporate the complexity of the built environment directly within the floodplain DEM and simulate urban flooding. This poster describes a series of experiments designed to explore model functionality at these reduced scales. Important questions are considered, raised by this new approach, about the reliability and representation of the floodplain topography and built environment, and the resultant sensitivity of inundation forecasts. The experiments apply a raster floodplain model to reconstruct a 1:100 year flood event on the River Granta in eastern England, which flooded 72 properties in the town of Linton in October 2001. The simulations use a nested-scale model to maintain efficiency. A 2 km by 4 km urban zone is represented by a high-resolution DEM derived from single-pulse LIDAR data supplied by the UK Environment Agency, together with surveyed data and aerial photography. Novel methods of processing the raw data to provide the individual structure detail required are investigated and compared. This is then embedded within a lower-resolution model application at the reach scale which provides boundary conditions based on recorded flood stage. The high resolution predictions on a scale commensurate with urban structures make possible a multi-criteria validation which combines verification of reach-scale characteristics such as downstream flow and inundation extent with internal validation of flood depth at individual sites.
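
    For readers unfamiliar with the raster storage cell approximation mentioned above, the following is a minimal one-dimensional sketch in the spirit of such codes, not the specific model used in the study: flow between neighbouring cells follows Manning's equation driven by the water-surface slope, and depths are updated explicitly. Grid spacing, roughness, and time step are illustrative values.

    ```python
    import numpy as np

    def storage_cell_step(depth, dem, dx=10.0, dt=1.0, n_mann=0.03):
        """One explicit update of a 1-D row of raster storage cells.
        Inter-cell discharge per unit width follows Manning's equation on the
        water-surface slope; the flow depth between cells is the higher water
        surface minus the higher bed elevation (a common storage-cell choice)."""
        wse = dem + depth                                # water-surface elevation
        slope = (wse[:-1] - wse[1:]) / dx                # positive -> flow to the right
        hflow = np.maximum(np.maximum(wse[:-1], wse[1:]) - np.maximum(dem[:-1], dem[1:]), 0.0)
        q = np.sign(slope) * (hflow ** (5.0 / 3.0)) * np.sqrt(np.abs(slope)) / n_mann
        depth_new = depth.copy()
        depth_new[:-1] -= dt * q / dx                    # volume leaves the upstream cell
        depth_new[1:] += dt * q / dx                     # and enters the downstream cell
        return np.maximum(depth_new, 0.0)

    # A mound of water spreading over flat terrain (small dt keeps the update stable)
    dem = np.zeros(50)
    depth = np.zeros(50); depth[20:25] = 1.0
    for _ in range(500):
        depth = storage_cell_step(depth, dem, dt=0.1)
    ```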

  14. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm/day) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm/day, with proportionate reductions in latent heating sampling errors.
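
    A toy sketch of the Bayesian database-search idea described above, not the operational algorithm: each simulated profile in the database is weighted by a Gaussian likelihood of its modelled brightness temperatures against the observation, and the retrieved rain rate is the likelihood-weighted composite. The two-channel database, noise level, and rain-to-brightness-temperature relation are invented for illustration.

    ```python
    import numpy as np

    def bayesian_composite(tb_obs, tb_db, rain_db, sigma=2.0):
        """Weight every database profile by a Gaussian measurement likelihood of
        its simulated brightness temperatures (tb_db, K) against the observation
        (tb_obs, K) and return the weighted mean rain rate. sigma is an assumed
        per-channel noise in kelvin."""
        d2 = ((tb_db - tb_obs) ** 2).sum(axis=1) / sigma ** 2
        w = np.exp(-0.5 * (d2 - d2.min()))        # subtract the minimum for stability
        return (w * rain_db).sum() / w.sum()

    # Toy two-channel database with a linear rain -> Tb relation plus noise
    rng = np.random.default_rng(0)
    rain_db = rng.gamma(1.0, 3.0, 5000)                              # mm/h
    tb_db = np.column_stack([260 - 4 * rain_db, 200 + 6 * rain_db]) \
            + rng.normal(0, 1.5, (5000, 2))
    tb_obs = np.array([240.0, 230.0])                                # ~5 mm/h signature
    print(bayesian_composite(tb_obs, tb_db, rain_db))
    ```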

  15. The Effect of Model Grid Resolution on the Distributed Hydrologic Simulations for Forecasting Stream Flows and Reservoir Storage

    NASA Astrophysics Data System (ADS)

    Turnbull, S. J.

    2017-12-01

    Within the US Army Corps of Engineers (USACE), reservoirs are typically operated according to a rule curve that specifies target water levels based on the time of year. The rule curve is intended to maximize flood protection by specifying releases of water before the dominant rainfall period for a region. While some operating allowances are permissible, generally the rule curve elevations must be maintained. While this operational approach provides for the required flood control purpose, it may not result in optimal reservoir operations for multi-use impoundments. In the Russian River Valley of California, a multi-agency research effort called Forecast-Informed Reservoir Operations (FIRO) is assessing the application of forecast weather and streamflow predictions to potentially enhance the operation of reservoirs in the watershed. The focus of the study has been on Lake Mendocino, a USACE project important for flood control, water supply, power generation and ecological flows. As part of this effort, the Engineer Research and Development Center is assessing the ability of the physics-based, distributed watershed model Gridded Surface Subsurface Hydrologic Analysis (GSSHA) to simulate stream flows, reservoir stages, and discharges while being driven by weather forecast products. A key question in this application is the effect of watershed model resolution on forecasted stream flows. To help resolve this question, GSSHA models at multiple grid resolutions (30, 50, and 270 m) were developed for the upper Russian River, which includes Lake Mendocino. The models were derived from common inputs: DEM, soils, land use, stream network, reservoir characteristics, and specified inflows and discharges. All the models were calibrated in both event and continuous simulation mode using measured precipitation gages and then driven with the West-WRF atmospheric model in prediction mode to assess the ability of the model to function in short-term (less than one week) forecasting mode. In this presentation we will discuss the effect grid resolution has on model development, parameter assignment, streamflow prediction, and forecasting capability utilizing West-WRF forecast hydrometeorology.

  16. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.

  17. Regional Climate Simulation with a Variable Resolution Stretched Grid GCM: The Regional Down-Scaling Effects

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.

    1999-01-01

    The results obtained with the variable resolution stretched grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with the emphasis on the regional down-scaling effects and their dependence on the stretched grid design and parameters. A variable resolution SG-GCM and SG-DAS, using a global stretched grid with fine resolution over an area of interest, is a viable new approach to REGIONAL and subregional CLIMATE studies and applications. The stretched grid approach is an ideal tool for representing regional to global scale interactions. It is an alternative to the widely used nested grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation and other diagnostic patterns are successfully simulated and show the efficient down-scaling over the area of interest, the U.S. An imitation of the nested grid approach is performed using the developed SG-DAS (Data Assimilation System) that incorporates the SG-GCM. The SG-DAS is run while withholding data over the area of interest. The design imitates the nested grid framework with boundary conditions provided from analyses. No boundary condition buffer is needed in this case due to the global domain of integration used for the SG-GCM and SG-DAS. The experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5 degree (and higher) regional resolution, are briefly discussed. The major aspects of parallelization of the SG-GCM code are outlined. The KEY OBJECTIVES of the study are: 1) obtaining an efficient DOWN-SCALING over the area of interest with fine and very fine resolution; 2) providing CONSISTENT interactions between regional and global scales including the consistent representation of regional ENERGY and WATER BALANCES; 3) providing a high computational efficiency for future SG-GCM and SG-DAS versions using PARALLEL codes.

  18. a Spiral-Based Downscaling Method for Generating 30 M Time Series Image Data

    NASA Astrophysics Data System (ADS)

    Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.

    2017-09-01

    The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g. small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data such as Landsat or MODIS can hardly satisfy the minimum mapping unit and the update frequency of current land cover mapping and updating at the same time. Generating a high-resolution time series may be a compromise that covers this shortage in the land cover updating process. One popular way is to downscale multi-temporal MODIS data with high spatial resolution auxiliary data such as Landsat. However, the usual manner of downscaling a pixel based on a local window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high spatial resolution pixels; consequently, the downscaled multi-temporal data can hardly reach the spatial resolution of Landsat data. A spiral-based method is introduced here to downscale low spatial, high temporal resolution image data to high spatial, high temporal resolution image data. By searching for similar pixels in the adjacent region along a spiral, a pixel set is built up pixel by pixel. Adopting this pixel set largely prevents the underdetermined problem when solving the linear system, and the endmember values of the linear system are inverted with ordinary least squares. The high spatial resolution image is reconstructed band by band on the basis of the high spatial resolution class map and the endmember values, and the high spatial resolution time series is then formed image by image. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30 m class map dataset GlobeLand30 was adopted to investigate the effect of avoiding the underdetermined problem in the downscaling procedure, and a comparison between the spiral and window approaches was conducted. Further, MODIS NDVI and Landsat image data were used to generate a 30 m NDVI time series in the remote sensing image downscaling experiment. The simulated experiment showed that the proposed method performs robustly when downscaling pixels in heterogeneous regions and is superior to the traditional window-based methods. The high-resolution time series generated may benefit the mapping and updating of land cover data.
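
    A simplified sketch of the two ingredients described above, under stated assumptions: coarse pixels are collected in spiral (nearest-first) order around the target pixel, and per-class endmember values are inverted by ordinary least squares from the linear mixture system. The real method selects similar pixels by their class composition, which is omitted here, and all function names and parameters are placeholders.

    ```python
    import numpy as np

    def spiral_offsets(max_radius):
        """Yield (row, col) offsets ordered by an outward square spiral, so the
        nearest coarse pixels are visited first."""
        yield (0, 0)
        for r in range(1, max_radius + 1):
            ring = [(-r, c) for c in range(-r, r + 1)]             # top edge
            ring += [(dr, r) for dr in range(-r + 1, r + 1)]       # right edge
            ring += [(r, c) for c in range(r - 1, -r - 1, -1)]     # bottom edge
            ring += [(dr, -r) for dr in range(r - 1, -r, -1)]      # left edge
            yield from ring

    def unmix_endmembers(coarse_img, class_fractions, i, j, n_pixels=25):
        """Estimate per-class (endmember) values for coarse pixel (i, j).
        coarse_img:      (H, W) coarse image (e.g. MODIS NDVI for one date)
        class_fractions: (H, W, K) class-area fractions of each coarse pixel,
                         derived from the fine-resolution class map."""
        rows_a, rows_b = [], []
        for dr, dc in spiral_offsets(max_radius=10):
            r, c = i + dr, j + dc
            if 0 <= r < coarse_img.shape[0] and 0 <= c < coarse_img.shape[1]:
                rows_a.append(class_fractions[r, c])
                rows_b.append(coarse_img[r, c])
                if len(rows_b) >= n_pixels:
                    break
        endmembers, *_ = np.linalg.lstsq(np.array(rows_a), np.array(rows_b), rcond=None)
        return endmembers   # fine image = look up each fine pixel's class value
    ```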

  19. Resolution versus speckle relative to geologic interpretability of spaceborne radar images - A survey of user preference

    NASA Technical Reports Server (NTRS)

    Ford, J. P.

    1982-01-01

    A survey conducted to evaluate user preference for resolution versus speckle relative to the geologic interpretability of spaceborne radar images is discussed. Thirteen different resolution/looks combinations are simulated from Seasat synthetic-aperture radar data of each of three test sites. The SAR images were distributed with questionnaires for analysis to 85 earth scientists. The relative discriminability of geologic targets at each test site for each simulation of resolution and speckle on the images is determined on the basis of a survey of the evaluations. A large majority of the analysts respond that for most targets a two-look image at the highest simulated resolution is best. For a constant data rate, a higher resolution is more important for target discrimination than a higher number of looks. It is noted that sand dunes require more looks than other geologic targets. At all resolutions, multiple-look images are preferred over the corresponding single-look image. In general, the number of multiple looks that is optimal for discriminating geologic targets is inversely related to the simulated resolution.
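
    The trade-off surveyed above (resolution versus number of looks at a constant data rate) rests on the fact that averaging N independent looks reduces the speckle standard deviation roughly as 1/sqrt(N) while effectively coarsening resolution. The toy sketch below illustrates this with exponentially distributed single-look intensities; it uses synthetic data, not Seasat imagery.

    ```python
    import numpy as np

    def multilook(intensity_looks):
        """Average N independent single-look intensity images. Speckle standard
        deviation falls as 1/sqrt(N); in a real SAR the looks are usually formed
        by splitting the synthetic aperture, which coarsens resolution."""
        return np.mean(intensity_looks, axis=0)

    rng = np.random.default_rng(7)
    truth = 1.0                                          # mean backscatter intensity
    looks = rng.exponential(truth, size=(4, 256, 256))   # single-look speckle model
    img_4look = multilook(looks)
    print(looks[0].std(), img_4look.std())               # ~1.0 vs ~0.5
    ```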

  20. Efficient Exploration of Membrane-Associated Phenomena at Atomic Resolution.

    PubMed

    Vermaas, Josh V; Baylon, Javier L; Arcario, Mark J; Muller, Melanie P; Wu, Zhe; Pogorelov, Taras V; Tajkhorshid, Emad

    2015-06-01

    Biological membranes constitute a critical component in all living cells. In addition to providing a conducive environment to a wide range of cellular processes, including transport and signaling, mounting evidence has established active participation of specific lipids in modulating membrane protein function through various mechanisms. Understanding lipid-protein interactions underlying these mechanisms at a sufficiently high resolution has proven extremely challenging, partly due to the semi-fluid nature of the membrane. In order to address this challenge computationally, multiple methods have been developed, including an alternative membrane representation termed highly mobile membrane mimetic (HMMM) in which lateral lipid diffusion has been significantly enhanced without compromising atomic details. The model allows for efficient sampling of lipid-protein interactions at atomic resolution, thereby significantly enhancing the effectiveness of molecular dynamics simulations in capturing membrane-associated phenomena. In this review, after providing an overview of HMMM model development, we will describe briefly successful application of the model to study a variety of membrane processes, including lipid-dependent binding and insertion of peripheral proteins, the mechanism of phospholipid insertion into lipid bilayers, and characterization of optimal tilt angle of transmembrane helices. We conclude with practical recommendations for proper usage of the model in simulation studies of membrane processes.

  1. Carotid lesion characterization by synthetic-aperture-imaging techniques with multioffset ultrasonic probes

    NASA Astrophysics Data System (ADS)

    Capineri, Lorenzo; Castellini, Guido; Masotti, Leonardo F.; Rocchi, Santina

    1992-06-01

    This paper explores the applications of a high-resolution imaging technique to vascular ultrasound diagnosis, with emphasis on investigation of the carotid vessel. With present diagnostic systems, it is difficult to measure quantitatively the extent of lesions and to characterize the tissue; quantitative images require sufficient spatial resolution and dynamic range to reveal fine high-risk pathologies. A broadband synthetic aperture technique with multi-offset probes is developed to improve lesion characterization through the evaluation of local scattering parameters. The technique assumes weak scatterers embedded in a constant-velocity medium, a large aperture, and isotropic sources and receivers. Its features are: axial and lateral spatial resolution on the order of the wavelength, high dynamic range, quantitative measurement of the size and scattering intensity of the inhomogeneities, and the capability to investigate inclined layers. The performance is evaluated under realistic conditions with a software simulator in which different experimental situations can be reproduced. Images of simulated anatomic test objects are presented. The images are obtained with an inversion process applied to the synthesized ultrasonic signals, collected on the linear aperture by a limited number of finite-size transducers.
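
    The inversion itself is not specified in the record; as a hedged illustration of multi-offset synthetic-aperture reconstruction in general, the sketch below implements basic delay-and-sum backprojection of transmit/receive pair traces onto an image grid under the same constant-velocity assumption. The function signature and grid conventions are assumptions for illustration.

    ```python
    import numpy as np

    def delay_and_sum(rf, tx_pos, rx_pos, fs, c, grid_x, grid_z):
        """Multi-offset synthetic-aperture (delay-and-sum) reconstruction.
        rf:      (n_pairs, n_samples) recorded traces, one per tx/rx pair
        tx_pos:  (n_pairs,) lateral transmitter positions [m] (aperture at z = 0)
        rx_pos:  (n_pairs,) lateral receiver positions [m]
        fs:      sampling frequency [Hz];  c: assumed constant sound speed [m/s]
        grid_x, grid_z: 1-D arrays defining the image grid [m]."""
        image = np.zeros((grid_z.size, grid_x.size))
        for trace, xt, xr in zip(rf, tx_pos, rx_pos):
            for iz, z in enumerate(grid_z):
                d_tx = np.sqrt((grid_x - xt) ** 2 + z ** 2)   # transmitter -> point
                d_rx = np.sqrt((grid_x - xr) ** 2 + z ** 2)   # point -> receiver
                idx = np.round((d_tx + d_rx) / c * fs).astype(int)
                valid = idx < trace.size
                image[iz, valid] += trace[idx[valid]]         # coherent summation
        return image
    ```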

  2. Ultrafast secondary emission X-ray imaging detectors: A possible application to TRD

    NASA Astrophysics Data System (ADS)

    Akkerman, A.; Breskin, A.; Chechik, R.; Elkind, V.; Gibrekhterman, A.; Majewski, S.

    1992-05-01

    Fast, high-accuracy X-ray imaging at high photon flux can be achieved by coupling thin solid convertors to gaseous electron multipliers operating at low gas pressures. Secondary electrons emitted from the convertor foil are multiplied in several successive amplification elements. The obvious advantages of solid X-ray convertors, as compared to gaseous conversion, are the production of parallax-free images and the fast (subnanosecond) response. These X-ray detectors have many potential applications in basic and applied research. Of particular interest is the possibility of efficient and ultrafast high-resolution imaging of transition radiation (TR), with a reduced dE/dx background. We present experimental results on the operation of secondary emission X-ray (SEX) detectors, their detection efficiency, localization and time resolution. The experimental work is accompanied by mathematical modelling and computer simulation of transition radiation detectors (TRDs) based on CsI TR convertors.

  3. A novel plasmonic interferometry and the potential applications

    NASA Astrophysics Data System (ADS)

    Ali, J.; Pornsuwancharoen, N.; Youplao, P.; Aziz, M. S.; Chiangga, S.; Jaglan, J.; Amiri, I. S.; Yupapin, P.

    2018-03-01

    In this article, we propose a plasmonic interferometry concept and give its analytical details. By analogy with conventional optical interferometry, the interference mobility visibility (fringe visibility) can be observed and simply calculated from the relationship between the electric field and the electron mobility. The surface plasmons in the sensing arm of the Michelson interferometer are generated by stacked silicon-graphene-gold layers, which allows the spatial resolution of light beams to be characterized in terms of electron mobility down to 100-nm scales, with measured coherence lengths as low as ∼100 nm for an incident wavelength of 1550 nm. We demonstrate a compact plasmonic interferometer that can be applied to the measurement of electron mean free paths, enabling precise, high-resolution mean free path measurement and sensing applications. The system also provides practical simulation device parameters that can be fabricated and tested on an experimental platform.

  4. Investigations in thunderstorm energetics using satellite instrumentation and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Brunner, K. N.; Bitzer, P. M.

    2017-12-01

    The electrical energy dissipated by lightning is a fundamental quantity in lightning physics and may be used in severe weather applications. However, the electrical energy, flash area/extent and spectral energy density (radiance) are all influenced by the geometry of the lightning channel. We present details of a Monte Carlo based model simulating the optical emission from lightning and compare with observations. Using time-of-arrival techniques and the electric field change measurements from the Huntsville Alabama Marx Meter Array (HAMMA), the 4D lightning channel is reconstructed. The located sources and the lightning channel emit light, calibrated by the ground-based electric field, that scatters within the model until it is absorbed or a cloud boundary is reached. At cloud top, the simulation is gridded as LIS pixels (events) and contiguous events (groups). The radiance is obtained via the LIS calibration, and the estimated lightning electrical energy is calculated at the LIS/GLM time resolution. Previous Monte Carlo simulations have relied on a simplified lightning channel and scattering medium. This work treats the cloud as a stratified graupel/ice medium that is inhomogeneous at the flash scale. The impact of cloud inhomogeneity on the scattered optical emission at cloud top, at the time resolution of LIS and GLM, is also considered. The simulation results and energy metrics provide an estimation of the electrical energy using GLM and LIS on the International Space Station (ISS-LIS).
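
    A toy version of the kind of Monte Carlo photon transport described, not the authors' model: photons take exponentially distributed free paths in a vertically stratified cloud, scatter isotropically, are absorbed with a small per-event probability, and their exit position and path delay at cloud top are recorded. Layer boundaries, extinction coefficients, and the single-scatter albedo are illustrative, and free paths ignore layer crossings within a single step.

    ```python
    import numpy as np

    def mc_cloud_top_photons(n_photons, source_z, cloud_top, layer_edges, beta_layers,
                             albedo=0.996, max_scatters=5000, c=3.0e8, rng=None):
        """Toy Monte Carlo of optical photons multiply scattering in a stratified
        cloud. layer_edges are ascending layer boundaries [m]; beta_layers holds
        the extinction coefficient [1/m] of each layer. Returns the horizontal
        exit positions [m] and path delays [s] of photons leaving cloud top."""
        rng = np.random.default_rng() if rng is None else rng

        def random_direction():
            mu = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * np.pi)
            s = np.sqrt(1.0 - mu * mu)
            return np.array([s * np.cos(phi), s * np.sin(phi), mu])

        exits = []
        for _ in range(n_photons):
            pos = np.array([0.0, 0.0, source_z])
            direc = random_direction()
            path = 0.0
            for _ in range(max_scatters):
                layer = np.searchsorted(layer_edges, pos[2]) - 1
                if layer < 0 or layer >= len(beta_layers):
                    break                                   # photon left the cloud slab
                step = rng.exponential(1.0 / beta_layers[layer])
                pos = pos + step * direc
                path += step
                if pos[2] >= cloud_top:
                    exits.append((pos[0], pos[1], path / c))
                    break
                if rng.uniform() > albedo:
                    break                                   # absorbed
                direc = random_direction()
        return np.array(exits)

    # Illustrative run: source 2 km below a cloud top at 8 km, two thin layers
    hits = mc_cloud_top_photons(
        500, source_z=6000.0, cloud_top=8000.0,
        layer_edges=np.array([2000.0, 5000.0, 8000.0]),
        beta_layers=np.array([2e-3, 1e-3]))
    ```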

  5. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show 1) estimated transition probabilities agree with simulated values and 2) using the SMM with estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
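
    A minimal sketch of the spatial Markov model itself, i.e. the forward model whose parameters the study estimates from BTCs: particles advance in fixed space increments, and the velocity class used for each step is drawn from a transition matrix conditioned on the class of the previous step. The two-class transition matrix and velocities below are invented, and a real application would also weight the initial classes by flux.

    ```python
    import numpy as np

    def smm_breakthrough(trans_mat, class_velocities, n_particles, n_steps, dx, rng=None):
        """Correlated random walk in space: each particle takes n_steps steps of
        length dx; the velocity class of each step is sampled from the row of
        trans_mat belonging to the previous class. Returns arrival times at the
        plane located n_steps * dx downstream."""
        rng = np.random.default_rng() if rng is None else rng
        n_classes = len(class_velocities)
        states = rng.integers(0, n_classes, n_particles)     # uniform initial classes
        times = dx / class_velocities[states]
        for _ in range(n_steps - 1):
            u = rng.random(n_particles)
            cdf = np.cumsum(trans_mat[states], axis=1)       # per-particle row CDF
            states = (u[:, None] < cdf).argmax(axis=1)       # inverse-CDF sampling
            times += dx / class_velocities[states]
        return times

    # Two velocity classes with strong persistence (high velocity correlation)
    T = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    v = np.array([0.1, 1.0])
    arrivals = smm_breakthrough(T, v, n_particles=10000, n_steps=50, dx=1.0)
    ```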

  6. Simulating Visible/Infrared Imager Radiometer Suite Normalized Difference Vegetation Index Data Using Hyperion and MODIS

    NASA Technical Reports Server (NTRS)

    Ross, Kenton W.; Russell, Jeffrey; Ryan, Robert E.

    2006-01-01

    The success of MODIS (the Moderate Resolution Imaging Spectroradiometer) in creating unprecedented, timely, high-quality data for vegetation and other studies has created great anticipation for data from VIIRS (the Visible/Infrared Imager Radiometer Suite). VIIRS will be carried onboard the joint NASA/Department of Defense/National Oceanic and Atmospheric Administration NPP (NPOESS (National Polar-orbiting Operational Environmental Satellite System) Preparatory Project). Because the VIIRS instruments will have lower spatial resolution than the current MODIS instruments (400 m versus 250 m at nadir for the channels used to generate Normalized Difference Vegetation Index data), scientists need the answer to this question: how will the change in resolution affect vegetation studies? By using simulated VIIRS measurements, this question may be answered before the VIIRS instruments are deployed in space. Using simulated VIIRS products, the U.S. Department of Agriculture and other operational agencies can then modify their decision support systems appropriately in preparation for receipt of actual VIIRS data. VIIRS simulations and validations will be based on the ART (Application Research Toolbox), an integrated set of algorithms and models developed in MATLAB (Registered Trademark) that enables users to perform a suite of simulations and statistical trade studies on remote sensing systems. Specifically, the ART provides the capability to generate simulated multispectral image products, at various scales, from high-spatial-resolution hyperspectral and/or multispectral image products. The ART uses acquired ("real") or synthetic datasets, along with sensor specifications, to create simulated datasets. For existing multispectral sensor systems, the simulated data products are used for comparison, verification, and validation of the simulated system's actual products. VIIRS simulations will be performed using Hyperion and MODIS datasets. The hyperspectral and hyperspatial properties of Hyperion data will be used to produce simulated MODIS and VIIRS products. Hyperion-derived MODIS data will be compared with near-coincident MODIS collects to validate both spectral and spatial synthesis, which will ascertain the accuracy of converting from MODIS to VIIRS. MODIS-derived VIIRS data are needed for global coverage and for the generation of time series for regional and global investigations. These types of simulations will have errors associated with aliasing for some scene types. This study will help quantify these errors and will identify cases where high-quality, MODIS-derived VIIRS data will be available.
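
    As a rough stand-in for the kind of spatial synthesis the ART performs (a real simulation would apply the sensor's spatial response function and spectral band synthesis rather than simple block means), the sketch below aggregates fine-resolution red and near-infrared reflectances and compares NDVI computed before and after aggregation. Array sizes, reflectance ranges, and the aggregation factor are invented.

    ```python
    import numpy as np

    def block_average(band, factor):
        """Aggregate a fine-resolution band by simple block averaging; a crude
        surrogate for degrading the image to a coarser sensor footprint."""
        h = (band.shape[0] // factor) * factor
        w = (band.shape[1] // factor) * factor
        blocks = band[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    def ndvi(red, nir):
        return (nir - red) / (nir + red + 1e-9)

    # Compare NDVI at fine scale against NDVI after aggregating the reflectances
    rng = np.random.default_rng(3)
    red = rng.uniform(0.03, 0.12, (512, 512))
    nir = rng.uniform(0.25, 0.45, (512, 512))
    ndvi_fine = ndvi(red, nir)
    ndvi_coarse = ndvi(block_average(red, 8), block_average(nir, 8))
    # Aggregating reflectances first and then forming the ratio is not the same
    # as averaging fine-scale NDVI, one source of scale-dependent differences.
    ```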

  7. Simulation of climatology and Interannual Variability of Spring Persistent Rains by Meteorological Research Institute Model: Impacts of different horizontal resolutions

    NASA Astrophysics Data System (ADS)

    Li, Puxi; Zhou, Tianjun; Zou, Liwei

    2016-04-01

    The authors evaluated the performance of the Meteorological Research Institute (MRI) AGCM3.2 model in simulating the climatology and interannual variability of the Spring Persistent Rains (SPR) over southeastern China. The possible impacts of different horizontal resolutions were also investigated based on experiments with three different horizontal resolutions (i.e., 120, 60, and 20 km). The model reasonably reproduces the main rainfall center over southeastern China in boreal spring at all three resolutions. In comparison with the 120 km simulation, the 60 km and 20 km simulations are superior in simulating the rainfall centers anchored by the Nanling-Wuyi Mountains, but overestimate rainfall intensity. A water vapor budget diagnosis showed that the 60 km and 20 km simulations tend to overestimate the water vapor convergence over southeastern China, which leads to wet biases. Regarding the interannual variability of SPR, the model reasonably reproduces the anomalous lower-tropospheric anticyclone in the western North Pacific (WNPAC) and the positive precipitation anomalies over southeastern China in El Niño decaying spring. Compared with the 120 km resolution, the large positive biases are substantially reduced in the mid- and high-resolution models, which evidently improve the simulation of horizontal moisture advection in El Niño decaying spring. We highlight the importance of developing high-resolution climate models, as they could potentially improve the simulated climatology and interannual variability of SPR.

  8. Quadrant CFD Analysis of a Mixer-Ejector Nozzle for HSCT Applications

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.; Georgiadis, Nicholas J.; Wolter, John D.

    2005-01-01

    This study investigates the sidewall effect on flow within the mixing duct downstream of a lobed mixer-ejector nozzle. Simulations which model only one half-chute width of the ejector array are compared with those which model one complete quadrant of the nozzle geometry and with available experimental data. These solutions demonstrate the applicability of the half-chute technique to model the flowfield far away from the sidewall and the necessity of a full-quadrant simulation to predict the formation of a low-energy flow region near the sidewall. The quadrant solutions are further examined to determine the cause of this low-energy region, which reduces the amount of mixing and lowers the thrust of the nozzle. Grid resolution and different grid topologies are also examined. Finally, an assessment of the half-chute and quadrant approaches is made to determine the ability of these simulations to provide qualitative and/or quantitative predictions for this type of complex flowfield.

  9. Modeling of Turbulent Free Shear Flows

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.; DeBonis, James R.; Georgiadis, Nicholas J.

    2013-01-01

    The modeling of turbulent free shear flows is crucial to the simulation of many aerospace applications, yet often receives less attention than the modeling of wall boundary layers. Thus, while turbulence model development in general has proceeded very slowly in the past twenty years, progress for free shear flows has been even slower. This paper highlights some of the fundamental issues in modeling free shear flows for propulsion applications, presents a review of past modeling efforts, and identifies areas where further research is needed. Among the topics discussed are differences between planar and axisymmetric flows, developing versus self-similar regions, the effect of compressibility and the evolution of compressibility corrections, the effect of temperature on jets, and the significance of turbulent Prandtl and Schmidt numbers for reacting shear flows. Large eddy simulation greatly reduces the amount of empiricism in the physical modeling, but is sensitive to a number of numerical issues. This paper includes an overview of the importance of the numerical scheme, mesh resolution, boundary treatment, sub-grid modeling, and filtering in conducting a successful simulation.

  10. Torso-Tank Validation of High-Resolution Electrogastrography (EGG): Forward Modelling, Methodology and Results.

    PubMed

    Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng

    2018-04-27

    Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers) in addition to normal test data. Each case was recorded and compared to the previously-presented simulated results. Qualitative and quantitative analyses were performed to define the accuracy showing [Formula: see text] 1.8% difference, [Formula: see text] 0.99 correlation, and [Formula: see text] 0.04 normalised RMS error between experimental and simulated findings. These results reaffirm previous findings and these methods in unison therefore present a promising morphological-based methodology for advancing the understanding and clinical applications of EGG.
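
    The kind of simulation-versus-experiment comparison reported above can be sketched in a few lines of Python: a percentage difference, a Pearson correlation, and a normalised RMS error between two signals. The exact definitions behind the quoted figures are not restated in the record, so the normalisation by signal range below, and the synthetic slow-wave signals, are assumptions for illustration only.

      import numpy as np

      def compare(experimental, simulated):
          experimental = np.asarray(experimental, float)
          simulated = np.asarray(simulated, float)
          # Range-normalised metrics; the paper's exact normalisation is assumed here.
          pct_diff = 100.0 * np.mean(np.abs(simulated - experimental)) / np.ptp(experimental)
          corr = np.corrcoef(experimental, simulated)[0, 1]
          nrmse = np.sqrt(np.mean((simulated - experimental) ** 2)) / np.ptp(experimental)
          return pct_diff, corr, nrmse

      t = np.linspace(0, 60, 600)                    # one minute of synthetic slow-wave activity
      exp_sig = np.sin(2 * np.pi * 0.05 * t)         # ~3 cycles/min slow wave (illustrative)
      sim_sig = exp_sig + 0.02 * np.random.default_rng(1).standard_normal(t.size)
      print(compare(exp_sig, sim_sig))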

  11. Computational tissue volume reconstruction of a peripheral nerve using high-resolution light-microscopy and reconstruct.

    PubMed

    Gierthmuehlen, Mortimer; Freiman, Thomas M; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T T

    2013-01-01

    The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the positions and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model addresses the first aim of our "Virtual workbench" project, to establish a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation and technical limitations are described, and the steps involved in the transition into a simulation model are presented. The results showed that the positions and trajectories of the myelinated axons were traced and virtualized using our technique, and small nerves could be reliably modeled based on light microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community.

  12. Computational Tissue Volume Reconstruction of a Peripheral Nerve Using High-Resolution Light-Microscopy and Reconstruct

    PubMed Central

    Gierthmuehlen, Mortimer; Freiman, Thomas M.; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T. T.

    2013-01-01

    The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the positions and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model addresses the first aim of our “Virtual workbench” project, to establish a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation and technical limitations are described, and the steps involved in the transition into a simulation model are presented. The results showed that the positions and trajectories of the myelinated axons were traced and virtualized using our technique, and small nerves could be reliably modeled based on light microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community. PMID:23785485

  13. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson ["Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972)] and L. B. Lucy ["An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974)]. Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
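
    The Richardson-Lucy / maximum-likelihood update is a standard multiplicative iteration, shown below as a minimal one-dimensional Python sketch. The Gaussian point spread function, source positions, and Poisson noise level are synthetic placeholders, not data from the study.

      import numpy as np

      def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
          # Standard RL iteration: estimate <- estimate * (observed / (estimate * psf)) (x) psf_mirror
          estimate = np.full_like(observed, observed.mean())
          psf_mirror = psf[::-1]
          for _ in range(iterations):
              blurred = np.convolve(estimate, psf, mode="same")
              ratio = observed / (blurred + eps)
              estimate *= np.convolve(ratio, psf_mirror, mode="same")
          return estimate

      # Synthetic test: two point sources blurred by a Gaussian PSF plus Poisson noise.
      truth = np.zeros(128); truth[50] = 1.0; truth[60] = 0.7
      psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); psf /= psf.sum()
      observed = np.convolve(truth, psf, mode="same")
      observed = np.random.default_rng(2).poisson(observed * 200) / 200.0
      restored = richardson_lucy(observed, psf, iterations=100)
      print("restored profile around the sources:", restored[45:65].round(2))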

  14. Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit

    NASA Astrophysics Data System (ADS)

    Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi

    2017-02-01

    In this study, we aimed to develop a GATE model for the simulation of the Ray-Scan 64 PET scanner and to model its performance characteristics. A detailed implementation of the system geometry and physical processes was included in the simulation model. We then modeled the performance characteristics of the Ray-Scan 64 PET system for the first time, based on the National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols, and validated the model against experimental measurements, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead time module was investigated to simulate the counting rate performance. Overall, the results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model for evaluating the major performance characteristics of the Ray-Scan 64 PET system. It provides a useful tool for a wide range of research applications.

  15. Integrating TITAN2D Geophysical Mass Flow Model with GIS

    NASA Astrophysics Data System (ADS)

    Namikawa, L. M.; Renschler, C.

    2005-12-01

    TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, the use of a GIS for implementing simulations of complex, physically-based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open source GIS with published data formats; thus the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every cell to be computed. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensor image. Data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data. From several existing methods to estimate slope and curvature from elevation, the selected one is based on a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (using only 4 neighbor cells) and proven to perform better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions, applying bilinear interpolation when elevation, slope and curvature are required at a resolution higher (smaller cell size) than the original, and using a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information the nearest-neighbor method is used, since interpolation of categorical data has no meaning. The low-fidelity requirement of visualization allows the nearest-neighbor method to be used for texture as well. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers, and its performance is only slightly inferior to more computationally expensive methods such as bicubic interpolation and kriging.
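
    The bilinear rule described above (the value at a point is the distance-weighted average of the four nearest cell centres) is easy to make concrete. The Python sketch below is not the GRASS/TITAN2D API; the grid indexing convention (cell centres at integer coordinates, unit spacing) is an assumption for illustration.

      import numpy as np

      def bilinear(grid, x, y):
          """Interpolate `grid` (indexed [row, col] = [y, x]) at fractional (x, y)."""
          x0, y0 = int(np.floor(x)), int(np.floor(y))
          x1 = min(x0 + 1, grid.shape[1] - 1)
          y1 = min(y0 + 1, grid.shape[0] - 1)
          fx, fy = x - x0, y - y0
          top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]
          bottom = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]
          return (1 - fy) * top + fy * bottom

      elevation = np.array([[10.0, 12.0],
                            [14.0, 18.0]])          # four cell-centre elevations (m)
      print(bilinear(elevation, 0.5, 0.5))           # 13.5, the average of the four centres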

  16. Fine-scale application of WRF-CAM5 during a dust storm episode over East Asia: Sensitivity to grid resolutions and aerosol activation parameterizations

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin

    2018-03-01

    The advanced online-coupled meteorology and chemistry model WRF-CAM5 has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36, 12, and 4 km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance, investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations, and examine aerosol-cloud interactions and their impacts on air quality. A comprehensive evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model predicts major meteorological variables well, such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation, across the different resolutions, with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions, such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP), due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolution, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations using an alternative aerosol activation scheme (the Fountoukis and Nenes (FN) series scheme) with the default (AG) scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation. Sensitivity simulations using the FN series scheme also have large impacts on radiation, T2, precipitation, and air quality (e.g., decreasing O3) through complex aerosol-radiation-cloud-chemistry feedbacks. The inclusion of adsorptive activation of dust particles in the FN series scheme has similar impacts on the meteorology and air quality, but to a lesser extent compared to the differences between the FN series and AG schemes. Relative to those overall differences, adsorptive activation of dust particles can contribute significantly to the increase of total CDNC (∼45%) during dust storm events, indicating its importance in modulating regional climate over East Asia.
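
    The normalized mean bias quoted above ("within ±15%") is a standard evaluation statistic; a minimal Python sketch under the usual definition NMB = 100 * sum(model - obs) / sum(obs) is given below. The exact formula and the numbers are not taken from the paper; the sample temperatures are illustrative.

      import numpy as np

      def normalized_mean_bias(model, obs):
          # Common definition; the paper's precise convention is assumed, not quoted.
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          return 100.0 * (model - obs).sum() / obs.sum()

      obs_t2 = np.array([288.1, 290.4, 285.7, 292.0])     # observed 2-m temperature (K), illustrative
      mod_t2 = np.array([288.9, 289.8, 286.5, 293.1])     # simulated 2-m temperature (K), illustrative
      print(f"NMB = {normalized_mean_bias(mod_t2, obs_t2):.2f}%")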

  17. The simulation of stratospheric water vapor in the NH summer monsoon regions in a suite of WACCM models

    NASA Astrophysics Data System (ADS)

    Wang, X.; Wu, Y.; Huang, Y.; Tilmes, S.

    2016-12-01

    Water vapor maxima are found in the upper troposphere and lower stratosphere (UTLS) over the Asian and North American monsoon regions during Northern Hemisphere (NH) summer months. High concentrations of stratospheric water vapor are associated with the upper-level anticyclonic circulation, and they play an important role in the radiative forcing of the climate system. However, discrepancies in the simulation of stratospheric water vapor are found among different models. In this study, we use both observational data (Aura Microwave Limb Sounder (MLS) satellite observations and the Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2)) and chemistry-climate model output from different configurations of the Whole Atmosphere Community Climate Model (WACCM), including the standard configuration of WACCM, WACCM L110, specified-chemistry (SC) WACCM and specified-dynamics (SD) WACCM. We find that WACCM L110, with finer vertical resolution, better simulates the stratospheric water vapor maxima over the summer monsoon regions. To better understand the mechanism, we examine the simulated temperature at around 100 hPa, since the temperature there acts as a dehydration control: the warmer the temperature, the wetter the stratosphere. We find that both WACCM L110 and SD-WACCM better simulate the temperature at 100 hPa compared to MERRA-2. This suggests that improving model vertical resolution and dynamical processes in the UTLS is crucial for simulating stratospheric water vapor concentrations.

  18. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    NASA Astrophysics Data System (ADS)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures, as well as of the particle sticking probability, on the neutral particle flux.

  19. Using adaptive-mesh refinement in SCFT simulations of surfactant adsorption

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Kumar, Rajeev; Jamroz, Ben; Crockett, Robert; Pletzer, Alex

    2013-03-01

    Adsorption of surfactants at interfaces is relevant to many applications such as detergents, adhesives, emulsions and ferrofluids. Atomistic simulations of interface adsorption are challenging due to the difficulty of modeling the wide range of length scales in these problems: the thin interface region in equilibrium with a large bulk region that serves as a reservoir for the adsorbed species. Self-consistent field theory (SCFT) has been extremely useful for studying the morphologies of dense block copolymer melts. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. However, even SCFT methods can be difficult to apply to systems in which small spatial regions require finer resolution than most of the simulation grid (e.g., interface adsorption and confinement). We will present results on interface adsorption simulations using PolySwift++, an object-oriented polymer SCFT simulation code, aided by the Tech-X Chompst library, which enables block-structured AMR calculations via PETSc.

  20. Using High Resolution Satellite Precipitation fields to Assess the Impacts of Climate Change on the Santa Cruz and San Pedro River Basins

    NASA Astrophysics Data System (ADS)

    Robles-Morua, A.; Vivoni, E.; Rivera-Fernandez, E. R.; Dominguez, F.; Meixner, T.

    2013-05-01

    Hydrologic modeling using high spatiotemporal resolution satellite precipitation products in the southwestern United States and northwest Mexico is important given the sparse nature of available rain gauges. In addition, the bimodal distribution of annual precipitation presents a challenge, as differential climate impacts during the winter and summer seasons are not currently well understood. In this work, we focus on hydrological comparisons using rainfall forcing from a satellite-based product, downscaled GCM precipitation estimates and available ground observations. The simulations are being conducted in the Santa Cruz and San Pedro river basins along the Arizona-Sonora border at high spatiotemporal resolutions (~100 m and ~1 hour). We use a distributed hydrologic model, known as the TIN-based Real-time Integrated Basin Simulator (tRIBS), to generate simulated hydrological fields under historical (1991-2000) and climate change (2031-2040) scenarios obtained from an application of the Weather Research and Forecasting (WRF) model. Using the distributed model, we transform the meteorological scenarios at 10-km, hourly resolution into predictions of the annual water budget, seasonal land surface fluxes and individual hydrographs of flood and recharge events. We compare the model outputs and rainfall fields of the WRF products against the forcing from the North American Land Data Assimilation System (NLDAS) and available ground observations from the National Climatic Data Center (NCDC) and the Arizona Meteorological Network (AZMET). For this contribution, we selected two full years in the historical period and in the future scenario that represent wet and dry conditions for each decade. Given the size of the two basins, we rely on a high performance computing platform and a parallel domain discretization, with higher resolutions maintained at experimental catchments in each river basin. Model simulations utilize best-available data across the Arizona-Sonora border on topography, land cover and soils obtained from analysis of remotely-sensed imagery and government databases. In addition, for the historical period, we build confidence in the model simulations through comparisons with streamflow estimates in the region. The model comparisons during the historical and future periods will yield a first-of-its-kind assessment of the impacts of climate change on the hydrology of two large semiarid river basins of the southwestern United States.

  1. Studying Spatial Resolution of CZT Detectors Using Sub-Pixel Positioning for SPECT

    NASA Astrophysics Data System (ADS)

    Montémont, Guillaume; Lux, Silvère; Monnet, Olivier; Stanchina, Sylvain; Verger, Loïck

    2014-10-01

    CZT detectors are the basic building block of a variety of new SPECT systems. Their modularity allows adapting the system architecture to specific applications such as cardiac, breast, brain or small-animal imaging. In semiconductors, a high number of electron-hole pairs is produced by a single interaction. This direct conversion process allows better energy and spatial resolution than usual scintillation detectors based on NaI(Tl). However, it often remains unclear whether SPECT imaging can really benefit from that performance gain. We investigate, by simulation and experiment, the system performance of a detection module based on 5 mm thick CZT with a segmented anode having a 2.5 mm pitch. This pitch allows easy assembly of the crystal on the readout board and limits the space occupied by electronics without significantly degrading energy and spatial resolution.

  2. Mesoscale Numerical Simulations of the IAS Circulation

    NASA Astrophysics Data System (ADS)

    Mooers, C. N.; Ko, D.

    2008-05-01

    Real-time nowcasts and forecasts of the IAS circulation have been made for several years with mesoscale resolution using the Navy Coastal Ocean Model (NCOM) implemented for the IAS. It is commonly called IASNFS and is driven by the lower resolution Global NCOM on the open boundaries, synoptic atmospheric forcing obtained from the Navy Global Atmospheric Prediction System (NOGAPS), and assimilated satellite-derived sea surface height anomalies and sea surface temperature. Here, examples of the model output are demonstrated; e.g., Gulf of Mexico Loop Current eddy shedding events and the meandering Caribbean Current jet and associated eddies. Overall, IASNFS is ready for further analysis, application to a variety of studies, and downscaling to even higher resolution shelf models. Its output fields are available online through NOAA's National Coastal Data Development Center (NCDDC), located at the Stennis Space Center.

  3. Coherent total internal reflection dark-field microscopy: label-free imaging beyond the diffraction limit.

    PubMed

    von Olshausen, Philipp; Rohrbach, Alexander

    2013-10-15

    Coherent imaging is barely applicable in life-science microscopy due to multiple interference artifacts. Here, we show how these interferences can be used to improve image resolution and contrast. We present a dark-field microscopy technique with evanescent illumination via total internal reflection that delivers high-contrast images of coherently scattering samples. By incoherent averaging of multiple coherent images illuminated from different directions we can resolve image structures that remain unresolved by conventional (incoherent) fluorescence microscopy. We provide images of 190 nm beads revealing resolution beyond the diffraction limit and slightly increased object distances. An analytical model is introduced that accounts for the observed effects and which is confirmed by numerical simulations. Our approach may be a route to fast, label-free, super-resolution imaging in live-cell microscopy.

  4. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  5. PDF added value of a high resolution climate simulation for precipitation

    NASA Astrophysics Data System (ADS)

    Soares, Pedro M. M.; Cardoso, Rita M.

    2015-04-01

    General Circulation Models (GCMs) are suitable for studying the global atmospheric system, its evolution and its response to changes in external forcing, namely to increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer scale features of the atmospheric flow related to complex topography, coastal processes and boundary layer processes, and higher resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs); they are widely used to downscale GCM results for many regions of the globe and are able to capture physically consistent regional and local circulations. Most RCM evaluations rely on the comparison of their results with observations, either from weather station networks or from regular gridded datasets, revealing the ability of RCMs to describe local climatic properties and, most of the time, assuming their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs when compared with the results of the driving models is usually named added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies, for different applications and processes, to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs for some properties or regions, and also the opposite, highlighting that RCMs may add value to GCM results, but improvements depend basically on the type of application, model setup, atmospheric property and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing the precipitation PDFs when compared to observations. Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high resolution simulation with the WRF model. Results are from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, an outer one at 27 km and an inner one at 9 km, forced by ERA-Interim. The observational data used range from rain gauge precipitation records to regular gridded datasets of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese gridded precipitation dataset at 0.2° × 0.2°, derived from observed daily rain gauge precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher resolution simulation, regarding both the full PDF and the extremes. The method has high potential to be applied to other simulation exercises and to evaluate other variables.
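
    A common PDF-overlap skill score, the sum over bins of the minimum of the modelled and observed relative frequencies, illustrates the kind of measure referred to above; an optional per-bin weight can emphasise the tails. The score and weighting used by the authors are not restated in the record, so the Python sketch below is an assumption for illustration, with synthetic gamma-distributed daily precipitation.

      import numpy as np

      def pdf_skill(model, obs, bins, weights=None):
          # Overlap of the two binned relative-frequency distributions, optionally weighted.
          pm, _ = np.histogram(model, bins=bins)
          po, _ = np.histogram(obs, bins=bins)
          pm = pm / pm.sum()
          po = po / po.sum()
          w = np.ones_like(pm, dtype=float) if weights is None else np.asarray(weights, float)
          return float(np.sum(w * np.minimum(pm, po)) / np.sum(w * po))

      rng = np.random.default_rng(3)
      obs_precip = rng.gamma(shape=0.6, scale=5.0, size=3650)     # synthetic daily precipitation (mm)
      rcm_precip = rng.gamma(shape=0.7, scale=4.5, size=3650)     # synthetic downscaled precipitation (mm)
      bins = np.arange(0, 101, 1.0)
      print("PDF skill:", pdf_skill(rcm_precip, obs_precip, bins))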

  6. High resolution, MRI-based, segmented, computerized head phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubal, I.G.; Harrell, C.R.; Smith, E.O.

    1999-01-01

    The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers an improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.

  7. Modeling Global Atmospheric CO2 Fluxes and Transport Using NASA MERRA Reanalysis Data

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Kawa, S. R.; Collatz, G. J.

    2010-12-01

    We present our first results of CO2 surface biosphere fluxes and global atmospheric CO2 transport using NASA's new MERRA reanalysis data. MERRA is the Modern-Era Retrospective Analysis for Research and Applications, based on the Goddard Global Modeling and Assimilation Office GEOS-5 data assimilation system. After some application testing and analysis, we have generated biospheric CO2 fluxes at 3-hourly temporal resolution from an updated version of the CASA carbon cycle model using the 1x1.25-degree reanalysis data. The experiment covers a period of 9 years from 2000 to 2008. The effects of US Midwest crop (largely corn and soy) carbon uptake and removal by harvest are explicitly included in this version of CASA. Across the agricultural regions of the Midwest US, USDA crop yield data are used to scale vegetation fluxes, producing a strong sink in the growing season and a comparatively weaker source from respiration after harvest. Comparisons of the new fluxes to previous ones generated using GEOS-4 data are provided. The Parameterized Chemistry/Transport Model (PCTM) is then used with the analyzed meteorology in offline CO2 transport. In the simulation of CO2 transport, we have a higher vertical resolution from MERRA (the lowest 56 of 72 levels are used in our simulation). A preliminary analysis of the CO2 simulation results is carried out, including diurnal, seasonal and latitudinal variability. We make comparisons of our simulation to continuous CO2 analyzer sites, especially those in agricultural regions. The results show that the model captures reasonably well the observed synoptic variability due to transport changes and biospheric fluxes.

  8. Assimilation of high resolution satellite imagery into the 3D-CMCC forest ecosystem model

    NASA Astrophysics Data System (ADS)

    Natali, S.; Collalti, A.; Candini, A.; Della Vecchia, A.; Valentini, R.

    2012-04-01

    The use of satellite observations for the accurate monitoring of the terrestrial biosphere has been pursued since the very early stage of remote sensing applications. The possibility to observe the ground surface at different wavelengths and with different observation modes (namely active and passive observations) has given the scientific community an invaluable tool for the observation of wide areas with a resolution down to the single tree. On the other hand, the continuous development of forest ecosystem models has made it possible to perform simulations of complex ("natural") forest scenarios to evaluate forest status, forest growth and future dynamics. Both remote sensing and modelling forest assessment methods have advantages and disadvantages that could be overcome by the adoption of an integrated approach. In the framework of the European Space Agency project KLAUS, high resolution optical satellite data have been integrated/assimilated into a forest ecosystem model (named 3D-CMCC) specifically developed for multi-species, multi-age forests. 3D-CMCC permits the simulation of forest areas with different forest layers, with different trees of different ages at the same point. Moreover, the model permits the simulation of management activities in the forest, thus evaluating the carbon stock evolution following a specific management scheme. The model has been modified to include satellite data at 10 m resolution, permitting the use of directly measured information and adding to the model the real phenological cycle of each simulated point. Satellite images have been collected by the JAXA ALOS-AVNIR-2 sensor. The integration scheme identifies a spatial domain in which each pixel is characterised by a forest structure (species, ages, soil parameters), meteo-climatological parameters and satellite-estimated Leaf Area Index. The resulting software package (3D-CMCC-SAT) is built around 3D-CMCC: 2D/3D input datasets are processed by iterating over each point of the analysed domain to create a set of monthly/yearly output maps. The integrated approach has been tested on the Parco Nazionale dei Monti Sibillini, Italy. The high correlation between observed and computed data can be considered statistically meaningful, and hence the model can be deemed a good predictor both at high resolution and for short simulation periods. Moreover, the coupling of high-resolution satellite data and field information as input has shown that these data can be used in 3D-CMCC Forest Model runs. These data can also be successfully used to simulate the main physiological processes at regional scale and to produce, in good accordance with measured and literature data, reliable output for better investigating forest growth, dynamics and carbon stock.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrnstein, Aaron R.

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.

  10. A novel representation of groundwater dynamics in large-scale land surface modelling

    NASA Astrophysics Data System (ADS)

    Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan

    2017-04-01

    Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff generation mechanisms. However, contemporary Land Surface Models (LSMs) generally consider isolated soil columns and a free-drainage lower boundary condition for simulating hydrology. This is mainly because incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale applications (e.g., continental to global). Yet these simplifications neglect the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach for representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. This new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental scale.

  11. Laboratory simulation of high-frequency GPR responses of damaged tunnel liners

    NASA Astrophysics Data System (ADS)

    Siggins, A. F.; Whiteley, Robert J.

    2000-04-01

    Concrete lined tunnels and pipelines commonly suffer from damage due to subsidence or poor drainage in the surrounding soils, corrosion of reinforcement if present, and acid vapor leaching of the lining. There is a need to conduct tunnel condition monitoring using non-destructive testing (NDT) methods on a regular basis in many buried installations, for example sewers and storm water drains. A wide variety of NDT methods have been employed in the past to monitor these linings, including closed circuit TV (CCTV) inspection, magnetic methods, and various electromagnetic and seismic methods. Ground penetrating radar (GPR) is a promising technique for this application; however, there are few systems currently available that can provide the high resolution imaging needed to test the lining. A recently developed Australian GPR system operating at 1400 MHz offers the potential to overcome many of these limitations while maintaining adequate resolution to the rear of the linings, which are typically less than 0.5 meters thick. The new high frequency GPR has a nominal resolution of 0.03 m at the center of the pulse bandwidth. This is a significant improvement over existing radars, with the possible exception of some horn-based systems. This paper describes the results of a laboratory study on a model tunnel lining using the new 1.4 GHz radar. The model simulated a concrete lining with various degrees of damage including heavily leached sections, voids and corroded reinforcing. The test results established that the new GPR was capable of imaging subtle variations in the concrete structure and that simulated damage could be detected throughout the liner depth. Furthermore, resolution was found to exceed 0.02 m, which was significantly better than expected.

  12. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; McManamay, Ryan A; Nagle, Nicholas N

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification
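
    The Monte-Carlo idea, propagating uncertainty in publicly available inputs into a distribution of demand, can be sketched as below in Python. The input distributions (households per block, per-household use) and their parameters are illustrative assumptions, not the paper's calibrated values.

      import numpy as np

      rng = np.random.default_rng(4)
      n_draws = 10_000

      # Uncertain inputs drawn from assumed distributions (illustrative only).
      households = rng.poisson(lam=120, size=n_draws)                       # households per block
      kwh_per_household = rng.lognormal(mean=np.log(900), sigma=0.25, size=n_draws)

      block_demand_mwh = households * kwh_per_household / 1000.0            # monthly block demand (MWh)

      lo, med, hi = np.percentile(block_demand_mwh, [5, 50, 95])
      print(f"block electricity demand: median {med:.1f} MWh (90% interval {lo:.1f}-{hi:.1f})")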

  13. Validation of GATE Monte Carlo simulations of the GE Advance/Discovery LS PET scanners.

    PubMed

    Schmidtlein, C Ross; Kirov, Assen S; Nehmeh, Sadek A; Erdi, Yusuf E; Humm, John L; Amols, Howard I; Bidaut, Luc M; Ganin, Alex; Stearns, Charles W; McDaniel, David L; Hamacher, Klaus A

    2006-01-01

    The recently developed GATE (GEANT4 Application for Tomographic Emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of PET images. The objective of this study is to validate a model within GATE of the General Electric (GE) Advance/Discovery Light Speed (LS) PET scanner. Our three-dimensional PET simulation model of the scanner consists of 12 096 detectors grouped into blocks, which are grouped into modules as per the vendor's specifications. The GATE results are compared to experimental data obtained in accordance with the National Electrical Manufacturers Association/Society of Nuclear Medicine (NEMA/SNM), NEMA NU 2-1994, and NEMA NU 2-2001 protocols. The respective phantoms are also accurately modeled, thus allowing us to simulate the sensitivity, scatter fraction, count rate performance, and spatial resolution. In-house software was developed to produce and analyze sinograms from the simulated data. With our model of the GE Advance/Discovery LS PET scanner, the ratio of the sensitivities with sources radially offset 0 and 10 cm from the scanner's main axis is reproduced to within 1% of measurements. Similarly, the simulated scatter fraction for the NEMA NU 2-2001 phantom agrees to within less than 3% of measured values (the measured scatter fractions are 44.8% and 40.9 +/- 1.4% and the simulated scatter fraction is 43.5 +/- 0.3%). The simulated count rate curves were made to match the experimental curves by using deadtimes as fit parameters. This resulted in deadtime values of 625 and 332 ns at the block and coincidence levels, respectively. The experimental peak true count rate of 139.0 kcps and the peak activity concentration of 21.5 kBq/cc were matched by the simulated results to within 0.5% and 0.1%, respectively. The simulated count rate curves also resulted in a peak NECR of 35.2 kcps at 10.8 kBq/cc, compared to 37.6 kcps at 10.0 kBq/cc from averaged experimental values. The spatial resolution of the simulated scanner matched the experimental results to within 0.2 mm.
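
    The noise-equivalent count rate quoted above is commonly defined as NECR = T^2 / (T + S + k*R), with trues T, scatters S, randoms R, and k = 1 or 2 depending on the randoms-correction method. Whether this study used k = 1 or 2, and the sample count rates below, are assumptions for illustration in the Python sketch; the numbers are not the paper's measured inputs.

      def necr(trues, scatters, randoms, k=2.0):
          # Standard NECR definition; k = 2 corresponds to delayed-window randoms subtraction.
          return trues ** 2 / (trues + scatters + k * randoms)

      # Illustrative count rates in kcps (not the measured values from the paper).
      print(f"NECR = {necr(trues=139.0, scatters=105.0, randoms=60.0):.1f} kcps")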

  14. Toward 10-km mesh global climate simulations

    NASA Astrophysics Data System (ADS)

    Ohfuchi, W.; Enomoto, T.; Takaya, K.; Yoshioka, M. K.

    2002-12-01

    An atmospheric general circulation model (AGCM) that runs very efficiently on the Earth Simulator (ES) was developed. The ES is a gigantic vector-parallel computer with a peak performance of 40 Tflops. The AGCM, named AFES (AGCM for the ES), was based on version 5.4.02 of an AGCM developed jointly by the Center for Climate System Research of the University of Tokyo and the Japanese National Institute for Environmental Studies. AFES was, however, totally rewritten in FORTRAN90 and MPI, while the original AGCM was written in FORTRAN77 and not capable of parallel computing. AFES achieved 26 Tflops (about 65% of the peak performance of the ES) at a resolution of T1279L96 (10-km horizontal resolution and 500-m vertical resolution in the middle troposphere to lower stratosphere). Some results of 10- to 20-day global simulations will be presented. At this moment, only short-term simulations are possible due to data storage limitations. Now that tens-of-teraflops computing has been achieved, petabyte-scale data storage is necessary to conduct climate-type simulations at this super-high global resolution. Some possibilities for future research topics in global super-high resolution climate simulations will be discussed. Some target topics are mesoscale structures and self-organization of the Baiu-Meiyu front over Japan, cyclogenesis over the North Pacific and typhoons around the Japan area. Improvement in local precipitation with increasing horizontal resolution will also be demonstrated.

  15. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications, where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on a Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation can be obtained compared with conventional multi-channel segmentation algorithms.

  16. Monte Carlo simulation of the resolution volume for the SEQUOIA spectrometer

    NASA Astrophysics Data System (ADS)

    Granroth, G. E.; Hahn, S. E.

    2015-01-01

    Monte Carlo ray tracing simulations of direct geometry spectrometers have been particularly useful in instrument design and characterization. However, these tools can also be useful for experiment planning and analysis. To this end, the McStas Monte Carlo ray tracing model of SEQUOIA, the fine-resolution Fermi chopper spectrometer at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL), has been modified to include the time-of-flight resolution sample and detector components. With these components, the resolution ellipsoid can be calculated for any detector pixel and energy bin of the instrument. The simulation is split into two pieces. First, the incident beamline up to the sample is simulated for 1 × 10^11 neutron packets (4 days on 30 cores). This provides a virtual source for the backend that includes the resolution sample and monitor components. Next, a series of detector and energy pixels are computed in parallel. It takes on the order of 30 s to calculate a single resolution ellipsoid on a single core. Python scripts have been written to transform the ellipsoid into the space of an oriented single crystal and to characterize the ellipsoid in various ways. Though it is still under development as a planning tool, we have successfully used this capability to provide the resolution function for convolution with theoretical models. Specifically, theoretical calculations of the spin waves in YFeO3 were compared to measurements taken on SEQUOIA. Though the overall features of the spectra can be explained while neglecting resolution effects, the variation in intensity of the modes is well described once the resolution is included. As this was a single sharp mode, the simulated half-intensity value of the resolution ellipsoid was used to provide the resolution width. A description of the simulation, its use, and paths forward for this technique will be discussed.

  17. Simulating the characteristics of tropical cyclones over the South West Indian Ocean using a Stretched-Grid Global Climate Model

    NASA Astrophysics Data System (ADS)

    Maoyi, Molulaqhooa L.; Abiodun, Babatunde J.; Prusa, Joseph M.; Veitch, Jennifer J.

    2018-03-01

    Tropical cyclones (TCs) are among the most devastating natural phenomena. This study examines the capability of a global climate model with grid stretching (CAM-EULAG, hereafter CEU) in simulating the characteristics of TCs over the South West Indian Ocean (SWIO). In the study, CEU is applied with a variable-increment global grid that has a fine horizontal grid resolution (0.5° × 0.5°) over the SWIO and coarser resolution (1° × 1° to 2° × 2.25°) over the rest of the globe. The simulation is performed for the period 1999-2010 and validated against the Joint Typhoon Warning Center (JTWC) best track data, Global Precipitation Climatology Project (GPCP) satellite data, and the ERA-Interim (ERAINT) reanalysis. CEU gives a realistic simulation of the SWIO climate and shows some skill in simulating the spatial distribution of TC genesis locations and tracks over the basin. However, there are some discrepancies between the observed and simulated climatic features over the Mozambique Channel (MC). Over the MC, CEU simulates a substantial cyclonic feature that produces a higher number of TCs than observed. The dynamical structure and intensities of the CEU TCs compare well with observations, though the model struggles to produce TCs with a pressure centre as deep as observed; the reanalysis has the same problem. The model captures the monthly variation of TC occurrence well but struggles to reproduce the interannual variation. The results of this study have application in improving and adapting CEU for seasonal forecasting over the SWIO.

  18. Genetic particle filter application to land surface temperature downscaling

    NASA Astrophysics Data System (ADS)

    Mechri, Rihab; Ottlé, Catherine; Pannekoucke, Olivier; Kallel, Abdelaziz

    2014-03-01

    Thermal infrared data are widely used for surface flux estimation, giving the possibility to assess water and energy budgets through land surface temperature (LST). Many applications require both high spatial resolution (HSR) and high temporal resolution (HTR), which are not presently available from space. It is therefore necessary to develop methodologies that use coarse-spatial/high-temporal-resolution LST remote-sensing products for a better monitoring of fluxes at appropriate scales. For that purpose, a data assimilation method was developed to downscale LST based on particle filtering. The basic tenet of our approach is to constrain LST dynamics, simulated at both HSR and HTR, through the optimization of aggregated temperatures at the coarse observation scale. Thus, a genetic particle filter (GPF) data assimilation scheme was implemented and applied to a land surface model which simulates the prior subpixel temperatures. First, the GPF downscaling scheme was tested on pseudo-observations generated for the landscape and climate of the study area (Crau-Camargue, France) for the year 2006. The GPF performance was evaluated against observation errors and temporal sampling. Results show that the GPF outperforms the prior model estimations. Finally, the GPF method was applied to Spinning Enhanced Visible and InfraRed Imager time series and evaluated against HSR data provided by an Advanced Spaceborne Thermal Emission and Reflection Radiometer image acquired on 26 July 2006. The temperatures of seven land cover classes present in the study area were estimated with root-mean-square errors of less than 2.4 K, which is a very promising result for downscaling LST satellite products.
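
    The constraint at the heart of such a scheme, weighting ensemble members (particles) of subpixel LST by how well their spatial aggregate matches the coarse observation and then resampling, can be sketched in Python as below. The Gaussian likelihood, observation error, simple mean aggregation, and multinomial resampling are illustrative choices, not the authors' exact GPF configuration.

      import numpy as np

      rng = np.random.default_rng(5)
      n_particles, n_subpixels = 200, 16          # 16 HSR pixels per coarse pixel (assumption)
      particles = 300.0 + rng.normal(0.0, 3.0, (n_particles, n_subpixels))   # prior subpixel LST (K)

      obs_coarse = 302.0                          # coarse-pixel LST observation (K), illustrative
      obs_sigma = 1.5                             # assumed observation error (K)

      aggregated = particles.mean(axis=1)         # simple aggregation to the coarse scale
      weights = np.exp(-0.5 * ((aggregated - obs_coarse) / obs_sigma) ** 2)
      weights /= weights.sum()

      # Multinomial resampling: particles whose aggregate fits the observation survive.
      idx = rng.choice(n_particles, size=n_particles, p=weights)
      posterior = particles[idx]
      print("prior coarse LST:", aggregated.mean().round(2),
            "posterior coarse LST:", posterior.mean(axis=1).mean().round(2))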

  19. The relative entropy is fundamental to adaptive resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreis, Karsten; Graduate School Materials Science in Mainz, Staudingerweg 9, 55128 Mainz; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but also help in setting up such simulations in an optimal manner.

  20. The relative entropy is fundamental to adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but also help in setting up such simulations in an optimal manner.
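
    For context, the relative entropy between the mapped atomistic distribution p and the CG model distribution q over some collective coordinate is S_rel = Σ p ln(p/q). A small, hypothetical numerical sketch (stand-in samples, not data from the paper) is given below.

      import numpy as np

      def relative_entropy(samples_at, samples_cg, bins=50):
          """Histogram-based estimate of S_rel = sum p*ln(p/q)*dx, in units of k_B."""
          lo = min(samples_at.min(), samples_cg.min())
          hi = max(samples_at.max(), samples_cg.max())
          p, edges = np.histogram(samples_at, bins=bins, range=(lo, hi), density=True)
          q, _ = np.histogram(samples_cg, bins=edges, density=True)
          dx = np.diff(edges)
          mask = (p > 0) & (q > 0)          # avoid log(0); assumes overlapping support
          return np.sum(p[mask] * np.log(p[mask] / q[mask]) * dx[mask])

      rng = np.random.default_rng(1)
      at = rng.normal(0.0, 1.0, 100_000)    # stand-in for mapped atomistic samples
      cg = rng.normal(0.0, 1.2, 100_000)    # stand-in for CG model samples
      print(relative_entropy(at, cg))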

  1. Sensitivity studies of high-resolution RegCM3 simulations of precipitation over the European Alps: the effect of lateral boundary conditions and domain size

    NASA Astrophysics Data System (ADS)

    Nadeem, Imran; Formayer, Herbert

    2016-11-01

    A suite of high-resolution (10 km) simulations was performed with the International Centre for Theoretical Physics (ICTP) Regional Climate Model (RegCM3) to study the effect of various lateral boundary conditions (LBCs), domain size, and intermediate domains on simulated precipitation over the Great Alpine Region. The boundary conditions used were the ECMWF ERA-Interim Reanalysis with grid spacing 0.75∘, the ECMWF ERA-40 Reanalysis with grid spacings of 1.125∘ and 2.5∘, and finally the 2.5∘ NCEP/DOE AMIP-II Reanalysis. The model was run in one-way nesting mode with direct nesting of the high-resolution RCM (horizontal grid spacing Δx = 10 km) within the driving reanalysis, with one intermediate-resolution nest (Δx = 30 km) between the high-resolution RCM and the reanalysis forcings, and also with two intermediate-resolution nests (Δx = 90 km and Δx = 30 km) for simulations forced with LBCs of 2.5∘ resolution. Additionally, the impact of domain size was investigated. The results of the multiple simulations were evaluated using different analysis techniques, e.g., the Taylor diagram and a newly defined statistical parameter, the Skill-Score, for the evaluation of daily precipitation simulated by the model. Domain size is found to have the largest impact on the results, while different resolutions and versions of the LBCs, e.g., 1.125∘ ERA-40 and 0.75∘ ERA-Interim, do not produce significantly different results. Direct nesting with a reasonable domain size also appears to be the most adequate method for reproducing precipitation over complex terrain, while introducing intermediate-resolution nests tends to degrade the results.
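
    The Taylor-diagram evaluation mentioned above reduces each simulation to a correlation, a normalized standard deviation, and a centered RMSE against observations. The abstract's Skill-Score is not reproduced here; the sketch below instead uses a common Taylor (2001)-style skill score on synthetic daily precipitation (hypothetical Python, illustrative only).

      import numpy as np

      def taylor_stats(sim, obs):
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          r = np.corrcoef(sim, obs)[0, 1]                                 # pattern correlation
          sigma = sim.std() / obs.std()                                   # normalized std. deviation
          crmse = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2)) / obs.std()
          skill = 4.0 * (1.0 + r) / (2.0 * (sigma + 1.0 / sigma) ** 2)    # Taylor-style score in [0, 1]
          return r, sigma, crmse, skill

      rng = np.random.default_rng(2)
      obs = rng.gamma(shape=0.8, scale=5.0, size=3650)           # synthetic daily precipitation
      sim = 0.9 * obs + rng.normal(0.0, 2.0, obs.size)           # synthetic model output
      print(taylor_stats(sim, obs))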

  2. Unraveling the martian water cycle with high-resolution global climate simulations

    NASA Astrophysics Data System (ADS)

    Pottier, Alizée; Forget, François; Montmessin, Franck; Navarro, Thomas; Spiga, Aymeric; Millour, Ehouarn; Szantai, André; Madeleine, Jean-Baptiste

    2017-07-01

    Global climate modeling of the Mars water cycle is usually performed at relatively coarse resolution (200-300 km), which may not be sufficient to properly represent the impact of waves, fronts, and topographic effects on the detailed structure of clouds and surface ice deposits. Here, we present new numerical simulations of the annual water cycle performed at a resolution of 1° × 1° (∼ 60 km in latitude). The model includes the radiative effects of clouds, whose influence on the thermal structure and atmospheric dynamics is significant; thus we also examine simulations with inactive clouds to distinguish the direct impact of resolution on circulation and winds from the indirect impact of resolution via water ice clouds. To first order, we find that the high resolution does not dramatically change the behavior of the system, and that simulations performed at ∼ 200 km resolution capture well the behavior of the simulated water cycle and Mars climate. Nevertheless, a detailed comparison between high- and low-resolution simulations, with reference to observations, reveals several significant changes that impact our understanding of the water cycle active today on Mars. The key northern cap edge dynamics are affected by an increase in baroclinic wave strength, with a complication of northern summer dynamics. South polar frost deposition is modified, with a westward longitudinal shift, since southern dynamics are also influenced. Baroclinic wave mode transitions are observed. New transient phenomena appear, like spiral and streak clouds, already documented in the observations. Atmospheric circulation cells in the polar region exhibit a large variability and are finely structured, with slope winds. Most modeled phenomena affected by high resolution give a picture of a more turbulent planet, inducing further variability. This is challenging for long-period climate studies.

  3. Optimization of an ultralow-dose high-resolution pediatric PET scanner design based on monolithic scintillators with dual-sided digital SiPM readout: a simulation study

    NASA Astrophysics Data System (ADS)

    Mikhaylova, Ekaterina; Tabacchini, Valerio; Borghi, Giacomo; Mollet, Pieter; D'Hoe, Ester; Schaart, Dennis R.; Vandenberghe, Stefaan

    2017-11-01

    The goal of this simulation study is the performance evaluation and comparison of six potential designs for a time-of-flight PET scanner for pediatric patients of up to about 12 years of age. It is designed to have a high sensitivity and provide high-contrast and high-resolution images. The simulated pediatric PET is a full-ring scanner, consisting of 32 × 32 mm2 monolithic LYSO:Ce crystals coupled to digital silicon photomultiplier arrays. The six considered designs differ in axial lengths (27.2 cm, 54.4 cm and 102 cm) and crystal thicknesses (22 mm and 11 mm). The simulations are based on measured detector response data. We study two possible detector arrangements: 22 mm-thick crystals with dual-sided readout and 11 mm-thick crystals with back-sided readout. The six designs are simulated by means of the GEANT4 application for tomographic emission software, using the measured spatial, energy and time response of the monolithic scintillator detectors as input. The performance of the six designs is compared on the basis of four studies: (1) spatial resolution; (2) NEMA NU2-2012 sensitivity and scatter fraction (SF) tests; (3) non-prewhitening signal-to-noise ratio observer study; and (4) receiver operating characteristics analysis. Based on the results, two designs are identified as cost-effective solutions for fast and efficient imaging of children: one with 54.4 cm axial field-of-view (FOV) and 22 mm-thick crystals, and another one with 102 cm axial FOV and 11 mm-thick crystals. The first one has a higher center point sensitivity than the second one, but requires dual-sided readout. The second design has the advantage of allowing a whole-body scan in a single bed position acquisition. Both designs have the potential to provide an excellent spatial resolution (∼2 mm) and an ultra-high sensitivity (>100 cps kBq-1).

  4. The Application of Satellite-Derived, High-Resolution Land Use/Land Cover Data to Improve Urban Air Quality Model Forecasts

    NASA Technical Reports Server (NTRS)

    Quattrochi, D. A.; Lapenta, W. M.; Crosson, W. L.; Estes, M. G., Jr.; Limaye, A.; Kahn, M.

    2006-01-01

    Local and state agencies are responsible for developing state implementation plans to meet National Ambient Air Quality Standards. Numerical models used for this purpose simulate the transport and transformation of criteria pollutants and their precursors. The specification of land use/land cover (LULC) plays an important role in controlling modeled surface meteorology and emissions. NASA researchers have worked with partners and Atlanta stakeholders to incorporate an improved high-resolution LULC dataset for the Atlanta area within their modeling system and to assess meteorological and air quality impacts of Urban Heat Island (UHI) mitigation strategies. The new LULC dataset provides a more accurate representation of land use, has the potential to improve model accuracy, and facilitates prediction of LULC changes. Use of the new LULC dataset for two summertime episodes improved meteorological forecasts, with an existing daytime cold bias of approximately 3 °C reduced by 30%. Model performance for ozone prediction did not show improvement. In addition, LULC changes due to Atlanta area urbanization were predicted through 2030, for which model simulations predict higher urban air temperatures. The incorporation of UHI mitigation strategies partially offset this warming trend. The data and modeling methods used are generally applicable to other U.S. cities.

  5. Application of dot-matrix illumination of liquid crystal phase space light modulator in 3D imaging of APD array

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    To address the problem of beam scanning for low-resolution APD arrays in three-dimensional imaging, a beam-scanning method based on a liquid crystal phase spatial light modulator is proposed to realize high-resolution imaging with a low-resolution APD array. First, the liquid crystal phase spatial light modulator is used to generate a beam array, which is then scanned. Since the sub-beam divergence angle in the beam array is smaller than the field angle of a single pixel in the APD array, the APD's pixels respond only to the three-dimensional information at the beam illumination position. Through the scanning of the beam array, a single pixel collects the target's three-dimensional information multiple times, thereby improving the resolution of the APD detector. Finally, the algorithm is simulated in MATLAB using two-dimensional scalar diffraction theory, realizing beam splitting and scanning with a 5 × 5 resolution. The feasibility is verified theoretically.
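
    The two-dimensional scalar-diffraction calculation referred to above can be sketched with a standard angular-spectrum propagation step; the Python snippet below is a generic illustration of that technique (not the authors' MATLAB code, and all parameter values are arbitrary).

      import numpy as np

      def angular_spectrum_propagate(u0, wavelength, dx, z):
          """Propagate a complex scalar field u0 (N x N, sample spacing dx) a distance z."""
          n = u0.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          FX, FY = np.meshgrid(fx, fx, indexing="ij")
          arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
          kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
          H = np.exp(1j * kz * z) * (arg > 0)            # evanescent components dropped
          return np.fft.ifft2(np.fft.fft2(u0) * H)

      # toy example: a plane wave through a small square aperture
      n, dx, wavelength = 512, 5e-6, 1.06e-6
      u0 = np.zeros((n, n), complex)
      u0[n//2-20:n//2+20, n//2-20:n//2+20] = 1.0
      u1 = angular_spectrum_propagate(u0, wavelength, dx, z=0.05)
      print(np.abs(u1).max())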

  6. Changes in Moisture Flux over the Tibetan Plateau during 1979-2011: Insights from a High Resolution Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin

    2015-05-15

    Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The high resolution simulation better resolves precipitation changes than its coarse resolution forcing, which contributes dominantly to the improved P-E change in the regional simulation compared to the global reanalysis. Hence, the former may provide better insights about the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP and feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.
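
    The decomposition named above splits the change in moisture flux convergence into a thermodynamic term (humidity change with fixed winds), a dynamic term (wind change with fixed humidity), and a transient-eddy term. A schematic one-dimensional toy example in Python is given below (entirely hypothetical fields, illustrating only the bookkeeping).

      import numpy as np

      def convergence(flux, dx):
          return -np.gradient(flux, dx)

      dx = 1.0e5                                   # grid spacing [m]
      x = np.arange(50) * dx
      rng = np.random.default_rng(3)

      # period-1 and period-2 mean humidity q [kg/kg] and wind u [m/s]
      q1, u1 = 0.01 + 0.002 * np.sin(x / 3e6), 5.0 + np.cos(x / 4e6)
      q2, u2 = q1 + 0.0005, u1 + 0.3 * np.sin(x / 2e6)
      eddy1 = rng.normal(0, 1e-3, x.size)          # stand-in for transient-eddy flux q'u'
      eddy2 = rng.normal(0, 1.2e-3, x.size)

      total     = convergence(q2 * u2 + eddy2, dx) - convergence(q1 * u1 + eddy1, dx)
      thermo    = convergence((q2 - q1) * u1, dx)          # humidity change, fixed wind
      dynamic   = convergence(q1 * (u2 - u1), dx)          # wind change, fixed humidity
      transient = convergence(eddy2 - eddy1, dx)
      residual  = total - (thermo + dynamic + transient)   # nonlinear cross term
      print(np.abs(residual).max())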

  7. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was run at 0.25° grid spacing and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year, and completed about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."

  8. Above-real-time training (ARTT) improves transfer to a simulated flight control task.

    PubMed

    Donderi, D C; Niall, Keith K; Fish, Karyn; Goldstein, Benjamin

    2012-06-01

    The aim of this study was to measure the effects of above-real-time-training (ARTT) speed and screen resolution on a simulated flight control task. ARTT has been shown to improve transfer to the criterion task in some military simulation experiments. We tested training speed and screen resolution in a project, sponsored by Defence Research and Development Canada, to develop components for prototype air mission simulators. For this study, 54 participants used a single-screen PC-based flight simulation program to learn to chase and catch an F-18A fighter jet with another F-18A while controlling the chase aircraft with a throttle and side-stick controller. Screen resolution was varied between participants, and training speed was varied factorially across two sessions within participants. Pretest and posttest trials were at high resolution and criterion (900 knots) speed. Posttest performance was best with high-resolution training and when one ARTT training session was followed by a session of criterion-speed training. ARTT followed by criterion training improves performance on a visual-motor coordination task. We think that ARTT influences known facilitators of transfer, including similarity to the criterion task and contextual interference. Use high screen resolution, start with ARTT, and finish with criterion-speed training when preparing a mission simulation.

  9. The magnetic recoil spectrometer (MRSt) for time-resolved measurements of the neutron spectrum at the National Ignition Facility (NIF)

    DOE PAGES

    Frenje, J. A.; Hilsabeck, T. J.; Wink, C. W.; ...

    2016-08-02

    The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T_i), yield (Y_n), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate data with a time resolution of ~20 ps and energy resolution of ~100 keV for total neutron yields above ~10^16. Lastly, at lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ~20 ps.

  10. The magnetic recoil spectrometer (MRSt) for time-resolved measurements of the neutron spectrum at the National Ignition Facility (NIF).

    PubMed

    Frenje, J A; Hilsabeck, T J; Wink, C W; Bell, P; Bionta, R; Cerjan, C; Gatu Johnson, M; Kilkenny, J D; Li, C K; Séguin, F H; Petrasso, R D

    2016-11-01

    The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T_i), yield (Y_n), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate data with a time resolution of ∼20 ps and energy resolution of ∼100 keV for total neutron yields above ∼10^16. At lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ∼20 ps.

  11. Design and fabrication of prototype 6×6 cm2 microchannel plate photodetector with bialkali photocathode for fast timing applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Junqi; Byrum, Karen; Demarteau, Marcel

    A planar microchannel plate-based photodetector with a bialkali photocathode is capable of fast and accurate time and position resolution. A new 6 cm × 6 cm photodetector production facility was designed and built at Argonne National Laboratory. Small form-factor MCP-based photodetectors completely constructed of glass were designed and prototypes were successfully fabricated. Knudsen effusion cells were incorporated in the photocathode growth chamber to achieve uniform, high-quantum-efficiency photocathodes. The thin-film uniformity distribution was simulated and measured for an antimony film deposition, showing uniformity of better than 10%. Several prototype devices with bialkali photocathodes have been fabricated with the described system and their characteristics were evaluated in the large-signal (multi-PE) limit. A typical prototype device exhibits a time-of-flight resolution of ~27 ps and a differential time resolution of ~9 ps, corresponding to a spatial resolution of ~0.65 mm.

  12. The magnetic recoil spectrometer (MRSt) for time-resolved measurements of the neutron spectrum at the National Ignition Facility (NIF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frenje, J. A., E-mail: jfrenje@psfc.mit.edu; Wink, C. W.; Gatu Johnson, M.

    The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T_i), yield (Y_n), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate data with a time resolution of ∼20 ps and energy resolution of ∼100 keV for total neutron yields above ∼10^16. At lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ∼20 ps.

  13. Energy resolution experiments of conical organic scintillators and a comparison with Geant4 simulations

    NASA Astrophysics Data System (ADS)

    Sosa, C. S.; Thompson, S. J.; Chichester, D. L.; Clarke, S. D.; Di Fulvio, A.; Pozzi, S. A.

    2018-08-01

    An increase in light-collection efficiency (LCE) improves the energy resolution of scintillator-based detection systems. An improvement in energy resolution can benefit detector performance, for example by lowering the measurement threshold and achieving greater accuracy in light-output calibration. This work shows that LCE can be increased by modifying the scintillator shape to reduce optical-photon reflections, thereby decreasing the likelihood of transmission and absorption at the reflector boundary. The energy resolution of four organic scintillators (EJ200) was compared: two cones and two right-circular cylinders, all with equal base diameter and height (50 mm). The sides of each shape had two surface conditions: one was polished and the other was ground. Each scintillator was coupled to the center of four photomultiplier tube (PMT) configurations of different diameters. The photocathode response of all PMTs was assessed as a function of position using a small cube (5 mm height) of EJ200. The worst configuration, a highly polished conical scintillator mated to a PMT of equal base diameter, produced a smeared energy spectrum. The cause of the spectrum smearing is explored in detail. Results demonstrate that the ground cone showed the greatest improvement in energy resolution over the ground cylinder, approximately 16.2% at 478 keVee, when using the largest-diameter (127 mm) PMT. This result is attributed to the greater LCE of the cone, its ground surface, and the uniform photocathode response near the center of the largest PMT. Optical-photon transport simulations in Geant4 of the cone and cylinder, assuming a diffuse reflector and a uniform photocathode, were compared to the best experimental configuration and agreed well. If a detector application requires excellent energy resolution above all other considerations, a ground cone on a large PMT is recommended over a cylinder.
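
    As a rough, purely statistics-limited illustration of why more collected light improves energy resolution: if photoelectron counting noise dominates, the fractional FWHM scales as 2.355/sqrt(N_pe). The numbers in the sketch below (light yield, LCE, quantum efficiency) are hypothetical and are not taken from the paper.

      import math

      def fwhm_resolution(light_yield_ph_per_kevee, energy_kevee, lce, pmt_qe):
          """Statistics-limited fractional FWHM from the photoelectron count."""
          n_pe = light_yield_ph_per_kevee * energy_kevee * lce * pmt_qe
          return 2.355 / math.sqrt(n_pe)

      # hypothetical numbers for an EJ-200-like plastic scintillator at 478 keVee
      for shape, lce in [("cylinder", 0.55), ("cone", 0.75)]:
          r = fwhm_resolution(light_yield_ph_per_kevee=10.0, energy_kevee=478, lce=lce, pmt_qe=0.25)
          print(f"{shape}: ~{100 * r:.1f}% FWHM at 478 keVee")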

  14. High resolution urban morphology data for urban wind flow modeling

    NASA Astrophysics Data System (ADS)

    Cionco, Ronald M.; Ellefsen, Richard

    The application of urban forestry methods and technologies to a number of practical problems can be further enhanced by the use and incorporation of localized, high resolution wind and temperature fields into their analysis methods. The numerical simulation of these micrometeorological fields will represent the interactions and influences of urban structures, vegetation elements, and variable terrain as an integral part of the dynamics of an urban domain. Detailed information on the natural and man-made components that make up the urban area is needed to more realistically model meteorological fields in urban domains. Simulating high resolution wind and temperatures over and through an urban domain utilizing detailed morphology data can also define and quantify local areas where urban forestry applications can contribute to better solutions. Applications such as the benefits of planting trees for shade purposes can be considered, planned, and evaluated for their impact on conserving energy and cooling costs as well as the possible reconfiguration or removal of trees and other barriers for improved airflow ventilation and similar processes. To generate these fields, a wind model must be provided with, at a minimum, the location, type, height, structural silhouette, and surface roughness of these components, in order to account for the presence and effects of these land morphology features upon the ambient airflow. The morphology of Sacramento, CA has been characterized and quantified in considerable detail primarily for wind flow modeling, simulation, and analyses, but can also be used for improved meteorological analyses, urban forestry, urban planning, and other urban-related activities. Morphology methods previously developed by Ellefsen are applied to the Sacramento scenario with a high resolution grid of 100 m × 100 m. The Urban Morphology Scheme defines Urban Terrain Zones (UTZ) according to how buildings and other urban elements are structured and placed with respect to each other. The urban elements within the 100 m × 100 m cells (one hectare) are further described and digitized as building height, building footprint (in percent), roof reflectivity, whether the roof is pitched or flat, the building's long-axis orientation, the footprint of impervious surface and its reflectivity, the footprint of canopy elements, the footprint of woodlots, the footprint of grass area, and the footprint of water surface. A variety of maps, satellite images, low level aerial photographs, and street level photographs are the raw data used to quantify these urban properties. The final digitized morphology database resides in a spreadsheet ready for use on ordinary personal computers.
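
    A per-cell record of the kind described above might be organized as follows; this is a hypothetical layout for illustration only, and the field names are not the actual spreadsheet schema.

      from dataclasses import dataclass

      @dataclass
      class MorphologyCell:
          utz_code: str                 # Urban Terrain Zone classification
          building_height_m: float
          building_footprint_pct: float
          roof_reflectivity: float
          roof_pitched: bool
          building_axis_deg: float      # orientation of the building long axis
          impervious_pct: float
          impervious_reflectivity: float
          canopy_pct: float
          woodlot_pct: float
          grass_pct: float
          water_pct: float

      cell = MorphologyCell("A2", 12.0, 35.0, 0.25, True, 45.0,
                            40.0, 0.15, 10.0, 5.0, 8.0, 2.0)
      print(cell)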

  15. Updating a preoperative surface model with information from real-time tracked 2D ultrasound using a Poisson surface reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Deyu; Rettmann, Maryam E.; Holmes, David R.; Linte, Cristian A.; Packer, Douglas; Robb, Richard A.

    2014-03-01

    In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked, 2D freehand intra-cardiac echocardiography device, which is registered and merged with a preoperative, high resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vector of the point cloud from the preoperative left atrial model, which is required for the Poisson Equation Reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.
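
    The normal-estimation step described above can be pictured as borrowing, for each intraoperative ultrasound point, the outward normal of the nearest vertex of the registered preoperative surface. The Python sketch below is illustrative only (toy sphere data, hypothetical function names), not the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def assign_normals(points, prior_vertices, prior_normals):
          """points: (N,3) ultrasound point cloud; prior_vertices/normals: (M,3)."""
          tree = cKDTree(prior_vertices)
          _, idx = tree.query(points)                 # nearest preoperative vertex
          normals = prior_normals[idx]
          return normals / np.linalg.norm(normals, axis=1, keepdims=True)

      # toy prior surface: a unit sphere sampled at random (normals equal positions)
      rng = np.random.default_rng(4)
      verts = rng.normal(size=(2000, 3))
      verts /= np.linalg.norm(verts, axis=1, keepdims=True)
      cloud = verts[:300] + rng.normal(0, 0.01, (300, 3))   # noisy intraoperative samples
      oriented = assign_normals(cloud, verts, verts)
      print(oriented.shape)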

  16. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex, richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.

  17. 3D ion flow measurements and simulations near a boundary at oblique incidence to a magnetic field

    NASA Astrophysics Data System (ADS)

    Thompson, Derek S.; Keniley, Shane; Khaziev, Rinat; Curreli, Davide; Good, Timothy N.; Henriquez, Miguel; McIlvain, Julianne; Siddiqui, M. Umair; Scime, Earl E.

    2016-10-01

    Boundaries at oblique incidence to magnetic fields are abundant in magnetic confinement plasmas. The ion dynamics near these boundaries has implications for applications such as tokamak divertor wall loading and Hall thruster channel erosion. We present 3D, non-perturbative measurements of ion velocity distribution functions (IVDFs), providing ion temperatures and flows upstream of a grounded stainless steel limiter plate immersed in an argon plasma, oriented obliquely to the background axial magnetic field (ψ = 74°). The spatial resolution of the measurements is sufficient to probe the kinetic details of magnetic presheath structures, which span several ion Larmor radii (∼1 cm). Furthermore, we report probe measurements of electron density and temperature, and of local electric potential. To complement these measurements, results from particle-in-cell and Boltzmann models of the same region are presented. These models allow for point-to-point comparison of simulated and measured electrostatic structures and IVDFs at high spatial resolution. NSF Award PHYS-1360278.

  18. Measurement of pulsatile motion with millisecond resolution by MRI.

    PubMed

    Souchon, Rémi; Gennisson, Jean-Luc; Tanter, Mickael; Salomir, Rares; Chapelon, Jean-Yves; Rouvière, Olivier

    2012-06-01

    We investigated a technique based on phase-contrast cine MRI combined with deconvolution of the phase shift waveforms to measure rapidly varying pulsatile motion waveforms. The technique does not require steady-state displacement during motion encoding. Simulations and experiments were performed in porcine liver samples in view of a specific application, namely the observation of transient displacements induced by acoustic radiation force. Simulations illustrate the advantages and shortcomings of the method. For experimental validation, the waveforms were acquired with an ultrafast ultrasound scanner (Supersonic Imagine Aixplorer), and the rates of decay of the waveforms (relaxation time) were compared. With a bipolar motion-encoding gradient of 8.4 ms, the method was able to measure displacement waveforms with a temporal resolution of 1 ms over a time course of 40 ms. Reasonable agreement was found between the rate of decay of the waveforms measured in ultrasound (2.8 ms) and in MRI (2.7-3.3 ms).
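
    The deconvolution idea can be illustrated with a simple frequency-domain (Wiener-type) inversion of the phase-shift time series by the bipolar encoding kernel. The sketch below is a generic illustration with synthetic numbers matching the abstract's timing (1 ms sampling, 40 ms time course, 8.4 ms bipolar gradient); it is not the paper's actual processing chain.

      import numpy as np

      dt = 1e-3                                   # 1 ms temporal resolution
      t = np.arange(0, 0.040, dt)                 # 40 ms time course
      true_disp = 10e-6 * np.exp(-t / 3e-3)       # decaying displacement [m]

      # bipolar encoding kernel of ~8.4 ms (positive then negative lobe)
      n_lobe = int(4.2e-3 / dt)
      kernel = np.concatenate([np.ones(n_lobe), -np.ones(n_lobe)])
      phase = np.convolve(true_disp, kernel, mode="full")[:t.size]   # measured phase shift (arb. units)
      phase += np.random.default_rng(5).normal(0, 1e-7, phase.size)  # measurement noise

      # Wiener-type deconvolution in the frequency domain
      n = phase.size
      K = np.fft.rfft(kernel, n)
      P = np.fft.rfft(phase, n)
      snr = 1e2
      est = np.fft.irfft(P * np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr), n)
      print(float(np.corrcoef(est, true_disp)[0, 1]))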

  19. Resolution dependence of precipitation statistical fidelity in hindcast simulations

    DOE PAGES

    O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik; ...

    2016-06-19

    Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are “realistic”; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental design has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5-day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations, (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al. (), which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.

  20. Resolution dependence of precipitation statistical fidelity in hindcast simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik

    Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are “realistic”; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental design has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5-day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations, (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al. (), which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.

  1. Does Explosive Nuclear Burning Occur in Tidal Disruption Events of White Dwarfs by Intermediate-mass Black Holes?

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Sato, Yushi; Nomoto, Ken'ichi; Maeda, Keiichi; Nakasato, Naohito; Hachisu, Izumi

    2017-04-01

    We investigate nucleosynthesis in tidal disruption events (TDEs) of white dwarfs (WDs) by intermediate-mass black holes. We consider various types of WDs with different masses and compositions by means of three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations. We model these WDs with different numbers of SPH particles, N, from a few 10^4 to a few 10^7 in order to check mass resolution convergence, where SPH simulations with N > 10^7 (or a space resolution of several 10^6 cm) have unprecedentedly high resolution in this kind of simulation. We find that nuclear reactions become less active with increasing N and that these nuclear reactions are excited by spurious heating due to low resolution. Moreover, we find no shock wave generation. In order to investigate the reason for the absence of a shock wave, we additionally perform one-dimensional (1D) SPH and mesh-based simulations with a space resolution ranging from 10^4 to 10^7 cm, using a characteristic flow structure extracted from the 3D SPH simulations. We find shock waves in these 1D high-resolution simulations, one of which triggers a detonation wave. However, we must be careful of the fact that, if the shock wave emerged in an outer region, it could not trigger the detonation wave due to low density. Note that the 1D initial conditions lack accuracy to precisely determine where a shock wave emerges. We need to perform 3D simulations with ≲10^6 cm space resolution in order to conclude that WD TDEs become optical transients powered by radioactive nuclei.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  3. Recent advances on terrain database correlation testing

    NASA Astrophysics Data System (ADS)

    Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art

    1998-08-01

    Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.

  4. Monge-Ampère simulation of fourth order PDEs in two dimensions with application to elastic-electrostatic contact problems

    NASA Astrophysics Data System (ADS)

    DiPietro, Kelsey L.; Lindsay, Alan E.

    2017-11-01

    We present an efficient moving mesh method for the simulation of fourth order nonlinear partial differential equations (PDEs) in two dimensions using the Parabolic Monge-Ampère (PMA) equation. PMA methods have been successfully applied to the simulation of second order problems, but not to systems with higher order equations, which arise in many topical applications. Our main application is the resolution of fine scale behavior in PDEs describing elastic-electrostatic interactions. The PDE system considered has multiple parameter-dependent singular solution modalities, including finite time singularities and sharp interface dynamics. We describe how to construct a dynamic mesh algorithm for such problems which incorporates known self-similar or boundary layer scalings of the underlying equation to locate and dynamically resolve fine scale solution features in these singular regimes. We find that a key step in using the PMA equation for mesh generation in fourth order problems is the adoption of a high order representation of the transformation from the computational to the physical mesh. We demonstrate the efficacy of the new method on a variety of examples and establish several new results and conjectures on the nature of self-similar singularity formation in higher order PDEs.

  5. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  6. Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks

    NASA Astrophysics Data System (ADS)

    Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.

    Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is important to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters the false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high resolution location of a vehicle by refining low resolution location input. We also investigate results of simulations and evaluate the quality of the intrusion-tolerant location service.
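
    One simple way to picture intrusion-tolerant filtering of injected position reports is to reject reports that are inconsistent with a robust (median) estimate and refine the remainder; the Python sketch below is a generic illustration of that idea, not the adaptive algorithm described in the paper.

      import numpy as np

      def filter_reports(reports, threshold_m=15.0):
          """reports: (N, 2) array of reported (x, y) positions for one vehicle."""
          reports = np.asarray(reports, float)
          center = np.median(reports, axis=0)                  # robust location estimate
          dist = np.linalg.norm(reports - center, axis=1)
          keep = dist < threshold_m                            # drop inconsistent (likely false) reports
          refined = reports[keep].mean(axis=0)                 # refined higher-resolution estimate
          return refined, keep

      rng = np.random.default_rng(6)
      honest = rng.normal([100.0, 200.0], 3.0, size=(20, 2))   # noisy but truthful reports
      spoofed = np.array([[450.0, -80.0], [460.0, -75.0]])     # injected false locations
      estimate, mask = filter_reports(np.vstack([honest, spoofed]))
      print(estimate, int(mask.sum()), "reports kept")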

  7. Metal-dielectric composites for beam splitting and far-field deep sub-wavelength resolution for visible wavelengths.

    PubMed

    Yan, Changchun; Zhang, Dao Hua; Zhang, Yuan; Li, Dongdong; Fiddy, M A

    2010-07-05

    We report beam splitting in a metamaterial composed of a silver-alumina composite covered by a layer of chromium containing one slit. By simulating distributions of energy flow in the metamaterial for H-polarized waves, we find that the beam splitting occurs when the width of the slit is shorter than the wavelength, which is conducive to making a beam splitter in sub-wavelength photonic devices. We also find that the metamaterial possesses deep sub-wavelength resolution capabilities in the far field when there are two slits and the central silver layer is at least 36 nm in thickness, which has potential applications in superresolution imaging.

  8. Multifocal interferometric synthetic aperture microscopy

    PubMed Central

    Xu, Yang; Chng, Xiong Kai Benjamin; Adie, Steven G.; Boppart, Stephen A.; Scott Carney, P.

    2014-01-01

    There is an inherent trade-off between transverse resolution and depth of field (DOF) in optical coherence tomography (OCT) which becomes a limiting factor for certain applications. Multifocal OCT and interferometric synthetic aperture microscopy (ISAM) each provide a distinct solution to the trade-off through modification to the experiment or via post-processing, respectively. In this paper, we have solved the inverse problem of multifocal OCT and present a general algorithm for combining multiple ISAM datasets. Multifocal ISAM (MISAM) uses a regularized combination of the resampled datasets to bring advantages of both multifocal OCT and ISAM to achieve optimal transverse resolution, extended effective DOF and improved signal-to-noise ratio. We present theory, simulation and experimental results. PMID:24977909

  9. The Generic Resolution Advisor and Conflict Evaluator (GRACE) for Detect-And-Avoid (DAA) Systems

    NASA Technical Reports Server (NTRS)

    Abramson, Michael; Refai, Mohamad; Santiago, Confesor

    2017-01-01

    The paper describes the Generic Resolution Advisor and Conflict Evaluator (GRACE), a novel alerting and guidance algorithm that combines flexibility, robustness, and computational efficiency. GRACE is "generic" in that it makes no assumptions regarding temporal or spatial scales, aircraft performance, or its sensor and communication systems. Accordingly, GRACE is well suited to research applications where alerting and guidance is a central feature and requirements are fluid involving a wide range of aviation technologies. GRACE has been used at NASA in a number of real-time and fast-time experiments supporting evolving requirements of DAA research, including parametric studies, NAS-wide simulations, human-in-the-loop experiments, and live flight tests.

  10. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.

  11. Multi-resolution MPS method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid

    2018-04-01

    In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions at different parts of the domain, utilising a particle splitting algorithm for the finer resolution and a particle merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries for the Multi-Resolution LSMPS method. A wall is represented by polygons for effective simulations of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. The accuracy of the proposed method was verified through simulations of channel flows and free-surface flows.
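
    The splitting and merging operations at the heart of such a multi-resolution particle method must at least conserve mass and momentum; the sketch below shows one schematic way to do this in two dimensions (an illustrative pattern in Python, not the paper's exact scheme).

      import numpy as np

      def split_particle(pos, vel, mass, spacing):
          """Replace one coarse particle by four finer daughters."""
          offsets = 0.25 * spacing * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
          daughters_pos = pos + offsets
          daughters_vel = np.tile(vel, (4, 1))      # same velocity -> momentum conserved
          daughters_mass = np.full(4, mass / 4.0)   # mass conserved
          return daughters_pos, daughters_vel, daughters_mass

      def merge_particles(pos, vel, mass):
          """Merge a group of fine particles into one coarse particle."""
          m = mass.sum()
          x = (mass[:, None] * pos).sum(axis=0) / m       # center of mass
          v = (mass[:, None] * vel).sum(axis=0) / m       # momentum-conserving velocity
          return x, v, m

      p, v, m = split_particle(np.array([0.0, 0.0]), np.array([1.0, 0.0]), 1.0, spacing=0.1)
      print(merge_particles(p, v, m))                     # recovers the original particle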

  12. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    NASA Astrophysics Data System (ADS)

    Park, Jun; Hwang, Seung-On

    2017-11-01

    The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets of physical parameterization combinations of two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that obtained by using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
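
    Spectral nudging, as used here, relaxes only the large-scale (low-wavenumber) part of the model state toward the driving analysis while leaving small scales free. The one-dimensional Python sketch below illustrates the idea; real implementations act on two-dimensional fields at selected model levels, and all values here are synthetic.

      import numpy as np

      def spectral_nudge(model_field, driving_field, n_keep, alpha):
          """Relax wavenumbers k <= n_keep of model_field toward driving_field."""
          fm = np.fft.rfft(model_field)
          fd = np.fft.rfft(driving_field)
          k = np.arange(fm.size)
          fm[k <= n_keep] += alpha * (fd[k <= n_keep] - fm[k <= n_keep])
          return np.fft.irfft(fm, n=model_field.size)

      x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      analysis = np.sin(x) + 0.5 * np.cos(3 * x)                 # large-scale driving state
      model = analysis + 0.8 * np.sin(20 * x) + 0.4              # drifted model with small scales
      nudged = spectral_nudge(model, analysis, n_keep=4, alpha=0.1)
      print(np.abs(np.fft.rfft(nudged)[:5] - np.fft.rfft(analysis)[:5]).max())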

  13. High-resolution Hydrodynamic Simulation of Tidal Detonation of a Helium White Dwarf by an Intermediate Mass Black Hole

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru

    2018-05-01

    We demonstrate tidal detonation during a tidal disruption event (TDE) of a helium (He) white dwarf (WD) with 0.45 M⊙ by an intermediate mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown that the tidal detonations reported in previous studies resulted from unphysical heating due to low resolution, and that such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations with up to 300 million SPH particles, and 1D mesh simulations using the flow structure in the 3D SPH simulations for the 1D initial conditions. The 1D mesh simulations have higher resolutions than the 3D SPH simulations. We show that tidal detonation occurs and confirm that this result is converged with respect to spatial resolution in both the 3D SPH and 1D mesh simulations. We find that detonation waves independently arise in leading parts of the WD, and yield large amounts of 56Ni. Although detonation waves are not generated in trailing parts of the WD, the trailing parts would receive detonation waves generated in the leading parts and would leave large amounts of Si-group elements. Eventually, this He WD TDE would synthesize 0.30 M⊙ of 56Ni and 0.08 M⊙ of Si-group elements, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.

  14. Spatial Modeling and Uncertainty Assessment of Fine Scale Surface Processes Based on Coarse Terrain Elevation Data

    NASA Astrophysics Data System (ADS)

    Rasera, L. G.; Mariethoz, G.; Lane, S. N.

    2017-12-01

    Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.

  15. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data are often only valid for the specific circumstances and components for which they were collected. This work directly addresses these challenges for roller bearings with race faults by generating training data from high-resolution simulations of roller bearing dynamics; these data are used to train machine learning algorithms that are then validated against four experimental datasets. Several machine learning methodologies are compared, ranging from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
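    Since the abstract singles out dynamic time warping as a parameter-free classifier, a minimal sketch of the classic DTW distance may be useful (standard dynamic programming, not the authors' implementation); a nearest-neighbour classifier then assigns a test signal the label of the reference with the smallest warped distance.

    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping distance between 1-D signals."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    def classify(signal, references):
        """1-nearest-neighbour label from a list of (label, reference_signal) pairs."""
        return min(references, key=lambda lr: dtw_distance(signal, lr[1]))[0]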

  16. Electric-Field Sensing with a Scanning Fiber-Coupled Quantum Dot

    NASA Astrophysics Data System (ADS)

    Cadeddu, D.; Munsch, M.; Rossi, N.; Gérard, J.-M.; Claudon, J.; Warburton, R. J.; Poggio, M.

    2017-09-01

    We demonstrate the application of a fiber-coupled quantum dot (QD) in a tip as a scanning probe for electric-field imaging. We map the out-of-plane component of the electric field induced by a pair of electrodes by the measurement of the quantum-confined Stark effect induced on a QD spectral line. Our results are in agreement with finite-element simulations of the experiment. Furthermore, we present results from analytic calculations and simulations which are relevant to any electric-field sensor embedded in a dielectric tip. In particular, we highlight the impact of the tip geometry on both the resolution and sensitivity.

  17. DESDynI Lidar for Solid Earth Applications

    NASA Technical Reports Server (NTRS)

    Sauber, Jeanne; Hofton, Michelle; Bruhn, Ronald; Lutchke, Scott; Blair, Bryan

    2011-01-01

    As part of NASA's DESDynI mission, global elevation profiles from contiguous 25 m footprint Lidar measurements will be made. Here we present results of a performance simulation of a single pass of the multi-beam Lidar instrument over uplifted marine terraces in southern Alaska. The significance of the Lidar simulations is that surface topography would be captured at sufficient resolution for mapping uplifted terrace features, but it will be hard to discern 1-2 m topographic change over features less than tens of meters in width. Since Lidar would penetrate most vegetation, the accurate bare-Earth elevation profiles will give new elevation information beyond the standard 30-m DEM.

  18. [Study on the effect of solar spectra on the retrieval of atmospheric CO2 concentration using high resolution absorption spectra].

    PubMed

    Hu, Zhen-Hua; Huang, Teng; Wang, Ying-Ping; Ding, Lei; Zheng, Hai-Yang; Fang, Li

    2011-06-01

    Using the Sun as the radiation source for near-infrared high-resolution absorption spectroscopy is widely employed in remote sensing of atmospheric parameters. The present paper takes the retrieval of the CO2 concentration as an example and studies the effect of the solar spectrum's resolution. CO2 concentrations are retrieved from high-resolution absorption spectra using the program provided by AER to calculate the solar spectrum at the top of the atmosphere as the radiation source, combined with HRATS (high resolution atmospheric transmission simulation) to simulate the retrieval. Numerical simulation shows that the accuracy of the solar spectrum is important for the retrieval, especially for hyper-resolution spectral retrievals; the retrieval error is only weakly linearly related to the observational resolution, but there is a tendency that a coarser observational resolution requires a less highly resolved solar spectrum. To retrieve the atmospheric CO2 concentration, full advantage should be taken of the high-resolution solar spectrum at the top of the atmosphere.

  19. Analysis of energy resolution in the KURRI-LINAC pulsed neutron facility

    NASA Astrophysics Data System (ADS)

    Sano, Tadafumi; Hori, Jun-ichi; Takahashi, Yoshiyuki; Yashima, Hiroshi; Lee, Jaehong; Harada, Hideo

    2017-09-01

    In this study, we carried out Monte Carlo simulations to obtain the energy resolution of the neutron flux for TOF measurements at the KURRI-LINAC pulsed neutron facility. The simulation was performed on the moderated neutron flux from the pac-man-type moderator over the energy range from 0.1 eV to 10 keV. As a result, we obtained energy resolutions (ΔE/E) of about 0.7% to 1.3% between 0.1 eV and 10 keV. The energy resolution obtained from the Monte Carlo simulation agreed with that given by a simplified evaluation formula. In addition, we compared the energy resolution of KURRI-LINAC with those of other TOF facilities; the energy dependence of the energy resolution with the pac-man-type moderator at KURRI-LINAC was similar to that of J-PARC ANNRI in single-bunch mode.
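    The simplified evaluation formula itself is not given in the abstract; a common textbook relation for a TOF spectrometer, which may or may not be the one used here, follows from E = m L^2 / (2 t^2), so that timing and flight-path uncertainties add in quadrature as sketched below (with purely hypothetical numbers, not the facility's parameters).

    import numpy as np

    M_NEUTRON = 1.674927e-27   # neutron mass (kg)
    EV = 1.602177e-19          # J per eV

    def relative_energy_resolution(t, dt, L, dL):
        """dE/E from E ~ L^2 / t^2, i.e. 2*sqrt((dt/t)^2 + (dL/L)^2)."""
        return 2.0 * np.sqrt((dt / t) ** 2 + (dL / L) ** 2)

    # illustrative: a 1 eV neutron on a 12 m flight path
    E, L, dL, dt = 1.0, 12.0, 0.05, 2.0e-6
    t = L / np.sqrt(2.0 * E * EV / M_NEUTRON)          # flight time (s)
    print(relative_energy_resolution(t, dt, L, dL))    # ~1e-2, i.e. about 1%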

  20. Human cadavers Vs. multimedia simulation: A study of student learning in anatomy.

    PubMed

    Saltarelli, Andrew J; Roseth, Cary J; Saltarelli, William A

    2014-01-01

    Multimedia and simulation programs are increasingly being used for anatomy instruction, yet it remains unclear how learning with these technologies compares with learning with actual human cadavers. Using a multilevel, quasi-experimental-control design, this study compared the effects of the "Anatomy and Physiology Revealed" (APR) multimedia learning system with those of a traditional undergraduate human cadaver laboratory. APR is a model-based multimedia simulation tool that uses high-resolution pictures to construct a prosected cadaver. APR also provides animations showing the function of specific anatomical structures. Results showed that the human cadaver laboratory offered a significant advantage over the multimedia simulation program on cadaver-based measures of identification and explanatory knowledge. These findings reinforce concerns that incorporating multimedia simulation into anatomy instruction requires careful alignment between learning tasks and performance measures. Findings also imply that additional pedagogical strategies are needed to support transfer from simulated to real-world application of anatomical knowledge. © 2014 American Association of Anatomists.

  1. Passive millimeter wave simulation in blender

    NASA Astrophysics Data System (ADS)

    Murakowski, Maciej

    Imaging in the millimeter wave (mmW) frequency range is being explored for applications where visible or infrared (IR) imaging fails, such as through atmospheric obscurants. However, mmW imaging is still in its infancy and imager systems are still bulky, expensive, and fragile, so experiments on imaging in real-world scenarios are difficult or impossible to perform. Therefore, a simulation system capable of predicting mmW phenomenology would be valuable in determining the requirements (e.g. resolution or noise floor) of an imaging system for a particular scenario and aid in the design of such an imager. Producing simulation software for this purpose is the objective of the work described in this thesis. The 3D software package Blender was modified to simulate the images produced by a passive mmW imager, based on a Geometrical Optics approach. Simulated imagery was validated against experimental data and the software was applied to novel imaging scenarios. Additionally, a database of material properties for use in the simulation was collected.
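    For orientation on the phenomenology such a simulator must reproduce (a standard radiometric relation, not a description of the thesis software), the apparent temperature of an opaque surface in a passive mmW image is roughly the emissivity-weighted mix of its physical temperature and the reflected sky temperature, which is why low-emissivity metal looks "cold" against warm backgrounds outdoors.

    def apparent_temperature(emissivity, t_physical, t_sky):
        """Apparent (brightness) temperature of an opaque surface, in kelvin.

        emissivity : surface emissivity at the mmW frequency (0..1)
        t_physical : physical temperature of the surface (K)
        t_sky      : effective downwelling sky temperature reflected by the surface (K)
        """
        return emissivity * t_physical + (1.0 - emissivity) * t_sky

    # illustrative values only
    print(apparent_temperature(0.05, 290.0, 60.0))   # bare metal: ~72 K apparent
    print(apparent_temperature(0.90, 305.0, 60.0))   # skin: ~280 K apparent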

  2. Two-dimensional implosion simulations with a kinetic particle code [2D implosion simulations with a kinetic particle code]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using one-particle species and compare the results to simulations with the hydrodynamics code rage. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of rage and statistical noise in the kinetic studies.

  3. Models and Simulations as a Service: Exploring the Use of Galaxy for Delivering Computational Models

    PubMed Central

    Walker, Mark A.; Madduri, Ravi; Rodriguez, Alex; Greenstein, Joseph L.; Winslow, Raimond L.

    2016-01-01

    We describe the ways in which Galaxy, a web-based reproducible research platform, can be used for web-based sharing of complex computational models. Galaxy allows users to seamlessly customize and run simulations on cloud computing resources, a concept we refer to as Models and Simulations as a Service (MaSS). To illustrate this application of Galaxy, we have developed a tool suite for simulating a high spatial-resolution model of the cardiac Ca2+ spark that requires supercomputing resources for execution. We also present tools for simulating models encoded in the SBML and CellML model description languages, thus demonstrating how Galaxy’s reproducible research features can be leveraged by existing technologies. Finally, we demonstrate how the Galaxy workflow editor can be used to compose integrative models from constituent submodules. This work represents an important novel approach, to our knowledge, to making computational simulations more accessible to the broader scientific community. PMID:26958881

  4. Two-dimensional implosion simulations with a kinetic particle code [2D implosion simulations with a kinetic particle code]

    DOE PAGES

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    2017-05-17

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using one-particle species and compare the results to simulations with the hydrodynamics code rage. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of rage and statistical noise in the kinetic studies.

  5. Design Studies of a CZT-based Detector Combined with a Pixel-Geometry-Matching Collimator for SPECT Imaging.

    PubMed

    Weng, Fenghua; Bagchi, Srijeeta; Huang, Qiu; Seo, Youngho

    2013-10-01

    Single Photon Emission Computed Tomography (SPECT) suffers limited efficiency due to the need for collimators. Collimator properties largely determine the data statistics and image quality. Various materials and configurations of collimators have been investigated over many years. The main thrust of our study is to evaluate the design of pixel-geometry-matching collimators and to investigate their potential performance using Geant4 Monte Carlo simulations. Here, a pixel-geometry-matching collimator is defined as a collimator which is divided into the same number of pixels as the detector, with the center of each pixel in the collimator corresponding one-to-one to that in the detector. The detector is made of Cadmium Zinc Telluride (CZT), which is one of the most promising materials for applications to detect hard X-rays and γ-rays due to its ability to obtain good energy resolution and high signal output at room temperature. For our current project, we have designed a large-area, CZT-based gamma camera (20.192 cm × 20.192 cm) with a small pixel pitch (1.60 mm). The detector is pixelated and hence the intrinsic resolution can be as small as the size of the pixel. The collimator material, collimator hole geometry, detection efficiency, and spatial resolution of the CZT detector combined with the pixel-matching collimator were evaluated under different conditions. From the simulation studies, we found that such a camera using rectangular holes has promising imaging characteristics in terms of spatial resolution, detection efficiency, and energy resolution.
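    For orientation, the textbook parallel-hole relations below (standard gamma-camera formulas, not the pixel-geometry-matching design evaluated in the paper) show how hole size, septal thickness, and hole length trade spatial resolution against geometric efficiency; hole shape enters only through the constant K.

    def collimator_performance(d, L, t, b, mu, K=0.28):
        """Textbook parallel-hole collimator estimates.

        d  : hole diameter or width (cm)
        L  : hole length (cm)
        t  : septal thickness (cm)
        b  : source-to-collimator distance (cm)
        mu : linear attenuation coefficient of the septal material (1/cm)
        K  : hole-shape constant (~0.28 for square holes, ~0.24-0.26 for round/hex)
        """
        L_eff = L - 2.0 / mu                                     # septal penetration correction
        resolution = d * (L_eff + b) / L_eff                     # geometric resolution (cm)
        efficiency = (K * d / L_eff) ** 2 * (d / (d + t)) ** 2   # geometric efficiency
        return resolution, efficiency

    # illustrative numbers only (not the paper's design): 1.5 mm square holes,
    # 2.5 cm hole length, 0.2 mm septa, source 10 cm away,
    # mu chosen roughly as that of lead at 140 keV
    print(collimator_performance(d=0.15, L=2.5, t=0.02, b=10.0, mu=27.0))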

  6. Numerical Simulation and Mechanical Design for TPS Electron Beam Position Monitors

    NASA Astrophysics Data System (ADS)

    Hsueh, H. P.; Kuan, C. K.; Ueng, T. S.; Hsiung, G. Y.; Chen, J. R.

    2007-01-01

    A comprehensive study of the mechanical design and numerical simulation of high-resolution electron beam position monitors is a key step in building the newly proposed 3rd generation synchrotron radiation research facility, the Taiwan Photon Source (TPS). With a more advanced electromagnetic simulation tool such as MAFIA, tailored specifically for particle accelerators, the design of the high-resolution electron beam position monitors can be tested in such an environment before being tested experimentally. The design goal of our high-resolution electron beam position monitors is to obtain the best resolution through sensitivity and signal optimization. The definitions of, and differences between, the resolution and sensitivity of electron beam position monitors are explained. The design considerations are also explained. A prototype design has been carried out and the related simulations were performed with MAFIA. The results are presented here. A sensitivity as high as 200 in the x direction has been achieved at 500 MHz.
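    To make the sensitivity/resolution distinction concrete (a generic button-BPM relation, not the TPS design itself), the beam position in one plane is commonly estimated from the normalized difference of opposing pickup signals; the sensitivity is the slope of that normalized signal versus true displacement, while the resolution is the position uncertainty implied by the noise on the electrode signals.

    def estimated_position(v_right, v_left, sensitivity):
        """Difference-over-sum position estimate for one BPM plane.

        sensitivity : slope of (difference/sum) per unit displacement (e.g. per mm);
                      a larger sensitivity means a given offset produces a larger
                      normalized signal change.
        """
        return (v_right - v_left) / (v_right + v_left) / sensitivity

    def position_resolution(sensitivity, signal_to_noise):
        """Very rough rms position resolution implied by the electronics noise."""
        return 1.0 / (sensitivity * signal_to_noise)

    # illustrative: a sensitivity of 0.2 per mm and a signal-to-noise ratio of 1e4
    # would imply a resolution of roughly 0.5 micrometre
    print(position_resolution(sensitivity=0.2, signal_to_noise=1.0e4))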

  7. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  8. High Resolution Regional Climate Simulations over Alaska

    NASA Astrophysics Data System (ADS)

    Monaghan, A. J.; Clark, M. P.; Arnold, J.; Newman, A. J.; Musselman, K. N.; Barlage, M. J.; Xue, L.; Liu, C.; Gutmann, E. D.; Rasmussen, R.

    2016-12-01

    In order to appropriately plan future projects to build and maintain infrastructure (e.g., dams, dikes, highways, airports), a number of U.S. federal agencies seek to better understand how hydrologic regimes may shift across the country due to climate change. Building on the successful completion of a series of high-resolution WRF simulations over the Colorado River Headwaters and the contiguous USA, our team is now extending these simulations over the challenging U.S. states of Alaska and Hawaii. In this presentation we summarize results from a newly completed WRF simulation over Alaska spanning 2002-2016 at 4-km spatial resolution. Our aim is to gain insight into the thermodynamics that drive key precipitation processes, particularly the extremes that are most damaging to infrastructure.

  9. Eddy Fluxes and Sensitivity of the Water Cycle to Spatial Resolution in Idealized Regional Aquaplanet Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Leung, Lai-Yung R.; Gustafson, William I.

    2014-02-28

    A multi-scale moisture budget analysis is used to identify the mechanisms responsible for the sensitivity of the water cycle to spatial resolution using idealized regional aquaplanet simulations. In the higher resolution simulations, moisture transport by eddy fluxes dries the boundary layer, enhancing evaporation and precipitation. This effect of eddies, which is underestimated by the physics parameterizations in the low-resolution simulations, is found to be responsible for the sensitivity of the water cycle both directly and through its upscale effect on the mean circulation. Correlations among moisture transport by eddies at adjacent ranges of scales provide the potential for reducing this sensitivity by representing the unresolved eddies by their marginally resolved counterparts.
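    A minimal sketch of this kind of budget decomposition (a generic Reynolds decomposition in time over gridded fields, not the authors' full multi-scale method): the time-mean moisture transport is split into a mean-flow part and an eddy part, and it is the eddy part whose boundary-layer drying is under-represented at coarse resolution.

    import numpy as np

    def moisture_flux_decomposition(q, v):
        """Split time-mean moisture flux into mean and eddy (transient) contributions.

        q, v : arrays of shape (time, y, x) with specific humidity and wind.
        Returns (mean_flux, eddy_flux), each of shape (y, x), such that
        time_mean(q*v) = mean_flux + eddy_flux.
        """
        q_bar, v_bar = q.mean(axis=0), v.mean(axis=0)
        mean_flux = q_bar * v_bar                      # transport by the mean flow
        eddy_flux = (q * v).mean(axis=0) - mean_flux   # transport by transient eddies
        return mean_flux, eddy_flux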

  10. A Comparison of HWRF, ARW and NMM Models in Hurricane Katrina (2005) Simulation

    PubMed Central

    Dodla, Venkata B.; Desamsetti, Srinivas; Yerramilli, Anjaneyulu

    2011-01-01

    The life cycle of Hurricane Katrina (2005) was simulated using three different modeling systems based on the Weather Research and Forecasting (WRF) mesoscale model. These are HWRF (Hurricane WRF), designed specifically for hurricane studies, and the WRF model with two different dynamic cores, the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). The WRF model was developed and sourced from the National Center for Atmospheric Research (NCAR), incorporating advances in atmospheric simulation suitable for a broad range of applications. The HWRF modeling system was developed at the National Centers for Environmental Prediction (NCEP) based on the NMM dynamic core and physical parameterization schemes specially designed for the tropics. A case study of Hurricane Katrina was chosen as it is one of the most intense hurricanes to cause severe destruction along the Gulf Coast from central Florida to Texas. The ARW, NMM and HWRF models were designed to have two-way interactive nested domains with 27 and 9 km resolutions. The three models used in this study were integrated for three days starting from 0000 UTC on 27 August 2005 to capture the landfall of Hurricane Katrina on 29 August. The initial and time-varying lateral boundary conditions were taken from NCEP global FNL (final analysis) data available at 1 degree resolution for the ARW and NMM models and from NCEP GFS data at 0.5 degree resolution for the HWRF model. The results show that the models simulated the intensification of Hurricane Katrina and the landfall on 29 August 2005 in agreement with the observations. Results from these experiments highlight the superior performance of the HWRF model over the ARW and NMM models in predicting the track and intensification of Hurricane Katrina. PMID:21776239

  11. A comparison of HWRF, ARW and NMM models in Hurricane Katrina (2005) simulation.

    PubMed

    Dodla, Venkata B; Desamsetti, Srinivas; Yerramilli, Anjaneyulu

    2011-06-01

    The life cycle of Hurricane Katrina (2005) was simulated using three different modeling systems based on the Weather Research and Forecasting (WRF) mesoscale model. These are HWRF (Hurricane WRF), designed specifically for hurricane studies, and the WRF model with two different dynamic cores, the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). The WRF model was developed and sourced from the National Center for Atmospheric Research (NCAR), incorporating advances in atmospheric simulation suitable for a broad range of applications. The HWRF modeling system was developed at the National Centers for Environmental Prediction (NCEP) based on the NMM dynamic core and physical parameterization schemes specially designed for the tropics. A case study of Hurricane Katrina was chosen as it is one of the most intense hurricanes to cause severe destruction along the Gulf Coast from central Florida to Texas. The ARW, NMM and HWRF models were designed to have two-way interactive nested domains with 27 and 9 km resolutions. The three models used in this study were integrated for three days starting from 0000 UTC on 27 August 2005 to capture the landfall of Hurricane Katrina on 29 August. The initial and time-varying lateral boundary conditions were taken from NCEP global FNL (final analysis) data available at 1 degree resolution for the ARW and NMM models and from NCEP GFS data at 0.5 degree resolution for the HWRF model. The results show that the models simulated the intensification of Hurricane Katrina and the landfall on 29 August 2005 in agreement with the observations. Results from these experiments highlight the superior performance of the HWRF model over the ARW and NMM models in predicting the track and intensification of Hurricane Katrina.

  12. Searching for the right scale in catchment hydrology: the effect of soil spatial variability in simulated states and fluxes

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-04-01

    Advances in computer science and the availability of new detailed datasets have led to a growing number of distributed hydrological models applied to finer and finer grid resolutions for larger and larger catchment areas. It has been argued, however, that this trend does not necessarily guarantee a better understanding of the hydrological processes and may not even be necessary for specific modelling applications. In the present study, this topic is further discussed in relation to soil spatial heterogeneity and its effect on simulated hydrological states and fluxes. To this end, three methods are developed and used for the characterization of the soil heterogeneity at different spatial scales. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The different soil realizations are assessed regarding their impact on simulated states and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale (RES) concept proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output also at different temporal scales. The results show that small-scale soil variabilities are not relevant when integrated hydrological responses are considered, e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small-scale soil variabilities strongly affect locally simulated states and fluxes, i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected by aggregating the model output by spatial and temporal scales. Although the scale at which the soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements, e.g., finer resolution input. For this reason, the integration in this analysis of all the relevant input factors (e.g., precipitation, vegetation, geology) could provide strong support for the definition of the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio-temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities? Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029
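    A rough sketch of the aggregation idea behind such a RES-style analysis (my paraphrase, assuming square grids and simple block averaging, not the mHM workflow itself): outputs from the different soil realizations are averaged over progressively larger blocks, and the scale at which the spread between realizations drops below an acceptable threshold marks the scale beyond which the soil input uncertainty stops mattering.

    import numpy as np

    def block_average(field, block):
        """Aggregate a 2-D field by averaging non-overlapping block x block windows."""
        ny, nx = field.shape
        ny, nx = ny - ny % block, nx - nx % block          # trim to a multiple of block
        f = field[:ny, :nx].reshape(ny // block, block, nx // block, block)
        return f.mean(axis=(1, 3))

    def spread_vs_scale(realizations, blocks=(1, 2, 4, 8, 16)):
        """Spatially averaged std. dev. across realizations at each aggregation scale.

        realizations : array of shape (n_realizations, ny, nx), e.g. simulated
                       evapotranspiration from the different soil maps.
        """
        return {b: float(np.stack([block_average(r, b) for r in realizations])
                         .std(axis=0).mean())
                for b in blocks}   # the spread typically shrinks as the scale grows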

  13. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short (<10 days) and longer (1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify the sensitivity of model features to parameter changes. The CAPT tests have been found effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "home in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context, along with assessment in greater detail, once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.

  14. Climate simulations and projections with a super-parameterized climate model

    DOE PAGES

    Stan, Cristiana; Xu, Li

    2014-07-01

    The mean climate and its variability are analyzed in a suite of numerical experiments with a fully coupled general circulation model in which subgrid-scale moist convection is explicitly represented through embedded 2D cloud-system resolving models. Control simulations forced by the present day, fixed atmospheric carbon dioxide concentration are conducted using two horizontal resolutions and validated against observations and reanalyses. The mean state simulated by the higher resolution configuration has smaller biases. Climate variability also shows some sensitivity to resolution, but not as uniformly as in the case of the mean state. The interannual and seasonal variability are better represented in the simulation at lower resolution whereas the subseasonal variability is more accurate in the higher resolution simulation. The equilibrium climate sensitivity of the model is estimated from a simulation forced by an abrupt quadrupling of the atmospheric carbon dioxide concentration. The equilibrium climate sensitivity temperature of the model is 2.77 °C, and this value is slightly smaller than the mean value (3.37 °C) of contemporary models using conventional representation of cloud processes. As a result, the climate change simulation forced by the representative concentration pathway 8.5 scenario projects an increase in the frequency of severe droughts over most of North America.

  15. Playing With Conflict: Teaching Conflict Resolution through Simulations and Games

    ERIC Educational Resources Information Center

    Powers, Richard B.; Kirkpatrick, Kat

    2013-01-01

    Playing With Conflict is a weekend course for graduate students in Portland State University's Conflict Resolution program and undergraduates in all majors. Students participate in simulations, games, and experiential exercises to learn and practice conflict resolution skills. Graduate students create a guided role-play of a conflict. In addition…

  16. Superconducting High Resolution Fast-Neutron Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hau, Ionel Dragos

    2006-01-01

    Superconducting high resolution fast-neutron calorimetric spectrometers based on 6LiF and TiB2 absorbers have been developed. These novel cryogenic spectrometers measure the temperature rise produced in exothermal (n, α) reactions with fast neutrons in 6Li and 10B-loaded materials with heat capacity C operating at temperatures T close to 0.1 K. Temperature variations on the order of 0.5 mK are measured with a Mo/Cu thin film multilayer operated in the transition region between its superconducting and its normal state. The advantage of calorimetry for high resolution spectroscopy is due to the small phonon excitation energies kBT, on the order of μeV, that serve as signal carriers, resulting in an energy resolution ΔE ~ (kB T^2 C)^(1/2), which can be well below 10 keV. An energy resolution of 5.5 keV has been obtained with a Mo/Cu superconducting sensor and a TiB2 absorber using thermal neutrons from a 252Cf neutron source. This resolution is sufficient to observe the effect of recoil nuclei broadening in neutron spectra, which has been related to the lifetime of the first excited state in 7Li. Fast-neutron spectra obtained with a 6Li-enriched LiF absorber show an energy resolution of 16 keV FWHM, and a response in agreement with the 6Li(n,α)3H reaction cross section and Monte Carlo simulations for energies up to several MeV. The energy resolution on the order of a few keV makes this novel instrument applicable to fast-neutron transmission spectroscopy based on the unique elemental signature provided by the neutron absorption and scattering resonances. The optimization of the energy resolution based on analytical and numerical models of the detector response is discussed in the context of these applications.
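    The quoted scaling can be evaluated directly; the sketch below plugs the operating temperature from the abstract and a purely illustrative absorber heat capacity (not the actual device value) into ΔE ~ (kB T^2 C)^(1/2).

    import math

    K_B = 1.380649e-23    # Boltzmann constant (J/K)
    KEV = 1.602177e-16    # J per keV

    def thermodynamic_resolution_kev(T, C):
        """Fundamental calorimeter resolution scale sqrt(k_B * T**2 * C), in keV."""
        return math.sqrt(K_B * T ** 2 * C) / KEV

    # T = 0.1 K as in the abstract; C = 1 nJ/K is only an illustrative heat capacity
    print(thermodynamic_resolution_kev(T=0.1, C=1.0e-9))   # ~0.07 keV, well below 10 keV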

  17. Impact of model resolution on simulating the water vapor transport through the central Himalayas: implication for models' wet bias over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Lin, Changgui; Chen, Deliang; Yang, Kun; Ou, Tinghai

    2018-01-01

    Current climate models commonly overestimate precipitation over the Tibetan Plateau (TP), which limits our understanding of past and future water balance in the region. Identifying the sources of such models' wet bias is therefore crucial. The Himalayas are considered a major pathway of water vapor transport (WVT) towards the TP. Their steep terrain, together with associated small-scale processes, cannot be resolved by coarse-resolution models, which may result in excessive WVT towards the TP. This paper therefore investigated the resolution dependency of simulated WVT through the central Himalayas and its further impact on precipitation bias over the TP. Based on simulations of one summer monsoon season conducted using the Weather Research and Forecasting (WRF) model at resolutions of 30, 10, and 2 km, the study found that finer resolutions (especially 2 km) diminish the positive precipitation bias over the TP. The higher-resolution simulations produce more precipitation over the southern Himalayan slopes and weaker WVT towards the TP, explaining the reduced wet bias. The decreased WVT is reflected mostly in the weakened wind speed, which is due to the fact that high resolution better resolves the orographic drag over complex terrain and other processes associated with heterogeneous surface forcing. A significant difference was found particularly when the model resolution was changed from 30 to 10 km, suggesting that a resolution of approximately 10 km represents a good compromise between a more spatially detailed simulation of WVT and the computational cost for a domain covering the whole TP.

  18. Non-periodic multi-slit masking for a single counter rotating 2-disc chopper and channeling guides for high resolution and high intensity neutron TOF spectroscopy

    NASA Astrophysics Data System (ADS)

    Bartkowiak, M.; Hofmann, T.; Stüßer, N.

    2017-02-01

    Energy resolution is an important design goal for time-of-flight instruments and neutron spectroscopy. For high-resolution applications, it is required that the burst times of choppers be short, going down to the μs-range. To produce short pulses while maintaining high neutron flux, we propose beam masks with more than two slits on a counter-rotating 2-disc chopper, behind specially adapted focusing multi-channel guides. A novel non-regular arrangement of the slits ensures that the beam opens only once per chopper cycle, when the masks are congruently aligned. Additionally, beam splitting and intensity focusing by guides before and after the chopper position provide high intensities even for small samples. Phase-space analysis and Monte Carlo simulations on examples of four-slit masks with adapted guide geometries show the potential of the proposed setup.

  19. The technical consideration of multi-beam mask writer for production

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hee; Ahn, Byung-Sup; Choi, Jin; Shin, In Kyun; Tamamushi, Shuichi; Jeon, Chan-Uk

    2016-10-01

    The multi-beam mask writer is under development to solve the throughput and patterning resolution problems of the VSB mask writer. Theoretically, the writing time is appropriate for future design nodes and the resolution is improved with the multi-beam mask writer. Many previous studies show feasible results for resolution, CD control, and registration. Although such technical results from the development tool seem sufficient for mass production, many unexpected problems remain for real mass production. In this report, the technical challenges of the multi-beam mask writer are discussed in terms of production and application. The problems and issues are defined based on the performance of the current development tool compared with the mask quality requirements. Using simulation and experiment, we analyze the specific characteristics of the electron beam in the multi-beam mask writer scheme. Consequently, we suggest the specifications necessary for mass production with the multi-beam mask writer in the future.

  20. High-speed X-ray microscopy by use of high-resolution zone plates and synchrotron radiation.

    PubMed

    Hou, Qiyue; Wang, Zhili; Gao, Kun; Pan, Zhiyun; Wang, Dajiang; Ge, Xin; Zhang, Kai; Hong, Youli; Zhu, Peiping; Wu, Ziyu

    2012-09-01

    X-ray microscopy based on synchrotron radiation has become a fundamental tool in biology and the life sciences to visualize the morphology of a specimen. These studies have particular requirements in terms of radiation damage and the image exposure time, which directly determines the total acquisition speed. To monitor and improve these key parameters, we present a novel X-ray microscopy method using a high-resolution zone plate as the objective and a matching condenser. Numerical simulations based on scalar wave field theory validate the feasibility of the method and also indicate that the performance of X-ray microscopy is optimized most with sub-10-nm-resolution zone plates. The proposed method is compatible with conventional X-ray microscopy techniques, such as computed tomography, and will find wide applications in time-resolved and/or dose-sensitive studies such as living cell imaging.
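    As background on why sub-10-nm zone plates matter here (a standard diffraction relation, not a result from the paper), the Rayleigh resolution of a Fresnel zone plate objective is set by its outermost zone width:

    def zone_plate_resolution_nm(outermost_zone_width_nm):
        """Rayleigh resolution of a zone plate objective, ~1.22 x the outermost zone width."""
        return 1.22 * outermost_zone_width_nm

    # a hypothetical 10 nm outermost zone gives roughly 12 nm spatial resolution
    print(zone_plate_resolution_nm(10.0))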

  1. Near-field examination of perovskite-based superlenses and superlens-enhanced probe-object coupling.

    PubMed

    Kehr, S C; Liu, Y M; Martin, L W; Yu, P; Gajek, M; Yang, S-Y; Yang, C-H; Wenzel, M T; Jacob, R; von Ribbeck, H-G; Helm, M; Zhang, X; Eng, L M; Ramesh, R

    2011-01-01

    A planar slab of negative-index material works as a superlens with sub-diffraction-limited resolution, as propagating waves are focused and, moreover, evanescent waves are reconstructed in the image plane. Here we demonstrate a superlens for electric evanescent fields with low losses using perovskites in the mid-infrared regime. The combination of near-field microscopy with a tunable free-electron laser allows us to address precisely the polariton modes, which are critical for super-resolution imaging. We spectrally study the lateral and vertical distributions of evanescent waves around the image plane of such a lens, and achieve imaging resolution of λ/14 at the superlensing wavelength. Interestingly, at certain distances between the probe and sample surface, we observe a maximum of these evanescent fields. Comparisons with numerical simulations indicate that this maximum originates from an enhanced coupling between probe and object, which might be applicable for multifunctional circuits, infrared spectroscopy and thermal sensors.

  2. On the use of satellite-based estimates of rainfall temporal distribution to simulate the potential for malaria transmission in rural Africa

    NASA Astrophysics Data System (ADS)

    Yamana, Teresa K.; Eltahir, Elfatih A. B.

    2011-02-01

    This paper describes the use of satellite-based estimates of rainfall to force the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS), a hydrology-based mechanistic model of malaria transmission. We first examined the temporal resolution of rainfall input required by HYDREMATS. Simulations conducted over Banizoumbou village in Niger showed that for reasonably accurate simulation of mosquito populations, the model requires rainfall data with at least 1 h resolution. We then investigated whether HYDREMATS could be effectively forced by satellite-based estimates of rainfall instead of ground-based observations. The Climate Prediction Center morphing technique (CMORPH) precipitation estimates distributed by the National Oceanic and Atmospheric Administration are available at a 30 min temporal resolution and 8 km spatial resolution. We compared mosquito populations simulated by HYDREMATS when the model is forced by adjusted CMORPH estimates and by ground observations. The results demonstrate that adjusted rainfall estimates from satellites can be used with a mechanistic model to accurately simulate the dynamics of mosquito populations.

  3. Understanding climate variability and global climate change using high-resolution GCM simulations

    NASA Astrophysics Data System (ADS)

    Feng, Xuelei

    In this study, three climate processes are examined using long-term simulations from multiple climate models with increasing horizontal resolutions. These simulations include the European Center for Medium-range Weather Forecasts (ECMWF) atmospheric general circulation model (AGCM) runs forced with observed sea surface temperatures (SST) (the Athena runs) and a set of coupled ocean-atmosphere seasonal hindcasts (the Minerva runs). Both sets of runs use different AGCM resolutions, the highest at 16 km. A pair of Community Climate System Model (CCSM) simulations with ocean general circulation model (OGCM) resolutions of 100 and 10 km are also examined. The higher resolution CCSM run fully resolves oceanic mesoscale eddies. The resolution influence on the precipitation climatology over the Gulf Stream (GS) region is first investigated. In the Athena simulations, the resolution increase moderately enhances mean GS precipitation in both large-scale and sub-scale rainfall in the North Atlantic, with the latter more tightly confined near the oceanic front. However, the non-eddy-resolving OGCM in the Minerva runs simulates a weaker oceanic front and weakens the mean GS precipitation response. On the other hand, an increase in CCSM oceanic resolutions from the non-eddy-resolving to the eddy-resolving regime greatly improves the model's GS precipitation climatology, resulting in both stronger intensity and more realistic structure. Further analyses show that the improvement of the GS precipitation climatology due to resolution increases is caused by the enhanced atmospheric response to an increased SST gradient near the oceanic front, which leads to stronger surface convergence and upper level divergence. Another focus of this study is on the global warming impacts on precipitation characteristic changes using the high-resolution Athena simulations under the SST forcing from the observations and a global warming scenario. As a comparison, results from the coarse resolution simulation are also analyzed to examine the dependence on resolution. The increasing rates of globally averaged precipitation amount for the high and low resolution simulations are 1.7%/K and 1.8%/K, respectively. The sensitivities for heavy, moderate, light and drizzle rain are 6.8, -1.2, 0.0, 0.2%/K for the low and 6.3, -1.5, 0.4, -0.2%/K for the high resolution simulations. The number of rainy days decreases in a warming scenario, by 3.4 and 4.2 days per year, respectively. The most sensitive response, 6.3-6.8%/K for heavy rain, approaches the 7%/K Clausius-Clapeyron scaling limit. During the twenty-first century simulation, the increases in precipitation are larger over high latitudes and over wet regions in low and mid-latitudes. Over the dry regions, such as the subtropics, the precipitation amount and frequency decrease. There is a higher occurrence of light and heavy rain from the tropics to mid-latitudes at the expense of decreases in the frequency of moderate rain. In the third part, the interannual variability of the northern hemisphere storm tracks is examined. In the Athena simulations, the leading modes of the observed storm track variability are reproduced realistically by all runs. In general, the fluctuations of the model storm tracks in the North Pacific and Atlantic basins are largely independent of each other. Within each basin, the variations are characterized by the intensity change near the climatological center and the meridional shift of the storm track location. These two modes are associated with major teleconnection patterns of the low-frequency atmospheric variations. These model results are not sensitive to resolution. Using the Minerva hindcasts initialized in November, it is shown that a portion of the winter (December-January) storm track variability is predictable, mainly due to the influences of the atmospheric wave trains induced by the El Niño-Southern Oscillation.
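    The Clausius-Clapeyron scaling limit referred to above can be checked with a one-line estimate (standard constants, not values taken from the simulations): the fractional increase of saturation vapor pressure per kelvin is approximately L_v / (R_v T^2), about 6-7% per K at typical near-surface temperatures.

    def clausius_clapeyron_rate(T_kelvin, L_v=2.5e6, R_v=461.5):
        """Fractional change of saturation vapor pressure per kelvin, d(ln e_s)/dT."""
        return L_v / (R_v * T_kelvin ** 2)

    print(100.0 * clausius_clapeyron_rate(288.0))   # ~6.5 %/K near the surface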

  4. Does Explosive Nuclear Burning Occur in Tidal Disruption Events of White Dwarfs by Intermediate-mass Black Holes?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanikawa, Ataru; Sato, Yushi; Hachisu, Izumi

    We investigate nucleosynthesis in tidal disruption events (TDEs) of white dwarfs (WDs) by intermediate-mass black holes. We consider various types of WDs with different masses and compositions by means of three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations. We model these WDs with different numbers of SPH particles, N, from a few 10^4 to a few 10^7 in order to check mass resolution convergence, where SPH simulations with N > 10^7 (or a space resolution of several 10^6 cm) have unprecedentedly high resolution in this kind of simulation. We find that nuclear reactions become less active with increasing N and that these nuclear reactions are excited by spurious heating due to low resolution. Moreover, we find no shock wave generation. In order to investigate the reason for the absence of a shock wave, we additionally perform one-dimensional (1D) SPH and mesh-based simulations with a space resolution ranging from 10^4 to 10^7 cm, using a characteristic flow structure extracted from the 3D SPH simulations. We find shock waves in these 1D high-resolution simulations, one of which triggers a detonation wave. However, we must be careful of the fact that, if the shock wave emerged in an outer region, it could not trigger the detonation wave due to low density. Note that the 1D initial conditions lack accuracy to precisely determine where a shock wave emerges. We need to perform 3D simulations with ≲10^6 cm space resolution in order to conclude that WD TDEs become optical transients powered by radioactive nuclei.

  5. Moving Contact Lines: Linking Molecular Dynamics and Continuum-Scale Modeling.

    PubMed

    Smith, Edward R; Theodorakis, Panagiotis E; Craster, Richard V; Matar, Omar K

    2018-05-17

    Despite decades of research, the modeling of moving contact lines has remained a formidable challenge in fluid dynamics whose resolution will impact numerous industrial, biological, and daily life applications. On the one hand, molecular dynamics (MD) simulation has the ability to provide unique insight into the microscopic details that determine the dynamic behavior of the contact line, which is not possible with either continuum-scale simulations or experiments. On the other hand, continuum-based models provide a link to the macroscopic description of the system. In this Feature Article, we explore the complex range of physical factors, including the presence of surfactants, which governs the contact line motion through MD simulations. We also discuss links between continuum- and molecular-scale modeling and highlight the opportunities for future developments in this area.

  6. Tomographic reconstruction of melanin structures of optical coherence tomography via the finite-difference time-domain simulation

    NASA Astrophysics Data System (ADS)

    Huang, Shi-Hao; Wang, Shiang-Jiu; Tseng, Snow H.

    2015-03-01

    Optical coherence tomography (OCT) provides high-resolution, cross-sectional images of the internal microstructure of biological tissue. We use the finite-difference time-domain (FDTD) method to analyze the data acquired by OCT, which can help us reconstruct the refractive index of the biological tissue. We calculate the refractive index tomography and try to match the simulation with the data acquired by OCT. Specifically, we try to reconstruct the structure of melanin, which has complex refractive indices and is the key component of the human pigment system. The results indicate that better reconstruction can be achieved for a homogeneous sample, whereas the reconstruction is degraded for samples with fine structure or with complex interfaces. The simulated reconstructions show structures of the melanin that may be useful for biomedical optics applications.

  7. Effects of Drake Passage on a strongly eddying global ocean

    NASA Astrophysics Data System (ADS)

    Viebahn, Jan; von der Heydt, Anna S.; Dijkstra, Henk A.

    2015-04-01

    During the past 65 million years (Ma), Earth's climate has undergone a major change from warm 'greenhouse' to colder 'icehouse' conditions with extensive ice sheets in the polar regions of both hemispheres. The Eocene-Oligocene (~34 Ma) and Oligocene-Miocene (~23 Ma) boundaries reflect major transitions in Cenozoic global climate change. Proposed mechanisms of these transitions include reorganization of the ocean circulation due to critical gateway opening/deepening, changes in atmospheric CO2 concentration, and feedback mechanisms related to land-ice formation. Drake Passage (DP) is an intensively studied gateway because it plays a central role in closing the transport pathways of heat and chemicals in the ocean. The climate response to a closed DP has been explored with a variety of general circulation models; however, all of these models employ low model-grid resolutions such that the effects of subgrid-scale fluctuations ('eddies') are parameterized. We present results of the first high-resolution (0.1° horizontally) realistic global ocean model simulation with a closed DP in which the eddy field is largely resolved. The simulation extends over more than 200 years, such that the strong transient adjustment process is passed and a near-equilibrium ocean state is reached. The effects of DP are diagnosed by comparing with both an open-DP high-resolution control simulation (of the same length) and corresponding low-resolution simulations. By focusing on the heat/tracer transports we demonstrate that the results are twofold: considering spatially integrated transports, the overall response to a closed DP is well captured by low-resolution simulations. However, looking at the actual spatial distributions, drastic differences appear between the scattered, eddy-rich high-resolution fields and the laminar, uniform low-resolution fields. We conclude that sparse and highly localized tracer proxy observations have to be interpreted carefully with the help of high-resolution model simulations.

  8. Global high-resolution simulations of tropospheric nitrogen dioxide using CHASER V4.0

    NASA Astrophysics Data System (ADS)

    Sekiya, Takashi; Miyazaki, Kazuyuki; Ogochi, Koji; Sudo, Kengo; Takigawa, Masayuki

    2018-03-01

    We evaluate global tropospheric nitrogen dioxide (NO2) simulations using the CHASER V4.0 global chemical transport model (CTM) at horizontal resolutions of 0.56, 1.1, and 2.8°. Model evaluation was conducted using satellite tropospheric NO2 retrievals from the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment-2 (GOME-2) and aircraft observations from the 2014 Front Range Air Pollution and Photochemistry Experiment (FRAPPÉ). Agreement with satellite retrievals improved greatly at 1.1 and 0.56° resolutions (compared to 2.8° resolution) over polluted and biomass burning regions. The 1.1° simulation generally captured the regional distribution of the tropospheric NO2 column well, whereas 0.56° resolution was necessary to improve the model performance over areas with strong local sources, with mean bias reductions of 67 % over Beijing and 73 % over San Francisco in summer. Validation using aircraft observations indicated that high-resolution simulations reduced negative NO2 biases below 700 hPa over the Denver metropolitan area. These improvements in high-resolution simulations were attributable to (1) closer spatial representativeness between simulations and observations and (2) better representation of large-scale concentration fields (i.e., at 2.8°) through the consideration of small-scale processes. Model evaluations conducted at 0.5 and 2.8° bin grids indicated that the contributions of both these processes were comparable over most polluted regions, whereas the latter effect (2) made a larger contribution over eastern China and biomass burning areas. The evaluations presented in this paper demonstrate the potential of using a high-resolution global CTM for studying megacity-scale air pollutants across the entire globe, potentially also contributing to global satellite retrievals and chemical data assimilation.

  9. Middle atmosphere simulated with high vertical and horizontal resolution versions of a GCM: Improvements in the cold pole bias and generation of a QBO-like oscillation in the tropics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, K.; Wilson, R.J.; Hemler, R.S.

    1999-11-15

    The large-scale circulation in the Geophysical Fluid Dynamics Laboratory SKYHI troposphere-stratosphere-mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of ~35 km and another with ~100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in the early Southern Hemisphere winter the South Pole near 1 mb in the model is colder than observed, but the bias is reduced with improved horizontal resolution (from ~70 C in a version with ~300 km grid spacing to less than 10 C in the ~35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere of ~1.5 km, the lower stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and with formation of strong vertical shear layers. This appears to be a spontaneous internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.

  10. Application of STORMTOOLS's simplified flood inundation model with sea level rise to assess impacts to RI coastal areas

    NASA Astrophysics Data System (ADS)

    Spaulding, M. L.

    2015-12-01

    The vision for STORMTOOLS is to provide access to a suite of coastal planning tools (numerical models et al.), available as a web service, that allows widespread accessibility and applicability at high resolution for user-selected coastal areas of interest. The first products developed under this framework were flood inundation maps, with and without sea level rise, for varying return periods for RI coastal waters. The flood mapping methodology is based on using the water level vs return period at a primary NOAA water level gauging station and then spatially scaling the values, based on the predictions of high-resolution storm and wave simulations performed on an unstructured grid by the Army Corps of Engineers North Atlantic Comprehensive Coastal Study (NACCS) for tropical and extratropical storms, to estimate inundation levels for varying return periods. The scaling for the RI application used Newport, RI water levels as the reference point. Predictions are provided for once in 25, 50, and 100 yr return periods (at the upper 95% confidence level), with sea level rises of 1, 2, 3, and 5 ft. Simulations have also been performed for historical hurricane events including 1938, Carol (1954), Bob (1991), and Sandy (2012) and nuisance flooding events with return periods of 1, 3, 5, and 10 yr. Access to the flooding maps is via a web-based map viewer that seamlessly covers all coastal waters of the state at one meter resolution. The GIS structure of the map viewer allows overlays of additional relevant data sets (roads and highways, wastewater treatment facilities, schools, hospitals, emergency evacuation routes, etc.) as desired by the user. The simplified flooding maps are publicly available and are now being implemented for state and community resilience planning and vulnerability assessment activities in response to climate change impacts.
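    A minimal sketch of the scaling idea as described (my illustration under stated assumptions, not the STORMTOOLS code): the still-water level for a chosen return period at the reference NOAA gauge is spread along the coast using the ratio of locally simulated to reference-point surge levels from the high-resolution NACCS runs, and a sea level rise offset is then added.

    def scaled_flood_level(reference_level, local_simulated, reference_simulated,
                           sea_level_rise=0.0):
        """Estimate a local flood elevation from a reference-gauge return-period level.

        reference_level     : water level (ft) for the chosen return period at the
                              reference gauge (e.g. Newport, RI)
        local_simulated     : simulated surge level (ft) at the location of interest
        reference_simulated : simulated surge level (ft) at the reference gauge,
                              from the same high-resolution storm/wave simulations
        sea_level_rise      : optional offset (ft) for a sea level rise scenario
        """
        return reference_level * (local_simulated / reference_simulated) + sea_level_rise

    # illustrative only: 100-yr level of 9.5 ft at the gauge, a site that responds
    # 10% more strongly, and a 2 ft sea level rise scenario
    print(scaled_flood_level(9.5, local_simulated=11.0, reference_simulated=10.0,
                             sea_level_rise=2.0))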

  11. Stochastic Analysis and Probabilistic Downscaling of Soil Moisture

    NASA Astrophysics Data System (ADS)

    Deshon, J. P.; Niemann, J. D.; Green, T. R.; Jones, A. S.

    2017-12-01

    Soil moisture is a key variable for rainfall-runoff response estimation, ecological and biogeochemical flux estimation, and biodiversity characterization, each of which is useful for watershed condition assessment. These applications require not only accurate, fine-resolution soil-moisture estimates but also confidence limits on those estimates and soil-moisture patterns that exhibit realistic statistical properties (e.g., variance and spatial correlation structure). The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution (9-40 km) soil moisture from satellite remote sensing or land-surface models to produce fine-resolution (10-30 m) estimates. The model was designed to produce accurate deterministic soil-moisture estimates at multiple points, but the resulting patterns do not reproduce the variance or spatial correlation of observed soil-moisture patterns. The primary objective of this research is to generalize the EMT+VS model to produce a probability density function (pdf) for soil moisture at each fine-resolution location and time. Each pdf has a mean that is equal to the deterministic soil-moisture estimate, and the pdf can be used to quantify the uncertainty in the soil-moisture estimates and to simulate soil-moisture patterns. Different versions of the generalized model are hypothesized based on how uncertainty enters the model, whether the uncertainty is additive or multiplicative, and which distributions describe the uncertainty. These versions are then tested by application to four catchments with detailed soil-moisture observations (Tarrawarra, Satellite Station, Cache la Poudre, and Nerrigundah). The performance of the generalized models is evaluated by comparing the statistical properties of the simulated soil-moisture patterns to those of the observations and the deterministic EMT+VS model. The versions of the generalized EMT+VS model with normally distributed stochastic components produce soil-moisture patterns with more realistic statistical properties than the deterministic model. Additionally, the results suggest that the variance and spatial correlation of the stochastic soil-moisture variations do not vary consistently with the spatial-average soil moisture.
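
    To make the probabilistic idea concrete, the sketch below shows one simple way to turn a deterministic fine-resolution field into stochastic realizations whose expectation equals the deterministic estimate: an additive, normally distributed perturbation. The variance value and the absence of spatial correlation here are simplifying assumptions for illustration, not the EMT+VS formulation itself.

      import numpy as np

      rng = np.random.default_rng(0)

      deterministic = np.full((100, 100), 0.25)   # hypothetical downscaled field (m3/m3)
      sigma = 0.03                                # assumed standard deviation of the pdf

      def realization(det_field, sigma, rng):
          """One stochastic soil-moisture pattern, clipped to physical bounds."""
          noisy = det_field + rng.normal(0.0, sigma, det_field.shape)
          return np.clip(noisy, 0.0, 1.0)

      ensemble = np.stack([realization(deterministic, sigma, rng) for _ in range(50)])
      print(ensemble.mean(), ensemble.std())      # mean ~0.25, std ~0.03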

  12. Dynamically downscaling predictions for deciduous tree leaf emergence in California under current and future climate.

    PubMed

    Medvigy, David; Kim, Seung Hee; Kim, Jinwon; Kafatos, Menas C

    2016-07-01

    Models that predict the timing of deciduous tree leaf emergence are typically very sensitive to temperature. However, many temperature data products, including those from climate models, have been developed at a very coarse spatial resolution. Such coarse-resolution temperature products can lead to highly biased predictions of leaf emergence. This study investigates how dynamical downscaling of climate models impacts simulations of deciduous tree leaf emergence in California. Models for leaf emergence are forced with temperatures simulated by a general circulation model (GCM) at ~200-km resolution for 1981-2000 and 2031-2050 conditions. GCM simulations are then dynamically downscaled to 32- and 8-km resolution, and leaf emergence is again simulated. For 1981-2000, the regional average leaf emergence date is 30.8 days earlier in 32-km simulations than in ~200-km simulations. Differences between the 32 and 8 km simulations are small and mostly local. The impact of downscaling from 200 to 8 km is ~15 % smaller in 2031-2050 than in 1981-2000, indicating that the impacts of downscaling are unlikely to be stationary.
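
    The study's phenology models are not specified here; as an illustration of why a temperature bias matters, the sketch below uses a generic thermal-time (growing degree-day) rule in which leaves emerge on the first day the accumulated daily mean temperature above a base threshold exceeds a requirement. The base temperature, GDD requirement, and temperature series are all hypothetical.

      def leaf_emergence_day(daily_mean_temps_c, base_c=5.0, required_gdd=150.0):
          """Return the (1-indexed) day on which accumulated GDD first exceeds
          the requirement, or None if it never does."""
          gdd = 0.0
          for day, temp in enumerate(daily_mean_temps_c, start=1):
              gdd += max(temp - base_c, 0.0)
              if gdd >= required_gdd:
                  return day
          return None

      # A grid cell with a 2 C cold bias delays the predicted emergence date:
      fine = [2 + 0.15 * d for d in range(150)]   # hypothetical spring warming series
      coarse = [t - 2.0 for t in fine]            # same series with a cold bias
      print(leaf_emergence_day(fine), leaf_emergence_day(coarse))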

  13. Impact of high resolution land surface initialization in Indian summer monsoon simulation using a regional climate model

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, C. K.; Rajeevan, M.; Rao, S. Vijaya Bhaskara

    2016-06-01

    The direct impact of high-resolution land surface initialization on the forecast bias of a regional climate model over the Indian summer monsoon region in recent years is investigated. Two sets of regional climate model simulations are performed: one with coarse-resolution land surface initial conditions, and a second using high-resolution land surface data for the initial condition. The results show that all monsoon years respond differently to the high-resolution land surface initialization. The drought monsoon year 2009 and extended break periods were more sensitive to the high-resolution land surface initialization. These results suggest that predictions for drought monsoon years can be improved with high-resolution land surface initialization. The results also show that there are differences in the response to the land surface initialization within the monsoon season. Case studies of a heat wave and a monsoon depression simulation show that the model biases were also improved with high-resolution land surface initialization. These results show the need for a better land surface initialization strategy in high-resolution regional models for monsoon forecasting.

  14. Simulation of the spatial resolution of an X-ray imager based on zinc oxide nanowires in an anodic aluminium oxide membrane using the MCNP and OPTICS codes

    NASA Astrophysics Data System (ADS)

    Samarin, S. N.; Saramad, S.

    2018-05-01

    The spatial resolution of a detector is a very important parameter for x-ray imaging. A bulk scintillation detector does not have good spatial resolution because of the spreading of light inside the scintillator. Nanowire scintillators, because of their wave-guiding behavior, can prevent the spreading of light and can improve the spatial resolution of traditional scintillation detectors. The zinc oxide (ZnO) scintillator nanowire, with its simple construction by electrochemical deposition into the regular hexagonal structure of an anodic aluminium oxide membrane, has many advantages. The three-dimensional absorption of X-ray energy in the ZnO scintillator is simulated by a Monte Carlo transport code (MCNP). The transport, attenuation and scattering of the generated photons are simulated by a general-purpose scintillator light response simulation code (OPTICS). The results are compared with a previous publication which used a simulation code for the passage of particles through matter (Geant4). The results verify that this scintillator nanowire structure has a spatial resolution finer than one micrometer.

  15. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-10-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
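
    For reference, the sketch below evaluates the standard damped shifted force (DSF) pair potential of Fennell and Gezelter (2006), in which both the potential and the force vanish at the cutoff. It is shown only to illustrate the kind of short-range Coulomb modification discussed above; the damping parameter, cutoff, charges, and units are illustrative, and this is not the H-AdResS/LAMMPS implementation.

      import math

      def dsf_coulomb(r, qi, qj, alpha=3.0, r_cut=1.0):
          """DSF pair energy in units of qi*qj/(4*pi*eps0*length); zero at r_cut."""
          if r >= r_cut:
              return 0.0
          shift = math.erfc(alpha * r_cut) / r_cut
          force_shift = (math.erfc(alpha * r_cut) / r_cut**2
                         + 2.0 * alpha / math.sqrt(math.pi)
                         * math.exp(-(alpha * r_cut) ** 2) / r_cut)
          return qi * qj * (math.erfc(alpha * r) / r - shift + force_shift * (r - r_cut))

      print(dsf_coulomb(0.3, 1.0, -1.0))   # attractive pair well inside the cutoff
      print(dsf_coulomb(1.0, 1.0, -1.0))   # exactly zero at the cutoff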

  16. Predictive displays for a process-control schematic interface.

    PubMed

    Yin, Shanqing; Wickens, Christopher D; Helander, Martin; Laberge, Jason C

    2015-02-01

    Our objective was to examine the extent to which increasing the precision of predictive (rate-of-change) information in process control will improve performance on a simulated process-control task. Predictive displays have been found to be useful in process control (as well as in the aviation and maritime industries). However, authors of prior research have not examined the extent to which predictive value is increased by increasing predictor resolution, nor has such research tied potential improvements to changes in process-control strategy. Fifty nonprofessional participants each controlled a simulated chemical mixing process (a honey mixer simulation) representative of the operations found in process control. Participants in each of five groups controlled the process with either no predictor or one of four predictors differing in the resolution of their predictions. Increasing predictor resolution generally increased the benefit of prediction over the control condition, although not monotonically so. The best overall performance, combining quality and predictive ability, was obtained by the display of intermediate resolution. The two displays with the lowest resolution were clearly inferior. Predictors with higher resolution are of value but may trade off enhanced sensitivity to variable change (lower-resolution discrete-state predictors) against smoother control action (higher-resolution continuous predictors). The research provides guidelines to the process-control industry regarding displays that can most improve operator performance.

  17. Sensitivity of Hydrologic Extremes to Spatial Resolution of Meteorological Forcings: A Case Study of the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Kao, S. C.; Naz, B. S.; Gangrade, S.; Ashfaq, M.; Rastogi, D.

    2016-12-01

    The magnitude and frequency of hydroclimate extremes are projected to increase in the conterminous United States (CONUS), with significant implications for future water resource planning and flood risk management. Nevertheless, apart from changes in the natural environment, the choice of model spatial resolution can also artificially influence the features of simulated extremes. To better understand how the spatial resolution of meteorological forcings may affect hydroclimate projections, we test the runoff sensitivity using the Variable Infiltration Capacity (VIC) model that was calibrated for each CONUS 8-digit hydrologic unit (HUC8) at 1/24° (~4 km) grid resolution. The 1980-2012 gridded Daymet and PRISM meteorological observations are used to conduct the 1/24° resolution control simulation. Comparative simulations are achieved by smoothing the 1/24° forcing to 1/12° and 1/8° resolutions, which are then used to drive the VIC model for the CONUS. In addition, we also test how the simulated high and low runoff conditions respond to changes in precipitation (±10%) and temperature (+1°C). The results are further analyzed for various types of hydroclimate extremes across different watersheds in the CONUS. This work helps us understand the sensitivity of simulated runoff to different spatial resolutions of climate forcings, as well as its sensitivity to different watershed sizes and to the characteristics of extreme events under future climate conditions.
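
    One simple way to emulate the forcing degradation described above is to block-average the fine grid to a coarser resolution and broadcast the block means back onto the fine grid, so the hydrologic model still runs on its native grid but "sees" smoothed forcing. The sketch below assumes this block-averaging approach; the actual smoothing used in the study may differ, and the synthetic field is illustrative.

      import numpy as np

      def smooth_to_coarser(field, factor):
          """Block-average a 2-D array by `factor` (e.g. 2 for 1/24 -> 1/12 degree)
          and return it re-expanded to the original shape."""
          ny, nx = field.shape
          assert ny % factor == 0 and nx % factor == 0
          blocks = field.reshape(ny // factor, factor, nx // factor, factor)
          means = blocks.mean(axis=(1, 3))
          return np.kron(means, np.ones((factor, factor)))

      precip = np.random.default_rng(1).gamma(2.0, 3.0, size=(48, 48))  # synthetic mm/day
      # Spatial variability drops as the forcing is smoothed to coarser resolutions.
      print(precip.std(), smooth_to_coarser(precip, 2).std(), smooth_to_coarser(precip, 3).std())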

  18. Uncertainty of global summer precipitation in the CMIP5 models: a comparison between high-resolution and low-resolution models

    NASA Astrophysics Data System (ADS)

    Huang, Danqing; Yan, Peiwen; Zhu, Jian; Zhang, Yaocun; Kuang, Xueyuan; Cheng, Jing

    2018-04-01

    The uncertainty of global summer precipitation simulated by 23 CMIP5 CGCMs and the possible impacts of model resolution are investigated in this study. Large uncertainties exist over the tropical and subtropical regions, which can be mainly attributed to the simulation of convective precipitation. High-resolution models (HRMs) and low-resolution models (LRMs) are further investigated to demonstrate their different contributions to the uncertainties of the ensemble mean. The results show that the high-resolution model ensemble mean (HMME) and the low-resolution model ensemble mean (LMME) mitigate the biases between the MME and observations over most continents and oceans, respectively. The HMME simulates more precipitation than the LMME over most oceans, but less precipitation over some continents. The dominant precipitation category over the tropical regions is heavy precipitation in the HRMs and moderate precipitation in the LRMs. The combinations of convective and stratiform precipitation are also quite different: the HMME has a much higher ratio of stratiform precipitation, while the LMME has more convective precipitation. Finally, differences in precipitation between the HMME and LMME can be traced to their differences in the SST simulations via local and remote air-sea interactions.

  19. Numerical Hydrodynamics in Special Relativity.

    PubMed

    Martí, José Maria; Müller, Ewald

    2003-01-01

    This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results of a set of demanding test bench simulations obtained with different numerical SRHD methods are compared. Three applications (astrophysical jets, gamma-ray bursts and heavy ion collisions) of relativistic flows are discussed. An evaluation of various SRHD methods is presented, and future developments in SRHD are analyzed involving extension to general relativistic hydrodynamics and relativistic magneto-hydrodynamics. The review further provides FORTRAN programs to compute the exact solution of a 1D relativistic Riemann problem with zero and nonzero tangential velocities, and to simulate 1D relativistic flows in Cartesian Eulerian coordinates using the exact SRHD Riemann solver and PPM reconstruction. Supplementary material is available for this article at 10.12942/lrr-2003-7 and is accessible for authorized users.

  20. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-02-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations with respect to extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.

  1. Recent progress in simulating galaxy formation from the largest to the smallest scales

    NASA Astrophysics Data System (ADS)

    Faucher-Giguère, Claude-André

    2018-05-01

    Galaxy formation simulations are an essential part of the modern toolkit of astrophysicists and cosmologists alike. Astrophysicists use the simulations to study the emergence of galaxy populations from the Big Bang, as well as the formation of stars and supermassive black holes. For cosmologists, galaxy formation simulations are needed to understand how baryonic processes affect measurements of dark matter and dark energy. Owing to the extreme dynamic range of galaxy formation, advances are driven by novel approaches using simulations with different tradeoffs between volume and resolution. Large-volume but low-resolution simulations provide the best statistics, while higher-resolution simulations of smaller cosmic volumes can be evolved with self-consistent physics and reveal important emergent phenomena. I summarize recent progress in galaxy formation simulations, including major developments in the past five years, and highlight some key areas likely to drive further advances over the next decade.

  2. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    NASA Astrophysics Data System (ADS)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), merely as a case-study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of MJO events covered by the numerical simulations, in addition to extending the integration time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  3. Ultra-wideband sensors for improved magnetic resonance imaging, cardiovascular monitoring and tumour diagnostics.

    PubMed

    Thiel, Florian; Kosch, Olaf; Seifert, Frank

    2010-01-01

    The specific advantages of ultra-wideband electromagnetic remote sensing (UWB radar) make it a particularly attractive technique for biomedical applications. We partially review our activities in utilizing this novel approach for the benefit of high- and ultra-high-field magnetic resonance imaging (MRI) and other applications, e.g., intensive care medicine and biomedical research. We show that our approach is beneficial for applications such as motion tracking for high-resolution brain imaging, owing to the non-contact acquisition of involuntary head motions with high spatial resolution; navigation for cardiac MRI, owing to our interpretation of the detected physiological mechanical contraction of the heart muscle; and MR safety, since we have investigated the influence of high static magnetic fields on myocardial mechanics. From our findings we conclude that UWB radar can serve as a navigator technique for high- and ultra-high-field magnetic resonance imaging while preserving the high-resolution capability of this imaging modality. Furthermore, it can potentially be used to support standard ECG analysis with complementary information where ECG analysis alone fails. Further analytical investigations have proven the feasibility of this method for the detection of intracranial displacements and the rendition of a tumour's contrast-agent-based perfusion dynamics. Besides these analytical approaches, we have carried out FDTD simulations of a complex arrangement mimicking the illumination of a human torso model, incorporating the geometry of the antennas applied.

  4. Semi-physical Simulation of the Airborne InSAR based on Rigorous Geometric Model and Real Navigation Data

    NASA Astrophysics Data System (ADS)

    Changyong, Dou; Huadong, Guo; Chunming, Han; yuquan, Liu; Xijuan, Yue; Yinghui, Zhao

    2014-03-01

    Raw signal simulation is a useful tool for the system design, mission planning, processing algorithm testing, and inversion algorithm design of Synthetic Aperture Radar (SAR). Due to the wide and highly frequent variation of the aircraft's trajectory and attitude, and the low accuracy of the Position and Orientation System (POS) recording data, it is difficult to quantitatively study the sensitivity of the key parameters of the airborne Interferometric SAR (InSAR) system, i.e., the baseline length and inclination, the absolute phase, the orientation of the antennas, etc., resulting in challenges for its applications. Furthermore, the imprecise estimation of the installation offsets between the Global Positioning System (GPS), the Inertial Measurement Unit (IMU) and the InSAR antennas compounds the issue. An airborne InSAR simulation based on a rigorous geometric model and real navigation data is proposed in this paper, providing a way to quantitatively study the key parameters and to evaluate the effect of these parameters on the applications of airborne InSAR, such as photogrammetric mapping, high-resolution Digital Elevation Model (DEM) generation, and surface deformation mapping by Differential InSAR technology. The simulation can also provide a reference for the optimal design of the InSAR system and for the improvement of InSAR data processing technologies such as motion compensation, imaging, image co-registration, and application parameter retrieval.

  5. The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation

    NASA Astrophysics Data System (ADS)

    Lofverstrom, Marcus; Liakka, Johan

    2018-04-01

    Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty to reproduce the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.

  6. Air quality modeling for the urban Jackson, Mississippi Region using a high resolution WRF/Chem model.

    PubMed

    Yerramilli, Anjaneyulu; Dodla, Venkata B; Desamsetti, Srinivas; Challa, Srinivas V; Young, John H; Patrick, Chuck; Baham, Julius M; Hughes, Robert L; Yerramilli, Sudha; Tuluri, Francis; Hardy, Mark G; Swanier, Shelton J

    2011-06-01

    In this study, an attempt was made to simulate air quality, with reference to ozone, over the Jackson (Mississippi) region using the online WRF/Chem (Weather Research and Forecasting-Chemistry) model. The WRF/Chem model has the advantage of integrating the meteorological and chemistry modules on the same computational grid with the same physical parameterizations, and it includes the feedback between atmospheric chemistry and physical processes. The model was designed with three nested domains, the innermost domain covering the study region at a resolution of 1 km. The model was integrated continuously for 48 hours starting from 0000 UTC on 6 June 2006, and the evolution of surface ozone and other precursor pollutants was analyzed. The model-simulated atmospheric flow fields and distributions of NO2 and O3 were evaluated for each of the three different time periods. The GIS-based spatial distribution maps for ozone, its precursors NO, NO2, CO and HONO, and the back trajectories indicate that mobile sources in Jackson, Ridgeland and Madison contribute significantly to their formation. The present study demonstrates the applicability of the WRF/Chem model for generating quantitative information at high spatial and temporal resolution for the development of decision support systems for air quality regulatory agencies and health administrators.

  7. Using triple gamma coincidences with a pixelated semiconductor Compton-PET scanner: a simulation study

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; Chmeissani, M.

    2016-01-01

    The Voxel Imaging PET (VIP) Pathfinder project presents a novel design using pixelated semiconductor detectors for nuclear medicine applications to achieve the intrinsic image quality limits set by physics. The conceptual design can be extended to a Compton gamma camera. The use of a pixelated CdTe detector with voxel sizes of 1 × 1 × 2 mm3 guarantees optimal energy and spatial resolution. However, the limited time resolution of semiconductor detectors makes it impossible to use Time Of Flight (TOF) with VIP PET. TOF is used in order to improve the signal to noise ratio (SNR) by using only the most probable portion of the Line-Of-Response (LOR) instead of its entire length. To overcome the limitation of CdTe time resolution, we present in this article a simulation study using β+-γ emitting isotopes with a Compton-PET scanner. When the β+ annihilates with an electron it produces two gammas which produce a LOR in the PET scanner, while the additional gamma, when scattered in the scatter detector, provides a Compton cone that intersects with the aforementioned LOR. The intersection indicates, within a few mm of uncertainty along the LOR, the origin of the beta-gamma decay. Hence, one can limit the part of the LOR used by the image reconstruction algorithm.
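
    The localization step described above is, geometrically, the intersection of a cone with a line. The toy Python routine below (not the VIP reconstruction code) parameterizes the LOR as p0 + t*d and solves the quadratic for points lying on a cone with a given apex, axis, and half-angle; all input vectors and the 30-degree half-angle are illustrative.

      import numpy as np

      def cone_lor_intersections(p0, d, apex, axis, half_angle):
          """Return the points where the line p0 + t*d crosses the cone surface."""
          d = d / np.linalg.norm(d)
          axis = axis / np.linalg.norm(axis)
          cos2 = np.cos(half_angle) ** 2
          w0 = p0 - apex
          a = np.dot(d, axis) ** 2 - cos2
          b = 2.0 * (np.dot(w0, axis) * np.dot(d, axis) - cos2 * np.dot(w0, d))
          c = np.dot(w0, axis) ** 2 - cos2 * np.dot(w0, w0)
          disc = b * b - 4.0 * a * c
          if disc < 0.0:
              return []                                 # the LOR misses the cone
          roots = [(-b + s * np.sqrt(disc)) / (2.0 * a) for s in (+1.0, -1.0)]
          points = [p0 + t * d for t in roots]
          # keep only the forward nappe of the cone
          return [p for p in points if np.dot(p - apex, axis) >= 0.0]

      lor_point = np.array([0.0, 0.0, 0.0])             # a point on the LOR (mm)
      lor_dir = np.array([1.0, 0.0, 0.0])               # LOR direction
      print(cone_lor_intersections(lor_point, lor_dir,
                                   apex=np.array([50.0, 80.0, 0.0]),
                                   axis=np.array([-0.5, -1.0, 0.0]),
                                   half_angle=np.radians(30.0)))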

  8. Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.

    2017-12-01

    The terra incognita (TI), or gray zone, in simulations is a range of grid spacing comparable to the diameter of the most energetic eddies. Grid spacing in mesoscale simulations is much larger than these eddies, and turbulence is parameterized with one-dimensional vertical mixing. Large eddy simulations (LES) have grid spacing much smaller than the energetic eddies and use three-dimensional models of turbulence. Studies of convective weather use convection-permitting resolutions, which lie in the TI. Neither mesoscale turbulence models nor LES models are designed for the TI, so TI turbulence parameterization needs to be discussed. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures, i.e., the Smagorinsky model, a Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, the use of these turbulence models did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When the model resolution is coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce a large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities: the eddy diffusivity of the DRM is on the order of 1 for the coarse-resolution simulations, whereas the eddy diffusivity of the Smagorinsky and TKE models is on the order of 100. Splitting the eddy viscosity/diffusivity scalars into vertical and horizontal components by using different length scales and strain-rate components helps to reduce the errors, but does not completely remedy the problem. In contrast, the coarse-resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
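
    As a reference point for the eddy-viscosity comparison above, the sketch below evaluates the classic Smagorinsky eddy viscosity, nu_t = (C_s * Delta)^2 * |S|, where |S| is the strain-rate magnitude. It is only meant to make the scaling concrete; the constant, the grid spacing, and the 2-D test velocity field are illustrative, not the study's configuration.

      import numpy as np

      def smagorinsky_nu_t(u, v, dx, cs=0.17):
          """2-D Smagorinsky eddy viscosity on a uniform grid (m^2/s)."""
          dudy, dudx = np.gradient(u, dx, dx)   # axis 0 = y, axis 1 = x
          dvdy, dvdx = np.gradient(v, dx, dx)
          s11, s22 = dudx, dvdy
          s12 = 0.5 * (dudy + dvdx)
          strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
          return (cs * dx) ** 2 * strain_mag

      x = np.linspace(0.0, 2.0 * np.pi, 64)
      u = np.sin(x)[None, :] * np.ones((64, 1))     # simple shear-like test field (m/s)
      v = np.zeros((64, 64))
      print(smagorinsky_nu_t(u, v, dx=100.0).max()) # maximum eddy viscosity for this field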

  9. A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers

    NASA Astrophysics Data System (ADS)

    Bassett, Gene Marcel

    1993-01-01

    Instabilities affecting the propagation of supersonic gaseous jets have been studied using high resolution computer simulations with the Piecewise-Parabolic-Method (PPM). These results are discussed in relation to jets from galactic nuclei. These studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations with different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers and densities perturbed by incident sound waves imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is tau_g = (546 ± 24) (M/4)^{1.7} (A_pert/0.02)^{-0.4} delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_pert is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, v_mode = c_ext v_relative/(c_jet + c_ext). For high resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even perturbation is initially higher in amplitude than the odd perturbation. For low resolution simulations, the odd mode dominates even for a stronger even mode perturbation. In high resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet. In low resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
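
    The growth-time scaling quoted above can be evaluated directly; the small sketch below does so, keeping the result in units of delta/c and ignoring the quoted +/- 24 uncertainty on the coefficient. The example Mach number and perturbation amplitude are arbitrary illustrative inputs.

      def growth_time(mach, a_pert, delta_over_c=1.0, coeff=546.0):
          """Kink-mode growth time tau_g in the same units as delta/c."""
          return coeff * (mach / 4.0) ** 1.7 * (a_pert / 0.02) ** (-0.4) * delta_over_c

      print(growth_time(4.0, 0.02))   # 546 delta/c by construction
      print(growth_time(8.0, 0.01))   # higher Mach and smaller amplitude both lengthen tau_g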

  10. What have we learned from the German consortium project STORM aiming at high-resolution climate simulations?

    NASA Astrophysics Data System (ADS)

    von Storch, Jin-Song

    2014-05-01

    The German consortium STORM was established to explore high-resolution climate simulations using the high-performance computer housed at the German Climate Computing Center (DKRZ). One of the primary goals is to quantify the effect of unresolved (and parametrized) processes on climate sensitivity. We use ECHAM6/MPIOM, the coupled atmosphere-ocean model developed at the Max Planck Institute for Meteorology. The resolution is T255L95 for the atmosphere and 1/10 degree with 80 vertical levels for the ocean. We discuss results of stand-alone runs, i.e., the ocean-only simulation driven by the NCEP/NCAR reanalysis and the atmosphere-only AMIP-type simulation. Increasing resolution leads to a redistribution of biases, even though some improvements, both in the atmosphere and in the ocean, can clearly be attributed to the increase in resolution. We also present new insights into ocean mesoscale eddies, in particular their effects on the ocean's energetics. Finally, we discuss the status and problems of the coupled high-resolution runs.

  11. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    NASA Astrophysics Data System (ADS)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide an overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans that are consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to become common in the near future as global-scale remotely sensed data emerge and computing resources advance rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM) to describe two-dimensional overland processes, which require excessive computing resources to manipulate large matrices (implicit schemes) or small simulation time intervals to maintain the stability of the solution (explicit schemes). Others make unrealistic assumptions, such as a constant overland flow velocity, to reduce the computational load of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds of different landscapes and sizes, from 3.5 km2 to 2,800 km2, at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges are discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved, respectively, by employing advanced computing techniques and hydrological understanding, by using remotely sensed hydrological observations such as soil moisture and radar rainfall depth, and by sharing the model and its code in the public domain.

  12. A 30m resolution hydrodynamic model of the entire conterminous United States.

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Neal, J. C.; Smith, A.; Sampson, C.; Johnson, K.; Wing, O.

    2016-12-01

    In this paper we describe the development and validation of a 30 m resolution hydrodynamic model covering the entire conterminous United States. The model can be used to simulate inundation and water depths resulting from either return period flows (so equivalent to FEMA Flood Insurance Rate Maps), hindcasts of historical events, or forecasts of future river flow from a rainfall-runoff or land surface model. As topographic data the model uses the U.S. Geological Survey National Elevation Dataset or NED, and return period flows are generated using a regional flood frequency analysis methodology (Smith et al., 2015. Worldwide flood frequency estimation. Water Resources Research, 51, 539-553). Flood defences nationwide are represented using data from the US Army Corps of Engineers. Using these data, flows are simulated using an explicit and highly efficient finite difference solution of the local inertial form of the Shallow Water equations identical to that implemented in the LISFLOOD-FP model. Even with this efficient numerical solution, a simulation at this resolution over a whole continent is a huge undertaking, and a variety of High Performance Computing technologies therefore need to be employed to make these simulations possible. The size of the output datasets is also challenging, and to solve this we use the GIS and graphical display functions of Google Earth Engine to facilitate easy visualisation and interrogation of the results. The model is validated against the return period flood extents contained in FEMA Flood Insurance Rate Maps and against real data from the Texas 2015 flood event, which was hindcast using the model. Finally, we present an application of the model to the Upper Mississippi river basin where simulations both with and without flood defences are used to determine floodplain areas benefitting from protection in order to quantify the benefits of flood defence spending.
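
    For readers unfamiliar with the local inertial formulation, the sketch below shows a single momentum update of the scheme of Bates et al. (2010), the formulation the abstract says is shared with LISFLOOD-FP. It is only a minimal illustration assuming a single cell face with prescribed depth, slope, roughness, and time step; the real model also handles wetting and drying, adaptive time stepping, flood defences, and much more.

      G = 9.81  # gravity (m/s^2)

      def local_inertial_flux(q, h_flow, dhdx, n_mann, dt):
          """Update unit-width discharge q (m^2/s) across one cell face.

          q      -- discharge per unit width from the previous time step
          h_flow -- effective flow depth between the cells (m)
          dhdx   -- water-surface slope between the cells (-)
          n_mann -- Manning's roughness coefficient
          dt     -- time step (s)
          """
          numerator = q - G * h_flow * dt * dhdx
          denominator = 1.0 + G * h_flow * dt * n_mann**2 * abs(q) / h_flow ** (10.0 / 3.0)
          return numerator / denominator

      # Example: 2 m deep flow, gentle adverse water-surface slope, n = 0.05, 10 s step.
      print(local_inertial_flux(q=1.0, h_flow=2.0, dhdx=1e-4, n_mann=0.05, dt=10.0))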

  13. Novel detector design for reducing intercell x-ray cross-talk in the variable resolution x-ray CT scanner: a Monte Carlo study.

    PubMed

    Arabi, Hosein; Asl, Ali Reza Kamali; Ay, Mohammad Reza; Zaidi, Habib

    2011-03-01

    The variable resolution x-ray (VRX) CT scanner provides substantial improvement in the spatial resolution by matching the scanner's field of view (FOV) to the size of the object being imaged. Intercell x-ray cross-talk is one of the most important factors limiting the spatial resolution of the VRX detector. In this work, a new cell arrangement in the VRX detector is suggested to decrease the intercell x-ray cross-talk. The idea is to orient the detector cells toward the opening end of the detector. Monte Carlo simulations were used for performance assessment of the oriented cell detector design. Previously published design parameters and simulation results of x-ray cross-talk for the VRX detector were used for model validation using the GATE Monte Carlo package. In the first step, the intercell x-ray cross-talk of the actual VRX detector model was calculated as a function of the FOV. The obtained results indicated an optimum cell orientation angle of 28 degrees to minimize the x-ray cross-talk in the VRX detector. Thereafter, the intercell x-ray cross-talk in the oriented cell detector was modeled and quantified. The intercell x-ray cross-talk in the actual detector model was considerably high, reaching up to 12% at FOVs from 24 to 38 cm. The x-ray cross-talk in the oriented cell detector was less than 5% for all possible FOVs, except 40 cm (maximum FOV). The oriented cell detector could provide considerable decrease in the intercell x-ray cross-talk for the VRX detector, thus leading to significant improvement in the spatial resolution and reduction in the spatial resolution nonuniformity across the detector length. The proposed oriented cell detector is the first dedicated detector design for the VRX CT scanners. Application of this concept to multislice and flat-panel VRX detectors would also result in higher spatial resolution.

  14. Asymmetric Eyewall Vertical Motion in a High-Resolution Simulation of Hurricane Bonnie (1998)

    NASA Technical Reports Server (NTRS)

    Braun, Scott A.; Montgomery, Michael T.; Pu, Zhao-Xia

    2003-01-01

    This study examines a high-resolution simulation of Hurricane Bonnie. Results from the simulation will be compared to the conceptual model of Heymsfield et al. (2001) to determine the extent to which this conceptual model explains vertical motions and precipitation growth in the eyewall.

  15. Uncertainty Propagation of Non-Parametric-Derived Precipitation Estimates into Multi-Hydrologic Model Simulations

    NASA Astrophysics Data System (ADS)

    Bhuiyan, M. A. E.; Nikolopoulos, E. I.; Anagnostou, E. N.

    2017-12-01

    Quantifying the uncertainty of global precipitation datasets is beneficial when using these precipitation products in hydrological applications, because the propagation of precipitation uncertainty through hydrologic modeling can significantly affect the accuracy of the simulated hydrologic variables. In this research the Iberian Peninsula is used as the study area, with a study period spanning eleven years (2000-2010). This study evaluates the performance of multiple hydrologic models forced with combined global rainfall estimates derived using a Quantile Regression Forests (QRF) technique. In the QRF technique, three satellite precipitation products (CMORPH, PERSIANN, and 3B42 (V7)), an atmospheric reanalysis precipitation and air temperature dataset, satellite-derived near-surface daily soil moisture data, and a terrain elevation dataset are utilized. A high-resolution precipitation dataset driven by ground-based observations (SAFRAN), available at 5 km/1 h resolution, is used as the reference. Through the QRF blending framework, the stochastic error model produces error-adjusted ensemble precipitation realizations, which are used to force four global hydrological models (JULES (Joint UK Land Environment Simulator), WaterGAP3 (Water-Global Assessment and Prognosis), ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems) and SURFEX (Surface Externalisée)) to simulate three hydrologic variables (surface runoff, subsurface runoff and evapotranspiration). The models are also forced with the reference precipitation to generate reference-based hydrologic simulations. This study presents a comparative analysis of multiple hydrologic model simulations for different hydrologic variables and the impact of the blending algorithm on the simulated hydrologic variables. Results show how precipitation uncertainty propagates through the different hydrologic model structures and how the blending reduces the error in the simulated hydrologic variables.
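
    As a rough, hypothetical proxy for the blending step (not the authors' implementation), the sketch below trains a random forest on stand-in predictor fields against stand-in reference rainfall and draws ensemble members from the spread of per-tree predictions. Note that a true Quantile Regression Forest uses the leaf-level response distribution rather than per-tree means; all data here are synthetic.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 4))                             # stand-in predictor fields
      y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0.5, 1.0, n)   # stand-in reference rainfall

      forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      def rainfall_ensemble(x_new, forest, n_members=20, rng=rng):
          """Draw ensemble members from the empirical per-tree prediction distribution."""
          per_tree = np.array([tree.predict(x_new.reshape(1, -1))[0]
                               for tree in forest.estimators_])
          return rng.choice(per_tree, size=n_members, replace=True)

      members = rainfall_ensemble(np.array([1.0, 0.0, 0.0, 0.0]), forest)
      print(members.mean(), members.std())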

  16. Partially coherent X-ray wavefront propagation simulations including grazing-incidence focusing optics.

    PubMed

    Canestrari, Niccolo; Chubar, Oleg; Reininger, Ruben

    2014-09-01

    X-ray beamlines in modern synchrotron radiation sources make extensive use of grazing-incidence reflective optics, in particular Kirkpatrick-Baez elliptical mirror systems. These systems can focus the incoming X-rays down to nanometer-scale spot sizes while maintaining relatively large acceptance apertures and high flux in the focused radiation spots. In low-emittance storage rings and in free-electron lasers such systems are used with partially or even nearly fully coherent X-ray beams and often target diffraction-limited resolution. Therefore, their accurate simulation and modeling has to be performed within the framework of wave optics. Here the implementation and benchmarking of a wave-optics method for the simulation of grazing-incidence mirrors based on the local stationary-phase approximation or, in other words, the local propagation of the radiation electric field along geometrical rays, is described. The proposed method is CPU-efficient and fully compatible with the numerical methods of Fourier optics. It has been implemented in the Synchrotron Radiation Workshop (SRW) computer code and extensively tested against the geometrical ray-tracing code SHADOW. The test simulations have been performed for cases without and with diffraction at mirror apertures, including cases where the grazing-incidence mirrors can be hardly approximated by ideal lenses. Good agreement between the SRW and SHADOW simulation results is observed in the cases without diffraction. The differences between the simulation results obtained by the two codes in diffraction-dominated cases for illumination with fully or partially coherent radiation are analyzed and interpreted. The application of the new method for the simulation of wavefront propagation through a high-resolution X-ray microspectroscopy beamline at the National Synchrotron Light Source II (Brookhaven National Laboratory, USA) is demonstrated.

  17. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations with respect to extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
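
    To illustrate the averaging idea behind GAPPARD, the sketch below combines a single undisturbed simulation (output as a function of patch age) with a patch-age distribution implied by a constant disturbance frequency, here assumed geometric, p(a) proportional to (1 - f)^a. The biomass growth curve and the disturbance frequency are invented for illustration and are not the paper's parameters.

      import numpy as np

      ages = np.arange(0, 400)                          # patch age in years
      biomass = 250.0 * (1.0 - np.exp(-ages / 80.0))    # hypothetical undisturbed growth (t/ha)

      def gappard_mean(output_vs_age, disturbance_freq):
          """Expected landscape-mean output under a constant disturbance frequency."""
          age_idx = np.arange(len(output_vs_age))
          weights = disturbance_freq * (1.0 - disturbance_freq) ** age_idx
          weights /= weights.sum()                      # normalize the truncated distribution
          return float(np.dot(weights, output_vs_age))

      print(gappard_mean(biomass, disturbance_freq=1.0 / 100.0))  # mean over a mosaic of patch ages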

  18. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    NASA Astrophysics Data System (ADS)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high-resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to the representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom Python scripts to model and analyze the dynamics of coastal topography (Figure 1), and we outline the development of a coastal analysis toolbox. The simulations focus on a particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability. Figure 1. Isosurfaces representing the evolution of the shoreline and a z = 4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.

  19. Community Radiative Transfer Model Applications - A Study of the Retrieval of Trace Gases in the Atmosphere from Cross-track Infrared Sounder (CrIS) Data of a Full-spectral Resolution

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Nalli, N. R.; Tan, C.; Zhang, K.; Iturbide, F.; Wilson, M.; Zhou, L.

    2015-12-01

    The Community Radiative Transfer Model (CRTM) [3] operationally supports satellite radiance assimilation for weather forecasting, sensor data verification, and the retrieval of satellite products. The CRTM has been applied to UV and visible sensors as well as infrared and microwave sensors. This paper demonstrates applications of the CRTM, in particular radiative transfer in the retrieval algorithm. The NOAA Unique CrIS/ATMS Processing System (NUCAPS) operationally generates vertical profiles of atmospheric temperature (AVTP) and moisture (AVMP) from Suomi NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) measurements. Current operational CrIS data have reduced spectral resolution: 1.25 cm-1 for the mid-wave band and 2.5 cm-1 for the short-wave band [1]. The reduced spectral data largely degrade the retrieval accuracy of trace gases. CrIS full-spectral data are now also available, with a single spectral resolution of 0.625 cm-1 for all three bands: the long-wave, mid-wave, and short-wave bands. The CrIS full-spectral resolution data are critical to the retrieval of trace gases such as O3, CO [2], CO2, and CH4. In this paper, we use the CRTM to study the impact of the CrIS spectral resolution on the retrieval accuracy of trace gases. The newly released CRTM version 2.2.1 can simulate Hamming-apodized CrIS radiances at full spectral resolution. We developed a small utility that converts the CRTM-simulated radiance to un-apodized radiance. The latter has better spectral information, which can be helpful for the retrievals of the trace gases. The retrievals will be validated using NWP model data as well as data collected during AEROSE expeditions [4]. We will also discuss the sensitivity of the trace gas retrievals to apodized versus un-apodized radiances. References: [1] Gambacorta, A., et al. (2013), IEEE Lett., 11(9), doi:10.1109/LGRS.2014.230364, 1639-1643. [2] Han, Y., et al. (2013), JGR, 118, 12,734-12,748, doi:10.1002/2013JD020344. [3] Liu, Q., and S. Boukabara (2013), Remote Sens. Environ., 140 (2014), 744-754. [4] Nalli, N. R., et al. (2011), Bulletin of the American Meteorological Society, (92), 765-789.

  20. A New High Resolution Climate Dataset for Climate Change Impacts Assessments in New England

    NASA Astrophysics Data System (ADS)

    Komurcu, M.; Huber, M.

    2016-12-01

    Assessing regional impacts of climate change (such as changes in extreme events, land surface hydrology, water resources, energy, ecosystems and economy) requires much higher resolution climate variables than those available from global model projections. While it is possible to run global models at higher resolution, the high computational cost associated with these simulations prevents their use in such a manner. To alleviate this problem, dynamical downscaling offers a method to deliver higher resolution climate variables. As part of an NSF EPSCoR-funded interdisciplinary effort to assess climate change impacts on New Hampshire ecosystems, hydrology and economy (the New Hampshire Ecosystems and Society project), we create a unique high-resolution climate dataset for New England. We dynamically downscale global model projections under a high-impact emissions scenario using the Weather Research and Forecasting model (WRF) with three nested grids of 27, 9 and 3 km horizontal resolution, with the highest-resolution innermost grid focused over New England. We prefer dynamical downscaling over other methods such as statistical downscaling because it employs physical equations to progressively simulate climate variables as atmospheric processes interact with surface processes, emissions, radiation, clouds, precipitation and other model components, hence eliminating fixed relationships between variables. In addition to simulating mean changes in regional climate, dynamical downscaling also allows for the simulation of climate extremes that significantly alter climate change impacts. We simulate three time slices: 2006-2015, 2040-2060 and 2080-2100. This new high-resolution climate dataset (with more than 200 variables saved at hourly intervals for the highest-resolution domain and six-hourly intervals for the outer two domains), along with the model input and restart files used in our WRF simulations, will be publicly available to the broader scientific community to support in-depth climate change impacts assessments for New England. We present results focusing on future changes in New England extreme events.

  1. Fast-time Simulation of an Automated Conflict Detection and Resolution Concept

    NASA Technical Reports Server (NTRS)

    Windhorst, Robert; Erzberger, Heinz

    2006-01-01

    This paper investigates the effect on the National Airspace System of reducing air traffic controller workload by automating conflict detection and resolution. The Airspace Concept Evaluation System is used to perform simulations of the Cleveland Center with conventional and with automated conflict detection and resolution concepts. Results show that the automated conflict detection and resolution concept significantly decreases the growth of delay as traffic demand is increased in en-route airspace.

  2. How to understand atomistic molecular dynamics simulations of RNA and protein-RNA complexes?

    PubMed

    Šponer, Jiří; Krepl, Miroslav; Banáš, Pavel; Kührová, Petra; Zgarbová, Marie; Jurečka, Petr; Havrila, Marek; Otyepka, Michal

    2017-05-01

    We provide a critical assessment of explicit-solvent atomistic molecular dynamics (MD) simulations of RNA and protein/RNA complexes, written primarily for non-specialists with an emphasis to explain the limitations of MD. MD simulations can be likened to hypothetical single-molecule experiments starting from single atomistic conformations and investigating genuine thermal sampling of the biomolecules. The main advantage of MD is the unlimited temporal and spatial resolution of positions of all atoms in the simulated systems. Fundamental limitations are the short physical time-scale of simulations, which can be partially alleviated by enhanced-sampling techniques, and the highly approximate atomistic force fields describing the simulated molecules. The applicability and present limitations of MD are demonstrated on studies of tetranucleotides, tetraloops, ribozymes, riboswitches and protein/RNA complexes. Wisely applied simulations respecting the approximations of the model can successfully complement structural and biochemical experiments. WIREs RNA 2017, 8:e1405. doi: 10.1002/wrna.1405 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  3. Validation of a small-animal PET simulation using GAMOS: a GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Cañadas, M.; Arce, P.; Rato Mendes, P.

    2011-01-01

    Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform which has been developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics, high energy physics, etc. Our simulation model includes the relevant characteristics of the ClearPET system, namely, the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals and the storage of single events for off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values equal to 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ by less than 1% for a 250-750 keV energy window. Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences, for the mouse-sized phantom, was 250.8 kcps reached at 0.95 MBq mL⁻¹ and the simulated peak was 247.1 kcps at 0.87 MBq mL⁻¹). Agreement better than 3% was obtained in the scatter fraction comparison study. We also measured and simulated a mini-Derenzo phantom, obtaining images with similar quality using iterative reconstruction methods. We conclude that the overall performance of the simulation shows good agreement with the measured results and validates the GAMOS package for PET applications. Furthermore, its ease of use and flexibility recommend it as an excellent tool to optimize design features or image reconstruction techniques.

  4. Lagrangian transported MDF methods for compressible high speed flows

    NASA Astrophysics Data System (ADS)

    Gerlinger, Peter

    2017-06-01

    This paper deals with the application of thermochemical Lagrangian MDF (mass density function) methods for compressible sub- and supersonic RANS (Reynolds-Averaged Navier-Stokes) simulations. A new approach to treat molecular transport is presented. This technique on the one hand ensures numerical stability of the particle solver in laminar regions of the flow field (e.g. in the viscous sublayer) and on the other hand takes differential diffusion into account. It is shown in a detailed analysis that the new method correctly predicts first- and second-order moments on the basis of conventional modeling approaches. Moreover, a number of challenges for MDF particle methods in high-speed flows are discussed, e.g. high-aspect-ratio grid cells close to solid walls, wall heat transfer, shock resolution, and problems arising from statistical noise which may cause artificial shock systems in supersonic flows. A Mach 2 supersonic mixing channel with multiple shock reflections and a model rocket combustor simulation demonstrate the eligibility of this technique for practical applications. Both test cases are simulated successfully for the first time with a hybrid finite-volume (FV)/Lagrangian particle solver (PS).

  5. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) on the component extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, in order to have both high spatial resolution of beamformer and be able to take on multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli were also used to compare performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
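
    The source-space ICA pipeline described above (beamformer first, then SVD + ICA) can be sketched with generic building blocks as follows; the unit-gain LCMV weights, regularization and array shapes are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of a source-space ICA pipeline: LCMV beamformer first,
# then SVD dimensionality reduction, then ICA on the source time series.
import numpy as np
from sklearn.decomposition import FastICA

def lcmv_weights(L, C, reg=0.05):
    """Unit-gain LCMV beamformer weights, one row per source location.
    L: (n_sensors, n_sources) lead fields; C: sensor covariance."""
    Ci = np.linalg.inv(C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0]))
    W = []
    for l in L.T:                       # one lead-field column per source
        w = Ci @ l / (l @ Ci @ l)       # unit gain: w.T @ l = 1
        W.append(w)
    return np.array(W)                  # (n_sources, n_sensors)

def source_space_ica(X, L, n_components=10):
    """X: (n_sensors, n_times) MEG data. Returns ICA time courses and maps."""
    C = np.cov(X)
    W = lcmv_weights(L, C)
    S = W @ X                           # source-space time series
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    reduced = U[:, :n_components].T @ S          # SVD dimensionality reduction
    ica = FastICA(n_components=n_components, random_state=0)
    components = ica.fit_transform(reduced.T).T  # independent time courses
    maps = U[:, :n_components] @ ica.mixing_     # corresponding source maps
    return components, maps
```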

  6. All-Atom Multiscale Molecular Dynamics Theory and Simulation of Self-Assembly, Energy Transfer and Structural Transition in Nanosystems

    NASA Astrophysics Data System (ADS)

    Espinosa Duran, John Michael

    The study of nanosystems and their emergent properties requires the development of multiscale computational models, theories and methods that preserve atomic and femtosecond resolution, to reveal details that cannot be resolved experimentally today. Considering this, three long-time-scale phenomena were studied using molecular dynamics and multiscale methods: self-assembly of organic molecules on graphite, energy transfer in nanosystems, and structural transition in vault nanoparticles. Molecular dynamics simulations of the self-assembly of alkoxybenzonitriles with different tail lengths on graphite were performed to learn about intermolecular interactions and phases exhibited by self-organized materials. This is important for the design of ordered self-assembled organic photovoltaic materials with greater efficiency than the disordered blends. Simulations revealed surface dynamical behaviors that cannot be resolved experimentally today due to the lack of spatiotemporal resolution. Atom-resolved structures predicted by the simulations agreed with scanning tunneling microscopy images and unit cell measurements. Then, a multiscale theory based on the energy density as a field variable is developed to study energy transfer in nanoscale systems. For applications like photothermal microscopy or cancer phototherapy, it is necessary to understand how energy is transferred to/from nanosystems. This multiscale theory could be applied in that context and is tested here for cubic nanoparticles immersed in water, with energy being transferred to/from the nanoparticle. The theory predicts the energy transfer dynamics and reveals phenomena that cannot be described by current phenomenological theories. Finally, temperature-triggered structural transitions were revealed for vault nanoparticles using molecular dynamics and multiscale simulations. The vault is a football-shaped supramolecular assembly, very distinct from the commonly observed icosahedral viruses. It has very promising applications in drug delivery and has been extensively studied experimentally. Sub-microsecond multiscale simulations of the vault at 310 K revealed the opening and closing of fractures near the shoulder while preserving the overall structure. This fracture mechanism could explain the uptake and release of small drugs while maintaining the overall structure. Higher-temperature simulations show the generation of large fractures near the waist, which enables interaction of the external medium with the inner vault residues. Simulation results agreed with microscopy and spectroscopy measurements, and revealed new structures and mechanisms.

  7. Algorithmic trends in computational fluid dynamics; The Institute for Computer Applications in Science and Engineering (ICASE)/LaRC Workshop, NASA Langley Research Center, Hampton, VA, US, Sep. 15-17, 1991

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)

    1993-01-01

    The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.

  8. Predictive searching algorithm for Fourier ptychography

    NASA Astrophysics Data System (ADS)

    Li, Shunkai; Wang, Yifan; Wu, Weichen; Liang, Yanmei

    2017-12-01

    By capturing a set of low-resolution images under different illumination angles and stitching them together in the Fourier domain, Fourier ptychography (FP) is capable of providing a high-resolution image with a large field of view. Despite its validity, the long acquisition time limits its real-time application. We propose an incomplete sampling scheme in this paper, termed the predictive searching algorithm, to shorten the acquisition and recovery time. Informative sub-regions of the sample's spectrum are searched and the corresponding images of the most informative directions are captured for spectrum expansion. Its effectiveness is validated by both simulated and experimental results; the data requirement is reduced by ~64% to ~90% without sacrificing image reconstruction quality compared with the conventional FP method.
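
    For orientation, the sketch below shows one spectrum-stitching update of conventional Fourier ptychography (the baseline that the predictive searching algorithm accelerates): the measured low-resolution amplitude replaces the estimated amplitude within the pupil sub-region selected by a given illumination angle. The array layout and pupil handling are illustrative assumptions.

```python
# Hedged sketch of a single conventional FP spectrum-stitching update.
import numpy as np

def fp_update(spectrum, pupil, measured_amp, cy, cx):
    """spectrum: high-res object spectrum estimate (complex 2D array).
    pupil: pupil function of the low-res system (small 2D array).
    measured_amp: sqrt of the measured low-res intensity for this angle.
    (cy, cx): top-left corner of the sub-region selected by this illumination."""
    h, w = pupil.shape
    sub = spectrum[cy:cy + h, cx:cx + w] * pupil
    low_res = np.fft.ifft2(np.fft.ifftshift(sub))             # low-res complex field
    low_res = measured_amp * np.exp(1j * np.angle(low_res))   # keep phase, swap amplitude
    new_sub = np.fft.fftshift(np.fft.fft2(low_res))
    # write back only inside the pupil support
    spectrum[cy:cy + h, cx:cx + w] = np.where(
        pupil != 0, new_sub, spectrum[cy:cy + h, cx:cx + w])
    return spectrum
```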

  9. [Transmission efficiency analysis of near-field fiber probe using FDTD simulation].

    PubMed

    Huang, Wei; Dai, Song-Tao; Wang, Huai-Yu; Zhou, Yun-Song

    2011-10-01

    A fiber probe is the key component of near-field optical technology, which is widely used in high-resolution imaging, spectroscopy detection and nano-processing. How to improve the transmission efficiency of the fiber probe is a very important problem in the application of near-field optical technology. Based on the results of 3D-FDTD computations, the dependence of the transmission efficiency on the cone angle, the aperture diameter, the wavelength and the thickness of the metal cladding is revealed. The authors also compare the naked probe and the probe with metal cladding in terms of transmission efficiency and spatial resolution. In addition, the authors discovered a fluctuation of the transmission efficiency as the wavelength of the incident laser increases.

  10. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas

    2018-04-01

    In the framework of developing a global modeling system which can facilitate modeling studies on the Arctic Ocean and high- to midlatitude linkage, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. The most pronounced improvement is in the Arctic intermediate layer, in terms of both the Atlantic Water (AW) mean state and its variability. The deepening and thickening bias of the AW layer, a common issue found in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of the CAA throughflow influences not only the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we stress that other efforts, including the improvement of parameterizations, are still required.

  11. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable-resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand whether the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding the resolution sensitivity of global atmospheric models.

  12. Sensitivity of chemistry-transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.

    2016-05-01

    Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
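
    The error metric described above can be illustrated with a short sketch: the root-mean-square difference of a coarse run against a finer-resolution (or shorter-operator-duration) run taken as "truth", normalized by the reference. The array names are hypothetical.

```python
# Illustrative relative RMS difference against a reference ("truth") run.
import numpy as np

def relative_rmsd(coarse, truth):
    """RMS difference of ground-level fields, normalized by the reference RMS."""
    coarse, truth = np.asarray(coarse), np.asarray(truth)
    return np.sqrt(np.mean((coarse - truth) ** 2)) / np.sqrt(np.mean(truth ** 2))

# e.g. compare a 60 min operator-duration ozone field against the 5 min run:
# err_60 = relative_rmsd(o3_60min, o3_5min)
```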

  13. A Validation Study of the Compressible Rayleigh–Taylor Instability Comparing the Ares and Miranda Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.

    In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop-off at high wave numbers. The growth rate is determined to be between approximately 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. To have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).

  14. A Validation Study of the Compressible Rayleigh–Taylor Instability Comparing the Ares and Miranda Codes

    DOE PAGES

    Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.

    2017-04-20

    In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop-off at high wave numbers. The growth rate is determined to be between approximately 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. To have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).

  15. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with application to the time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends and simultaneously track abrupt changes of the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm aided by mutual information criteria is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide high, time-dependent spectral resolution.
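
    The basis-expansion idea behind the TVAR model can be sketched as follows: each time-varying AR coefficient is expanded on a set of basis functions, reducing the problem to ordinary least squares in the expansion coefficients. The paper's multi-scale wavelet basis and FOLS/mutual-information term selection are replaced here by a generic basis matrix and plain least squares, so this is only a schematic stand-in.

```python
# Schematic TVAR fit with basis-expanded coefficients: a_i(t) = sum_j c[i,j] B_j(t).
import numpy as np

def tvar_fit(x, order, basis):
    """x: signal of length n. order: AR order p.
    basis: (n, m) matrix whose columns are basis functions evaluated in time.
    Returns c of shape (p, m) so that a_i(t) = basis[t] @ c[i]."""
    n, m = basis.shape
    p = order
    rows, targets = [], []
    for t in range(p, n):
        past = x[t - 1::-1][:p]                 # x[t-1], ..., x[t-p]
        rows.append(np.kron(past, basis[t]))    # regressor of length p*m
        targets.append(x[t])
    A, y = np.array(rows), np.array(targets)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
    return c.reshape(p, m)
```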

  16. High sensitivity pressure transducer based on the phase characteristics of GMI magnetic sensors

    NASA Astrophysics Data System (ADS)

    Benavides, L. S.; Costa Silva, E.; Costa Monteiro, E.; Hall Barbosa, C. R.

    2018-03-01

    This paper presents a new configuration of a GMI pressure transducer based on reading the phase characteristics of the GMI sensor, intended for biomedical applications. The development process of this new class of magnetic field transducers is discussed, beginning with the definition of the ideal conditioning of the GMI sensor elements (dc level and frequency of the excitation current, and sample length) and continuing with computational simulations of the full electronic circuit, performed using experimental data obtained from measured GMI curves. These simulations show that the improvement in the sensitivity of GMI magnetometers is larger when phase-based transducers are used instead of magnitude-based transducers. Parameters of interest of the developed prototype, such as sensitivity, linearity and frequency response, are thoroughly analyzed. Also, the spectral noise density of the developed pressure transducer is evaluated and its resolution in the passband is estimated. A low-cost GMI pressure transducer was developed, presenting high resolution, high sensitivity and a frequency bandwidth compatible with the desired biomedical applications.

  17. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.

  18. Simulation of Subsurface Multiphase Contaminant Extraction Using a Bioslurping Well Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matos de Souza, Michelle; Oostrom, Mart; White, Mark D.

    2016-07-12

    Subsurface simulation of multiphase extraction from wells is notoriously difficult. Explicit representation of well geometry requires small grid resolution, potentially leading to large computational demands. To reduce the problem dimensionality, multiphase extraction is mostly modeled using vertically-averaged approaches. In this paper, a multiphase well model approach is presented as an alternative to simplify the application. The well model, a multiphase extension of the classic Peaceman model, has been implemented in the STOMP simulator. The numerical solution approach accounts for local conditions and gradients in the exchange of fluids between the well and the aquifer. Advantages of this well model implementation include the option to simulate the effects of well characteristics and operation. Simulations were conducted investigating the effects of extraction location, applied vacuum pressure, and a number of hydraulic properties. The obtained results were all consistent and logical. A major outcome of the test simulations is that, in contrast with common recommendations to extract from either the gas-NAPL or the NAPL-aqueous phase interface, the optimum extraction location should be in between these two levels. The new model implementation was also used to simulate extraction at a field site in Brazil. The simulation shows a good match with the field data, suggesting that the new STOMP well module may correctly represent oil removal. The field simulations depend on the quality of the site conceptual model, including the porous media and contaminant properties and the boundary and extraction conditions adopted. The new module may potentially be used to design field applications and analyze extraction data.
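
    As background on the well-model idea that the STOMP module extends to multiple phases, the sketch below evaluates the classic single-phase Peaceman well index from block geometry and permeabilities; the function and its use are illustrative, not the STOMP implementation.

```python
# Hedged sketch of the classic Peaceman well index for a vertical well
# fully penetrating one grid block (single-phase, illustrative form).
import numpy as np

def peaceman_well_index(kx, ky, dx, dy, dz, rw, skin=0.0):
    """kx, ky: block permeabilities; dx, dy, dz: block dimensions;
    rw: wellbore radius; skin: dimensionless skin factor."""
    a = np.sqrt(ky / kx)
    b = np.sqrt(kx / ky)
    # equivalent (pressure-matching) radius of the block
    r_eq = 0.28 * np.sqrt(a * dx**2 + b * dy**2) / (np.sqrt(a) + np.sqrt(b))
    k_eff = np.sqrt(kx * ky)
    return 2.0 * np.pi * k_eff * dz / (np.log(r_eq / rw) + skin)

# illustrative volumetric exchange for one phase:
#   q = WI * (k_r / mu) * (p_block - p_well)
```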

  19. [The interactive neuroanatomical simulation and practical application of frontotemporal transsylvian exposure in neurosurgery].

    PubMed

    Balogh, Attila; Czigléczki, Gábor; Papal, Zsolt; Preul, Mark C; Banczerowski, Péter

    2014-11-30

    There is an increased need for new digital education tools in neurosurgical training. Illustrated textbooks offer anatomic and technical reference but do not substitute for the hands-on experience provided by surgery or cadaver dissection. Due to the limited availability of cadaver dissections, the need for simulation tools has increased. We explored simulation technology for producing virtual reality-like reconstructions of simulated surgical approaches on cadavers. The practical application of the simulation tool is presented through the frontotemporal transsylvian exposure. The dissections were performed on two cadaveric heads. Arteries and veins were prepared and injected with colored silicone rubber. The heads were rigidly fixed in a Mayfield head holder. A robotic microscope with two digital cameras, using an inverted-cone method of image acquisition, was used to capture images around a pivot point at several phases of the dissections. Multilayered, high-resolution images were built into an interactive 4D environment by custom-developed software. We have developed the simulation module of the frontotemporal transsylvian approach. The virtual specimens can be rotated or tilted to any selected angle and examined from different surgical perspectives at any stage of the dissection. Important surgical issues such as appropriate head positioning or surgical maneuvers to expose deeply situated neuroanatomic structures can be simulated and studied using the module. The simulation module of the frontotemporal transsylvian exposure helps to examine the effect of head positioning on the visibility of deeply situated neuroanatomic structures and to study the surgical maneuvers required to achieve optimal exposure of deeply situated anatomic structures. The simulation program is a powerful tool for studying issues of preoperative planning and is well suited for neurosurgical training.

  20. Simulation of Mean Flow and Turbulence over a 2D Building Array Using High-Resolution CFD and a Distributed Drag Force Approach

    DTIC Science & Technology

    2016-06-16

    procedure. The predictive capabilities of the high-resolution computational fluid dynamics (CFD) simulations of urban flow are validated against a very ... turbulence over a 2D building array using high-resolution CFD and a distributed drag force approach.

  1. Numerical simulations of significant orographic precipitation in Madeira island

    NASA Astrophysics Data System (ADS)

    Couto, Flavio Tiago; Ducrocq, Véronique; Salgado, Rui; Costa, Maria João

    2016-03-01

    High-resolution simulations of heavy precipitation events with the MESO-NH model are presented and used to verify that increasing the horizontal resolution in zones of complex orography, such as Madeira island, improves the simulation of the spatial distribution and total amount of precipitation. The simulations succeeded in reproducing the general structure of the cloud systems over the ocean in the four periods of significant accumulated precipitation considered. The accumulated precipitation over Madeira was better represented at 0.5 km horizontal resolution and occurred under four distinct synoptic situations. Different spatial patterns of the rainfall distribution over Madeira have been identified.

  2. Novel inter-crystal scattering event identification method for PET detectors

    NASA Astrophysics Data System (ADS)

    Lee, Min Sun; Kang, Seung Kwan; Lee, Jae Sung

    2018-06-01

    Here, we propose a novel method to identify inter-crystal scattering (ICS) events from a PET detector that is applicable even to light-sharing designs. In the proposed method, the detector observation is treated as a linear problem and ICS events are identified by solving this problem. Two ICS identification methods are suggested for solving the linear problem: pseudoinverse matrix calculation and convex constrained optimization. The proposed method was evaluated in simulation and experimental studies. For the simulation study, an 8 × 8 photosensor was coupled to 8 × 8, 10 × 10 and 12 × 12 crystal arrays to simulate a one-to-one coupling detector and two light-sharing detectors, respectively. The identification rate (the rate at which the identified ICS events correctly include the true first interaction position) and the energy linearity were evaluated for the proposed ICS identification methods. For the experimental study, a digital silicon photomultiplier was coupled with 8 × 8 and 10 × 10 arrays of 3 × 3 × 20 mm³ LGSO crystals to construct the one-to-one coupling and light-sharing detectors, respectively. Intrinsic spatial resolutions were measured for the two detector types. The proposed ICS identification methods were implemented, and intrinsic resolutions were compared with and without ICS recovery. The simulation study showed that the proposed convex optimization method yielded robust energy estimation and high ICS identification rates of 0.93 and 0.87 for the one-to-one and light-sharing detectors, respectively. The experimental study showed a resolution improvement after recovering the identified ICS events into the first interaction position. The average intrinsic spatial resolutions for the one-to-one and light-sharing detectors were 1.95 and 2.25 mm FWHM without ICS recovery, respectively. These values improved to 1.72 and 1.83 mm after ICS recovery, respectively. In conclusion, our proposed method showed good ICS identification in both one-to-one coupling and light-sharing detectors. We experimentally validated that ICS recovery based on the proposed identification method leads to improved resolution.
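
    The two linear-solution routes named above can be sketched generically as follows: a pseudoinverse solution and a non-negativity-constrained least-squares solution (standing in for the convex constrained optimization), applied to a detector response matrix. The matrix, threshold and flagging rule are illustrative assumptions.

```python
# Hedged sketch: recover per-crystal energy deposits s from photosensor
# signals m, given a detector response matrix A (m ≈ A s), by (i) a
# pseudoinverse and (ii) non-negative least squares; flag ICS if more than
# one crystal carries a significant share of the event energy.
import numpy as np
from scipy.optimize import nnls

def identify_ics(A, m, threshold=0.05):
    s_pinv = np.linalg.pinv(A) @ m          # unconstrained pseudoinverse solution
    s_nnls, _ = nnls(A, m)                  # s >= 0 constrained solution
    total = s_nnls.sum()
    hit_crystals = np.where(s_nnls > threshold * total)[0]
    is_ics = hit_crystals.size > 1          # >1 crystal => inter-crystal scatter
    return s_pinv, s_nnls, hit_crystals, is_ics
```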

  3. High Performance Computing-based Assessment of the Impacts of Climate Change on the Santa Cruz and San Pedro River Basin at Very High Resolution

    NASA Astrophysics Data System (ADS)

    Robles-Morua, A.; Vivoni, E. R.; Rivera-Fernandez, E. R.; Dominguez, F.; Meixner, T.

    2012-12-01

    Assessing the impact of climate change on large river basins in the southwestern United States is important given the natural water scarcity in the region. The bimodal distribution of annual precipitation also presents a challenge as differential climate impacts during the winter and summer seasons are not currently well understood. In this work, we focus on the hydrological consequences of climate change in the Santa Cruz and San Pedro river basins along the Arizona-Sonora border at high spatiotemporal resolutions (~100 m and ~1 hour). These river systems support rich ecological communities along riparian corridors that provide habitat to migratory birds and support recreational and economic activities. Determining the climate impacts on riparian communities involves assessing how river flows and groundwater recharge will change with altered temperature and precipitation regimes. In this study, we use a distributed hydrologic model, known as the TIN-based Real-time Integrated Basin Simulator (tRIBS), to generate simulated hydrological fields under historical (1991-2000) and climate change (2031-2040) scenarios obtained from an application of the Weather Research and Forecast (WRF) model. Using the distributed model, we transform the meteorological scenarios from WRF at 10-km, hourly resolution into predictions of the annual water budget, seasonal land surface fluxes and individual hydrographs of flood and recharge events. For this contribution, we selected two full years in the historical period and in the future scenario that represent wet and dry conditions for each decade. Given the size of the two basins, we rely on a high performance computing platform and a parallel domain discretization using sub-basin partitioning with higher resolutions maintained at experimental catchments in each river basin. Model simulations utilize best-available data across the Arizona-Sonora border on topography, land cover and soils obtained from analysis of remotely-sensed imagery and government databases. For the historical period, we build confidence in the model simulations through comparisons with streamflow estimates in the region. We also evaluate the WRF forcing outcomes with respect to meteorological inputs from ground rain gauges and the North American Land Data Assimilation System (NLDAS). We then analyze the high-resolution spatiotemporal predictions of soil moisture, evapotranspiration, runoff generation and recharge under past conditions and for the climate change scenario. A comparison with the historical period will yield a first-of-its-kind assessment at very high spatiotemporal resolution on the impacts of climate change on the hydrologic response of two large semiarid river basins of the southwestern United States.

  4. Generic Sensor Modeling Using Pulse Method

    NASA Technical Reports Server (NTRS)

    Helder, Dennis L.; Choi, Taeyoung

    2005-01-01

    Recent development of high spatial resolution satellites such as IKONOS, Quickbird and Orbview enables observation of the Earth's surface with sub-meter resolution. Compared to the 30-meter resolution of Landsat 5 TM, the amount of information in the output image is dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Sometimes classified by target shapes, various methods were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data from the bridge pulse target as a pulse input. Because of a high-resolution sensor's small Ground Sampling Distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area with accurate control of ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model was developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from the previous sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th-order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity was studied by performing simulations with known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both ground targets and the imaging system. As a continuing research activity using the developed sensor model, this report is dedicated to characterizing MTF estimation via the pulse input method using Fermi edge detection and the 4th-order MSG interpolation method. The relationship between pulse width and the MTF value at Nyquist is studied, including error detection and correction schemes. Pulse target angle sensitivity is studied using synthetic targets angled from 2 to 12 degrees. From the ground and system noise simulations, a minimum SNR value is suggested for a stable MTF value at Nyquist for the pulse method. A target width error detection and adjustment technique based on a smooth transition of the MTF profile is presented, which is specifically applicable only to the pulse method with 3-pixel-wide targets.
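
    The core of the pulse method can be sketched as follows: the system MTF is estimated as the Fourier magnitude of the measured, oversampled pulse profile divided by that of the ideal rectangular pulse of known width, normalized at zero frequency. The Fermi edge detection and MSG interpolation steps are omitted, so this is only a schematic illustration.

```python
# Hedged sketch of pulse-method MTF estimation from an oversampled profile.
import numpy as np

def pulse_mtf(profile, sample_spacing, pulse_width):
    """profile: oversampled 1-D pulse response;
    sample_spacing and pulse_width in the same ground units."""
    n = profile.size
    freqs = np.fft.rfftfreq(n, d=sample_spacing)
    sys_out = np.abs(np.fft.rfft(profile - profile.min()))   # remove background offset
    ideal = np.abs(np.sinc(freqs * pulse_width))              # |FT| of a rect of this width
    mtf = (sys_out / sys_out[0]) / np.where(ideal > 1e-6, ideal, np.nan)
    return freqs, mtf

# MTF at Nyquist for pixel pitch p: interpolate mtf at f = 1 / (2 * p)
```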

  5. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfalls directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfalls and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfalls. However, the algorithm is designed to produce instantaneous rainfalls at an optimal resolution showing reduced non-linearity in brightness temperature (TB)-rain rate (R) relations. It is found that this resolution tends to effectively utilize emission channels, whose footprints are relatively larger than those of the scattering channels. The algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained by WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally considered. The entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data. Among these sub-DBs, only the sub-DB that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP.
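
    A minimal sketch of the Bayesian inversion step, assuming a simple Gaussian misfit between observed and database brightness temperatures, is given below; the database arrays and error scale are illustrative, not the algorithm's actual error covariance treatment.

```python
# Hedged sketch: Bayesian rain-rate retrieval as a weighted average over an
# a-priori database of (simulated TB, rain rate) pairs.
import numpy as np

def bayesian_rain_rate(tb_obs, tb_db, rr_db, sigma=2.0):
    """tb_obs: (n_channels,) observed TBs; tb_db: (n_entries, n_channels)
    simulated TBs; rr_db: (n_entries,) associated rain rates; sigma in K."""
    misfit = np.sum((tb_db - tb_obs) ** 2, axis=1)
    w = np.exp(-0.5 * misfit / sigma**2)        # Gaussian weights
    if w.sum() == 0:
        return np.nan                           # no database support for this scene
    return np.sum(w * rr_db) / np.sum(w)
```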

  6. Analyzing the Effects of Horizontal Resolution on Long-Term Coupled WRF-CMAQ Simulations

    EPA Science Inventory

    The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. To this end, WRF-CMAQ simulations over the co...

  7. Qubit Architecture with High Coherence and Fast Tunable Coupling.

    PubMed

    Chen, Yu; Neill, C; Roushan, P; Leung, N; Fang, M; Barends, R; Kelly, J; Campbell, B; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Megrant, A; Mutus, J Y; O'Malley, P J J; Quintana, C M; Sank, D; Vainsencher, A; Wenner, J; White, T C; Geller, Michael R; Cleland, A N; Martinis, John M

    2014-11-28

    We introduce a superconducting qubit architecture that combines high-coherence qubits and tunable qubit-qubit coupling. With the ability to set the coupling to zero, we demonstrate that this architecture is protected from the frequency crowding problems that arise from fixed coupling. More importantly, the coupling can be tuned dynamically with nanosecond resolution, making this architecture a versatile platform with applications ranging from quantum logic gates to quantum simulation. We illustrate the advantages of dynamical coupling by implementing a novel adiabatic controlled-z gate, with a speed approaching that of single-qubit gates. Integrating coherence and scalable control, the introduced qubit architecture provides a promising path towards large-scale quantum computation and simulation.

  8. Evolution of a Simulation Testbed into an Operational Tool

    NASA Technical Reports Server (NTRS)

    Sheth, Kapil; Bilimoria, Karl D.; Sridhar, Banavar; Sterenchuk, Mike; Niznik, Tim; O'Neill, Tom; Clymer, Alexis; Gutierrez Nolasco, Sebastian; Edholm, Kaj; Shih, Fu-Tai

    2017-01-01

    This paper describes the evolution over a 20-year period of the Future ATM (Air Traffic Management) Concepts Evaluation Tool (FACET) from a National Airspace System (NAS) based simulation testbed into an operational tool. FACET was developed as a testbed for assessing futuristic ATM concepts, e.g., automated conflict detection and resolution. The NAS Constraint Evaluation and Notification Tool (NASCENT) is an application within FACET for alerting airspace users to inefficiencies in flight operations and advising time- and fuel-saving reroutes. It is currently in use at the American Airlines Integrated Operations Center in Fort Worth, TX. The concepts assessed, research conducted, and operational capability developed, along with the NASA support and achievements, are presented in this paper.

  9. GePEToS: A Geant4 Monte Carlo Simulation Package for Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Jan, S.; Collot, J.; Gallin-Martel, M.-L.; Martin, P.; Mayet, F.; Tournefier, E.

    2005-02-01

    GePEToS is a simulation framework developed over the last few years for assessing the instrumental performance of future positron emission tomography (PET) scanners. It is based on Geant4, written in object-oriented C++ and runs on Linux platforms. The validity of GePEToS has been tested on the well-known Siemens ECAT EXACT HR+ camera. The results of two application examples are presented: the design optimization of a liquid Xe μPET camera dedicated to small-animal imaging as well as the evaluation of the effect of a strong axial magnetic field on the image resolution of a Concorde P4 μPET camera.

  10. Integration of adaptive optics into highEnergy laser modeling and simulation

    DTIC Science & Technology

    2017-06-01

    astronomy [1], where AO is often used to improve image resolution. Likewise, AO shows promise in improving HEL performance. To better understand how much ... the focus of the beam on target. In astronomy, the target is an imaging sensor and the source is an astronomical object, while in the application of ... mirror [21]. While AO in laser weapons is still a developing field, the technology has been used for several decades on telescopes in astronomy to ...

  11. Analysis of two dimensional signals via curvelet transform

    NASA Astrophysics Data System (ADS)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. This article includes numerical experiments, which were executed on a random interferometric image. As a result of nonlinear approximation, the curvelet transform yields a matrix with a smaller number of coefficients than the wavelet transform. Additionally, denoising simulations show that the curvelet transform could be a very good tool for removing noise from images.

  12. REMO poor man's reanalysis

    NASA Astrophysics Data System (ADS)

    Ries, H.; Moseley, C.; Haensler, A.

    2012-04-01

    Reanalyses depict the state of the atmosphere as a best fit in space and time of many atmospheric observations in a physically consistent way. By essentially solving the data assimilation problem in a very accurate manner, reanalysis results can be used as reference for model evaluation procedures and as forcing data sets for different model applications. However, the spatial resolution of the most common and accepted reanalysis data sets (e.g. JRA25, ERA-Interim) ranges from approximately 124 km to 80 km. This resolution is too coarse to simulate certain small scale processes often associated with extreme events. In addition, many models need higher resolved forcing data ( e.g. land-surface models, tools for identifying and assessing hydrological extremes). Therefore we downscaled the ERA-Interim reanalysis over the EURO-CORDEX-Domain for the time period 1989 to 2008 to a horizontal resolution of approximately 12 km. The downscaling is performed by nudging REMO-simulations to lower and lateral boundary conditions of the reanalysis, and by re-initializing the model every 24 hours ("REMO in forecast mode"). In this study the three following questions will be addressed: 1.) Does the REMO poor man's reanalysis meet the needs (accuracy, extreme value distribution) in validation and forcing? 2.) What lessons can be learned about the model used for downscaling? As REMO is used as a pure downscaling procedure, any systematic deviations from ERA-Interim result from poor process modelling but not from predictability limitations. 3.) How much small scale information generated by the downscaling model is lost with frequent initializations? A comparison to a simulation that is performed in climate mode will be presented.

  13. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library, and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  14. Modelled air pollution levels versus EC air quality legislation - results from high resolution simulation.

    PubMed

    Chervenkov, Hristo

    2013-12-01

    An appropriate method for evaluating the air quality of a certain area is to contrast the actual air pollution levels with the critical ones prescribed in the legislative standards. The application of numerical simulation models for assessing the real air quality status is allowed by the legislation of the European Community (EC). This approach is preferable, especially when the area of interest is relatively big and/or the network of measurement stations is sparse and the available observational data are correspondingly scarce. Such a method is very efficient for assessment studies of this kind due to the continuous spatio-temporal coverage of the obtained results. In this study, surface-layer concentrations of the harmful substances sulphur dioxide (SO2), nitrogen dioxide (NO2), particulate matter - coarse (PM10) and fine (PM2.5) fractions, ozone (O3), carbon monoxide (CO) and ammonia (NH3), obtained from modelling simulations with 10 km resolution on an hourly basis, are used to calculate the necessary statistical quantities, which are compared with the corresponding critical levels prescribed in the EC directives. For some of them (PM2.5, CO and NH3), this is done for the first time at such resolution. The computational grid covers Bulgaria entirely along with some surrounding territories, and the calculations are made for every year in the period 1991-2000. The results, averaged over the whole time slice, can be treated as representative of the air quality situation in the last decade of the previous century.
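
    One statistical quantity of this kind can be illustrated with a short sketch: counting hourly exceedances of a limit value at a grid cell and checking the count against an allowed number per year. The limit value and allowance below are placeholders rather than the directive's prescribed numbers.

```python
# Illustrative exceedance count against a placeholder hourly limit value.
import numpy as np

def hourly_exceedances(conc_hourly, limit_value):
    """conc_hourly: one year of hourly concentrations at a grid cell."""
    conc = np.asarray(conc_hourly)
    return int(np.sum(conc > limit_value))

# exceeds = hourly_exceedances(no2_hourly, limit_value=200.0)  # µg/m3, placeholder
# compliant = exceeds <= 18                                    # placeholder allowance
```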

  15. Development of an interpretive simulation tool for the proton radiography technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, M. C., E-mail: levymc@stanford.edu; Lawrence Livermore National Laboratory, Livermore, California 94551; Ryutov, D. D.

    2015-03-15

    Proton radiography is a useful diagnostic of high energy density (HED) plasmas under active theoretical and experimental development. In this paper, we describe a new simulation tool that interacts realistic laser-driven point-like proton sources with three dimensional electromagnetic fields of arbitrary strength and structure and synthesizes the associated high resolution proton radiograph. The present tool's numerical approach captures all relevant physics effects, including effects related to the formation of caustics. Electromagnetic fields can be imported from particle-in-cell or hydrodynamic codes in a streamlined fashion, and a library of electromagnetic field "primitives" is also provided. This latter capability allows users to add a primitive, modify the field strength, rotate a primitive, and so on, while quickly generating a high resolution radiograph at each step. In this way, our tool enables the user to deconstruct features in a radiograph and interpret them in connection to specific underlying electromagnetic field elements. We show an example application of the tool in connection to experimental observations of the Weibel instability in counterstreaming plasmas, using ∼10⁸ particles generated from a realistic laser-driven point-like proton source, imaging fields which cover volumes of ∼10 mm³. Insights derived from this application show that the tool can support understanding of HED plasmas.

  16. Precise 3D Track Reconstruction Algorithm for the ICARUS T600 Liquid Argon Time Projection Chamber Detector

    DOE PAGES

    Antonello, M.; Baibussinov, B.; Benetti, P.; ...

    2013-01-15

    Liquid Argon Time Projection Chamber (LAr TPC) detectors offer charged-particle imaging capability with remarkable spatial resolution. Precise event reconstruction procedures are critical in order to fully exploit the potential of this technology. In this paper we present a new, general approach to 3D reconstruction for the LAr TPC with a practical application to track reconstruction. The efficiency of the method is evaluated on a sample of simulated tracks. We also present the application of the method to the analysis of stopping-particle tracks collected during ICARUS T600 detector operation with the CNGS neutrino beam.

  17. Application of optical interferometry in focused acoustic field measurement

    NASA Astrophysics Data System (ADS)

    Wang, Yuebing; Sun, Min; Cao, Yonggang; Zhu, Jiang

    2018-07-01

    Optical interferometry has been successfully applied to measuring acoustic pressures in plane-wave fields and spherical-wave fields. In this paper, the "effective" refractive index for focused acoustic fields is developed; through numerical simulation and experiments, the feasibility of the optical method for measuring the acoustic fields of focused transducers is demonstrated. Compared with the results from a membrane hydrophone, it was concluded that the optical method has good spatial resolution and is suitable for detecting focused fields with fluctuating distributions. The influences of a few factors (the generated Lamb wave, laser beam directivity, etc.) were analyzed, and corresponding suggestions were proposed for effective application of this technology.

  18. Image scanning fluorescence emission difference microscopy based on a detector array.

    PubMed

    Li, Y; Liu, S; Liu, D; Sun, S; Kuang, C; Ding, Z; Liu, X

    2017-06-01

    We propose a novel imaging method that significantly enhances the three-dimensional resolution of confocal microscopy and experimentally achieves a new fluorescence emission difference method for the first time, based on parallel detection with a detector array. Following the principles of photon reassignment in image scanning microscopy, the images captured by the detector array are rearranged. By selecting appropriate reassignment patterns, an imaging result with enhanced resolution can be achieved with the fluorescence emission difference method. Two specific methods are proposed in this paper, showing that the difference between an image scanning microscopy image and a confocal image yields an improvement in transverse resolution of approximately 43% compared with confocal microscopy, and the axial resolution can also be enhanced by at least 22% experimentally and 35% theoretically. Moreover, the methods presented in this paper can improve the lateral resolution by around 10% compared with fluorescence emission difference and by 15% compared with Airyscan. The mechanism of our methods is verified by numerical simulations and experimental results, and it has significant potential in biomedical applications.
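
    A minimal sketch of the fluorescence emission difference step, assuming a simple subtraction of a scaled confocal image from the photon-reassigned (ISM) image with negative values clipped, is given below; the weighting factor is an illustrative assumption.

```python
# Hedged sketch: fluorescence emission difference as ISM minus scaled confocal.
import numpy as np

def fed_image(ism_img, confocal_img, gamma=0.8):
    """Both inputs are co-registered 2-D arrays of equal shape;
    gamma is an illustrative subtraction weight."""
    diff = ism_img - gamma * confocal_img
    return np.clip(diff, 0.0, None)     # suppress negative-valued artifacts
```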

  19. 3D near-infrared imaging based on a single-photon avalanche diode array sensor

    NASA Astrophysics Data System (ADS)

    Mata Pavia, Juan; Charbon, Edoardo; Wolf, Martin

    2011-07-01

    An imager for optical tomography was designed around a detector with 128×128 single-photon pixels and a bank of 32 time-to-digital converters. Owing to the high spatial resolution and the ability to perform time-resolved measurements, a new contact-less setup was conceived in which scanning of the object is not necessary. This enables high-resolution optical tomography at a much higher acquisition rate, which is fundamental in clinical applications. The setup has a timing resolution of 97 ps and operates with a laser source of 3 mW average power. This imaging system generates an amount of data that cannot be processed by established methods; therefore, new concepts and algorithms were developed to take full advantage of it. Images were generated using a new reconstruction algorithm that combines general inverse-problem methods with Fourier transforms to reduce the complexity of the problem. Simulations show that the potential resolution of the new setup is on the order of millimeters, and experiments have been performed to confirm this. Images derived from the measurements demonstrate that a resolution of 5 mm has already been reached.
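
    The abstract's reconstruction combines inverse-problem methods with Fourier transforms; one common way such a combination reduces complexity is to diagonalize a shift-invariant blur with the FFT and apply Tikhonov regularization. The hedged sketch below shows that idea only; the point-spread function and regularization value are assumptions, not the authors' algorithm.

```python
import numpy as np

# Illustrative sketch (assumptions only): Fourier-domain, Tikhonov-
# regularized deconvolution, where a shift-invariant blur lets the inverse
# problem be solved cheaply in the FFT domain.

def tikhonov_deconvolve(measurement, psf, reg=1e-2):
    """Solve min_x ||psf * x - measurement||^2 + reg * ||x||^2 in Fourier space."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H)**2 + reg)
    return np.real(np.fft.ifft2(X))

# Toy usage: blur a point target with a Gaussian PSF, then recover it.
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()
truth = np.zeros((n, n))
truth[32, 40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
recovered = tikhonov_deconvolve(blurred, psf)
```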

  20. Application of Radioxenon Stack Emission Data in High-Resolution Atmospheric Transport Modelling

    NASA Astrophysics Data System (ADS)

    Kusmierczyk-Michulec, J.; Schoeppner, M.; Kalinowski, M.; Bourgouin, P.; Kushida, N.; Barè, J.

    2017-12-01

    The Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) has developed the capability to run high-resolution atmospheric transport modelling using WRF and Flexpart-WRF. This new capability is applied to assess the impact of stack emission data on simulated concentrations and to evaluate how the availability of such data improves the overall accuracy of atmospheric transport modelling. The presented case study focuses on xenon-133 emissions from IRE, a medical isotope production facility in Belgium, and on air concentrations detected at DEX33, a monitoring station close to Freiburg, Germany. The CTBTO currently monitors the atmospheric concentration of xenon-133 at 25 stations and will expand the monitoring network to 40 stations worldwide. The motivation is the ability to detect xenon-133 that has been produced and released by a nuclear explosion: a successful detection can be used to prove the nuclear nature of an explosion and even support localization efforts. However, xenon-133 is also released by nuclear power plants and, to a larger degree, by medical isotope production facilities. The availability of stack emission data, combined with atmospheric transport modelling, can greatly facilitate the interpretation of xenon-133 concentrations detected at monitoring stations, helping to distinguish xenon-133 emitted by a nuclear explosion from that emitted by civilian sources. The newly available stack emission data are used with a high-resolution version of the Flexpart atmospheric transport model, Flexpart-WRF, to assess the impact of the emissions on the detected concentrations and the advantage gained from the availability of such data. The results are analyzed with regard to the spatial and temporal resolution of the high-resolution model and in comparison with conventional atmospheric transport models with and without stack emission data.
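
    Conceptually, an atmospheric transport model run in backward mode yields source-receptor sensitivities that, multiplied by the stack emission time series, predict the concentration sampled at the station. The sketch below shows only that bookkeeping; the sensitivity and emission values are invented for illustration and are unrelated to the actual IRE or DEX33 data.

```python
import numpy as np

# Conceptual sketch (not CTBTO/Flexpart-WRF code): combine a backward-run
# source-receptor sensitivity (SRS) time series with daily stack emission
# data to predict the xenon-133 concentration in a sample collected at a
# monitoring station. All numbers below are illustrative.

def predicted_concentration(srs, emissions):
    """srs[t]      : sensitivity of the sample to a unit release on day t [s/m^3]
    emissions[t]: stack release rate on day t [Bq/s]
    Returns the simulated activity concentration [Bq/m^3]."""
    return float(np.dot(srs, emissions))

# Toy usage: a 5-day sampling window with one strong release day.
srs = np.array([1e-9, 5e-9, 2e-8, 4e-9, 1e-9])       # s/m^3 (assumed)
emissions = np.array([0.0, 0.0, 5e6, 1e5, 0.0])      # Bq/s (assumed)
print(f"Predicted concentration: {predicted_concentration(srs, emissions):.3f} Bq/m^3")
```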
