Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET
NASA Astrophysics Data System (ADS)
Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan
2013-06-01
TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate with the time since, especially in older scanners, the timing offsets are often calibrated only during the installation, as the procedure is time-consuming. In this study, we investigate and compare the effects of local mismatch of timing resolution when a uniform kernel is applied to systems with local variations in timing resolution and the effects of uncorrected time offset errors on image quality. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations, and timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. To CRC vs noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction. On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction of CRC of up to 15% and an increase in the measured timing resolution kernel up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate uncorrected offset errors.
Energy resolution of pulsed neutron beam provided by the ANNRI beamline at the J-PARC/MLF
NASA Astrophysics Data System (ADS)
Kino, K.; Furusaka, M.; Hiraga, F.; Kamiyama, T.; Kiyanagi, Y.; Furutaka, K.; Goko, S.; Hara, K. Y.; Harada, H.; Harada, M.; Hirose, K.; Kai, T.; Kimura, A.; Kin, T.; Kitatani, F.; Koizumi, M.; Maekawa, F.; Meigo, S.; Nakamura, S.; Ooi, M.; Ohta, M.; Oshima, M.; Toh, Y.; Igashira, M.; Katabuchi, T.; Mizumoto, M.; Hori, J.
2014-02-01
We studied the energy resolution of the pulsed neutron beam of the Accurate Neutron-Nucleus Reaction Measurement Instrument (ANNRI) at the Japan Proton Accelerator Research Complex/Materials and Life Science Experimental Facility (J-PARC/MLF). A simulation in the energy region from 0.7 meV to 1 MeV was performed and measurements were made at thermal (0.76-62 meV) and epithermal energies (4.8-410 eV). The neutron energy resolution of ANNRI determined by the time-of-flight technique depends on the time structure of the neutron pulse. We obtained the neutron energy resolution as a function of the neutron energy by the simulation in the two operation modes of the neutron source: double- and single-bunch modes. In double-bunch mode, the resolution deteriorates above about 10 eV because the time structure of the neutron pulse splits into two peaks. The time structures at 13 energy points from measurements in the thermal energy region agree with those of the simulation. In the epithermal energy region, the time structures at 17 energy points were obtained from measurements and agree with those of the simulation. The FWHM values of the time structures by the simulation and measurements were found to be almost consistent. In the single-bunch mode, the energy resolution is better than about 1% between 1 meV and 10 keV at a neutron source operation of 17.5 kW. These results confirm the energy resolution of the pulsed neutron beam produced by the ANNRI beamline.
Assessment of the effects of horizontal grid resolution on long ...
The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United State are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirement which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. The National Exposure Research Laboratory’s Atmospheric Modeling Division (AMAD) conducts research in support of EPA’s mission to protect human health and the environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.
In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution.more » The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between around 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. Finally, to have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).« less
Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.
2017-04-20
In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution.more » The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between around 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. Finally, to have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).« less
The viability of ADVANTG deterministic method for synthetic radiography generation
NASA Astrophysics Data System (ADS)
Bingham, Andrew; Lee, Hyoung K.
2018-07-01
Fast simulation techniques to generate synthetic radiographic images of high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run time with poorer statistics at higher resolution. The investigation of the viability of a deterministic approach to synthetic radiography image generation was explored. The aim was to analyze a computational time decrease over the stochastic method. ADVANTG was compared to MCNP in multiple scenarios including a small radiography system prototype, to simulate high resolution radiography images. By using ADVANTG deterministic code to simulate radiography images the computational time was found to decrease 10 to 13 times compared to the MCNP stochastic approach while retaining image quality.
Abstract ID: 242 Simulation of a Fast Timing Micro-Pattern Gaseous Detector for TOF-PET.
Radogna, Raffaella; Verwilligen, Piet
2018-01-01
Micro-Pattern Gas Detectors (MPGDs) are a new generation of gaseous detectors that have been developed thanks to advances in micro-structure technology. The main features of the MPGDs are: high rate capability (>50 MHz/cm 2 ); excellent spatial resolution (down to 50 μm); good time resolution (down to 3 ns); reduced radiation length, affordable costs, and possible flexible geometries. A new detector layout has been recently proposed that aims at combining both the high spatial resolution and high rate capability (100 MHz/cm 2 ) of the current state-of-the-art MPGDs with a high time resolution. This new type of MPGD is named the Fast Timing MPGD (FTM) detector [1,2]. The FTM developed for detecting charged particles can potentially reach sub-millimeter spatial resolution and 100 ps time resolution. This contribution introduces a Fast Timing MPGD technology optimized to detect photons, as an innovative PET imaging detector concept and emphases the importance of full detector simulation to guide the design of the detector geometry. The design and development of a new FTM, combining excellent time and spatial resolution, while exploiting the advantages of a reasonable energy resolution, will be a boost for the design of affordable TOF-PET scanner with improved image contrast. The use of such an affordable gas detector allows to instrument large areas in a cost-effective way, and to increase in image contrast for shorter scanning times (lowering the risk for the patient) and better diagnosis of the disease. In this report a dedicated simulation study is performed to optimize the detector design in the contest of the INFN project MPGD-Fatima. Results are obtained with ANSYS, COMSOL, GARFIELD++ and GEANT4 simulation tools. The final detector layout will be trade-off between fast time and good energy resolution. Copyright © 2017.
A method for generating high resolution satellite image time series
NASA Astrophysics Data System (ADS)
Guo, Tao
2014-10-01
There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications. But it still is a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, much R&D efforts have been ongoing for years and lead to some successes roughly in two aspects, one includes super resolution, pan-sharpen etc. methods which can effectively enhance the spatial resolution and generate good visual effects, but hardly preserve spectral signatures and result in inadequate analytical value, on the other hand, time interpolation is a straight forward method to increase temporal frequency, however it increase little informative contents in fact. In this paper we presented a novel method to simulate high resolution time series data by combing low resolution time series data and a very small number of high resolution data only. Our method starts with a pair of high and low resolution data set, and then a spatial registration is done by introducing LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data, and then projected onto the high resolution data plane and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground objects. Finally the simulated high resolution data is generated. A preliminary experiment shows that our method can simulate a high resolution data with a reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of time sequence of low resolution images only, and usage of costly high resolution data can be reduces as much as possible, and it presents a highly effective way to build up an economically operational monitoring solution for agriculture, forest, land use investigation, environment and etc. applications.
NASA Astrophysics Data System (ADS)
Robinson, Matthew S.; Lane, Paul D.; Wann, Derek A.
2016-02-01
A novel compact electron gun for use in time-resolved gas electron diffraction experiments has recently been designed and commissioned. In this paper we present and discuss the extensive simulations that were performed to underpin the design in terms of the spatial and temporal qualities of the pulsed electron beam created by the ionisation of a gold photocathode using a femtosecond laser. The response of the electron pulses to a solenoid lens used to focus the electron beam has also been studied. The simulated results show that focussing the electron beam affects the overall spatial and temporal resolution of the experiment in a variety of ways, and that factors that improve the resolution of one parameter can often have a negative effect on the other. A balance must, therefore, be achieved between spatial and temporal resolution. The optimal experimental time resolution for the apparatus is predicted to be 416 fs for studies of gas-phase species, while the predicted spatial resolution of better than 2 nm-1 compares well with traditional time-averaged electron diffraction set-ups.
The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United State are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirement which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions.This dataset is associated with the following publication
Roncali, Emilie; Schmall, Jeffrey P.; Viswanath, Varsha; Berg, Eric; Cherry, Simon R.
2014-01-01
Current developments in positron emission tomography (PET) focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate (LSO) crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm3 crystals coupled to a photomultiplier tube (PMT). Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface. PMID:24694727
Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim
2018-06-01
We have explored the evolution of gas distributions from cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, to explore the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This will strongly affect the assembly history of objects, and has the same effect of simulating different cosmologies. The resolution of ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark matter only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but are expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.
Real-time haptic cutting of high-resolution soft tissues.
Wu, Jun; Westermann, Rüdiger; Dick, Christian
2014-01-01
We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate a real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which thoroughly reduces the number of simulation degrees of freedom and thus enables to carefully balance simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.
Satellite image time series simulation for environmental monitoring
NASA Astrophysics Data System (ADS)
Guo, Tao
2014-11-01
The performance of environmental monitoring heavily depends on the availability of consecutive observation data and it turns out an increasing demand in remote sensing community for satellite image data in the sufficient resolution with respect to both spatial and temporal requirements, which appear to be conflictive and hard to tune tradeoffs. Multiple constellations could be a solution if without concerning cost, and thus it is so far interesting but very challenging to develop a method which can simultaneously improve both spatial and temporal details. There are some research efforts to deal with the problem from various aspects, a type of approaches is to enhance the spatial resolution using techniques of super resolution, pan-sharpen etc. which can produce good visual effects, but mostly cannot preserve spectral signatures and result in losing analytical value. Another type is to fill temporal frequency gaps by adopting time interpolation, which actually doesn't increase informative context at all. In this paper we presented a novel method to generate satellite images in higher spatial and temporal details, which further enables satellite image time series simulation. Our method starts with a pair of high-low resolution data set, and then a spatial registration is done by introducing LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data, and the temporal change is then projected onto high resolution data plane and assigned to each high resolution pixel referring the predefined temporal change patterns of each type of ground objects to generate a simulated high resolution data. A preliminary experiment shows that our method can simulate a high resolution data with a good accuracy. We consider the contribution of our method is to enable timely monitoring of temporal changes through analysis of low resolution images time series only, and usage of costly high resolution data can be reduced as much as possible, and it presents an efficient solution with great cost performance to build up an economically operational monitoring service for environment, agriculture, forest, land use investigation, and other applications.
Above-real-time training (ARTT) improves transfer to a simulated flight control task.
Donderi, D C; Niall, Keith K; Fish, Karyn; Goldstein, Benjamin
2012-06-01
The aim of this study was to measure the effects of above-real-time-training (ARTT) speed and screen resolution on a simulated flight control task. ARTT has been shown to improve transfer to the criterion task in some military simulation experiments. We tested training speed and screen resolution in a project, sponsored by Defence Research and Development Canada, to develop components for prototype air mission simulators. For this study, 54 participants used a single-screen PC-based flight simulation program to learn to chase and catch an F-18A fighter jet with another F-18A while controlling the chase aircraft with a throttle and side-stick controller. Screen resolution was varied between participants, and training speed was varied factorially across two sessions within participants. Pretest and posttest trials were at high resolution and criterion (900 knots) speed. Posttest performance was best with high screen resolution training and when one ARTT training session was followed by a session of criterion speed training. ARTT followed by criterion training improves performance on a visual-motor coordination task. We think that ARTT influences known facilitators of transfer, including similarity to the criterion task and contextual interference. Use high-screen resolution, start with ARTT, and finish with criterion speed training when preparing a mission simulation.
NASA Astrophysics Data System (ADS)
Fairchild, A.; Chirayath, V.; Gladen, R.; McDonald, A.; Lim, Z.; Chrysler, M.; Koymen, A.; Weiss, A.
Simion 8.1®simulations were used to determine the energy resolution of a 1 meter long Time of Flight Positron annihilation induced Auger Electron Spectrometer (TOF-PAES). The spectrometer consists of: 1. a magnetic gradient section used to parallelize the electrons leaving the sample along the beam axis, 2. an electric field free time of flight tube and 3. a detection section with a set of ExB plates that deflect electrons exiting the TOF tube into a Micro-Channel Plate (MCP). Simulations of the time of flight distribution of electrons emitted according to a known secondary electron emission distribution, for various sample biases, were compared to experimental energy calibration peaks and found to be in excellent agreement. The TOF spectra at the highest sample bias was used to determine the timing resolution function describing the timing spread due to the electronics. Simulations were then performed to calculate the energy resolution at various electron energies in order to deconvolute the combined influence of the magnetic field parallelizer, the timing resolution, and the voltage gradient at the ExB plates. The energy resolution of the 1m TOF-PAES was compared to a newly constructed 3 meter long system. The results were used to optimize the geometry and the potentials of the ExB plates for obtaining the best energy resolution. This work was supported by NSF Grant NSF Grant No. DMR 1508719 and DMR 1338130.
NASA Astrophysics Data System (ADS)
Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik
2018-05-01
Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik
2018-05-10
Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
UWB Tracking Algorithms: AOA and TDOA
NASA Technical Reports Server (NTRS)
Ni, Jianjun David; Arndt, D.; Ngo, P.; Gross, J.; Refford, Melinda
2006-01-01
Ultra-Wideband (UWB) tracking prototype systems are currently under development at NASA Johnson Space Center for various applications on space exploration. For long range applications, a two-cluster Angle of Arrival (AOA) tracking method is employed for implementation of the tracking system; for close-in applications, a Time Difference of Arrival (TDOA) positioning methodology is exploited. Both AOA and TDOA are chosen to utilize the achievable fine time resolution of UWB signals. This talk presents a brief introduction to AOA and TDOA methodologies. The theoretical analysis of these two algorithms reveal the affecting parameters impact on the tracking resolution. For the AOA algorithm, simulations show that a tracking resolution less than 0.5% of the range can be achieved with the current achievable time resolution of UWB signals. For the TDOA algorithm used in close-in applications, simulations show that the (sub-inch) high tracking resolution is achieved with a chosen tracking baseline configuration. The analytical and simulated results provide insightful guidance for the UWB tracking system design.
Development of high resolution simulations of the atmospheric environment using the MASS model
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Zack, John W.; Karyampudi, V. Mohan
1989-01-01
Numerical simulations were performed with a very high resolution (7.25 km) version of the MASS model (Version 4.0) in an effort to diagnose the vertical wind shear and static stability structure during the Shuttle Challenger disaster which occurred on 28 January 1986. These meso-beta scale simulations reveal that the strongest vertical wind shears were concentrated in the 200 to 150 mb layer at 1630 GMT, i.e., at about the time of the disaster. These simulated vertical shears were the result of two primary dynamical processes. The juxtaposition of both of these processes produced a shallow (30 mb deep) region of strong vertical wind shear, and hence, low Richardson number values during the launch time period. Comparisons with the Cape Canaveral (XMR) rawinsonde indicates that the high resolution MASS 4.0 simulation more closely emulated nature than did previous simulations of the same event with the GMASS model.
NASA Technical Reports Server (NTRS)
Baker, R. David; Wang, Yansen; Tao, Wei-Kuo; Wetzel, Peter; Belcher, Larry R.
2004-01-01
High-resolution mesoscale model simulations of the 6-7 May 2000 Missouri flash flood event were performed to test the impact of model initialization and land surface treatment on timing, intensity, and location of extreme precipitation. In this flash flood event, a mesoscale convective system (MCS) produced over 340 mm of rain in roughly 9 hours in some locations. Two different types of model initialization were employed: 1) NCEP global reanalysis with 2.5-degree grid spacing and 12-hour temporal resolution, and 2) Eta reanalysis with 40- km grid spacing and $hour temporal resolution. In addition, two different land surface treatments were considered. A simple land scheme. (SLAB) keeps soil moisture fixed at initial values throughout the simulation, while a more sophisticated land model (PLACE) allows for r interactive feedback. Simulations with high-resolution Eta model initialization show considerable improvement in the intensity of precipitation due to the presence in the initialization of a residual mesoscale convective vortex (hlCV) from a previous MCS. Simulations with the PLACE land model show improved location of heavy precipitation. Since soil moisture can vary over time in the PLACE model, surface energy fluxes exhibit strong spatial gradients. These surface energy flux gradients help produce a strong low-level jet (LLJ) in the correct location. The LLJ then interacts with the cold outflow boundary of the MCS to produce new convective cells. The simulation with both high-resolution model initialization and time-varying soil moisture test reproduces the intensity and location of observed rainfall.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Chengguang; Drinkwater, Bruce W.
In this paper the performance of total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method.more » However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.« less
NASA Astrophysics Data System (ADS)
Michael, Scott; Steiman-Cameron, Thomas Y.; Durisen, Richard H.; Boley, Aaron C.
2012-02-01
We conduct a convergence study of a protostellar disk, subject to a constant global cooling time and susceptible to gravitational instabilities (GIs), at a time when heating and cooling are roughly balanced. Our goal is to determine the gravitational torques produced by GIs, the level to which transport can be represented by a simple α-disk formulation, and to examine fragmentation criteria. Four simulations are conducted, identical except for the number of azimuthal computational grid points used. A Fourier decomposition of non-axisymmetric density structures in cos (mphi), sin (mphi) is performed to evaluate the amplitudes Am of these structures. The Am , gravitational torques, and the effective Shakura & Sunyaev α arising from gravitational stresses are determined for each resolution. We find nonzero Am for all m-values and that Am summed over all m is essentially independent of resolution. Because the number of measurable m-values is limited to half the number of azimuthal grid points, higher-resolution simulations have a larger fraction of their total amplitude in higher-order structures. These structures act more locally than lower-order structures. Therefore, as the resolution increases the total gravitational stress decreases as well, leading higher-resolution simulations to experience weaker average gravitational torques than lower-resolution simulations. The effective α also depends upon the magnitude of the stresses, thus αeff also decreases with increasing resolution. Our converged αeff is consistent with predictions from an analytic local theory for thin disks by Gammie, but only over many dynamic times when averaged over a substantial volume of the disk.
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
Fast-time Simulation of an Automated Conflict Detection and Resolution Concept
NASA Technical Reports Server (NTRS)
Windhorst, Robert; Erzberger, Heinz
2006-01-01
This paper investigates the effect on the National Airspace System of reducing air traffc controller workload by automating conflict detection and resolution. The Airspace Concept Evaluation System is used to perform simulations of the Cleveland Center with conventional and with automated conflict detection and resolution concepts. Results show that the automated conflict detection and resolution concept significantly decreases growth of delay as traffic demand is increased in en-route airspace.
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Computational Models of Protein Kinematics and Dynamics: Beyond Simulation
Gipson, Bryant; Hsu, David; Kavraki, Lydia E.; Latombe, Jean-Claude
2016-01-01
Physics-based simulation represents a powerful method for investigating the time-varying behavior of dynamic protein systems at high spatial and temporal resolution. Such simulations, however, can be prohibitively difficult or lengthy for large proteins or when probing the lower-resolution, long-timescale behaviors of proteins generally. Importantly, not all questions about a protein system require full space and time resolution to produce an informative answer. For instance, by avoiding the simulation of uncorrelated, high-frequency atomic movements, a larger, domain-level picture of protein dynamics can be revealed. The purpose of this review is to highlight the growing body of complementary work that goes beyond simulation. In particular, this review focuses on methods that address kinematics and dynamics, as well as those that address larger organizational questions and can quickly yield useful information about the long-timescale behavior of a protein. PMID:22524225
The North American Regional Climate Change Assessment Program (NARCCAP): Status and results
NASA Astrophysics Data System (ADS)
Gutowski, W. J.
2009-12-01
NARCCAP is a multi-institutional program that is investigating systematically the uncertainties in regional scale simulations of contemporary climate and projections of future climate. NARCCAP is supported by multiple federal agencies. NARCCAP is producing an ensemble of high-resolution climate-change scenarios by nesting multiple RCMs in reanalyses and multiple atmosphere-ocean GCM simulations of contemporary and future-scenario climates. The RCM domains cover the contiguous U.S., northern Mexico, and most of Canada. The simulation suite also includes time-slice, high resolution GCMs that use sea-surface temperatures from parent atmosphere-ocean GCMs. The baseline resolution of the RCMs and time-slice GCMs is 50 km. Simulations use three sources of boundary conditions: National Centers for Environmental Prediction (NCEP)/Department of Energy (DOE) AMIP-II Reanalysis, GCMs simulating contemporary climate and GCMs using the A2 SRES emission scenario for the twenty-first century. Simulations cover 1979-2004 and 2038-2060, with the first 3 years discarded for spin-up. The resulting RCM and time-slice simulations offer opportunity for extensive analysis of RCM simulations as well as a basis for multiple high-resolution climate scenarios for climate change impacts assessments. Geophysical statisticians are developing measures of uncertainty from the ensemble. To enable very high-resolution simulations of specific regions, both RCM and high-resolution time-slice simulations are saving output needed for further downscaling. All output is publically available to the climate analysis and the climate impacts assessment community, through an archiving and data-distribution plan. Some initial results show that the models closely reproduce ENSO-related precipitation variations in coastal California, where the correlation between the simulated and observed monthly time series exceeds 0.94 for all models. The strong El Nino events of 1982-83 and 1997-98 are well reproduced for the Pacific coastal region of the U.S. in all models. ENSO signals are less well reproduced in other regions. The models also produce well extreme monthly precipitation in coastal California and the Upper Midwest. Model performance tends to deteriorate from west to east across the domain, or roughly from the inflow boundary toward the outflow boundary. This deterioration with distance from the inflow boundary is ameliorated to some extent in models formulated such that large-scale information is included in the model solution, whether implemented by spectral nudging or by use of a perturbation form of the governing equations.
Low-resolution simulations of vesicle suspensions in 2D
NASA Astrophysics Data System (ADS)
Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George
2018-03-01
Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions and, correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling works. Two-phase flow simulations in heterogeneous media usually require much longer computational time than that in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors become available in scientific and engineering communities. Such supercomputers may attract attentions from geoscientist and reservoir engineers for solving the large and non-linear models in higher resolutions within a reasonable time. However, for making it a useful tool, it is essential to tackle several practical obstacles to utilize large number of processors effectively for general-purpose reservoir simulators. We have implemented massively-parallel versions of two TOUGH2 family codes (a multi-phase flow simulator TOUGH2 and a chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones in a reservoir scale. The performance measurement confirmed that the both simulators exhibit excellent scalabilities showing almost linear speedup against number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high resolution geologic models with multi-million grid in a practical time (e.g., less than a second per time step).
Imaging performance of a LaBr3-based PET scanner
Daube-Witherspoon, M E; Surti, S; Perkins, A; Kyba, C C M; Wiener, R; Werner, M E; Kulp, R; Karp, J S
2010-01-01
A prototype time-of-flight (TOF) PET scanner based on cerium-doped lanthanum bromide [LaBr3 (5% Ce)] has been developed. LaBr3 has high light output, excellent energy resolution, and fast timing properties that have been predicted to lead to good image quality. Intrinsic performance measurements of spatial resolution, sensitivity, and scatter fraction demonstrate good conventional PET performance; the results agree with previous simulation studies. Phantom measurements show the excellent image quality achievable with the prototype system. Phantom measurements and corresponding simulations show a faster and more uniform convergence rate, as well as more uniform quantification, for TOF reconstruction of the data, which have 375-ps intrinsic timing resolution, compared to non-TOF images. Measurements and simulations of a hot and cold sphere phantom show that the 7% energy resolution helps to mitigate residual errors in the scatter estimate because a high energy threshold (>480 keV) can be used to restrict the amount of scatter accepted without a loss of true events. Preliminary results with incorporation of a model of detector blurring in the iterative reconstruction algorithm show improved contrast recovery but also point out the importance of an accurate resolution model of the tails of LaBr3’s point spread function. The LaBr3 TOF-PET scanner has demonstrated the impact of superior timing and energy resolutions on image quality. PMID:19949259
Utilization of Short-Simulations for Tuning High-Resolution Climate Model
NASA Astrophysics Data System (ADS)
Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.
2016-12-01
Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning the models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short (< 10 days ) and longer ( 1 year) Perturbed Parameters Ensemble (PPE) simulations at low resolution to identify model feature sensitivity to parameter changes. The CAPT tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that it is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choice for the reduction of large model biases dramatically improves the turnaround time for the tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former to survey the likely responses and narrow the parameter space, and the latter to verify the results in climate context along with assessment in greater detail once an educated set of parameter choice is selected. Limitations on using short-term simulations for tuning climate model are also discussed.
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, second- to twelfth-order (3 to 13 point stencils) and Dispersion Relation Preserving schemes from 7 to 13 point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13 point DRP spatial discretization scheme of Bogey and Bailly are used. The high resolution numerics used allows for the use of relatively sparse grids. Three levels of grid resolution are examined, 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
NASA Astrophysics Data System (ADS)
Zhang, Jiaying; Gang, Tie; Ye, Chaofeng; Cong, Sen
2018-04-01
Linear-chirp-Golay (LCG)-coded excitation combined with pulse compression is proposed in this paper to improve the time resolution and suppress sidelobe in ultrasonic testing. The LCG-coded excitation is binary complementary pair Golay signal with linear-chirp signal applied on every sub pulse. Compared with conventional excitation which is a common ultrasonic testing method using a brief narrow pulse as exciting signal, the performances of LCG-coded excitation, in terms of time resolution improvement and sidelobe suppression, are studied via numerical and experimental investigations. The numerical simulations are implemented using Matlab K-wave toolbox. It is seen from the simulation results that time resolution of LCG excitation is 35.5% higher and peak sidelobe level (PSL) is 57.6 dB lower than linear-chirp excitation with 2.4 MHz chirp bandwidth and 3 μs time duration. In the B-scan experiment, time resolution of LCG excitation is higher and PSL is lower than conventional brief pulse excitation and chirp excitation. In terms of time resolution, LCG-coded signal has better performance than chirp signal. Moreover, the impact of chirp bandwidth on LCG-coded signal is less than that on chirp signal. In addition, the sidelobe of LCG-coded signal is lower than that of chirp signal with pulse compression.
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
A New Approach to Modeling Jupiter's Magnetosphere
NASA Astrophysics Data System (ADS)
Fukazawa, K.; Katoh, Y.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.
2017-12-01
The scales in planetary magnetospheres range from 10s of planetary radii to kilometers. For a number of years we have studied the magnetospheres of Jupiter and Saturn by using 3-dimensional magnetohydrodynamic (MHD) simulations. However, we have not been able to reach even the limits of the MHD approximation because of the large amount of computer resources required. Recently thanks to the progress in supercomputer systems, we have obtained the capability to simulate Jupiter's magnetosphere with 1000 times the number of grid points used in our previous simulations. This has allowed us to combine the high resolution global simulation with a micro-scale simulation of the Jovian magnetosphere. In particular we can combine a hybrid (kinetic ions and fluid electrons) simulation with the MHD simulation. In addition, the new capability enables us to run multi-parameter survey simulations of the Jupiter-solar wind system. In this study we performed a high-resolution simulation of Jovian magnetosphere to connect with the hybrid simulation, and lower resolution simulations under the various solar wind conditions to compare with Hisaki and Juno observations. In the high-resolution simulation we used a regular Cartesian gird with 0.15 RJ grid spacing and placed the inner boundary at 7 RJ. From these simulation settings, we provide the magnetic field out to around 20 RJ from Jupiter as a background field for the hybrid simulation. For the first time we have been able to resolve Kelvin Helmholtz waves on the magnetopause. We have investigated solar wind dynamic pressures between 0.01 and 0.09 nPa for a number of IMF values. These simulation data are open for the registered users to download the raw data. We have compared the results of these simulations with Hisaki auroral observations.
Non-technical skills of surgeons and anaesthetists in simulated operating theatre crises.
Doumouras, A G; Hamidi, M; Lung, K; Tarola, C L; Tsao, M W; Scott, J W; Smink, D S; Yule, S
2017-07-01
Deficiencies in non-technical skills (NTS) have been increasingly implicated in avoidable operating theatre errors. Accordingly, this study sought to characterize the impact of surgeon and anaesthetist non-technical skills on time to crisis resolution in a simulated operating theatre. Non-technical skills were assessed during 26 simulated crises (haemorrhage and airway emergency) performed by surgical teams. Teams consisted of surgeons, anaesthetists and nurses. Behaviour was assessed by four trained raters using the Non-Technical Skills for Surgeons (NOTSS) and Anaesthetists' Non-Technical Skills (ANTS) rating scales before and during the crisis phase of each scenario. The primary endpoint was time to crisis resolution; secondary endpoints included NTS scores before and during the crisis. A cross-classified linear mixed-effects model was used for the final analysis. Thirteen different surgical teams were assessed. Higher NTS ratings resulted in significantly faster crisis resolution. For anaesthetists, every 1-point increase in ANTS score was associated with a decrease of 53·50 (95 per cent c.i. 31·13 to 75·87) s in time to crisis resolution (P < 0·001). Similarly, for surgeons, every 1-point increase in NOTSS score was associated with a decrease of 64·81 (26·01 to 103·60) s in time to crisis resolution in the haemorrhage scenario (P = 0·001); however, this did not apply to the difficult airway scenario. Non-technical skills scores were lower during the crisis phase of the scenarios than those measured before the crisis for both surgeons and anaesthetists. A higher level of NTS of surgeons and anaesthetists led to quicker crisis resolution in a simulated operating theatre environment. © 2017 BJS Society Ltd Published by John Wiley & Sons Ltd.
Watching proteins function with picosecond X-ray crystallography and molecular dynamics simulations.
NASA Astrophysics Data System (ADS)
Anfinrud, Philip
2006-03-01
Time-resolved electron density maps of myoglobin, a ligand-binding heme protein, have been stitched together into movies that unveil with < 2-å spatial resolution and 150-ps time-resolution the correlated protein motions that accompany and/or mediate ligand migration within the hydrophobic interior of a protein. A joint analysis of all-atom molecular dynamics (MD) calculations and picosecond time-resolved X-ray structures provides single-molecule insights into mechanisms of protein function. Ensemble-averaged MD simulations of the L29F mutant of myoglobin following ligand dissociation reproduce the direction, amplitude, and timescales of crystallographically-determined structural changes. This close agreement with experiments at comparable resolution in space and time validates the individual MD trajectories, which identify and structurally characterize a conformational switch that directs dissociated ligands to one of two nearby protein cavities. This unique combination of simulation and experiment unveils functional protein motions and illustrates at an atomic level relationships among protein structure, dynamics, and function. In collaboration with Friedrich Schotte and Gerhard Hummer, NIH.
Simulation of a small muon tomography station system based on RPCs
NASA Astrophysics Data System (ADS)
Chen, S.; Li, Q.; Ma, J.; Kong, H.; Ye, Y.; Gao, J.; Jiang, Y.
2014-10-01
In this work, Monte Carlo simulations were used to study the performance of a small muon Tomography Station based on four glass resistive plate chambers(RPCs) with a spatial resolution of approximately 1.0mm (FWHM). We developed a simulation code to generate cosmic ray muons with the appropriate distribution of energies and angles. PoCA and EM algorithm were used to rebuild the objects for comparison. We compared Z discrimination time with and without muon momentum measurement. The relation between Z discrimination time and spatial resolution was also studied. Simulation results suggest that mean scattering angle is a better Z indicator and upgrading to larger RPCs will improve reconstruction image quality.
Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.
2015-01-01
Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work, is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559
NASA Astrophysics Data System (ADS)
Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.
2015-07-01
Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work, is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3× 3× 20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162+/- 1 ps FWHM, approaching the analytically calculated lower bound within 6.5%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreyev, A.
Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlomore » simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.« less
Tang, Yunqing; Dai, Luru; Zhang, Xiaoming; Li, Junbai; Hendriks, Johnny; Fan, Xiaoming; Gruteser, Nadine; Meisenberg, Annika; Baumann, Arnd; Katranidis, Alexandros; Gensch, Thomas
2015-01-01
Single molecule localization based super-resolution fluorescence microscopy offers significantly higher spatial resolution than predicted by Abbe’s resolution limit for far field optical microscopy. Such super-resolution images are reconstructed from wide-field or total internal reflection single molecule fluorescence recordings. Discrimination between emission of single fluorescent molecules and background noise fluctuations remains a great challenge in current data analysis. Here we present a real-time, and robust single molecule identification and localization algorithm, SNSMIL (Shot Noise based Single Molecule Identification and Localization). This algorithm is based on the intrinsic nature of noise, i.e., its Poisson or shot noise characteristics and a new identification criterion, QSNSMIL, is defined. SNSMIL improves the identification accuracy of single fluorescent molecules in experimental or simulated datasets with high and inhomogeneous background. The implementation of SNSMIL relies on a graphics processing unit (GPU), making real-time analysis feasible as shown for real experimental and simulated datasets. PMID:26098742
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiswell, S
2009-01-11
Assimilation of radar velocity and precipitation fields into high-resolution model simulations can improve precipitation forecasts with decreased 'spin-up' time and improve short-term simulation of boundary layer winds (Benjamin, 2004 & 2007; Xiao, 2008) which is critical to improving plume transport forecasts. Accurate description of wind and turbulence fields is essential to useful atmospheric transport and dispersion results, and any improvement in the accuracy of these fields will make consequence assessment more valuable during both routine operation as well as potential emergency situations. During 2008, the United States National Weather Service (NWS) radars implemented a significant upgrade which increased the real-timemore » level II data resolution to 8 times their previous 'legacy' resolution, from 1 km range gate and 1.0 degree azimuthal resolution to 'super resolution' 250 m range gate and 0.5 degree azimuthal resolution (Fig 1). These radar observations provide reflectivity, velocity and returned power spectra measurements at a range of up to 300 km (460 km for reflectivity) at a frequency of 4-5 minutes and yield up to 13.5 million point observations per level in super-resolution mode. The migration of National Weather Service (NWS) WSR-88D radars to super resolution is expected to improve warning lead times by detecting small scale features sooner with increased reliability; however, current operational mesoscale model domains utilize grid spacing several times larger than the legacy data resolution, and therefore the added resolution of radar data is not fully exploited. The assimilation of super resolution reflectivity and velocity data into high resolution numerical weather model forecasts where grid spacing is comparable to the radar data resolution is investigated here to determine the impact of the improved data resolution on model predictions.« less
NASA Astrophysics Data System (ADS)
Philip, S.; Martin, R. V.; Keller, C. A.
2015-11-01
Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Bugbee, Bruce; Gotseff, Peter
Capturing technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time resolution (e.g. 1 minute), long-duration (e.g. 1 year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down select representative time segments (e.g. days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g. PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping tomore » select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.« less
NASA Technical Reports Server (NTRS)
Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)
2001-01-01
We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
Speeding up N-body simulations of modified gravity: chameleon screening models
NASA Astrophysics Data System (ADS)
Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo
2017-02-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 5123 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
Mondal, Nagendra Nath
2009-01-01
This study presents Monte Carlo Simulation (MCS) results of detection efficiencies, spatial resolutions and resolving powers of a time-of-flight (TOF) PET detector systems. Cerium activated Lutetium Oxyorthosilicate (Lu2SiO5: Ce in short LSO), Barium Fluoride (BaF2) and BriLanCe 380 (Cerium doped Lanthanum tri-Bromide, in short LaBr3) scintillation crystals are studied in view of their good time and energy resolutions and shorter decay times. The results of MCS based on GEANT show that spatial resolution, detection efficiency and resolving power of LSO are better than those of BaF2 and LaBr3, although it possesses inferior time and energy resolutions. Instead of the conventional position reconstruction method, newly established image reconstruction (talked about in the previous work) method is applied to produce high-tech images. Validation is a momentous step to ensure that this imaging method fulfills all purposes of motivation discussed by reconstructing images of two tumors in a brain phantom. PMID:20098551
Medvigy, David; Kim, Seung Hee; Kim, Jinwon; Kafatos, Menas C
2016-07-01
Models that predict the timing of deciduous tree leaf emergence are typically very sensitive to temperature. However, many temperature data products, including those from climate models, have been developed at a very coarse spatial resolution. Such coarse-resolution temperature products can lead to highly biased predictions of leaf emergence. This study investigates how dynamical downscaling of climate models impacts simulations of deciduous tree leaf emergence in California. Models for leaf emergence are forced with temperatures simulated by a general circulation model (GCM) at ~200-km resolution for 1981-2000 and 2031-2050 conditions. GCM simulations are then dynamically downscaled to 32- and 8-km resolution, and leaf emergence is again simulated. For 1981-2000, the regional average leaf emergence date is 30.8 days earlier in 32-km simulations than in ~200-km simulations. Differences between the 32 and 8 km simulations are small and mostly local. The impact of downscaling from 200 to 8 km is ~15 % smaller in 2031-2050 than in 1981-2000, indicating that the impacts of downscaling are unlikely to be stationary.
Impacts of high resolution data on traveler compliance levels in emergency evacuation simulations
Lu, Wei; Han, Lee D.; Liu, Cheng; ...
2016-05-05
In this article, we conducted a comparison study of evacuation assignment based on Traffic Analysis Zones (TAZ) and high resolution LandScan USA Population Cells (LPC) with detailed real world roads network. A platform for evacuation modeling built on high resolution population distribution data and activity-based microscopic traffic simulation was proposed. This platform can be extended to any cities in the world. The results indicated that evacuee compliance behavior affects evacuation efficiency with traditional TAZ assignment, but it did not significantly compromise the performance with high resolution LPC assignment. The TAZ assignment also underestimated the real travel time during evacuation. Thismore » suggests that high data resolution can improve the accuracy of traffic modeling and simulation. The evacuation manager should consider more diverse assignment during emergency evacuation to avoid congestions.« less
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Tao, Wei-Kuo; Wu, Man-Li C.
2010-01-01
In this study, extended -range (30 -day) high-resolution simulations with the NASA global mesoscale model are conducted to simulate the initiation and propagation of six consecutive African easterly waves (AEWs) from late August to September 2006 and their association with hurricane formation. It is shown that the statistical characteristics of individual AEWs are realistically simulated with larger errors in the 5th and 6th AEWs. Remarkable simulations of a mean African easterly jet (AEJ) are also obtained. Nine additional 30 -day experiments suggest that although land surface processes might contribute to the predictability of the AEJ and AEWs, the initiation and detailed evolution of AEWs still depend on the accurate representation of dynamic and land surface initial conditions and their time -varying nonlinear interactions. Of interest is the potential to extend the lead time for predicting hurricane formation (e.g., a lead time of up to 22 days) as the 4th AEW is realistically simulated.
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.
2000-01-01
The development of and results obtained with a variable resolution stretched-grid GCM for the regional climate simulation mode, are presented. A global variable resolution stretched- grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest The stretched-grid approach is an ideal tool for representing regional to global scale interaction& It is an alternative to the widely used nested grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought- The straightforward (with no updates) two month simulation is performed with 60 km regional resolution- The major drought fields, patterns and characteristics such as the time averaged 500 hPa heights precipitation and the low level jet over the drought area. appear to be close to the verifying analyses for the stretched-grid simulation- In other words, the stretched-grid GCM provides an efficient down-scaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation extended to one year. The developed and tested in a simulation mode stretched-grid GCM is a viable tool for regional and subregional climate studies and applications.
Motion control of 7-DOF arms - The configuration control approach
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Long, Mark K.; Lee, Thomas S.
1993-01-01
Graphics simulation and real-time implementation of configuration control schemes for a redundant 7-DOF Robotics Research arm are described. The arm kinematics and motion control schemes are described briefly. This is followed by a description of a graphics simulation environment for 7-DOF arm control on the Silicon Graphics IRIS Workstation. Computer simulation results are presented to demonstrate elbow control, collision avoidance, and optimal joint movement as redundancy resolution goals. The laboratory setup for experimental validation of motion control of the 7-DOF Robotics Research arm is then described. The configuration control approach is implemented on a Motorola-68020/VME-bus-based real-time controller, with elbow positioning for redundancy resolution. Experimental results demonstrate the efficacy of configuration control for real-time control.
Spatial resolution limitation of liquid crystal spatial light modulator
NASA Astrophysics Data System (ADS)
Wang, Xinghua; Wang, Bin; McManamon, Paul F., III; Pouch, John J.; Miranda, Felix A.; Anderson, James E.; Bos, Philip J.
2004-10-01
The effect of fringing electric fields in a liquid crystal (LC) Optical Phased Array (OPA), also referred to as a spatial light modulator (SLM), is a governing factor that determines the diffraction efficiency (DE) of the LC OPA for high resolution spatial phase modulation. In this article, the fringing field effect in a high resolution LC OPA is studied by accurate modeling the DE of the LC blazed gratings by LC director simulation and Finite Difference Time Domain (FDTD) simulation. Influence factors that contribute significantly to the DE are discussed. Such results provide fundamental understanding for high resolution LC devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Sownak; Li, Baojiu; He, Jian-hua
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f ( R ) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergencemore » rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f ( R ) simulations. For example, a test simulation with 512{sup 3} particles in a box of size 512 Mpc/ h is now 5 times faster than before, while a Millennium-resolution simulation for f ( R ) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.« less
Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction
Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald
2015-01-01
In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
NASA Astrophysics Data System (ADS)
Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.
2013-12-01
In previous work demonstrated the application of a var¬iable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest, was also explored. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long term climate projection studies performed by downscaling at much higher resolutions (8 Km) over an area that stretches from just south of Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of atmospheric circulation from 2.5 deg resolution NCEP reanalysis at 6-hourly interval and SSTs from NCEP reanalysis data uses as lower boundary forcing. CCAM 60 Km resolution downscaled to 8 Km (Schmidt factor 24.75) then 8 Km resolution simulation downscaled to 1 Km (Schmidt factor 200) over an area approximately 50 Km x 50 Km in the southern Lake Chad Basin (LCB). Our intent in conducting these high resolution model runs was to obtain a deeper understanding of linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.
High-resolution regional climate model evaluation using variable-resolution CESM over California
NASA Astrophysics Data System (ADS)
Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.
2015-12-01
Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations will focus on relatively high resolutions for climate assessment, namely 28km and 14km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model will be used for simulations at 27km and 9km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the Weather Research and Forcasting (WRF) model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improve our understanding of both past and future regional climate related to fine-scale processes. This assessment is also relevant for addressing the scale limitation of current RCMs or VRGCMs when next-generation model resolution increases to ~10km and beyond.
NASA Technical Reports Server (NTRS)
da Silva, Arlindo M.; Putman, William; Nattala, J.
2014-01-01
This document describes the gridded output files produced by a two-year global, non-hydrostatic mesoscale simulation for the period 2005-2006 produced with the non-hydrostatic version of GEOS-5 Atmospheric Global Climate Model (AGCM). In addition to standard meteorological parameters (wind, temperature, moisture, surface pressure), this simulation includes 15 aerosol tracers (dust, sea-salt, sulfate, black and organic carbon), O3, CO and CO2. This model simulation is driven by prescribed sea-surface temperature and sea-ice, daily volcanic and biomass burning emissions, as well as high-resolution inventories of anthropogenic sources. A description of the GEOS-5 model configuration used for this simulation can be found in Putman et al. (2014). The simulation is performed at a horizontal resolution of 7 km using a cubed-sphere horizontal grid with 72 vertical levels, extending up to to 0.01 hPa (approximately 80 km). For user convenience, all data products are generated on two logically rectangular longitude-latitude grids: a full-resolution 0.0625 deg grid that approximately matches the native cubed-sphere resolution, and another 0.5 deg reduced-resolution grid. The majority of the full-resolution data products are instantaneous with some fields being time-averaged. The reduced-resolution datasets are mostly time-averaged, with some fields being instantaneous. Hourly data intervals are used for the reduced-resolution datasets, while 30-minute intervals are used for the full-resolution products. All full-resolution output is on the model's native 72-layer hybrid sigma-pressure vertical grid, while the reduced-resolution output is given on native vertical levels and on 48 pressure surfaces extending up to 0.02 hPa. Section 4 presents additional details on horizontal and vertical grids. Information of the model surface representation can be found in Appendix B. The GEOS-5 product is organized into file collections that are described in detail in Appendix C. Additional details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.
NASA Astrophysics Data System (ADS)
Lai, Hanh; McJunkin, Timothy R.; Miller, Carla J.; Scott, Jill R.; Almirall, José R.
2008-09-01
The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program in conjunction with SolidWorks® with COSMSOSFloWorks® fluid dynamics software to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene, 2,7-dinitrofluorene, and cocaine). The effort of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: (1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctly predict the ion drift times; (2) a drift gas composition study evaluates the accuracy in predicting the resolution; (3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanh Lai; Timothy R. McJunkin; Carla J. Miller
2008-09-01
The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program in conjunction with SolidWorks® with COSMSOFloWorks® fluid dynamics software to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene and cocaine). The effort of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: 1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctlymore » predict the ion drift times; 2) a drift gas composition study evaluates the accuracy in predicting the resolution; and 3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.« less
An Examination of Parameters Affecting Large Eddy Simulations of Flow Past a Square Cylinder
NASA Technical Reports Server (NTRS)
Mankbadi, M. R.; Georgiadis, N. J.
2014-01-01
Separated flow over a bluff body is analyzed via large eddy simulations. The turbulent flow around a square cylinder features a variety of complex flow phenomena such as highly unsteady vortical structures, reverse flow in the near wall region, and wake turbulence. The formation of spanwise vortices is often times artificially suppressed in computations by either insufficient depth or a coarse spanwise resolution. As the resolution is refined and the domain extended, the artificial turbulent energy exchange between spanwise and streamwise turbulence is eliminated within the wake region. A parametric study is performed highlighting the effects of spanwise vortices where the spanwise computational domain's resolution and depth are varied. For Re=22,000, the mean and turbulent statistics computed from the numerical large eddy simulations (NLES) are in good agreement with experimental data. Von-Karman shedding is observed in the wake of the cylinder. Mesh independence is illustrated by comparing a mesh resolution of 2 million to 16 million. Sensitivities to time stepping were minimized and sampling frequency sensitivities were nonpresent. While increasing the spanwise depth and resolution can be costly, this practice was found to be necessary to eliminating the artificial turbulent energy exchange.
Chelliah, Pandian; Sahoo, Trilochan; Singh, Sheela; Sujatha, Annie
2015-10-20
A Fourier transform spectrometer (FTS) used for interrogating a fiber Bragg grating (FBG) consists of a scanning-type interferometer. The FTS has a broad wavelength range of operation and good multiplexing capability. However, it has poor wavelength resolution and interrogation speed. We propose a modification to the FTS using path delay multiplexing to improve the same. Using this method, spatial resolution and interrogation time can be improved by n times by using n path delays. In this paper, simulation results for n=2, 5 are shown.
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
Microdome-gooved Gd(2)O(2)S:Tb scintillator for flexible and high resolution digital radiography.
Jung, Phill Gu; Lee, Chi Hoon; Bae, Kong Myeong; Lee, Jae Min; Lee, Sang Min; Lim, Chang Hwy; Yun, Seungman; Kim, Ho Kyung; Ko, Jong Soo
2010-07-05
A flexible microdome-grooved Gd(2)O(2)S:Tb scintillator is simulated, fabricated, and characterized for digital radiography applications. According to Monte Carlo simulation results, the dome-grooved structure has a high spatial resolution, which is verified by X-ray image performance of the scintillator. The proposed scintillator has lower X-ray sensitivity than a nonstructured scintillator but almost two times higher spatial resolution at high spatial frequency. Through evaluation of the X-ray performance of the fabricated scintillators, we confirm that the microdome-grooved scintillator can be applied to next-generation flexible digital radiography systems requiring high spatial resolution.
Avalanche statistics from data with low time resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distributionmore » of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.« less
Avalanche statistics from data with low time resolution
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.; ...
2016-11-22
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distributionmore » of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.« less
Monte Carlo simulation of the resolution volume for the SEQUOIA spectrometer
NASA Astrophysics Data System (ADS)
Granroth, G. E.; Hahn, S. E.
2015-01-01
Monte Carlo ray tracing simulations, of direct geometry spectrometers, have been particularly useful in instrument design and characterization. However, these tools can also be useful for experiment planning and analysis. To this end, the McStas Monte Carlo ray tracing model of SEQUOIA, the fine resolution fermi chopper spectrometer at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL), has been modified to include the time of flight resolution sample and detector components. With these components, the resolution ellipsoid can be calculated for any detector pixel and energy bin of the instrument. The simulation is split in two pieces. First, the incident beamline up to the sample is simulated for 1 × 1011 neutron packets (4 days on 30 cores). This provides a virtual source for the backend that includes the resolution sample and monitor components. Next, a series of detector and energy pixels are computed in parallel. It takes on the order of 30 s to calculate a single resolution ellipsoid on a single core. Python scripts have been written to transform the ellipsoid into the space of an oriented single crystal, and to characterize the ellipsoid in various ways. Though this tool is under development as a planning tool, we have successfully used it to provide the resolution function for convolution with theoretical models. Specifically, theoretical calculations of the spin waves in YFeO3 were compared to measurements taken on SEQUOIA. Though the overall features of the spectra can be explained while neglecting resolution effects, the variation in intensity of the modes is well described once the resolution is included. As this was a single sharp mode, the simulated half intensity value of the resolution ellipsoid was used to provide the resolution width. A description of the simulation, its use, and paths forward for this technique will be discussed.
NASA Astrophysics Data System (ADS)
Ko, A.; Mascaro, G.; Vivoni, E. R.
2017-12-01
Hyper-resolution (< 1 km) hydrological modeling is expected to support a range of studies related to the terrestrial water cycle. A critical need for increasing the utility of hyper-resolution modeling is the availability of meteorological forcings and land surface characteristics at high spatial resolution. Unfortunately, in many areas these datasets are only available at coarse (> 10 km) scales. In this study, we address some of the challenges by applying a parallel version of the Triangulated Irregular Network (TIN)-based Real Time Integrated Basin Simulator (tRIBS) to the Rio Sonora Basin (RSB) in northwest Mexico. The RSB is a large, semiarid watershed ( 21,000 km2) characterized by complex topography and a strong seasonality in vegetation conditions, due to the North American monsoon. We conducted simulations at an average spatial resolution of 88 m over a decadal (2004-2013) period using spatially-distributed forcings from remotely-sensed and reanalysis products. Meteorological forcings were derived from the North American Land Data Assimilation System (NLDAS) at the original resolution of 12 km and were downscaled at 1 km with techniques accounting for terrain effects. Two grids of soil properties were created from different sources, including: (i) CONABIO (Comisión Nacional para el Conocimiento y Uso de la Biodiversidad) at 6 km resolution; and (ii) ISRIC (International Soil Reference Information Centre) at 250 m. Time-varying vegetation parameters were derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) composite products. The model was first calibrated and validated through distributed soil moisture data from a network of 20 soil moisture stations during the monsoon season. Next, hydrologic simulations were conducted with five different combinations of coarse and downscaled forcings and soil properties. Outputs in the different configurations were then compared with independent observations of soil moisture, and with estimates of land surface temperature (1 km, daily) and evapotranspiration (1 km, monthly) from MODIS. This study is expected to support the community involved in hyper-resolution hydrologic modeling by identifying the crucial factors that, if available at higher resolution, lead to the largest improvement of the simulation prognostic capability.
a Spiral-Based Downscaling Method for Generating 30 M Time Series Image Data
NASA Astrophysics Data System (ADS)
Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.
2017-09-01
The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications in high spatial resolution scale. However, the fragmentized patches and seasonal variable of some land cover types (e. g. small crop field, wetland) make it labor-intensive and difficult in the generation of land cover data. Utilizing the high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data like Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and frequency of current land cover mapping / updating at the same time. The generation of high resolution time series may be a compromise to cover the shortage in land cover updating process. One of popular way is to downscale multi-temporal MODIS data with other high spatial resolution auxiliary data like Landsat. But the usual manner of downscaling pixel based on a window may lead to the underdetermined problem in heterogeneous area, result in the uncertainty of some high spatial resolution pixels. Therefore, the downscaled multi-temporal data can hardly reach high spatial resolution as Landsat data. A spiral based method was introduced to downscale low spatial and high temporal resolution image data to high spatial and high temporal resolution image data. By the way of searching the similar pixels around the adjacent region based on the spiral, the pixel set was made up in the adjacent region pixel by pixel. The underdetermined problem is prevented to a large extent from solving the linear system when adopting the pixel set constructed. With the help of ordinary least squares, the method inverted the endmember values of linear system. The high spatial resolution image was reconstructed on the basis of high spatial resolution class map and the endmember values band by band. Then, the high spatial resolution time series was formed with these high spatial resolution images image by image. Simulated experiment and remote sensing image downscaling experiment were conducted. In simulated experiment, the 30 meters class map dataset Globeland30 was adopted to investigate the effect on avoid the underdetermined problem in downscaling procedure and a comparison between spiral and window was conducted. Further, the MODIS NDVI and Landsat image data was adopted to generate the 30m time series NDVI in remote sensing image downscaling experiment. Simulated experiment results showed that the proposed method had a robust performance in downscaling pixel in heterogeneous region and indicated that it was superior to the traditional window-based methods. The high resolution time series generated may be a benefit to the mapping and updating of land cover data.
NASA Astrophysics Data System (ADS)
Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki
2017-12-01
This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), merely as a case study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of cases of MJO events for numerical simulations, in addition to integrating time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrnstein, Aaron R.
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration,more » and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO 2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.« less
Development of the GEOS-5 Atmospheric General Circulation Model: Evolution from MERRA to MERRA2.
NASA Technical Reports Server (NTRS)
Molod, Andrea; Takacs, Lawrence; Suarez, Max; Bacmeister, Julio
2014-01-01
The Modern-Era Retrospective Analysis for Research and Applications-2 (MERRA2) version of the GEOS-5 (Goddard Earth Observing System Model - 5) Atmospheric General Circulation Model (AGCM) is currently in use in the NASA Global Modeling and Assimilation Office (GMAO) at a wide range of resolutions for a variety of applications. Details of the changes in parameterizations subsequent to the version in the original MERRA reanalysis are presented here. Results of a series of atmosphere-only sensitivity studies are shown to demonstrate changes in simulated climate associated with specific changes in physical parameterizations, and the impact of the newly implemented resolution-aware behavior on simulations at different resolutions is demonstrated. The GEOS-5 AGCM presented here is the model used as part of the GMAO's MERRA2 reanalysis, the global mesoscale "nature run", the real-time numerical weather prediction system, and for atmosphere-only, coupled ocean-atmosphere and coupled atmosphere-chemistry simulations. The seasonal mean climate of the MERRA2 version of the GEOS-5 AGCM represents a substantial improvement over the simulated climate of the MERRA version at all resolutions and for all applications. Fundamental improvements in simulated climate are associated with the increased re-evaporation of frozen precipitation and cloud condensate, resulting in a wetter atmosphere. Improvements in simulated climate are also shown to be attributable to changes in the background gravity wave drag, and to upgrades in the relationship between the ocean surface stress and the ocean roughness. The series of "resolution aware" parameters related to the moist physics were shown to result in improvements at higher resolutions, and result in AGCM simulations that exhibit seamless behavior across different resolutions and applications.
Development of ALARO-Climate regional climate model for a very high resolution
NASA Astrophysics Data System (ADS)
Skalak, Petr; Farda, Ales; Brozkova, Radmila; Masek, Jan
2014-05-01
ALARO-Climate is a new regional climate model (RCM) derived from the ALADIN LAM model family. It is based on the numerical weather prediction model ALARO and developed at the Czech Hydrometeorological Institute. The model is expected to able to work in the so called "grey zone" physics (horizontal resolution of 4 - 7 km) and at the same time retain its ability to be operated in resolutions in between 20 and 50 km, which are typical for contemporary generation of regional climate models. Here we present the main results of the RCM ALARO-Climate model simulations in 25 and 6.25 km resolutions on the longer time-scale (1961-1990). The model was driven by the ERA-40 re-analyses and run on the integration domain of ~ 2500 x 2500 km size covering the central Europe. The simulated model climate was compared with the gridded observation of air temperature (mean, maximum, minimum) and precipitation from the E-OBS version dataset 8. Other simulated parameters (e.g., cloudiness, radiation or components of water cycle) were compared to the ERA-40 re-analyses. The validation of the first ERA-40 simulation in both, 25 km and 6.25 km resolutions, revealed significant cold biases in all seasons and overestimation of precipitation in the selected Central Europe target area (0° - 30° eastern longitude ; 40° - 60° northern latitude). The differences between these simulations were small and thus revealed a robustness of the model's physical parameterization on the resolution change. The series of 25 km resolution simulations with several model adaptations was carried out to study their effect on the simulated properties of climate variables and thus possibly identify a source of major errors in the simulated climate. The current investigation suggests the main reason for biases is related to the model physic. Acknowledgements: This study was performed within the frame of projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation) and CzechGlobe Centre (CZ.1.05/1.1.00/02.0073). The partial support was also provided under the projects P209-11-0956 of the Czech Science Foundation and CZ.1.07/2.4.00/31.0056 (Operational Programme of Education for Competitiveness of Ministry of Education, Youth and Sports of the Czech Republic).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, K.; Wilson, R.J.; Hemler, R.S.
1999-11-15
The large-scale circulation in the Geophysical Fluid Dynamics Laboratory SKYHI troposphere-stratosphere-mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of {approximately}35 km and another with {approximately}100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in the early Southern Hemisphere winter the South Pole near 1 mb in the model is colder thanmore » observed, but the bias is reduced with improved horizontal resolution (from {approximately}70 C in a version with {approximately}300 km grid spacing to less than 10 C in the {approximately}35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere {approximately}1.5 km, the lower stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and with formation of strong vertical shear layers. This appears to be a spontaneous internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.« less
NASA Technical Reports Server (NTRS)
Kavaya, Michael J.; Singh, Upendra N.; Koch, Grady J.; Yu, Jirong; Frehlich, Rod G.
2009-01-01
We present preliminary results of computer simulations of the error in measuring carbon dioxide mixing ratio profiles from earth orbit. The simulated sensor is a pulsed, 2-micron, coherent-detection lidar alternately operating on at least two wavelengths. The simulated geometry is a nadir viewing lidar measuring the column content signal. Atmospheric absorption is modeled using FASCODE3P software with the HITRAN 2004 absorption line data base. Lidar shot accumulation is employed up to the horizontal resolution limit. Horizontal resolutions of 50, 100, and 200 km are shown. Assuming a 400 km spacecraft orbit, the horizontal resolutions correspond to measurement times of about 7, 14, and 28 s. We simulate laser pulse-pair repetition frequencies from 1 Hz to 100 kHz. The range of shot accumulation is 7 to 2.8 million pulse-pairs. The resultant error is shown as a function of horizontal resolution, laser pulse-pair repetition frequency, and laser pulse energy. The effect of different on and off pulse energies is explored. The results are compared to simulation results of others and to demonstrated 2-micron operating points at NASA Langley.
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f 4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
A fast image simulation algorithm for scanning transmission electron microscopy
Ophus, Colin
2017-05-10
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. Here, we present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f 4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this methodmore » with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.« less
NASA Technical Reports Server (NTRS)
Kessel, R. L.; Armstrong, T. P.; Nuber, R.; Bandle, J.
1985-01-01
Data were examined from two experiments aboard the Explorer 50 (IMP 8) spacecraft. The Johns Hopkins University/Applied Lab Charged Particle Measurement Experiment (CPME) provides 10.12 second resolution ion and electron count rates as well as 5.5 minute or longer averages of the same, with data sampled in the ecliptic plane. The high time resolution of the data allows for an explicit, point by point, merging of the magnetic field and particle data and thus a close examination of the pre- and post-shock conditions and particle fluxes associated with large angle oblique shocks in the interplanetary field. A computer simulation has been developed wherein sample particle trajectories, taken from observed fluxes, are allowed to interact with a planar shock either forward or backward in time. One event, the 1974 Day 312 shock, is examined in detail.
3D detectors with high space and time resolution
NASA Astrophysics Data System (ADS)
Loi, A.
2018-01-01
For future high luminosity LHC experiments it will be important to develop new detector systems with increased space and time resolution and also better radiation hardness in order to operate in high luminosity environment. A possible technology which could give such performances is 3D silicon detectors. This work explores the possibility of a pixel geometry by designing and simulating different solutions, using Sentaurus Tecnology Computer Aided Design (TCAD) as design and simulation tool, and analysing their performances. A key factor during the selection was the generated electric field and the carrier velocity inside the active area of the pixel.
Toward real-time regional earthquake simulation of Taiwan earthquakes
NASA Astrophysics Data System (ADS)
Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.
2013-12-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
NASA Astrophysics Data System (ADS)
López-Romero, Jose Maria; Baró, Rocío; Palacios-Peña, Laura; Jerez, Sonia; Jiménez-Guerrero, Pedro; Montávez, Juan Pedro
2016-04-01
Several studies have shown that a high spatial resolution in atmospheric model runs improves the simulation of some meteorological variables, such as precipitation, particularly extreme events and in regions with complex orography [1]. However, increasing model spatial resolution makes the computational time rise exponentially. Hence, very high resolution experiments on large domains can hamper the execution of climatic runs. This problem shoots up when using online-coupled chemistry climate models, making a careful evaluation of improvements versus costs mandatory. Under this umbrella, the objective of this work is to investigate the sensitivity of aerosol radiative feedbacks from online-coupled chemistry regional model simulations to the spatial resolution. For that, the WRF-Chem [2] model is used for a case study to simulate the episode occurring between July 25th and August 15th of 2010. It is characterized by a high loading of atmospheric aerosol particles coming mainly from wildfires over large European regions (Russia, Iberian Peninsula). Three spatial resolutions are used defined for Euro-Cordex compliant domains [3]: 0.44°, 0.22° and 0.11°. Anthropogenic emissions come from TNO databases [4]. The analysis focuses on air quality variables (mainly PM10, PM2.5), meteorological variables (temperature, radiation) and other aerosol optical properties (aerosol optical depth). The CPU time ratio for the different domains is 1 (0.44°), 4(0.22°) and 28(0.11°) (normalized times). Comparison among simulations and observations are analyzed. Preliminary results show the difficulty to justify the much larger computational cost of high-resolution experiments when comparing with observations from a meteorological point of view, despite the finer spatio-temporal detail of the obtained pollutant fields. [1] Prein, A. F. (2014, December). Precipitation in the EURO-CORDEX 0.11° and 0.44° simulations: high resolution, high benefits?. In AGU Fall Meeting Abstracts (Vol. 1, p. 3893). [2] Grell, G. A., Peckham, S. E., Schmitz, R., McKeen, S. A., Frost, G., Skamarock, W. C., & Eder, B. (2005). Fully coupled "online" chemistry within the WRF model. Atmospheric Environment, 39(37), 6957-6975. [3] Jacob, D., Petersen, J., Eggert, B., Alias, A., Christensen, O. B., Bouwer, L. M., ... & Georgopoulou, E. (2014). EURO-CORDEX: new high-resolution climate change projections for European impact research. Regional Environmental Change, 14(2), 563-578. [4] Pouliot, G., Denier van der Gon, H., Kuenen, J., Makar, P., Zhang, J., Moran, M., 2015. Analysis of the emission inventories and model-ready emission datasets of Europe and North America for phase 2 of the AQMEII project. Atmos. Environ. 115, 345-360.
Simulation of Extreme Arctic Cyclones in IPCC AR5 Experiments
2014-05-15
atmospheric fields, including sea level pressure ( SLP ), on daily and sub-daily time scales at 2° horizontal resolution. A higher-resolution and more...its 21st-century simulation. Extreme cyclones were defined as occurrences of daily mean SLP at least 40 hPa below the climatological annual-average... SLP at a grid point. As such, no cyclone-tracking algorithm was employed, because the purpose here is to identify instances of extremely strong
NASA Astrophysics Data System (ADS)
Silvestro, Francesco; Parodi, Antonio; Campo, Lorenzo
2017-04-01
The characterization of the hydrometeorological extremes, both in terms of rainfall and streamflow, in a given region plays a key role in the environmental monitoring provided by the flood alert services. In last years meteorological simulations (both near real-time and historical reanalysis) were available at increasing spatial and temporal resolutions, making possible long-period hydrological reanalysis in which the meteo dataset is used as input in distributed hydrological models. In this work, a very high resolution meteorological reanalysis dataset, namely Express-Hydro (CIMA, ISAC-CNR, GAUSS Special Project PR45DE), was employed as input in the hydrological model Continuum in order to produce long time series of streamflows in the Liguria territory, located in the Northern part of Italy. The original dataset covers the whole Europe territory in the 1979-2008 period, at 4 km of spatial resolution and 3 hours of time resolution. Analyses in terms of comparison between the rainfall estimated by the dataset and the observations (available from the local raingauges network) were carried out, and a bias correction was also performed in order to better match the observed climatology. An extreme analysis was eventually carried on the streamflows time series obtained by the simulations, by comparing them with the results of the same hydrological model fed with the observed time series of rainfall. The results of the analysis are shown and discussed.
NASA Astrophysics Data System (ADS)
Pillai, D.; Gerbig, C.; Kretschmer, R.; Beck, V.; Karstens, U.; Neininger, B.; Heimann, M.
2012-10-01
We present simulations of atmospheric CO2 concentrations provided by two modeling systems, run at high spatial resolution: the Eulerian-based Weather Research Forecasting (WRF) model and the Lagrangian-based Stochastic Time-Inverted Lagrangian Transport (STILT) model, both of which are coupled to a diagnostic biospheric model, the Vegetation Photosynthesis and Respiration Model (VPRM). The consistency of the simulations is assessed with special attention paid to the details of horizontal as well as vertical transport and mixing of CO2 concentrations in the atmosphere. The dependence of model mismatch (Eulerian vs. Lagrangian) on models' spatial resolution is further investigated. A case study using airborne measurements during which two models showed large deviations from each other is analyzed in detail as an extreme case. Using aircraft observations and pulse release simulations, we identified differences in the representation of details in the interaction between turbulent mixing and advection through wind shear as the main cause of discrepancies between WRF and STILT transport at a spatial resolution such as 2 and 6 km. Based on observations and inter-model comparisons of atmospheric CO2 concentrations, we show that a refinement of the parameterization of turbulent velocity variance and Lagrangian time-scale in STILT is needed to achieve a better match between the Eulerian and the Lagrangian transport at such a high spatial resolution (e.g. 2 and 6 km). Nevertheless, the inter-model differences in simulated CO2 time series for a tall tower observatory at Ochsenkopf in Germany are about a factor of two smaller than the model-data mismatch and about a factor of three smaller than the mismatch between the current global model simulations and the data.
Integration of High-resolution Data for Temporal Bone Surgical Simulations
Wiet, Gregory J.; Stredney, Don; Powell, Kimerly; Hittle, Brad; Kerwin, Thomas
2016-01-01
Purpose To report on the state of the art in obtaining high-resolution 3D data of the microanatomy of the temporal bone and to process that data for integration into a surgical simulator. Specifically, we report on our experience in this area and discuss the issues involved to further the field. Data Sources Current temporal bone image acquisition and image processing established in the literature as well as in house methodological development. Review Methods We reviewed the current English literature for the techniques used in computer-based temporal bone simulation systems to obtain and process anatomical data for use within the simulation. Search terms included “temporal bone simulation, surgical simulation, temporal bone.” Articles were chosen and reviewed that directly addressed data acquisition and processing/segmentation and enhancement with emphasis given to computer based systems. We present the results from this review in relationship to our approach. Conclusions High-resolution CT imaging (≤100μm voxel resolution), along with unique image processing and rendering algorithms, and structure specific enhancement are needed for high-level training and assessment using temporal bone surgical simulators. Higher resolution clinical scanning and automated processes that run in efficient time frames are needed before these systems can routinely support pre-surgical planning. Additionally, protocols such as that provided in this manuscript need to be disseminated to increase the number and variety of virtual temporal bones available for training and performance assessment. PMID:26762105
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Kang, In-Sik; Reale, Oreste
2009-01-01
This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include, the mechanisms by which SSTs control tropical storm. activity on inter-annual and longer time scales, the modulation of that activity by the Madden Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser resolution runs to help assess resolution dependence, and. the ability of the models to capture the large-scale and long-terra changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.
High resolution simulations of a variable HH jet
NASA Astrophysics Data System (ADS)
Raga, A. C.; de Colle, F.; Kajdič, P.; Esquivel, A.; Cantó, J.
2007-04-01
Context: In many papers, the flows in Herbig-Haro (HH) jets have been modeled as collimated outflows with a time-dependent ejection. In particular, a supersonic variability of the ejection velocity leads to the production of "internal working surfaces" which (for appropriate forms of the time-variability) can produce emitting knots that resemble the chains of knots observed along HH jets. Aims: In this paper, we present axisymmetric simulations of an "internal working surface" in a radiative jet (produced by an ejection velocity variability). We concentrate on a given parameter set (i.e., on a jet with a constante ejection density, and a sinusoidal velocity variability with a 20 yr period and a 40 km s-1 half-amplitude), and carry out a study of the behaviour of the solution for increasing numerical resolutions. Methods: In our simulations, we solve the gasdynamic equations together with a 17-species atomic/ionic network, and we are therefore able to compute emission coefficients for different emission lines. Results: We compute 3 adaptive grid simulations, with 20, 163 and 1310 grid points (at the highest grid resolution) across the initial jet radius. From these simulations we see that successively more complex structures are obtained for increasing numerical resolutions. Such an effect is seen in the stratifications of the flow variables as well as in the predicted emission line intensity maps. Conclusions: .We find that while the detailed structure of an internal working surface depends on resolution, the predicted emission line luminosities (integrated over the volume of the working surface) are surprisingly stable. This is definitely good news for the future computation of predictions from radiative jet models for carrying out comparisons with observations of HH objects.
High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models
NASA Astrophysics Data System (ADS)
Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David
2014-12-01
High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference-gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with the near-surface meteorological fields derived from short-range forecast, operational analyses, and observed temperatures and humidity. Results show that the outputs of the surface model obtained in the present study have potential to improve the near-surface atmospheric fields in extended-range LAM integrations.
A New High Resolution Climate Dataset for Climate Change Impacts Assessments in New England
NASA Astrophysics Data System (ADS)
Komurcu, M.; Huber, M.
2016-12-01
Assessing regional impacts of climate change (such as changes in extreme events, land surface hydrology, water resources, energy, ecosystems and economy) requires much higher resolution climate variables than those available from global model projections. While it is possible to run global models in higher resolution, the high computational cost associated with these simulations prevent their use in such manner. To alleviate this problem, dynamical downscaling offers a method to deliver higher resolution climate variables. As part of an NSF EPSCoR funded interdisciplinary effort to assess climate change impacts on New Hampshire ecosystems, hydrology and economy (the New Hampshire Ecosystems and Society project), we create a unique high-resolution climate dataset for New England. We dynamically downscale global model projections under a high impact emissions scenario using the Weather Research and Forecasting model (WRF) with three nested grids of 27, 9 and 3 km horizontal resolution with the highest resolution innermost grid focusing over New England. We prefer dynamical downscaling over other methods such as statistical downscaling because it employs physical equations to progressively simulate climate variables as atmospheric processes interact with surface processes, emissions, radiation, clouds, precipitation and other model components, hence eliminates fix relationships between variables. In addition to simulating mean changes in regional climate, dynamical downscaling also allows for the simulation of climate extremes that significantly alter climate change impacts. We simulate three time slices: 2006-2015, 2040-2060 and 2080-2100. This new high-resolution climate dataset (with more than 200 variables saved in hourly (six hourly) intervals for the highest resolution domain (outer two domains)) along with model input and restart files used in our WRF simulations will be publicly available for use to the broader scientific community to support in-depth climate change impacts assessments for New England. We present results focusing on future changes in New England extreme events.
NASA Astrophysics Data System (ADS)
Yang, Zhongyu
This thesis describes the design, experimental performance, and theoretical simulation of a novel time-of-flight analyzer that was integrated into a high resolution electron energy loss spectrometer (TOF-HREELS). First we examined the use of an interleaved comb chopper for chopping a continuous electron beam. Both static and dynamic behaviors were simulated theoretically and measured experimentally, with very good agreement. The finite penetration of the field beyond the plane of the chopper leads to non-ideal chopper response, which is characterized in terms of an "energy corruption" effect and a lead or lag in the time at which the beam responds to the chopper potential. Second we considered the recovery of spectra from pseudo-random binary sequence (PRBS) modulated TOF-HREELS data. The effects of the Poisson noise distribution and the non-ideal behavior of the "interleaved comb" chopper were simulated. We showed, for the first time, that maximum likelihood methods can be combined with PRBS modulation to achieve resolution enhancement, while properly accounting for the Poisson noise distribution and artifacts introduced by the chopper. Our results indicate that meV resolution, similar to that of modern high resolution electron energy loss spectrometers, can be achieved with a dramatic performance advantage over conventional, serial detection analyzers. To demonstrate the capabilities of the TOF-HREELS instrument, we made measurements on a highly oriented thin film polytetrafluoroethylene (PTFE) sample. We demonstrated that the TOF-HREELS can achieve a throughput advantage of a factor of 85 compared to the conventional HREELS instrument. Comparisons were made between the experimental results and theoretical simulations. We discuss various factors which affect inversion of PRBS modulated Time of Flight (TOF) data with the Lucy algorithm. Using simulations, we conclude that the convolution assumption was good under the conditions of our experiment. The chopper rise time, Poisson noise, and artifacts of the chopper response are evaluated. Finally, we conclude that the maximum likelihood algorithms are able to gain a multiplex advantage in PRBS modulation, despite the Poisson noise in the detector.
NASA Astrophysics Data System (ADS)
Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons
2017-06-01
At early project stages, the main CSP plant design parameters such as turbine capacity, solar field size, and thermal storage capacity are varied during the techno-economic optimization to determine most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year with hourly time resolution. For better presentation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. Therefore, analyzing a large number of plant sensitivities, as required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP-Simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate the CSP plant performance for central receiver and parabolic trough technology. CSPsim significantly increases the speed of energy yield calculations by factor ≥ 35 and has automated the simulation run of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiment methods are applied. The annual energy yield and derived LCOE calculated by the predictive model deviates less than ±1.5 % from the thermodynamic simulation in EBSILON and effectively identifies the optimal range of main design parameters for further, more specific analysis.
Challenge toward the prediction of typhoon behaviour and down pour
NASA Astrophysics Data System (ADS)
Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.
2013-08-01
Mechanisms of interactions among different scale phenomena play important roles for forecasting of weather and climate. Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to be run efficiently on the Earth Simulator. We present simulation results with the world-highest 1.9km horizontal resolution for the entire globe and regional heavy rain with 1km horizontal resolution and 5m horizontal/vertical resolution for urban area simulation. To gain high performance by exploiting the system capabilities, we propose novel performance evaluation metrics introduced in previous studies that incorporate the effects of the data caching mechanism between CPU and memory. With a useful code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator with the single-core performance found to be a key to a reduced time-to-solution.
Time multiplexing based extended depth of focus imaging.
Ilovitsh, Asaf; Zalevsky, Zeev
2016-01-01
We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, the super resolution is achieved by generating duplication of the optical transfer function in the spectrum domain, by the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the gratings frequency and, by that changing the duplication positions, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations and validated by a laboratory experiment.
NASA Astrophysics Data System (ADS)
Biercamp, Joachim; Adamidis, Panagiotis; Neumann, Philipp
2017-04-01
With the exa-scale era approaching, length and time scales used for climate research on one hand and numerical weather prediction on the other hand blend into each other. The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) represents a European consortium comprising partners from climate, weather and HPC in their effort to address key scientific challenges that both communities have in common. A particular challenge is to reach global models with spatial resolutions that allow simulating convective clouds and small-scale ocean eddies. These simulations would produce better predictions of trends and provide much more fidelity in the representation of high-impact regional events. However, running such models in operational mode, i.e with sufficient throughput in ensemble mode clearly will require exa-scale computing and data handling capability. We will discuss the ESiWACE initiative and relate it to work-in-progress on high-resolution simulations in Europe. We present recent strong scalability measurements from ESiWACE to demonstrate current computability in weather and climate simulation. A special focus in this particular talk is on the Icosahedal Nonhydrostatic (ICON) model used for a comparison of high resolution regional and global simulations with high quality observation data. We demonstrate that close-to-optimal parallel efficiency can be achieved in strong scaling global resolution experiments on Mistral/DKRZ, e.g. 94% for 5km resolution simulations using 36k cores on Mistral/DKRZ. Based on our scalability and high-resolution experiments, we deduce and extrapolate future capabilities for ICON that are expected for weather and climate research at exascale.
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow depending on the spatial resolution compared to the direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid which contains around 30 times less points than the DNS one provided predictions of the friction velocity within 2.0% accuracy interval.
A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers
NASA Astrophysics Data System (ADS)
Bassett, Gene Marcel
1993-01-01
Instabilities affecting the propagation of supersonic gaseous jets have been studied using high resolution computer simulations with the Piecewise-Parabolic-Method (PPM). These results are discussed in relation to jets from galactic nuclei. These studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations with different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers and densities perturbed by incident sound waves imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is taug = (546 +/- 24) (M/4)^{1.7 } (Apert/0.02) ^{-0.4} delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_ {pert} is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, nu_ {mode} = cextnu_ {relative}/(cjet + c_ {ext}). For high resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even penetration is higher in amplitude initially than the odd perturbation. For low resolution simulations, the odd mode dominates even for a stronger even mode perturbation. In high resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet. In low resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.
The role of moist processes and the possibility of error cascade from cloud scale processes affecting the intrinsic predictable time scale of a high resolution convection permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascades from smaller scales to the larger scales through strong diabatic heating and nonlinearities associated with moist convection.more » Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in cloud permitting simulation at 3.3 km resolutions compared to simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allows the effects of microphysical tendencies to average out so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the needs for probabilistic forecasts of these events, even at the mesoscales.« less
Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations
NASA Technical Reports Server (NTRS)
Singer, Bart A.
1996-01-01
Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.
Improving PET spatial resolution and detectability for prostate cancer imaging
NASA Astrophysics Data System (ADS)
Bal, H.; Guerin, L.; Casey, M. E.; Conti, M.; Eriksson, L.; Michel, C.; Fanti, S.; Pettinato, C.; Adler, S.; Choyke, P.
2014-08-01
Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide a strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity are discussed, together with the latest development in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions, inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state of the art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%.
Interannual rainfall variability over China in the MetUM GA6 and GC2 configurations
NASA Astrophysics Data System (ADS)
Stephan, Claudia Christine; Klingaman, Nicholas P.; Vidale, Pier Luigi; Turner, Andrew G.; Demory, Marie-Estelle; Guo, Liang
2018-05-01
Six climate simulations of the Met Office Unified Model Global Atmosphere 6.0 and Global Coupled 2.0 configurations are evaluated against observations and reanalysis data for their ability to simulate the mean state and year-to-year variability of precipitation over China. To analyse the sensitivity to air-sea coupling and horizontal resolution, atmosphere-only and coupled integrations at atmospheric horizontal resolutions of N96, N216 and N512 (corresponding to ˜ 200, 90 and 40 km in the zonal direction at the equator, respectively) are analysed. The mean and interannual variance of seasonal precipitation are too high in all simulations over China but improve with finer resolution and coupling. Empirical orthogonal teleconnection (EOT) analysis is applied to simulated and observed precipitation to identify spatial patterns of temporally coherent interannual variability in seasonal precipitation. To connect these patterns to large-scale atmospheric and coupled air-sea processes, atmospheric and oceanic fields are regressed onto the corresponding seasonal mean time series. All simulations reproduce the observed leading pattern of interannual rainfall variability in winter, spring and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are the four leading patterns associated with the observed physical mechanisms. Coupled simulations capture more observed patterns of variability and associate more of them with the correct physical mechanism, compared to atmosphere-only simulations at the same resolution. However, finer resolution does not improve the fidelity of these patterns or their associated mechanisms. This shows that evaluating climate models by only geographical distribution of mean precipitation and its interannual variance is insufficient. The EOT analysis adds knowledge about coherent variability and associated mechanisms.
Evaluation of a Mesoscale Convective System in Variable-Resolution CESM
NASA Astrophysics Data System (ADS)
Payne, A. E.; Jablonowski, C.
2017-12-01
Warm season precipitation over the Southern Great Plains (SGP) follows a well observed diurnal pattern of variability, peaking at night-time, due to the eastward propagation of mesoscale convection systems that develop over the eastern slopes of the Rockies in the late afternoon. While most climate models are unable to adequately capture the organization of convection and characteristic pattern of precipitation over this region, models with high enough resolution to explicitly resolve convection show improvement. However, high resolution simulations are computationally expensive and, in the case of regional climate models, are subject to boundary conditions. Newly developed variable resolution global climate models strike a balance between the benefits of high-resolution regional climate models and the large-scale dynamics of global climate models and low computational cost. Recently developed parameterizations that are insensitive to the model grid scale provide a way to improve model performance. Here, we present an evaluation of the newly available Cloud Layers Unified by Binormals (CLUBB) parameterization scheme in a suite of variable-resolution CESM simulations with resolutions ranging from 110 km to 7 km within a regionally refined region centered over the SGP Atmospheric Radiation Measurement (ARM) site. Simulations utilize the hindcast approach developed by the Department of Energy's Cloud-Associated Parameterizations Testbed (CAPT) for the assessment of climate models. We limit our evaluation to a single mesoscale convective system that passed over the region on May 24, 2008. The effects of grid-resolution on the timing and intensity of precipitation, as well as, on the transition from shallow to deep convection are assessed against ground-based observations from the SGP ARM site, satellite observations and ERA-Interim reanalysis.
NASA Astrophysics Data System (ADS)
Uijlenhoet, R.; Brauer, C.; Overeem, A.; Sassi, M.; Rios Gaona, M. F.
2014-12-01
Several rainfall measurement techniques are available for hydrological applications, each with its own spatial and temporal resolution. We investigated the effect of these spatiotemporal resolutions on discharge simulations in lowland catchments by forcing a novel rainfall-runoff model (WALRUS) with rainfall data from gauges, radars and microwave links. The hydrological model used for this analysis is the recently developed Wageningen Lowland Runoff Simulator (WALRUS). WALRUS is a rainfall-runoff model accounting for hydrological processes relevant to areas with shallow groundwater (e.g. groundwater-surface water feedback). Here, we used WALRUS for case studies in a freely draining lowland catchment and a polder with controlled water levels. We used rain gauge networks with automatic (hourly resolution but low spatial density) and manual gauges (high spatial density but daily resolution). Operational (real-time) and climatological (gauge-adjusted) C-band radar products and country-wide rainfall maps derived from microwave link data from a cellular telecommunication network were also used. Discharges simulated with these different inputs were compared to observations. We also investigated the effect of spatiotemporal resolution with a high-resolution X-band radar data set for catchments with different sizes. Uncertainty in rainfall forcing is a major source of uncertainty in discharge predictions, both with lumped and with distributed models. For lumped rainfall-runoff models, the main source of input uncertainty is associated with the way in which (effective) catchment-average rainfall is estimated. When catchments are divided into sub-catchments, rainfall spatial variability can become more important, especially during convective rainfall events, leading to spatially varying catchment wetness and spatially varying contribution of quick flow routes. Improving rainfall measurements and their spatiotemporal resolution can improve the performance of rainfall-runoff models, indicating their potential for reducing flood damage through real-time control.
Multiresolution modeling with a JMASS-JWARS HLA Federation
NASA Astrophysics Data System (ADS)
Prince, John D.; Painter, Ron D.; Pendell, Brian; Richert, Walt; Wolcott, Christopher
2002-07-01
CACI, Inc.-Federal has built, tested, and demonstrated the use of a JMASS-JWARS HLA Federation that supports multi- resolution modeling of a weapon system and its subsystems in a JMASS engineering and engagement model environment, while providing a realistic JWARS theater campaign-level synthetic battle space and operational context to assess the weapon system's value added and deployment/employment supportability in a multi-day, combined force-on-force scenario. Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model is both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting extremely large simulation. One viable alternative is to integrate the current hierarchical suite of simulation models using the DoD's High Level Architecture in order to support multi- resolution modeling. An HLA integration eliminates the extremely large model problem, provides a well-defined and manageable mixed resolution simulation and minimizes VV&A issues.
Wu, Sheng; Li, Hong; Petzold, Linda R.
2015-01-01
The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper area can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what are the most suitable criteria for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by adhoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints and then minimizing it through numerical optimization can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.
Haines, Brian M.; Aldrich, C. H.; Campbell, J. M.; ...
2017-04-24
In this study, we present the results of high-resolution simulations of the implosion of high-convergence layered indirect-drive inertial confinement fusion capsules of the type fielded on the National Ignition Facility using the xRAGE radiation-hydrodynamics code. In order to evaluate the suitability of xRAGE to model such experiments, we benchmark simulation results against available experimental data, including shock-timing, shock-velocity, and shell trajectory data, as well as hydrodynamic instability growth rates. We discuss the code improvements that were necessary in order to achieve favorable comparisons with these data. Due to its use of adaptive mesh refinement and Eulerian hydrodynamics, xRAGE is particularlymore » well suited for high-resolution study of multi-scale engineering features such as the capsule support tent and fill tube, which are known to impact the performance of high-convergence capsule implosions. High-resolution two-dimensional (2D) simulations including accurate and well-resolved models for the capsule fill tube, support tent, drive asymmetry, and capsule surface roughness are presented. These asymmetry seeds are isolated in order to study their relative importance and the resolution of the simulations enables the observation of details that have not been previously reported. We analyze simulation results to determine how the different asymmetries affect hotspot reactivity, confinement, and confinement time and how these combine to degrade yield. Yield degradation associated with the tent occurs largely through decreased reactivity due to the escape of hot fuel mass from the hotspot. Drive asymmetries and the fill tube, however, degrade yield primarily via burn truncation, as associated instability growth accelerates the disassembly of the hotspot. Finally, modeling all of these asymmetries together in 2D leads to improved agreement with experiment but falls short of explaining the experimentally observed yield degradation, consistent with previous 2D simulations of such capsules.« less
NASA Astrophysics Data System (ADS)
Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.
2016-05-01
Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
How to model supernovae in simulations of star and galaxy formation
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Wetzel, Andrew; Kereš, Dušan; Faucher-Giguère, Claude-André; Quataert, Eliot; Boylan-Kolchin, Michael; Murray, Norman; Hayward, Christopher C.; El-Badry, Kareem
2018-06-01
We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting `preferred directions' on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common `fully thermal' (energy-dump) or `fully kinetic' (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution ≳100 M⊙, they diverge by orders of magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution (<100 M⊙). However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.
NASA Technical Reports Server (NTRS)
Baron, S.; Lancraft, R.; Zacharias, G.
1980-01-01
The optimal control model (OCM) of the human operator is used to predict the effect of simulator characteristics on pilot performance and workload. The piloting task studied is helicopter hover. Among the simulator characteristics considered were (computer generated) visual display resolution, field of view and time delay.
Using Computer Simulations of Negotiation for Educational and Research Purposes in Business Schools.
ERIC Educational Resources Information Center
Conlon, Donald E.
1989-01-01
Discussion of educational and research advantages of using computer-based experimental simulations for the study of negotiation and dispute resolution in business schools focuses on two studies of undergraduates that used simulation exercises. The influence of time pressure on mediation is examined, and differences in student behavior are…
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates grid modeling scheme with different spatial representations also presents such problems. The time-consuming problem affects applications of very high resolution large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with 1 CPU and a 15 thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management in addition to offering data fusion and model coupling ability.
NASA Astrophysics Data System (ADS)
Barthlott, C.; Hoose, C.
2015-11-01
This paper assesses the resolution dependance of clouds and precipitation over Germany by numerical simulations with the COnsortium for Small-scale MOdeling (COSMO) model. Six intensive observation periods of the HOPE (HD(CP)2 Observational Prototype Experiment) measurement campaign conducted in spring 2013 and 1 summer day of the same year are simulated. By means of a series of grid-refinement resolution tests (horizontal grid spacing 2.8, 1 km, 500, and 250 m), the applicability of the COSMO model to represent real weather events in the gray zone, i.e., the scale ranging between the mesoscale limit (no turbulence resolved) and the large-eddy simulation limit (energy-containing turbulence resolved), is tested. To the authors' knowledge, this paper presents the first non-idealized COSMO simulations in the peer-reviewed literature at the 250-500 m scale. It is found that the kinetic energy spectra derived from model output show the expected -5/3 slope, as well as a dependency on model resolution, and that the effective resolution lies between 6 and 7 times the nominal resolution. Although the representation of a number of processes is enhanced with resolution (e.g., boundary-layer thermals, low-level convergence zones, gravity waves), their influence on the temporal evolution of precipitation is rather weak. However, rain intensities vary with resolution, leading to differences in the total rain amount of up to +48 %. Furthermore, the location of rain is similar for the springtime cases with moderate and strong synoptic forcing, whereas significant differences are obtained for the summertime case with air mass convection. Domain-averaged liquid water paths and cloud condensate profiles are used to analyze the temporal and spatial variability of the simulated clouds. Finally, probability density functions of convection-related parameters are analyzed to investigate their dependance on model resolution and their impact on cloud formation and subsequent precipitation.
A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall
NASA Astrophysics Data System (ADS)
Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian
2018-02-01
Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). The orographic effects are added through multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observation. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie with envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
Curved crystal x-ray optics for monochromatic imaging with a clinical source.
Bingölbali, Ayhan; MacDonald, C A
2009-04-01
Monochromatic x-ray imaging has been shown to increase contrast and reduce dose relative to conventional broadband imaging. However, clinical sources with very narrow energy bandwidth tend to have limited intensity and field of view. In this study, focused fan beam monochromatic radiation was obtained using doubly curved monochromator crystals. While these optics have been in use for microanalysis at synchrotron facilities for some time, this work is the first investigation of the potential application of curved crystal optics to clinical sources for medical imaging. The optics could be used with a variety of clinical sources for monochromatic slot scan imaging. The intensity was assessed and the resolution of the focused beam was measured using a knife-edge technique. A simulation model was developed and comparisons to the measured resolution were performed to verify the accuracy of the simulation to predict resolution for different conventional sources. A simple geometrical calculation was also developed. The measured, simulated, and calculated resolutions agreed well. Adequate resolution and intensity for mammography were predicted for appropriate source/optic combinations.
Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki
2008-08-01
Integration of ultrasonic measurement and numerical simulation is a possible way to break through limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector determined by measured and computed Doppler velocities at feedback points are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating low time resolution of ultrasonic measurement were investigated by a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53% in the feedback domain which covered the aneurysm, respectively. Local maximum wall shear stress was estimated, showing both the proper position and the value with 1% deviance. A properly designed intermittent feedback applied only at the time when measurement data were obtained had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.
Communication: Adaptive boundaries in multiscale simulations
NASA Astrophysics Data System (ADS)
Wagoner, Jason A.; Pande, Vijay S.
2018-04-01
Combined-resolution simulations are an effective way to study molecular properties across a range of length and time scales. These simulations can benefit from adaptive boundaries that allow the high-resolution region to adapt (change size and/or shape) as the simulation progresses. The number of degrees of freedom required to accurately represent even a simple molecular process can vary by several orders of magnitude throughout the course of a simulation, and adaptive boundaries react to these changes to include an appropriate but not excessive amount of detail. Here, we derive the Hamiltonian and distribution function for such a molecular simulation. We also design an algorithm that can efficiently sample the boundary as a new coordinate of the system. We apply this framework to a mixed explicit/continuum simulation of a peptide in solvent. We use this example to discuss the conditions necessary for a successful implementation of adaptive boundaries that is both efficient and accurate in reproducing molecular properties.
NASA Astrophysics Data System (ADS)
Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck
2018-02-01
The numerical modelling of the behaviour of materials at the microstructural scale has been greatly developed over the last two decades. Unfortunately, conventional resolution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they do not remain quantitative due to the plasticity behaviour. This work presents the development of a numerical solver for the resolution of the Finite Element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first one consists in maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and the performance of the use of a space-time separated representation, the simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, increasing the number of elements and the number of time increments per cycle, the model reduction method is faster than the standard solver.
Tidal dwarf galaxies in cosmological simulations
NASA Astrophysics Data System (ADS)
Ploeckinger, Sylvia; Sharma, Kuldeep; Schaye, Joop; Crain, Robert A.; Schaller, Matthieu; Barber, Christopher
2018-02-01
The formation and evolution of gravitationally bound, star forming substructures in tidal tails of interacting galaxies, called tidal dwarf galaxies (TDG), has been studied, until now, only in idealized simulations of individual pairs of interacting galaxies for pre-determined orbits, mass ratios and gas fractions. Here, we present the first identification of TDG candidates in fully cosmological simulations, specifically the high-resolution simulations of the EAGLE suite. The finite resolution of the simulation limits their ability to predict the exact formation rate and survival time-scale of TDGs, but we show that gravitationally bound baryonic structures in tidal arms already form in current state-of-the-art cosmological simulations. In this case, the orbital parameter, disc orientations as well as stellar and gas masses and the specific angular momentum of the TDG forming galaxies are a direct consequence of cosmic structure formation. We identify TDG candidates in a wide range of environments, such as multiple galaxy mergers, clumpy high-redshift (up to z = 2) galaxies, high-speed encounters and tidal interactions with gas-poor galaxies. We present selection methods, the properties of the identified TDG candidates and a road map for more quantitative analyses using future high-resolution simulations.
An edge-readout, multilayer detector for positron emission tomography.
Li, Xin; Ruiz-Gonzalez, Maria; Furenlid, Lars R
2018-06-01
We present a novel gamma-ray-detector design based on total internal reflection (TIR) of scintillation photons within a crystal that addresses many limitations of traditional PET detectors. Our approach has appealing features, including submillimeter lateral resolution, DOI positioning from layer thickness, and excellent energy resolution. The design places light sensors on the edges of a stack of scintillator slabs separated by small air gaps and exploits the phenomenon that more than 80% of scintillation light emitted during a gamma-ray event reaches the edges of a thin crystal with polished faces due to TIR. Gamma-ray stopping power is achieved by stacking multiple layers, and DOI is determined by which layer the gamma ray interacts in. The concept of edge readouts of a thin slab was verified by Monte Carlo simulation of scintillation light transport. An LYSO crystal of dimensions 50.8 mm × 50.8 mm × 3.0 mm was modeled with five rectangular SiPMs placed along each edge face. The mean-detector-response functions (MDRFs) were calculated by simulating signals from 511 keV gamma-ray interactions in a grid of locations. Simulations were carried out to study the influence of choice of scintillator material and dimensions, gamma-ray photon energies, introduction of laser or mechanically induced optical barriers (LIOBs, MIOBs), and refractive indices of optical-coupling media and SiPM windows. We also analyzed timing performance including influence of gamma-ray interaction position and presence of optical barriers. We also modeled and built a prototype detector, a 27.4 mm × 27.4 mm × 3.0 mm CsI(Tl) crystal with 4 SiPMs per edge to experimentally validate the results predicted by the simulations. The prototype detector used CsI(Tl) crystals from Proteus outfitted with 16 Hamamatsu model S13360-6050PE MPPCs read out by an AiT-16-channel readout. The MDRFs were measured by scanning the detector with a collimated beam of 662-keV photons from a 137 Cs source. The spatial resolution was experimentally determined by imaging a tungsten slit that created a beam of 0.44 mm (FWHM) width normal to the detector surface. The energy resolution was evaluated by analyzing list-mode data from flood illumination by the 137 Cs source. We find that in a block-detector-sized LYSO layer read out by five SiPMs per edge, illuminated by 511-keV photons, the average resolution is 1.49 mm (FWHM). With the introduction of optical barriers, average spatial resolution improves to 0.56 mm (FWHM). The DOI resolution is the layer thickness of 3.0 mm. We also find that optical-coupling media and SiPM-window materials have an impact on spatial resolution. The timing simulation with LYSO crystal yields a coincidence resolving time (CRT) of 200-400 ps, which is slightly position dependent. And the introduction of optical barriers has minimum influence. The prototype CsI(Tl) detector, with a smaller area and fewer SiPMs, was measured to have central-area spatial resolutions of 0.70 and 0.39 mm without and with optical barriers, respectively. These results match well with our simulations. An energy resolution of 6.4% was achieved at 662 keV. A detector design based on a stack of monolithic scintillator layers that uses edge readouts offers several advantages over current block detectors for PET. For example, there is no tradeoff between spatial resolution and detection sensitivity since no reflector material displaces scintillator crystal, and submillimeter resolution can be achieved. DOI information is readily available, and excellent timing and energy resolutions are possible. © 2018 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
High definition TV projection via single crystal faceplate technology
NASA Astrophysics Data System (ADS)
Kindl, H. J.; St. John, Thomas
1993-03-01
Single crystal phosphor faceplates are epitaxial phosphors grown on crystalline substrates with the advantages of high light output, resolution, and extended operational life. Single crystal phosphor faceplate industrial technology in the United States is capable of providing a faceplate appropriate to the projection industry of up to four (4) inches in diameter. Projection systems incorporating cathode ray tubes utilizing single crystal phosphor faceplates will produce 1500 lumens of white light with 1000 lines of resolution, non-interlaced. This 1500 lumen projection system will meet all of the currently specified luminance and resolution requirements of Visual Display systems for flight simulators. Significant logistic advantages accrue from the introduction of single crystal phosphor faceplate CRT's. Specifically, the full performance life of a CRT is expected to increase by a factor of five (5); ie, from 2000 to 10,000 hours of operation. There will be attendant reductions in maintenance time, spare CRT requirements, system down time, etc. The increased brightness of the projection system will allow use of lower gain, lower cost simulator screen material. Further, picture performance characteristics will be more balanced across the full simulator.
Effect of elevation resolution on evapotranspiration simulations using MODFLOW.
Kambhammettu, B V N P; Schmid, Wolfgang; King, James P; Creel, Bobby J
2012-01-01
Surface elevations represented in MODFLOW head-dependent packages are usually derived from digital elevation models (DEMs) that are available at much high resolution. Conventional grid refinement techniques to simulate the model at DEM resolution increases computational time, input file size, and in many cases are not feasible for regional applications. This research aims at utilizing the increasingly available high resolution DEMs for effective simulation of evapotranspiration (ET) in MODFLOW as an alternative to grid refinement techniques. The source code of the evapotranspiration package is modified by considering for a fixed MODFLOW grid resolution and for different DEM resolutions, the effect of variability in elevation data on ET estimates. Piezometric head at each DEM cell location is corrected by considering the gradient along row and column directions. Applicability of the research is tested for the lower Rio Grande (LRG) Basin in southern New Mexico. The DEM at 10 m resolution is aggregated to resampled DEM grid resolutions which are integer multiples of MODFLOW grid resolution. Cumulative outflows and ET rates are compared at different coarse resolution grids. Results of the analysis conclude that variability in depth-to-groundwater within the MODFLOW cell is a major contributing parameter to ET outflows in shallow groundwater regions. DEM aggregation methods for the LRG Basin have resulted in decreased volumetric outflow due to the formation of a smoothing error, which lowered the position of water table to a level below the extinction depth. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship s flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm s design, along with mathematical models of the algorithm s performance and observations regarding the effectiveness of using complimentary maneuver patterns when multiple resolutions by the same aircraft were required.
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complimentary maneuver patterns when multiple resolutions by the same aircraft were required.
Evaluating galactic habitability using high-resolution cosmological simulations of galaxy formation
NASA Astrophysics Data System (ADS)
Forgan, Duncan; Dayal, Pratika; Cockell, Charles; Libeskind, Noam
2017-01-01
We present the first model that couples high-resolution simulations of the formation of local group galaxies with calculations of the galactic habitable zone (GHZ), a region of space which has sufficient metallicity to form terrestrial planets without being subject to hazardous radiation. These simulations allow us to make substantial progress in mapping out the asymmetric three-dimensional GHZ and its time evolution for the Milky Way (MW) and Triangulum (M33) galaxies, as opposed to works that generally assume an azimuthally symmetric GHZ. Applying typical habitability metrics to MW and M33, we find that while a large number of habitable planets exist as close as a few kiloparsecs from the galactic centre, the probability of individual planetary systems being habitable rises as one approaches the edge of the stellar disc. Tidal streams and satellite galaxies also appear to be fertile grounds for habitable planet formation. In short, we find that both galaxies arrive at similar GHZs by different evolutionary paths, as measured by the first and third quartiles of surviving biospheres. For the MW, this interquartile range begins as a narrow band at large radii, expanding to encompass much of the Galaxy at intermediate times before settling at a range of 2-13 kpc. In the case of M33, the opposite behaviour occurs - the initial and final interquartile ranges are quite similar, showing gradual evolution. This suggests that Galaxy assembly history strongly influences the time evolution of the GHZ, which will affect the relative time lag between biospheres in different galactic locations. We end by noting the caveats involved in such studies and demonstrate that high-resolution cosmological simulations will play a vital role in understanding habitability on galactic scales, provided that these simulations accurately resolve chemical evolution.
Agent-based Large-Scale Emergency Evacuation Using Real-Time Open Government Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei; Liu, Cheng; Bhaduri, Budhendra L
The open government initiatives have provided tremendous data resources for the transportation system and emergency services in urban areas. This paper proposes a traffic simulation framework using high temporal resolution demographic data and real time open government data for evacuation planning and operation. A comparison study using real-world data in Seattle, Washington is conducted to evaluate the framework accuracy and evacuation efficiency. The successful simulations of selected area prove the concept to take advantage open government data, open source data, and high resolution demographic data in emergency management domain. There are two aspects of parameters considered in this study: usermore » equilibrium (UE) conditions of traffic assignment model (simple Non-UE vs. iterative UE) and data temporal resolution (Daytime vs. Nighttime). Evacuation arrival rate, average travel time, and computation time are adopted as Measure of Effectiveness (MOE) for evacuation performance analysis. The temporal resolution of demographic data has significant impacts on urban transportation dynamics during evacuation scenarios. Better evacuation performance estimation can be approached by integrating both Non-UE and UE scenarios. The new framework shows flexibility in implementing different evacuation strategies and accuracy in evacuation performance. The use of this framework can be explored to day-to-day traffic assignment to support daily traffic operations.« less
NASA Astrophysics Data System (ADS)
Sutanudjaja, Edwin; van Beek, Rens; Winsemius, Hessel; Ward, Philip; Bierkens, Marc
2017-04-01
The Aqueduct Global Flood Analyzer, launched in 2015, is an open-access and free-of-charge web-based interactive platform which assesses and visualises current and future projections of river flood impacts across the globe. One of the key components in the Analyzer is a set of river flood inundation hazard maps derived from the global hydrological model simulation of PCR-GLOBWB. For the current version of the Analyzer, accessible on http://floods.wri.org/#/, the early generation of PCR-GLOBWB 1.0 was used and simulated at 30 arc-minute ( 50 km at the equator) resolution. In this presentation, we will show the new version of these hazard maps. This new version is based on the latest version of PCR-GLOBWB 2.0 (https://github.com/UU-Hydro/PCR-GLOBWB_model, Sutanudjaja et al., 2016, doi:10.5281/zenodo.60764) simulated at 5 arc-minute ( 10 km at the equator) resolution. The model simulates daily hydrological and water resource fluxes and storages, including the simulation of overbank volume that ends up on the floodplain (if flooding occurs). The simulation was performed for the present day situation (from 1960) and future climate projections (until 2099) using the climate forcing created in the ISI-MIP project. From the simulated flood inundation volume time series, we then extract annual maxima for each cell, and fit these maxima to a Gumbel extreme value distribution. This allows us to derive flood volume maps of any hazard magnitude (ranging from 2-year to 1000-year flood events) and for any time period (e.g. 1960-1999, 2010-2049, 2030-2069, and 2060-2099). The derived flood volumes (at 5 arc-minute resolution) are then spread over the high resolution terrain model using an updated GLOFRIS downscaling module (Winsemius et al., 2013, doi:10.5194/hess-17-1871-2013). The updated version performs a volume spreading sequentially from more upstream basins to downstream basins, hence enabling a better inclusion of smaller streams, and takes into account spreading of water over diverging deltaic regions. This results in a set of high resolution hazard maps of flood inundation depth at 30 arc-second ( 1 km at the equator) resolution. Together with many other updates and new features, the resulting flood hazard maps will be used in the next generation of the Aqueduct Global Flood Analyzer.
NASA Astrophysics Data System (ADS)
González-Vida, Jose M.; Macías, Jorge; Mercado, Aurelio; Ortega, Sergio; Castro, Manuel J.
2017-04-01
Tsunami-HySEA model is used to simulate the Caribbean LANTEX 2013 scenario (LANTEX is the acronym for Large AtlaNtic Tsunami EXercise, which is carried out annually). The numerical simulation of the propagation and inundation phases, is performed with both models but using different mesh resolutions and nested meshes. Some comparisons with the MOST tsunami model available at the University of Puerto Rico (UPR) are made. Both models compare well for propagating tsunami waves in open sea, producing very similar results. In near-shore shallow waters, Tsunami-HySEA should be compared with the inundation version of MOST, since the propagation version of MOST is limited to deeper waters. Regarding the inundation phase, a 1 arc-sec (approximately 30 m) resolution mesh covering all of Puerto Rico, is used, and a three-level nested meshes technique implemented. In the inundation phase, larger differences between model results are observed. Nevertheless, the most striking difference resides in computational time; Tsunami-HySEA is coded using the advantages of GPU architecture, and can produce a 4 h simulation in a 60 arcsec resolution grid for the whole Caribbean Sea in less than 4 min with a single general-purpose GPU and as fast as 11 s with 32 general-purpose GPUs. In the inundation stage with nested meshes, approximately 8 hours of wall clock time is needed for a 2-h simulation in a single GPU (versus more than 2 days for the MOST inundation, running three different parts of the island—West, Center, East—at the same time due to memory limitations in MOST). When domain decomposition techniques are finally implemented by breaking up the computational domain into sub-domains and assigning a GPU to each sub-domain (multi-GPU Tsunami-HySEA version), we show that the wall clock time significantly decreases, allowing high-resolution inundation modelling in very short computational times, reducing, for example, if eight GPUs are used, the wall clock time to around 1 hour. Besides, these computational times are obtained using general-purpose GPU hardware.
NASA Astrophysics Data System (ADS)
Collier, J. C.; Zhang, G. J.
2006-05-01
Simulation of the North American monsoon system by the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM3) is evaluated in its sensitivity to increasing horizontal resolution. For two resolutions, T42 and T85, rainfall is compared to TRMM satellite-derived and surface gauge-based rainfall rates over the U.S. and northern Mexico as well as rainfall accumulations in gauges of the North American Monsoon Experiment (NAME) Enhanced Rain Gauge Network (NERN) in the Sierra Madre Occidental mountains. Simulated upper-tropospheric mass and wind fields are compared to those from NCEP-NCAR reanalyses. The comparison presented herein demonstrates that tropospheric motions associated with the North American monsoon system are sensitive to increasing the horizontal resolution of the model. An increase in resolution from T42 to T85 results in changes to a region of large-scale mid-tropospheric descent found north and east of the monsoon anticyclone. Relative to its simulation at T42, this region extends farther south and west at T85. Additionally, at T85, the subsidence is stronger. Consistent with the differences in large-scale descent, the T85 simulation of CAM3 is anomalously dry over Texas and northeastern Mexico during the peak monsoon months. Meanwhile, the geographic distribution of rainfall over the Sierra Madre Occidental region of Mexico is more satisfactorily simulated at T85 than at T42 for July and August. Moisture import into this region is greater at T85 than at T42 during these months. A focused study of the Sierra Madre Occidental region in particular shows that, in the regional average sense, the timing of the peak of the monsoon is relatively insensitive to the horizontal resolution of the model, while a phase bias in the diurnal cycle of monsoon-season precipitation is somewhat reduced in the higher-resolution run. At both resolutions, CAM3 poorly simulates the month-to-month evolution of monsoon rainfall over extreme northwestern Mexico and Arizona, though biases are considerably improved at T85.
Hostetler, S.W.; Alder, J.R.; Allan, A.M.
2011-01-01
We have completed an array of high-resolution simulations of present and future climate over Western North America (WNA) and Eastern North America (ENA) by dynamically downscaling global climate simulations using a regional climate model, RegCM3. The simulations are intended to provide long time series of internally consistent surface and atmospheric variables for use in climate-related research. In addition to providing high-resolution weather and climate data for the past, present, and future, we have developed an integrated data flow and methodology for processing, summarizing, viewing, and delivering the climate datasets to a wide range of potential users. Our simulations were run over 50- and 15-kilometer model grids in an attempt to capture more of the climatic detail associated with processes such as topographic forcing than can be captured by general circulation models (GCMs). The simulations were run using output from four GCMs. All simulations span the present (for example, 1968-1999), common periods of the future (2040-2069), and two simulations continuously cover 2010-2099. The trace gas concentrations in our simulations were the same as those of the GCMs: the IPCC 20th century time series for 1968-1999 and the A2 time series for simulations of the future. We demonstrate that RegCM3 is capable of producing present day annual and seasonal climatologies of air temperature and precipitation that are in good agreement with observations. Important features of the high-resolution climatology of temperature, precipitation, snow water equivalent (SWE), and soil moisture are consistently reproduced in all model runs over WNA and ENA. The simulations provide a potential range of future climate change for selected decades and display common patterns of the direction and magnitude of changes. As expected, there are some model to model differences that limit interpretability and give rise to uncertainties. Here, we provide background information about the GCMs and the RegCM3, a basic evaluation of the model output and examples of simulated future climate. We also provide information needed to access the web applications for visualizing and downloading the data, and give complete metadata that describe the variables in the datasets.
NASA Astrophysics Data System (ADS)
Ahmadov, R.; Grell, G. A.; James, E.; Alexander, C.; Stewart, J.; Benjamin, S.; McKeen, S. A.; Csiszar, I. A.; Tsidulko, M.; Pierce, R. B.; Pereira, G.; Freitas, S. R.; Goldberg, M.
2017-12-01
We present a new real-time smoke modeling system, the High Resolution Rapid Refresh coupled with smoke (HRRR-Smoke), to simulate biomass burning (BB) emissions, plume rise and smoke transport in real time. The HRRR is the NOAA Earth System Research Laboratory's 3km grid spacing version of the Weather Research and Forecasting (WRF) model used for weather forecasting. Here we make use of WRF-Chem (the WRF model coupled with chemistry) and simulate fine particulate matter (smoke) emissions emitted by BB. The HRRR-Smoke modeling system ingests fire radiative power (FRP) data from the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor on the Suomi National Polar-orbiting Partnership (S-NPP) satellite to calculate BB emissions. The FRP product is based on processing 750m resolution "M" bands. The algorithms for fire detection and FRP retrieval are consistent with those used to generate the MODIS fire detection data. For the purpose of ingesting VIIRS fire data into the HRRR-Smoke model, text files are generated to provide the location and detection confidence of fire pixels, as well as FRP. The VIIRS FRP data from the text files are processed and remapped over the HRRR-Smoke model domains. We process the FRP data to calculate BB emissions (smoldering part) and fire size for the model input. In addition, HRRR-Smoke uses the FRP data to simulate the injection height for the flaming emissions using concurrently simulated meteorological fields by the model. Currently, there are two 3km resolution domains covering the contiguous US and Alaska which are used to simulate smoke in real time. In our presentation, we focus on the CONUS domain. HRRR-Smoke is initialized 4 times per day to forecast smoke concentrations for the next 36 hours. The VIIRS FRP data, as well as near-surface and vertically integrated smoke mass concentrations are visualized for every forecast hour. These plots are provided to the public via the HRRR-Smoke web-page: https://rapidrefresh.noaa.gov/HRRRsmoke/. Model evaluations for a case study are presented, where simulated smoke concentrations are compared with hourly PM2.5 measurements from EPA's Air Quality System network. These comparisons demonstrate the model's ability in simulating high aerosol loadings during major wildfire events in the western US.
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced form 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
Hayashi, K; Hoeksema, J T; Liu, Y; Bobra, M G; Sun, X D; Norton, A A
Time-dependent three-dimensional magnetohydrodynamics (MHD) simulation modules are implemented at the Joint Science Operation Center (JSOC) of the Solar Dynamics Observatory (SDO). The modules regularly produce three-dimensional data of the time-relaxed minimum-energy state of the solar corona using global solar-surface magnetic-field maps created from Helioseismic and Magnetic Imager (HMI) full-disk magnetogram data. With the assumption of a polytropic gas with specific-heat ratio of 1.05, three types of simulation products are currently generated: i) simulation data with medium spatial resolution using the definitive calibrated synoptic map of the magnetic field with a cadence of one Carrington rotation, ii) data with low spatial resolution using the definitive version of the synchronic frame format of the magnetic field, with a cadence of one day, and iii) low-resolution data using near-real-time (NRT) synchronic format of the magnetic field on a daily basis. The MHD data available in the JSOC database are three-dimensional, covering heliocentric distances from 1.025 to 4.975 solar radii, and contain all eight MHD variables: the plasma density, temperature, and three components of motion velocity, and three components of the magnetic field. This article describes details of the MHD simulations as well as the production of the input magnetic-field maps, and details of the products available at the JSOC database interface. To assess the merits and limits of the model, we show the simulated data in early 2011 and compare with the actual coronal features observed by the Atmospheric Imaging Assembly (AIA) and the near-Earth in-situ data.
Spatial and Temporal Monitoring Resolutions for CO2 Leakage Detection at Carbon Storage Sites
NASA Astrophysics Data System (ADS)
Yang, Y. M.; Dilmore, R. M.; Daley, T. M.; Carroll, S.; Mansoor, K.; Gasperikova, E.; Harbert, W.; Wang, Z.; Bromhal, G. S.; Small, M.
2016-12-01
Different leakage monitoring techniques offer different strengths in detection sensitivity, coverage, feedback time, cost, and technology availability, such that they may complement each other when applied together. This research focuses on quantifying the spatial coverage and temporal resolution of detection response for several geophysical remote monitoring and direct groundwater monitoring techniques for an optimal monitoring plan for CO2 leakage detection. Various monitoring techniques with different monitoring depths are selected: 3D time-lapse seismic survey, wellbore pressure, groundwater chemistry and soil gas. The spatial resolution in terms of leakage detectability is quantified through the effective detection distance between two adjacent monitors, given the magnitude of leakage and specified detection probability. The effective detection distances are obtained either from leakage simulations with various monitoring densities or from information garnered from field test data. These spatial leakage detection resolutions are affected by physically feasible monitoring design and detection limits. Similarly, the temporal resolution, in terms of leakage detectability, is quantified through the effective time to positive detection of a given size of leak and a specified detection probability, again obtained either from representative leakage simulations with various monitoring densities or from field test data. The effective time to positive detection is also affected by operational feedback time (associated with sampling, sample analysis and data interpretation), with values obtained mainly through expert interviews and literature review. In additional to the spatial and temporal resolutions of these monitoring techniques, the impact of CO2 plume migration speed and leakage detection sensitivity of each monitoring technique are also discussed with consideration of how much monitoring is necessary for effective leakage detection and how these monitoring techniques can be better combined in a time-space framework. The results of the spatial and temporal leakage detection resolutions for several geophysical monitoring techniques and groundwater monitoring are summarized to inform future monitoring designs at carbon storage sites.
Performance simulation of a compact PET insert for simultaneous PET/MR breast imaging
NASA Astrophysics Data System (ADS)
Liang, Yicheng; Peng, Hao
2014-07-01
We studied performance metrics of a small PET ring designed to be integrated with a breast MRI coil. Its performance was characterized using a Monte Carlo simulation of a system with the best possible design features we believe are technically available, with respect to system geometry, spatial resolution, shielding, and lesion detectability. The results indicate that the proposed system is able to achieve about 6.2% photon detection sensitivity at the center of field-of-view (FOV) (crystal design: 2.2×2.2×20 mm3, height: 3.4 cm). The peak noise equivalent count rate (NECR) is found to be 7886 cps with a time resolution of 250 ps (time window: 500 ps). With the presence of lead shielding, the NECR increases by a factor of 1.7 for high activity concentrations within the breast (>0.9 μCi/mL), while no noticeable benefit is observed in the range of activities currently being used in the clinical setting. In addition, the system is able to achieve spatial resolution of 1.6 mm (2.2×2.2×20 mm3 crystal) and 0.77 mm (1×1×20 mm3 crystal) at the center of FOV, respectively. The incorporation of 10 mm DOI resolution can help mitigate parallax error towards the edge of FOV. For both 2.2 mm and 1 mm crystal designs, the spatial resolution is around 3.2-3.5 mm at 5 cm away from the center. Finally, time-of-flight (TOF) helps in improving image quality, reduces the required number of iteration numbers and the scan time. The TOF effect was studied with 3 different time resolution settings (1 ns, 500 ps and 250 ps). With a TOF of 500 ps time resolution, we expect 3 mm diameter spheres where 5:1 activity concentration ratio will be detectable within 5 min achieving contrast to noise ratio (CNR) above 4.
Parallel simulation of tsunami inundation on a large-scale supercomputer
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations to connect each layer, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with the resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer, it took 45 minutes which is more than two times faster than real-time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
Characterization of a tin-loaded liquid scintillator for gamma spectroscopy and neutron detection
NASA Astrophysics Data System (ADS)
Wen, Xianfei; Harvey, Taylor; Weinmann-Smith, Robert; Walker, James; Noh, Young; Farley, Richard; Enqvist, Andreas
2018-07-01
A tin-loaded liquid scintillator has been developed for gamma spectroscopy and neutron detection. The scintillator was characterized in regard to energy resolution, pulse shape discrimination, neutron light output function, and timing resolution. The loading of tin into scintillators with low effective atomic number was demonstrated to provide photopeaks with acceptable energy resolution. The scintillator was shown to have reasonable neutron/gamma discrimination capability based on the charge comparison method. The effect on the discrimination quality of the total charge integration time and the initial delay time for tail charge integration was studied. To obtain the neutron light output function, the time-of-flight technique was utilized with a 252Cf source. The light output function was validated with the MCNPX-PoliMi code by comparing the measured and simulated pule height spectra. The timing resolution of the developed scintillator was also evaluated. The tin-loading was found to have negligible impact on the scintillation decay times. However, a relatively large degradation of timing resolution was observed due to the reduced light yield.
Detection and Attribution of Regional Climate Change
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bala, G; Mirin, A
2007-01-19
We developed a high resolution global coupled modeling capability to perform breakthrough studies of the regional climate change. The atmospheric component in our simulation uses a 1{sup o} latitude x 1.25{sup o} longitude grid which is the finest resolution ever used for the NCAR coupled climate model CCSM3. Substantial testing and slight retuning was required to get an acceptable control simulation. The major accomplishment is the validation of this new high resolution configuration of CCSM3. There are major improvements in our simulation of the surface wind stress and sea ice thickness distribution in the Arctic. Surface wind stress and oceanmore » circulation in the Antarctic Circumpolar Current are also improved. Our results demonstrate that the FV version of the CCSM coupled model is a state of the art climate model whose simulation capabilities are in the class of those used for IPCC assessments. We have also provided 1000 years of model data to Scripps Institution of Oceanography to estimate the natural variability of stream flow in California. In the future, our global model simulations will provide boundary data to high-resolution mesoscale model that will be used at LLNL. The mesoscale model would dynamically downscale the GCM climate to regional scale on climate time scales.« less
What is the effect of LiDAR-derived DEM resolution on large-scale watershed model results?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ping Yang; Daniel B. Ames; Andre Fonseca
This paper examines the effect of raster cell size on hydrographic feature extraction and hydrological modeling using LiDAR derived DEMs. LiDAR datasets for three experimental watersheds were converted to DEMs at various cell sizes. Watershed boundaries and stream networks were delineated from each DEM and were compared to reference data. Hydrological simulations were conducted and the outputs were compared. Smaller cell size DEMs consistently resulted in less difference between DEM-delineated features and reference data. However, minor differences been found between streamflow simulations resulted for a lumped watershed model run at daily simulations aggregated at an annual average. These findings indicatemore » that while higher resolution DEM grids may result in more accurate representation of terrain characteristics, such variations do not necessarily improve watershed scale simulation modeling. Hence the additional expense of generating high resolution DEM's for the purpose of watershed modeling at daily or longer time steps may not be warranted.« less
NASA Astrophysics Data System (ADS)
Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt
2015-05-01
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain
NASA Astrophysics Data System (ADS)
Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.
2018-04-01
The high resolution satellite with the longer focal length and the larger aperture has been widely used in georeferencing of the observed scene in recent years. The consistent end to end model of high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three line array camera, the platform including attitude and position information, the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered and the simulation method of the geolocation accuracy is put forward by introduce the new index of the angle between the camera and the star tracker. The model is validated by the geolocation accuracy simulation according to the test method of the ZY-3 satellite imagery rigorously. The simulation results show that the geolocation accuracy is within 25m, which is highly consistent with the test results. The geolocation accuracy can be improved about 7 m by the integrated design. The model combined with the simulation method is applicable to the geolocation accuracy estimate before the satellite launching.
NASA Astrophysics Data System (ADS)
Trudel, M.; Desrochers, N.; Leconte, R.
2017-12-01
Knowledge of water extent (WE) and level (WL) of rivers is necessary to calibrate and validate hydraulic models and thus to better simulate and forecast floods. Synthetic aperture radar (SAR) has demonstrated its potential for delineating water bodies, as backscattering of water is much lower than that of other natural surfaces. The ability of SAR to obtain information despite cloud cover makes it an interesting tool for temporal monitoring of water bodies. The delineation of WE combined with a high-resolution digital terrain model (DTM) allows extracting WL. However, most research using SAR data to calibrate hydraulic models has been carried out using one or two images. The objectives of this study is to use WL derived from time series high resolution Radarsat-2 SAR images for the calibration of a 1-D hydraulic model (HEC-RAS). Twenty high-resolution (5 m) Radarsat-2 images were acquired over a 40 km reach of the Athabasca River, in northern Alberta, Canada, between 2012 and 2016, covering both low and high flow regimes. A high-resolution (2m) DTM was generated combining information from LIDAR data and bathymetry acquired between 2008 and 2016 by boat surveying. The HEC-RAS model was implemented on the Athabasca River to simulate WL using cross-sections spaced by 100 m. An image histogram thresholding method was applied on each Radarsat-2 image to derive WE. WE were then compared against each cross-section to identify those were the slope of the banks is not too abrupt and therefore amenable to extract WL. 139 observations of WL at different locations along the river reach and with streamflow measurements were used to calibrate the HEC-RAS model. The RMSE between SAR-derived and simulated WL is under 0.35 m. Validation was performed using in situ observations of WL measured in 2008, 2012 and 2016. The RMSE between the simulated water levels calibrated with SAR images and in situ observations is less than 0.20 m. In addition, a critical success index (CSI) was performed to compare the WE simulated by HEC-RAS and that derived from SARs images. The CSI is higher than 0.85 for each date, which means that simulated WE is highly similar to the WE derived from SARs images. Thereby, the results of our analysis indicate that calibration of a hydraulic model can be performed from WL derived from time series of high-resolution SAR images.
On the timing performance of thin planar silicon sensors
NASA Astrophysics Data System (ADS)
Akchurin, N.; Ciriolo, V.; Currás, E.; Damgov, J.; Fernández, M.; Gallrapp, C.; Gray, L.; Junkes, A.; Mannelli, M.; Martin Kwok, K. H.; Meridiani, P.; Moll, M.; Nourbakhsh, S.; Pigazzini, S.; Scharf, C.; Silva, P.; Steinbrueck, G.; de Fatis, T. Tabarelli; Vila, I.
2017-07-01
We report on the signal timing capabilities of thin silicon sensors when traversed by multiple simultaneous minimum ionizing particles (MIP). Three different planar sensors, with depletion thicknesses 133, 211, and 285 μm, have been exposed to high energy muons and electrons at CERN. We describe signal shape and timing resolution measurements as well as the response of these devices as a function of the multiplicity of MIPs. We compare these measurements to simulations where possible. We achieve better than 20 ps timing resolution for signals larger than a few tens of MIPs.
Extraction of temporal information in functional MRI
NASA Astrophysics Data System (ADS)
Singh, M.; Sungkarat, W.; Jeong, Jeong-Won; Zhou, Yongxia
2002-10-01
The temporal resolution of functional MRI (fMRI) is limited by the shape of the haemodynamic response function (hrf) and the vascular architecture underlying the activated regions. Typically, the temporal resolution of fMRI is on the order of 1 s. We have developed a new data processing approach to extract temporal information on a pixel-by-pixel basis at the level of 100 ms from fMRI data. Instead of correlating or fitting the time-course of each pixel to a single reference function, which is the common practice in fMRI, we correlate each pixel's time-course to a series of reference functions that are shifted with respect to each other by 100 ms. The reference function yielding the highest correlation coefficient for a pixel is then used as a time marker for that pixel. A Monte Carlo simulation and experimental study of this approach were performed to estimate the temporal resolution as a function of signal-to-noise ratio (SNR) in the time-course of a pixel. Assuming a known and stationary hrf, the simulation and experimental studies suggest a lower limit in the temporal resolution of approximately 100 ms at an SNR of 3. The multireference function approach was also applied to extract timing information from an event-related motor movement study where the subjects flexed a finger on cue. The event was repeated 19 times with the event's presentation staggered to yield an approximately 100-ms temporal sampling of the haemodynamic response over the entire presentation cycle. The timing differences among different regions of the brain activated by the motor task were clearly visualized and quantified by this method. The results suggest that it is possible to achieve a temporal resolution of /spl sim/200 ms in practice with this approach.
NASA Astrophysics Data System (ADS)
Pillai, D.; Gerbig, C.; Kretschmer, R.; Beck, V.; Karstens, U.; Neininger, B.; Heimann, M.
2012-01-01
We present simulations of atmospheric CO2 concentrations provided by two modeling systems, run at high spatial resolution: the Eulerian-based Weather Research Forecasting (WRF) model and the Lagrangian-based Stochastic Time-Inverted Lagrangian Transport (STILT) model, both of which are coupled to a diagnostic biospheric model, the Vegetation Photosynthesis and Respiration Model (VPRM). The consistency of the simulations is assessed with special attention paid to the details of horizontal as well as vertical transport and mixing of CO2 concentrations in the atmosphere. The dependence of model mismatch (Eulerian vs. Lagrangian) on models' spatial resolution is further investigated. A case study using airborne measurements during which both models showed large deviations from each other is analyzed in detail as an extreme case. Using aircraft observations and pulse release simulations, we identified differences in the representation of details in the interaction between turbulent mixing and advection through wind shear as the main cause of discrepancies between WRF and STILT transport at a spatial resolution such as 2 and 6 km. Based on observations and inter-model comparisons of atmospheric CO2 concentrations, we show that a refinement of the parameterization of turbulent velocity variance and Lagrangian time-scale in STILT is needed to achieve a better match between the Eulerian and the Lagrangian transport at such a high spatial resolution (e.g. 2 and 6 km). Nevertheless, the inter-model differences in simulated CO2 time series for a tall tower observatory at Ochsenkopf in Germany are about a factor of two smaller than the model-data mismatch and about a factor of three smaller than the mismatch between the current global model simulations and the data. Thus suggests that it is reasonable to use STILT as an adjoint model of WRF atmospheric transport.
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-01-01
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified. PMID:29649173
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-04-12
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
NASA Astrophysics Data System (ADS)
Li, Hui; Sriver, Ryan L.
2018-01-01
High-resolution Atmosphere General Circulation Models (AGCMs) are capable of directly simulating realistic tropical cyclone (TC) statistics, providing a promising approach for TC-climate studies. Active air-sea coupling in a coupled model framework is essential to capturing TC-ocean interactions, which can influence TC-climate connections on interannual to decadal time scales. Here we investigate how the choices of ocean coupling can affect the directly simulated TCs using high-resolution configurations of the Community Earth System Model (CESM). We performed a suite of high-resolution, multidecadal, global-scale CESM simulations in which the atmosphere (˜0.25° grid spacing) is configured with three different levels of ocean coupling: prescribed climatological sea surface temperature (SST) (ATM), mixed layer ocean (SLAB), and dynamic ocean (CPL). We find that different levels of ocean coupling can influence simulated TC frequency, geographical distributions, and storm intensity. ATM simulates more storms and higher overall storm intensity than the coupled simulations. It also simulates higher TC track density over the eastern Pacific and the North Atlantic, while TC tracks are relatively sparse within CPL and SLAB for these regions. Storm intensification and the maximum wind speed are sensitive to the representations of local surface flux feedbacks in different coupling configurations. Key differences in storm number and distribution can be attributed to variations in the modeled large-scale climate mean state and variability that arise from the combined effect of intrinsic model biases and air-sea interactions. Results help to improve our understanding about the representation of TCs in high-resolution coupled Earth system models, with important implications for TC-climate applications.
Wehner, Michael F.; Bala, G.; Duffy, Phillip; ...
2010-01-01
We present a set of high-resolution global atmospheric general circulation model (AGCM) simulations focusing on the model's ability to represent tropical storms and their statistics. We find that the model produces storms of hurricane strength with realistic dynamical features. We also find that tropical storm statistics are reasonable, both globally and in the north Atlantic, when compared to recent observations. The sensitivity of simulated tropical storm statistics to increases in sea surface temperature (SST) is also investigated, revealing that a credible late 21st century SST increase produced increases in simulated tropical storm numbers and intensities in all ocean basins. Whilemore » this paper supports previous high-resolution model and theoretical findings that the frequency of very intense storms will increase in a warmer climate, it differs notably from previous medium and high-resolution model studies that show a global reduction in total tropical storm frequency. However, we are quick to point out that this particular model finding remains speculative due to a lack of radiative forcing changes in our time-slice experiments as well as a focus on the Northern hemisphere tropical storm seasons.« less
NASA Astrophysics Data System (ADS)
Fairchild, A. J.; Chirayath, V. A.; Gladen, R. W.; Chrysler, M. D.; Koymen, A. R.; Weiss, A. H.
2017-01-01
In this paper, we present results of numerical modelling of the University of Texas at Arlington’s time of flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using SIMION® 8.1 Ion and Electron Optics Simulator. The time of flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low energy positrons with the sample surface. We have used SIMION® 8.1 to calculate the times of flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron induced electron emission process and have thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated as a result of positrons incident with energy 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
NASA Astrophysics Data System (ADS)
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song
2016-11-01
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
NASA Astrophysics Data System (ADS)
Tulloch, R.; Hill, C. N.; Jahn, O.
2010-12-01
We present results from an ensemble of BP oil spill simulations. The oil spill slick is modeled as a buoyant surface plume that is transported by ocean currents modulated, in some experiments, by surface winds. Ocean currents are taken from ECCO2 project (see http://ecco2.org ) observationally constrained state estimates spanning 1992-2007. In this work we (i) explore the role of increased resolution of ocean eddies, (ii) compare inferences from particle based, lagrangian, approaches with eulerian, field based, approaches and (ii) examine the impact of differential response of oil particles and water to normal and extreme, hurricane derived, wind stress. We focus on three main questions. Is the simulated response to an oil spill markedly different for different years, depending on ocean circulation and wind forcing? Does the simulated response depend heavily on resolution and are lagrangian and eulerian estimates comparable? We start from two regional configurations of the MIT General Circulation Model (MITgcm - see http://mitgcm.org ) at 16km and 4km resolutions respectively, both covering the Gulf of Mexico and western North Atlantic regions. The simulations are driven at open boundaries with momentum and hydrographic fields from ECCO2 observationally constrained global circulation estimates. The time dependent surface flow fields from these simulations are used to transport a dye that can optionally decay over time (approximating biological breakdown) and to transport lagrangian particles. Using these experiments we examine the robustness of conclusions regarding the fate of a buoyant slick, injected at a single point. In conclusion we discuss how future drilling operations could use similar approaches to better anticipate outcomes of accidents both in this region and elsewhere.
NASA Astrophysics Data System (ADS)
Lin, S. J.
2015-12-01
The NOAA/Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable resolution capabilities that can be used for severe weather predictions (e.g., tornado outbreak events and cat-5 hurricanes) and ultra-high-resolution (1-km) regional climate simulations within a consistent global modeling framework. The fundation of this flexible regional-global modeling system is the non-hydrostatic extension of the vertically Lagrangian dynamical core (Lin 2004, Monthly Weather Review) known in the community as FV3 (finite-volume on the cubed-sphere). Because of its flexability and computational efficiency, the FV3 is one of the final candidates of NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched (single) grid capability, a two-way (regional-global) multiple nested grid capability, and the combination of the stretched and two-way nests, so as to make convection-resolving regional climate simulation within a consistent global modeling system feasible using today's High Performance Computing System. One of our main scientific goals is to enable simulations of high impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system previously regarded as impossible. In this presentation I will demonstrate that it is computationally feasible to simulate not only super-cell thunderstorms, but also the subsequent genesis of tornadoes using a global model that was originally designed for century long climate simulations. As a unified weather-climate modeling system, we evaluated the performance of the model with horizontal resolution ranging from 1 km to as low as 200 km. In particular, for downscaling studies, we have developed various tests to ensure that the large-scale circulation within the global varaible resolution system is well simulated while at the same time the small-scale can be accurately captured within the targeted high resolution region.
Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model
NASA Astrophysics Data System (ADS)
Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.
2017-12-01
This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June-through-September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium Range Weather Forecast (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data and it was found that at 0.25° resolution, there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.
Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments were carried out to determine the optimal nudging strategy including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil-moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarities helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better or comparable to the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and any time series generated from the downscaled fields do not suffer from discontinuities due to switching between the consecutive forecasts.
NASA Astrophysics Data System (ADS)
Magaldi, Marcello G.; Haine, Thomas W. N.
2015-02-01
The cascade of dense waters of the Southeast Greenland shelf during summer 2003 is investigated with two very high-resolution (0.5-km) simulations. The first simulation is non-hydrostatic. The second simulation is hydrostatic and about 3.75 times less expensive. Both simulations are compared to a 2-km hydrostatic run, about 31 times less expensive than the 0.5 km non-hydrostatic case. Time-averaged volume transport values for deep waters are insensitive to the changes in horizontal resolution and vertical momentum dynamics. By this metric, both lateral stirring and vertical shear instabilities associated with the cascading process are accurately parameterized by the turbulent schemes used at 2-km horizontal resolution. All runs compare well with observations and confirm that the cascade is mainly driven by cyclones which are linked to dense overflow boluses at depth. The passage of the cyclones is also associated with the generation of internal gravity waves (IGWs) near the shelf. Surface fields and kinetic energy spectra do not differ significantly between the runs for horizontal scales L > 30 km. Complex structures emerge and the spectra flatten at scales L < 30 km in the 0.5-km runs. In the non-hydrostatic case, additional energy is found in the vertical kinetic energy spectra at depth in the 2 km < L < 10 km range and with frequencies around 7 times the inertial frequency. This enhancement is missing in both hydrostatic runs and is here argued to be due to the different IGW evolution and propagation offshore. The different IGW behavior in the non-hydrostatic case has strong implications for the energetics: compared to the 2-km case, the baroclinic conversion term and vertical kinetic energy are about 1.4 and at least 34 times larger, respectively. This indicates that the energy transfer from the geostrophic eddy field to IGWs and their propagation away from the continental slope is not properly represented in the hydrostatic runs.
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Shastry, K.; Satyal, Suman; Weiss, Alexander
2012-02-01
Time of flight Positron Annihilation Induced Auger Electron Spectroscopy system, a highly surface selective analytical technique using time of flight of auger electron resulting from the annihilation of core electrons by trapped incident positron in image potential well. We simulated and modeled the trajectories of the charge particles in TOF-PAES using SIMION for the development of new high resolution system at U T Arlington and current TOFPAES system. This poster presents the SIMION simulations results, Time of flight calculations and larmor radius calculations for current system as well as new system.
2015-11-24
spatial concerns: ¤ how well are gradients captured? (resolution requirement) spatial/temporal concerns: ¤ dispersion and dissipation error...distribution is unlimited. Gradient Capture vs. Resolution: Single Mode FFT: Solution/Derivative: Convergence: f x( )= sin(x) with x∈[0,2π ] df dx...distribution is unlimited. Gradient Capture vs. Resolution: Multiple Modes FFT: Solution/Derivative: Convergence: 6 __ CD02 __ CD04 __ CD06
NASA Astrophysics Data System (ADS)
Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.
2017-12-01
The rugged topography of the Andes Cordillera along with strong coastal upwelling, strong sea surface temperatures (SST) gradients and extensive but geometrically-thin stratocumulus decks turns the Southeast Pacific (SEP) into a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1o oceanic and 0.5o atmospheric) and (1.125o oceanic and 0.9o atmospheric). The SST error growth in the first six days of integration (fast errors) and those resulted from model drift (saturated errors) are assessed and compared towards evaluating the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3oC) near the continental borders and negative offshore (-0.1oC). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and are more spatially concentrated. This suggests that the processes driving the errors already become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than the high-resolution due to a higher cloud cover. Throughout the integration, saturated SST errors become positive and could reach values up to +4oC. These are accompanied by upwelling dumping and a decrease in cloud cover. High and low resolution models presented notable differences in how SST errors variability drove atmospheric changes, especially because the high resolution is sensitive to resurgence regions. This allows the model to resolve cloud heights and establish different radiative feedbacks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frenje, J. A.; Hilsabeck, T. J.; Wink, C. W.
The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T i), yield (Y n), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate datamore » with a time resolution of ~20 ps and energy resolution of ~100 keV for total neutron yields above ~10 16. Lastly, at lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ~20 ps.« less
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts on a high resolution (HR) also called super resolution (SR) grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques using nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF) etc. used in projection yield comparable results. For best accuracy of reconstructing SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: i) registration of low resolution images and (ii) shifting the low resolution images to align with reference image and projecting them on high resolution grid based on the shifts of each low resolution image using different interpolation techniques. Experiments are conducted by simulating low resolution images by subpixel shifts and subsampling of original high resolution image and the reconstructing the high resolution images from the simulated low resolution images. The results of accuracy of reconstruction are compared by using mean squared error measure between original high resolution image and reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as Iterative Back Projection algorithm (IBP), Maximum Likelihood (ML), and Maximum a posterior (MAP) algorithms. The algorithm is robust and is not overly sensitive to the registration inaccuracies.
NASA Astrophysics Data System (ADS)
Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.
2014-08-01
Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated to each grid. ARPS is initialized using the Global Forecasting System with 0.5°-resolution data from the National Center of Environmental Prediction, which is also used every 3 h as lateral boundary condition. Topographic shading is turned on and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and the wind speed fields, and clearly indicate that the use of high-resolution databases improves significantly our ability to predict the local atmospheric circulation.
NASA Astrophysics Data System (ADS)
Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.
2015-12-01
The full resolution of shallow-water equations for modeling flash floods may have a high computational cost, so that majority of flood simulation softwares used for flood forecasting uses a simplification of this model : 1D approximations, diffusive or kinematic wave approximations or exotic models using non-physical free parameters. These kind of approximations permit to save a lot of computational time by sacrificing in an unquantified way the precision of simulations. To reduce drastically the cost of such 2D simulations by quantifying the lost of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk1, which is using adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme, which is at second order in time and space, and treats the friction and rain terms implicitly in finite volume approach. We demonstrate the validity of our simulation on the case of the flood of Tewkesbury (UK) occurred in July 2007, as shown on Fig. 1. On this case, a systematic study of the impact of the chosen criterium for adaptive refinement is performed. The criterium which has the best computational time / precision ratio is proposed. Finally, we present the power law giving the computational time in respect to the maximum resolution and we show that this law for our 2D simulation is close to the one of 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
The birth of a supermassive black hole binary
NASA Astrophysics Data System (ADS)
Pfister, Hugo; Lupi, Alessandro; Capelo, Pedro R.; Volonteri, Marta; Bellovary, Jillian M.; Dotti, Massimo
2017-11-01
We study the dynamical evolution of supermassive black holes, in the late stage of galaxy mergers, from kpc to pc scales. In particular, we capture the formation of the binary, a necessary step before the final coalescence, and trace back the main processes causing the decay of the orbit. We use hydrodynamical simulations of galaxy mergers with different resolutions, from 20 pc down to 1 pc, in order to study the effects of the resolution on our results, remove numerical effects, and assess that resolving the influence radius of the orbiting black hole is a minimum condition to fully capture the formation of the binary. Our simulations include the relevant physical processes, namely star formation, supernova feedback, accretion on to the black holes and the ensuing feedback. We find that, in these mergers, dynamical friction from the smooth stellar component of the nucleus is the main process that drives black holes from kpc to pc scales. Gas does not play a crucial role and even clumps do not induce scattering or perturb the orbits. We compare the time needed for the formation of the binary to analytical predictions and suggest how to apply such analytical formalism to obtain estimates of binary formation times in lower resolution simulations.
Time resolution deterioration with increasing crystal length in a TOF-PET system
NASA Astrophysics Data System (ADS)
Gundacker, S.; Knapitsch, A.; Auffray, E.; Jarron, P.; Meyer, T.; Lecoq, P.
2014-02-01
Highest time resolution in scintillator based detectors is becoming more and more important. In medical detector physics L(Y)SO scintillators are commonly used for time of flight positron emission tomography (TOF-PET). Coincidence time resolutions (CTRs) smaller than 100 ps FWHM are desirable in order to improve the image signal to noise ratio and thus give benefit to the patient by shorter scanning times. Also in high energy physics there is the demand to improve the timing capabilities of calorimeters down to 10 ps. To achieve these goals it is important to study the whole chain, i.e. the high energy particle interaction in the crystal, the scintillation process itself, the scintillation light transfer in the crystal, the photodetector and the electronics. Time resolution measurements for a PET like system are performed with the time-over-threshold method in a coincidence setup utilizing the ultra-fast amplifier-discriminator NINO. With 2×2×3 mm3 LSO:Ce codoped 0.4%Ca crystals coupled to commercially available SiPMs (Hamamatsu S10931-050P MPPC) we achieve a CTR of 108±5 ps FWHM at an energy of 511 keV. Under the same experimental conditions an increase in crystal length to 5 mm deteriorates the CTR to 123±7 ps FWHM, 10 mm to 143±7 ps FWHM and 20 mm to 176±7 ps FWHM. This degradation in CTR is caused by the light transfer efficiency (LTE) and light transfer time spread (LTTS) in the crystal. To quantitatively understand the measured values, we developed a Monte Carlo simulation tool in MATLAB incorporating the timing properties of the photodetector and electronics, the scintillation properties of the crystal and the light transfer within the crystal simulated by SLITRANI. In this work, we show that the predictions of the simulation are in good agreement with the experimental data. We conclude that for longer crystals the deterioration in CTR is mainly caused by the LTE, i.e. the ratio of photons reaching the photodetector to the total amount of photons generated by the scintillation whereas the LTTS influence is partly offset by the gamma absorption in the crystal.
High resolution global climate modelling; the UPSCALE project, a large simulation campaign
NASA Astrophysics Data System (ADS)
Mizielinski, M. S.; Roberts, M. J.; Vidale, P. L.; Schiemann, R.; Demory, M.-E.; Strachan, J.; Edwards, T.; Stephens, A.; Lawrence, B. N.; Pritchard, M.; Chiu, P.; Iwi, A.; Churchill, J.; del Cano Novales, C.; Kettleborough, J.; Roseblade, W.; Selwood, P.; Foster, M.; Glover, M.; Malcolm, A.
2014-01-01
The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985-2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km) as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environmental Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the high performance computing center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE dataset. This dataset is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.
High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign
NASA Astrophysics Data System (ADS)
Mizielinski, M. S.; Roberts, M. J.; Vidale, P. L.; Schiemann, R.; Demory, M.-E.; Strachan, J.; Edwards, T.; Stephens, A.; Lawrence, B. N.; Pritchard, M.; Chiu, P.; Iwi, A.; Churchill, J.; del Cano Novales, C.; Kettleborough, J.; Roseblade, W.; Selwood, P.; Foster, M.; Glover, M.; Malcolm, A.
2014-08-01
The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985-2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km) as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.
NASA Astrophysics Data System (ADS)
Steiman-Cameron, Thomas Y.; Durisen, Richard H.; Boley, Aaron C.; Michael, Scott; McConnell, Caitlin R.
2013-05-01
We conduct a convergence study of a protoplanetary disk subject to gravitational instabilities (GIs) at a time of approximate balance between heating produced by the GIs and radiative cooling governed by realistic dust opacities. We examine cooling times, characterize GI-driven spiral waves and their resultant gravitational torques, and evaluate how accurately mass transport can be represented by an α-disk formulation. Four simulations, identical except for azimuthal resolution, are conducted with a grid-based three-dimensional hydrodynamics code. There are two regions in which behaviors differ as resolution increases. The inner region, which contains 75% of the disk mass and is optically thick, has long cooling times and is well converged in terms of various measures of structure and mass transport for the three highest resolutions. The longest cooling times coincide with radii where the Toomre Q has its minimum value. Torques are dominated in this region by two- and three-armed spirals. The effective α arising from gravitational stresses is typically a few × 10-3 and is only roughly consistent with local balance of heating and cooling when time-averaged over many dynamic times and a wide range of radii. On the other hand, the outer disk region, which is mostly optically thin, has relatively short cooling times and does not show convergence as resolution increases. Treatment of unstable disks with optical depths near unity with realistic radiative transport is a difficult numerical problem requiring further study. We discuss possible implications of our results for numerical convergence of fragmentation criteria in disk simulations.
The formation of disc galaxies in high-resolution moving-mesh cosmological simulations
NASA Astrophysics Data System (ADS)
Marinacci, Federico; Pakmor, Rüdiger; Springel, Volker
2014-01-01
We present cosmological hydrodynamical simulations of eight Milky Way-sized haloes that have been previously studied with dark matter only in the Aquarius project. For the first time, we employ the moving-mesh code AREPO in zoom simulations combined with a comprehensive model for galaxy formation physics designed for large cosmological simulations. Our simulations form in most of the eight haloes strongly disc-dominated systems with realistic rotation curves, close to exponential surface density profiles, a stellar mass to halo mass ratio that matches expectations from abundance matching techniques, and galaxy sizes and ages consistent with expectations from large galaxy surveys in the local Universe. There is no evidence for any dark matter core formation in our simulations, even so they include repeated baryonic outflows by supernova-driven winds and black hole quasar feedback. For one of our haloes, the object studied in the recent `Aquila' code comparison project, we carried out a resolution study with our techniques, covering a dynamic range of 64 in mass resolution. Without any change in our feedback parameters, the final galaxy properties are reassuringly similar, in contrast to other modelling techniques used in the field that are inherently resolution dependent. This success in producing realistic disc galaxies is reached, in the context of our interstellar medium treatment, without resorting to a high density threshold for star formation, a low star formation efficiency, or early stellar feedback, factors deemed crucial for disc formation by other recent numerical studies.
NASA Astrophysics Data System (ADS)
Akchurin, Nural; CMS Collaboration
2017-11-01
We report on the signal timing capabilities of thin silicon sensors when traversed by multiple simultaneous minimum ionizing particles (MIP). Three different planar sensors, 133, 211, and 285 μm thick in depletion thickness, have been exposed to high energy muons and electrons at CERN. We describe signal shape and timing resolution measurements as well as the response of these devices as a function of the multiplicity of MIPs. We compare these measurements to simulations where possible. We achieve better than 20 ps timing resolution for signals larger than a few tens of MIPs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chun; Leung, L. Ruby; Park, Sang-Hun
Advances in computing resources are gradually moving regional and global numerical forecasting simulations towards sub-10 km resolution, but global high resolution climate simulations remain a challenge. The non-hydrostatic Model for Prediction Across Scales (MPAS) provides a global framework to achieve very high resolution using regional mesh refinement. Previous studies using the hydrostatic version of MPAS (H-MPAS) with the physics parameterizations of Community Atmosphere Model version 4 (CAM4) found notable resolution dependent behaviors. This study revisits the resolution sensitivity using the non-hydrostatic version of MPAS (NH-MPAS) with both CAM4 and CAM5 physics. A series of aqua-planet simulations at global quasi-uniform resolutionsmore » ranging from 240 km to 30 km and global variable resolution simulations with a regional mesh refinement of 30 km resolution over the tropics are analyzed, with a primary focus on the distinct characteristics of NH-MPAS in simulating precipitation, clouds, and large-scale circulation features compared to H-MPAS-CAM4. The resolution sensitivity of total precipitation and column integrated moisture in NH-MPAS is smaller than that in H-MPAS-CAM4. This contributes importantly to the reduced resolution sensitivity of large-scale circulation features such as the inter-tropical convergence zone and Hadley circulation in NH-MPAS compared to H-MPAS. In addition, NH-MPAS shows almost no resolution sensitivity in the simulated westerly jet, in contrast to the obvious poleward shift in H-MPAS with increasing resolution, which is partly explained by differences in the hyperdiffusion coefficients used in the two models that influence wave activity. With the reduced resolution sensitivity, simulations in the refined region of the NH-MPAS global variable resolution configuration exhibit zonally symmetric features that are more comparable to the quasi-uniform high-resolution simulations than those from H-MPAS that displays zonal asymmetry in simulations inside the refined region. Overall, NH-MPAS with CAM5 physics shows less resolution sensitivity compared to CAM4. These results provide a reference for future studies to further explore the use of NH-MPAS for high-resolution climate simulations in idealized and realistic configurations.« less
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2004-01-01
During the research project, sounding datasets were generated for the region surrounding 9 major airports, including Dallas, TX, Boston, MA, New York, NY, Chicago, IL, St. Louis, MO, Atlanta, GA, Miami, FL, San Francico, CA, and Los Angeles, CA. The numerical simulation of winter and summer environments during which no instrument flight rule impact was occurring at these 9 terminals was performed using the most contemporary version of the Terminal Area PBL Prediction System (TAPPS) model nested from 36 km to 6 km to 1 km horizontal resolution and very detailed vertical resolution in the planetary boundary layer. The soundings from the 1 km model were archived at 30 minute time intervals for a 24 hour period and the vertical dependent variables as well as derived quantities, i.e., 3-dimensional wind components, temperatures, pressures, mixing ratios, turbulence kinetic energy and eddy dissipation rates were then interpolated to 5 m vertical resolution up to 1000 m elevation above ground level. After partial validation against field experiment datasets for Dallas as well as larger scale and much coarser resolution observations at the other 8 airports, these sounding datasets were sent to NASA for use in the Virtual Air Space and Modeling program. The application of these datasets being to determine representative airport weather environments to diagnose the response of simulated wake vortices to realistic atmospheric environments. These virtual datasets are based on large scale observed atmospheric initial conditions that are dynamically interpolated in space and time. The 1 km nested-grid simulated datasets providing a very coarse and highly smoothed representation of airport environment meteorological conditions. Details concerning the airport surface forcing are virtually absent from these simulated datasets although the observed background atmospheric processes have been compared to the simulated fields and the fields were found to accurately replicate the flows surrounding the airport where coarse verification data were available as well as where airport scale datasets were available.
PROPAGATOR: a synchronous stochastic wildfire propagation model with distributed computation engine
NASA Astrophysics Data System (ADS)
D´Andrea, M.; Fiorucci, P.; Biondi, G.; Negro, D.
2012-04-01
PROPAGATOR is a stochastic model of forest fire spread, useful as a rapid method for fire risk assessment. The model is based on a 2D stochastic cellular automaton. The domain of simulation is discretized using a square regular grid with cell size of 20x20 meters. The model uses high-resolution information such as elevation and type of vegetation on the ground. Input parameters are wind direction, speed and the ignition point of fire. The simulation of fire propagation is done via a stochastic mechanism of propagation between a burning cell and a non-burning cell belonging to its neighbourhood, i.e. the 8 adjacent cells in the rectangular grid. The fire spreads from one cell to its neighbours with a certain base probability, defined using vegetation types of two adjacent cells, and modified by taking into account the slope between them, wind direction and speed. The simulation is synchronous, and takes into account the time needed by the burning fire to cross each cell. Vegetation cover, slope, wind speed and direction affect the fire-propagation speed from cell to cell. The model simulates several mutually independent realizations of the same stochastic fire propagation process. Each of them provides a map of the area burned at each simulation time step. Propagator simulates self-extinction of the fire, and the propagation process continues until at least one cell of the domain is burning in each realization. The output of the model is a series of maps representing the probability of each cell of the domain to be affected by the fire at each time-step: these probabilities are obtained by evaluating the relative frequency of ignition of each cell with respect to the complete set of simulations. Propagator is available as a module in the OWIS (Opera Web Interfaces) system. The model simulation runs on a dedicated server and it is remote controlled from the client program, NAZCA. Ignition points of the simulation can be selected directly in a high-resolution, three-dimensional graphical representation of the Italian territory within NAZCA. The other simulation parameters, namely wind speed and direction, number of simulations, computing grid size and temporal resolution, can be selected from within the program interface. The output of the simulation is showed in real-time during the simulation, and are also available off-line and on the DEWETRA system, a Web GIS-based system for environmental risk assessment, developed according to OGC-INSPIRE standards. The model execution is very fast, providing a full prevision for the scenario in few minutes, and can be useful for real-time active fire management and suppression.
Optimization of super-resolution processing using incomplete image sets in PET imaging.
Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2008-12-01
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average difference in SNR among the three SR images was 2.1% with respect to one another and they contained similar noise structure. ISR-1 and ISR-2 can be used to replace CSR, thereby reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure.
Robert M. Scheller; James B. Domingo; Brian R. Sturtevant; Jeremy S. Williams; Arnold Rudy; Eric J. Gustafson; David J. Mladenoff
2007-01-01
We introduce LANDIS-II, a landscape model designed to simulate forest succession and disturbances. LANDIS-II builds upon and preserves the functionality of previous LANDIS forest landscape simulation models. LANDIS-II is distinguished by the inclusion of variable time steps for different ecological processes; our use of a rigorous development and testing process used...
NASA Astrophysics Data System (ADS)
Roesler, E. L.; Bosler, P. A.; Taylor, M.
2016-12-01
The impact of strong extratropical storms on coastal communities is large, and the extent to which storms will change with a warming Arctic is unknown. Understanding storms in reanalysis and in climate models is important for future predictions. We know that the number of detected Arctic storms in reanalysis is sensitive to grid resolution. To understand Arctic storm sensitivity to resolution in climate models, we describe simulations designed to identify and compare Arctic storms at uniform low resolution (1 degree), at uniform high resolution (1/8 degree), and at variable resolution (1 degree to 1/8 degree). High-resolution simulations resolve more fine-scale structure and extremes, such as storms, in the atmosphere than a uniform low-resolution simulation. However, the computational cost of running a globally uniform high-resolution simulation is often prohibitive. The variable resolution tool in atmospheric general circulation models permits regional high-resolution solutions at a fraction of the computational cost. The storms are identified using the open-source search algorithm, Stride Search. The uniform high-resolution simulation has over 50% more storms than the uniform low-resolution and over 25% more storms than the variable resolution simulations. Storm statistics from each of the simulations is presented and compared with reanalysis. We propose variable resolution as a cost-effective means of investigating physics/dynamics coupling in the Arctic environment. Future work will include comparisons with observed storms to investigate tuning parameters for high resolution models. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2016-7402 A
NASA Astrophysics Data System (ADS)
Dipankar, A.; Stevens, B. B.; Zängl, G.; Pondkule, M.; Brdar, S.
2014-12-01
The effect of clouds on large scale dynamics is represented in climate models through parameterization of various processes, of which the parameterization of shallow and deep convection are particularly uncertain. The atmospheric boundary layer, which controls the coupling to the surface, and which defines the scale of shallow convection, is typically 1 km in depth. Thus, simulations on a O(100 m) grid largely obviate the need for such parameterizations. By crossing this threshold of O(100m) grid resolution one can begin thinking of large-eddy simulation (LES), wherein the sub-grid scale parameterization have a sounder theoretical foundation. Substantial initiatives have been taken internationally to approach this threshold. For example, Miura et al., 2007 and Mirakawa et al., 2014 approach this threshold by doing global simulations, with (gradually) decreasing grid resolution, to understand the effect of cloud-resolving scales on the general circulation. Our strategy, on the other hand, is to take a big leap forward by fixing the resolution at O(100 m), and gradually increasing the domain size. We believe that breaking this threshold would greatly help in improving the parameterization schemes and reducing the uncertainty in climate predictions. To take this forward, the German Federal Ministry of Education and Research has initiated a project on HD(CP)2 that aims for a limited area LES at resolution O(100 m) using the new unified modeling system ICON (Zängl et al., 2014). In the talk, results from the HD(CP)2 evaluation simulation will be shown that targets high resolution simulation over a small domain around Jülich, Germany. This site is chosen because high resolution HD(CP)2 Observational Prototype Experiment took place in this region from 1.04.2013 to 31.05.2013, in order to critically evaluate the model. Nesting capabilities of ICON is used to gradually increase the resolution from the outermost domain, which is forced from the COSMO-DE data, to the innermost and finest resolution domain centered around Jülich (see Fig. 1 top panel). Furthermore, detailed analyses of the simulation results against the observation data will be presented. A reprsentative figure showing time series of column integrated water vapor (IWV) for both model and observation on 24.04.2013 is shown in bottom panel of Fig. 1.
Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.
2008-10-01
98 SPEAR sensors from eV Products have been evaluated for their use in a portable Compton camera. The sensors have a 5 mm times 5 mm times 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. For a possible outside use the temperature dependence of the sensor performances was investigated for temperatures between 15 and 45 deg Celsius. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built of these sensors was simulated. To obtain realistic energy spectra a suitable detector response function was implemented. To investigate the angular resolution of the camera a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10 deg.
Piezoresistive Cantilever Performance—Part II: Optimization
Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.
2010-01-01
Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints using simulation results. The combined simulation and optimization approach is extensible to other doping methods beyond ion implantation in principle. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution, and sensitivity and noise tradeoff in optimal cantilever performance. We also performed a comparison between our optimization technique and existing models and demonstrated eight times improvement in force resolution over simplified models. PMID:20333323
Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zheng, Yang; Chen, Xihao; Zhu, Rui
2017-07-01
Frequency hopping (FH) signal is widely adopted by military communications as a kind of low probability interception signal. Therefore, it is very important to research the FH signal detection algorithm. The existing detection algorithm of FH signals based on the time-frequency analysis cannot satisfy the time and frequency resolution requirement at the same time due to the influence of window function. In order to solve this problem, an algorithm based on wavelet decomposition and Hilbert-Huang transform (HHT) was proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by Hilbert-Huang transform. Simulation results show the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signals detection can be improved.
Frenje, J. A.; Hilsabeck, T. J.; Wink, C. W.; ...
2016-08-02
The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T i), yield (Y n), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate datamore » with a time resolution of ~20 ps and energy resolution of ~100 keV for total neutron yields above ~10 16. Lastly, at lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ~20 ps.« less
Frenje, J A; Hilsabeck, T J; Wink, C W; Bell, P; Bionta, R; Cerjan, C; Gatu Johnson, M; Kilkenny, J D; Li, C K; Séguin, F H; Petrasso, R D
2016-11-01
The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T i ), yield (Y n ), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate data with a time resolution of ∼20 ps and energy resolution of ∼100 keV for total neutron yields above ∼10 16 . At lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ∼20 ps.
Compressive light field imaging
NASA Astrophysics Data System (ADS)
Ashok, Amit; Neifeld, Mark A.
2010-04-01
Light field imagers such as the plenoptic and the integral imagers inherently measure projections of the four dimensional (4D) light field scalar function onto a two dimensional sensor and therefore, suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatioangular resolution trade-off and allow high-resolution capture of the (4D) light field function with multiple measurements at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that, compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time compared to a conventional light field imager in order to achieve an equivalent light field reconstruction quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frenje, J. A., E-mail: jfrenje@psfc.mit.edu; Wink, C. W.; Gatu Johnson, M.
The next-generation magnetic recoil spectrometer for time-resolved measurements of the neutron spectrum has been conceptually designed for the National Ignition Facility. This spectrometer, called MRSt, represents a paradigm shift in our thinking about neutron spectrometry for inertial confinement fusion applications, as it will provide simultaneously information about the burn history and time evolution of areal density (ρR), apparent ion temperature (T{sub i}), yield (Y{sub n}), and macroscopic flows during burn. From this type of data, an assessment of the evolution of the fuel assembly, hotspot, and alpha heating can be made. According to simulations, the MRSt will provide accurate datamore » with a time resolution of ∼20 ps and energy resolution of ∼100 keV for total neutron yields above ∼10{sup 16}. At lower yields, the diagnostic will be operated at a higher-efficiency, lower-energy-resolution mode to provide a time resolution of ∼20 ps.« less
Chen, Jin; Venugopal, Vivek; Intes, Xavier
2011-01-01
Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiple scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5nmol of ICG and IRDye™800CW respectively embedded in a diffuse media mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610
Jain, Kartik; Jiang, Jingfeng; Strother, Charles; Mardal, Kent-André
2016-11-01
Blood flow in intracranial aneurysms has, until recently, been considered to be disturbed but still laminar. Recent high resolution computational studies have demonstrated, in some situations, however, that the flow may exhibit high frequency fluctuations that resemble weakly turbulent or transitional flow. Due to numerous assumptions required for simplification in computational fluid dynamics (CFD) studies, the occurrence of these events, in vivo, remains unsettled. The detection of these fluctuations in aneurysmal blood flow, i.e., hemodynamics by CFD, poses additional challenges as such phenomena cannot be captured in clinical data acquisition with magnetic resonance (MR) due to inadequate temporal and spatial resolutions. The authors' purpose was to address this issue by comparing results from highly resolved simulations, conventional resolution laminar simulations, and MR measurements, identify the differences, and identify their causes. Two aneurysms in the basilar artery, one with disturbed yet laminar flow and the other with transitional flow, were chosen. One set of highly resolved direct numerical simulations using the lattice Boltzmann method (LBM) and another with adequate resolutions under laminar flow assumption were conducted using a commercially available ANSYS Fluent solver. The velocity fields obtained from simulation results were qualitatively and statistically compared against each other and with MR acquisition. Results from LBM, ANSYS Fluent, and MR agree well qualitatively and quantitatively for one of the aneurysms with laminar flow in which fluctuations were <80 Hz. The comparisons for the second aneurysm with high fluctuations of > ∼ 600 Hz showed vivid differences between LBM, ANSYS Fluent, and magnetic resonance imaging. After ensemble averaging and down-sampling to coarser space and time scales, these differences became minimal. A combination of MR derived data and CFD can be helpful in estimating the hemodynamic environment of intracranial aneurysms. Adequately resolved CFD would suffice gross assessment of hemodynamics, potentially in a clinical setting, and highly resolved CFD could be helpful in a detailed and retrospective understanding of the physiological mechanisms.
On the Fringe Field of Wide Angle LC Optical Phased Array
NASA Technical Reports Server (NTRS)
Wang, Xighua; Wang, Bin; Bos, Philip J.; Anderson, James E.; Pouch, John; Miranda, Felix; McManamon, Paul F.
2004-01-01
For free space laser communication, light weighted large deployable optics is a critical component for the transmitter. However, such an optical element will introduce large aberrations due to the fact that the surface figure of the large optics is susceptable to deformation in the space environment. We propose to use a high-resolution liquid crystal spatial light modulator to correct for wavefront aberrations introduced by the primary optical element, and to achieve very fine beam steering and shaping at the same time. A 2-D optical phased array (OPA) antenna based on a Liquid Crystal on Silicon (LCOS) spatial light modulator is described. This device offers a combination of low cost, high resolution, high accuracy, high diffraction efficiency at video speed. To quantitatively understand the influence factor of the different design parameters, a computer simulation of the device is given by the 2-D director simulation and the Finite Difference Time domain (FDTD) simulation. For the 1-D OPA, we define the maximum steering angle to have a grating period of 8 pixel/reset scheme; as for larger steering angles than this criterion, the diffraction efficiency drops dramatically. In this case, the diffraction efficiency of 0.86 and the Strehl ratio of 0.9 are obtained in the simulation. The performance of the device in achieving high resolution wavefront correction and beam steering is also characterized experimentally.
Dependence of Hurricane intensity and structures on vertical resolution and time-step size
NASA Astrophysics Data System (ADS)
Zhang, Da-Lin; Wang, Xiaoxue
2003-09-01
In view of the growing interests in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with the finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure and stronger three-dimensional winds, and more precipitation. Similar effects, but to a less extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity and inner-core structures and evolution of tropical storms as well as the other convectively driven weather systems.
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK which experienced significant surface water flooding in the June and July floods of 2007 in the UK. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution are analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
Moskal, P; Rundel, O; Alfs, D; Bednarski, T; Białas, P; Czerwiński, E; Gajos, A; Giergiel, K; Gorgol, M; Jasińska, B; Kamińska, D; Kapłon, Ł; Korcyl, G; Kowalski, P; Kozik, T; Krzemień, W; Kubicz, E; Niedźwiecki, Sz; Pałka, M; Raczyński, L; Rudy, Z; Sharma, N G; Słomski, A; Silarski, M; Strzelecki, A; Wieczorek, A; Wiślicki, W; Witkowski, P; Zieliński, M; Zoń, N
2016-03-07
Recent tests of a single module of the Jagiellonian Positron Emission Tomography system (J-PET) consisting of 30 cm long plastic scintillator strips have proven its applicability for the detection of annihilation quanta (0.511 MeV) with a coincidence resolving time (CRT) of 0.266 ns. The achieved resolution is almost by a factor of two better with respect to the current TOF-PET detectors and it can still be improved since, as it is shown in this article, the intrinsic limit of time resolution for the determination of time of the interaction of 0.511 MeV gamma quanta in plastic scintillators is much lower. As the major point of the article, a method allowing to record timestamps of several photons, at two ends of the scintillator strip, by means of matrix of silicon photomultipliers (SiPM) is introduced. As a result of simulations, conducted with the number of SiPM varying from 4 to 42, it is shown that the improvement of timing resolution saturates with the growing number of photomultipliers, and that the [Formula: see text] configuration at two ends allowing to read twenty timestamps, constitutes an optimal solution. The conducted simulations accounted for the emission time distribution, photon transport and absorption inside the scintillator, as well as quantum efficiency and transit time spread of photosensors, and were checked based on the experimental results. Application of the [Formula: see text] matrix of SiPM allows for achieving the coincidence resolving time in positron emission tomography of [Formula: see text]0.170 ns for 15 cm axial field-of-view (AFOV) and [Formula: see text]0.365 ns for 100 cm AFOV. The results open perspectives for construction of a cost-effective TOF-PET scanner with significantly better TOF resolution and larger AFOV with respect to the current TOF-PET modalities.
NASA Astrophysics Data System (ADS)
Moskal, P.; Rundel, O.; Alfs, D.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Giergiel, K.; Gorgol, M.; Jasińska, B.; Kamińska, D.; Kapłon, Ł.; Korcyl, G.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz; Pałka, M.; Raczyński, L.; Rudy, Z.; Sharma, N. G.; Słomski, A.; Silarski, M.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Witkowski, P.; Zieliński, M.; Zoń, N.
2016-03-01
Recent tests of a single module of the Jagiellonian Positron Emission Tomography system (J-PET) consisting of 30 cm long plastic scintillator strips have proven its applicability for the detection of annihilation quanta (0.511 MeV) with a coincidence resolving time (CRT) of 0.266 ns. The achieved resolution is almost by a factor of two better with respect to the current TOF-PET detectors and it can still be improved since, as it is shown in this article, the intrinsic limit of time resolution for the determination of time of the interaction of 0.511 MeV gamma quanta in plastic scintillators is much lower. As the major point of the article, a method allowing to record timestamps of several photons, at two ends of the scintillator strip, by means of matrix of silicon photomultipliers (SiPM) is introduced. As a result of simulations, conducted with the number of SiPM varying from 4 to 42, it is shown that the improvement of timing resolution saturates with the growing number of photomultipliers, and that the 2× 5 configuration at two ends allowing to read twenty timestamps, constitutes an optimal solution. The conducted simulations accounted for the emission time distribution, photon transport and absorption inside the scintillator, as well as quantum efficiency and transit time spread of photosensors, and were checked based on the experimental results. Application of the 2× 5 matrix of SiPM allows for achieving the coincidence resolving time in positron emission tomography of ≈ 0.170 ns for 15 cm axial field-of-view (AFOV) and ≈ 0.365 ns for 100 cm AFOV. The results open perspectives for construction of a cost-effective TOF-PET scanner with significantly better TOF resolution and larger AFOV with respect to the current TOF-PET modalities.
A divergence-cleaning scheme for cosmological SPMHD simulations
NASA Astrophysics Data System (ADS)
Stasyszyn, F. A.; Dolag, K.; Beck, A. M.
2013-01-01
In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerical magnetic divergence. For instabilities arising from {nabla }\\cdot {boldsymbol B} related errors, we find the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. to give good results and prevent numerical artefacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic field regularizations give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergenceless by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. However, at extremely high resolution, a cleaning scheme is needed to prevent the growth of numerical {nabla }\\cdot {boldsymbol B} errors at small scales.
A Method for Modeling Household Occupant Behavior to Simulate Residential Energy Consumption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brandon J; Starke, Michael R; Abdelaziz, Omar
2014-01-01
This paper presents a statistical method for modeling the behavior of household occupants to estimate residential energy consumption. Using data gathered by the U.S. Census Bureau in the American Time Use Survey (ATUS), actions carried out by survey respondents are categorized into ten distinct activities. These activities are defined to correspond to the major energy consuming loads commonly found within the residential sector. Next, time varying minute resolution Markov chain based statistical models of different occupant types are developed. Using these behavioral models, individual occupants are simulated to show how an occupant interacts with the major residential energy consuming loadsmore » throughout the day. From these simulations, the minimum number of occupants, and consequently the minimum number of multiple occupant households, needing to be simulated to produce a statistically accurate representation of aggregate residential behavior can be determined. Finally, future work will involve the use of these occupant models along side residential load models to produce a high-resolution energy consumption profile and estimate the potential for demand response from residential loads.« less
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. Measuring acceleration at two different locations, the technique can eliminate oscillations from natural vibration of the model. The technique was used for drag force measurements on a 3m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350μs is guaranteed during measurements, whose resolution is enough for ms order test time in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1ms.
What model resolution is required in climatological downscaling over complex terrain?
NASA Astrophysics Data System (ADS)
El-Samra, Renalda; Bou-Zeid, Elie; El-Fadel, Mutasem
2018-05-01
This study presents results from the Weather Research and Forecasting (WRF) model applied for climatological downscaling simulations over highly complex terrain along the Eastern Mediterranean. We sequentially downscale general circulation model results, for a mild and wet year (2003) and a hot and dry year (2010), to three local horizontal resolutions of 9, 3 and 1 km. Simulated near-surface hydrometeorological variables are compared at different time scales against data from an observational network over the study area comprising rain gauges, anemometers, and thermometers. The overall performance of WRF at 1 and 3 km horizontal resolution was satisfactory, with significant improvement over the 9 km downscaling simulation. The total yearly precipitation from WRF's 1 km and 3 km domains exhibited < 10% bias with respect to observational data. The errors in minimum and maximum temperatures were reduced by the downscaling, along with a high-quality delineation of temperature variability and extremes for both the 1 and 3 km resolution runs. Wind speeds, on the other hand, are generally overestimated for all model resolutions, in comparison with observational data, particularly on the coast (up to 50%) compared to inland stations (up to 40%). The findings therefore indicate that a 3 km resolution is sufficient for the downscaling, especially that it would allow more years and scenarios to be investigated compared to the higher 1 km resolution at the same computational effort. In addition, the results provide a quantitative measure of the potential errors for various hydrometeorological variables.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; ...
2016-11-22
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relativelymore » few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. Lastly, HighResMIP thereby focuses on one of the CMIP6 broad questions, “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.« less
Simultaneous fluoroscopic and nuclear imaging: impact of collimator choice on nuclear image quality.
van der Velden, Sandra; Beijst, Casper; Viergever, Max A; de Jong, Hugo W A M
2017-01-01
X-ray-guided oncological interventions could benefit from the availability of simultaneously acquired nuclear images during the procedure. To this end, a real-time, hybrid fluoroscopic and nuclear imaging device, consisting of an X-ray c-arm combined with gamma imaging capability, is currently being developed (Beijst C, Elschot M, Viergever MA, de Jong HW. Radiol. 2015;278:232-238). The setup comprises four gamma cameras placed adjacent to the X-ray tube. The four camera views are used to reconstruct an intermediate three-dimensional image, which is subsequently converted to a virtual nuclear projection image that overlaps with the X-ray image. The purpose of the present simulation study is to evaluate the impact of gamma camera collimator choice (parallel hole versus pinhole) on the quality of the virtual nuclear image. Simulation studies were performed with a digital image quality phantom including realistic noise and resolution effects, with a dynamic frame acquisition time of 1 s and a total activity of 150 MBq. Projections were simulated for 3, 5, and 7 mm pinholes and for three parallel hole collimators (low-energy all-purpose (LEAP), low-energy high-resolution (LEHR) and low-energy ultra-high-resolution (LEUHR)). Intermediate reconstruction was performed with maximum likelihood expectation-maximization (MLEM) with point spread function (PSF) modeling. In the virtual projection derived therefrom, contrast, noise level, and detectability were determined and compared with the ideal projection, that is, as if a gamma camera were located at the position of the X-ray detector. Furthermore, image deformations and spatial resolution were quantified. Additionally, simultaneous fluoroscopic and nuclear images of a sphere phantom were acquired with a physical prototype system and compared with the simulations. For small hot spots, contrast is comparable for all simulated collimators. Noise levels are, however, 3 to 8 times higher in pinhole geometries than in parallel hole geometries. This results in higher contrast-to-noise ratios for parallel hole geometries. Smaller spheres can thus be detected with parallel hole collimators than with pinhole collimators (17 mm vs 28 mm). Pinhole geometries show larger image deformations than parallel hole geometries. Spatial resolution varied between 1.25 cm for the 3 mm pinhole and 4 cm for the LEAP collimator. The simulation method was successfully validated by the experiments with the physical prototype. A real-time hybrid fluoroscopic and nuclear imaging device is currently being developed. Image quality of nuclear images obtained with different collimators was compared in terms of contrast, noise, and detectability. Parallel hole collimators showed lower noise and better detectability than pinhole collimators. © 2016 American Association of Physicists in Medicine.
Dynamical downscaling of wind fields for wind power applications
NASA Astrophysics Data System (ADS)
Mengelkamp, H.-T.; Huneke, S.; Geyer, J.
2010-09-01
Dynamical downscaling of wind fields for wind power applications H.-T. Mengelkamp*,**, S. Huneke**, J, Geyer** *GKSS Research Center Geesthacht GmbH **anemos Gesellschaft für Umweltmeteorologie mbH Investments in wind power require information on the long-term mean wind potential and its temporal variations on daily to annual and decadal time scales. This information is rarely available at specific wind farm sites. Short-term on-site measurements usually are only performed over a 12 months period. These data have to be set into the long-term perspective through correlation to long-term consistent wind data sets. Preliminary wind information is often asked for to select favourable wind sites over regional and country wide scales. Lack of high-quality wind measurements at weather stations was the motivation to start high resolution wind field simulations The simulations are basically a refinement of global scale reanalysis data by means of high resolution simulations with an atmospheric mesoscale model using high-resolution terrain and land-use data. The 3-dimensional representation of the atmospheric state available every six hours at 2.5 degree resolution over the globe, known as NCAR/NCEP reanalysis data, forms the boundary conditions for continuous simulations with the non-hydrostatic atmospheric mesoscale model MM5. MM5 is nested in itself down to a horizontal resolution of 5 x 5 km². The simulation is performed for different European countries and covers the period 2000 to present and is continuously updated. Model variables are stored every 10 minutes for various heights. We have analysed the wind field primarily. The wind data set is consistent in space and time and provides information on the regional distribution of the long-term mean wind potential, the temporal variability of the wind potential, the vertical variation of the wind potential, and the temperature, and pressure distribution (air density). In the context of wind power these data are used • as an initial estimate of wind and energy potential • for the long-term correlation of wind measurements and turbine production data • to provide wind potential maps on a regional to country wide scale • to provide input data sets for simulation models • to determine the spatial correlation of the wind field in portfolio calculations • to calculate the wind turbine energy loss during prescribed downtimes • to provide information on the temporal variations of the wind and wind turbine energy production The time series of wind speed and wind direction are compared to measurements at offshore and onshore locations.
Stochastic Models for Precipitable Water in Convection
NASA Astrophysics Data System (ADS)
Leung, Kimberly
Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV in the sky were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility made the first systematic and high-resolution observations of PWV at Darwin, Australia since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to weather research forecasting (WRF) models. This important observed data at high resolution can capture the fractal behavior of PWV and enables stochastic exploration into the next generation of climate models which considers scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends. Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends. Results indicate that simulation of a highly volatile time series of observed PWV is certainly achievable and has potential to improve prediction capabilities in climate modeling. Further, this study demonstrates the feasibility of adding a mathematics- and statistics-based stochastic scheme to an existing deterministic parameterization to simulate observed fractal behavior.
NASA Astrophysics Data System (ADS)
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
Modeling fire behavior on tropical islands with high-resolution weather data
John W. Benoit; Francis M. Fujioka; David R. Weise
2009-01-01
In this study, we consider fire behavior simulation in tropical island scenarios such as Hawaii and Puerto Rico. The development of a system to provide real-time fire behavior prediction in Hawaii is discussed. This involves obtaining fuels and topography information at a fine scale, as well as supplying daily high-resolution weather forecast data for the area of...
NASA Astrophysics Data System (ADS)
Garibaldi, F.; Capuani, S.; Colilli, S.; Cosentino, L.; Cusanno, F.; De Leo, R.; Finocchiaro, P.; Foresta, M.; Giove, F.; Giuliani, F.; Gricia, M.; Loddo, F.; Lucentini, M.; Maraviglia, B.; Meddi, F.; Monno, E.; Musico, P.; Pappalardo, A.; Perrino, R.; Ranieri, A.; Rivetti, A.; Santavenere, F.; Tamma, C.
2013-02-01
Prostate cancer is the most common disease in men and the second leading cause of cancer death. Generic large instruments for diagnosis have sensitivity, spatial resolution, and contrast inferior with respect to dedicated prostate imagers. Multimodality imaging can play a significant role merging anatomical and functional details coming from simultaneous PET and MRI. The TOPEM project has the goal of designing, building, and testing an endorectal PET-TOF MRI probe. The performance is dominated by the detector close to the source. Results from simulation show spatial resolution of ∼1.5 mm for source distances up to 80 mm. The efficiency is significantly improved with respect to the external PET. Mini-detectors have been built and tested. We obtained, for the first time, to our best knowledge, timing resolution of <400 ps and at the same time Depth Of Interaction (DOI) resolution of 1 mm or less.
Survey of currently available high-resolution raster graphics systems
NASA Technical Reports Server (NTRS)
Jones, Denise R.
1987-01-01
Presented are data obtained on high-resolution raster graphics engines currently available on the market. The data were obtained through survey responses received from various vendors and also from product literature. The questionnaire developed for this survey was basically a list of characteristics desired in a high performance color raster graphics system which could perform real-time aircraft simulations. Several vendors responded to the survey, with most reporting on their most advanced high-performance, high-resolution raster graphics engine.
A Subsystem Test Bed for Chinese Spectral Radioheliograph
NASA Astrophysics Data System (ADS)
Zhao, An; Yan, Yihua; Wang, Wei
2014-11-01
The Chinese Spectral Radioheliograph is a solar dedicated radio interferometric array that will produce high spatial resolution, high temporal resolution, and high spectral resolution images of the Sun simultaneously in decimetre and centimetre wave range. Digital processing of intermediate frequency signal is an important part in a radio telescope. This paper describes a flexible and high-speed digital down conversion system for the CSRH by applying complex mixing, parallel filtering, and extracting algorithms to process IF signal at the time of being designed and incorporates canonic-signed digit coding and bit-plane method to improve program efficiency. The DDC system is intended to be a subsystem test bed for simulation and testing for CSRH. Software algorithms for simulation and hardware language algorithms based on FPGA are written which use less hardware resources and at the same time achieve high performances such as processing high-speed data flow (1 GHz) with 10 MHz spectral resolution. An experiment with the test bed is illustrated by using geostationary satellite data observed on March 20, 2014. Due to the easy alterability of the algorithms on FPGA, the data can be recomputed with different digital signal processing algorithms for selecting optimum algorithm.
High resolution global flood hazard map from physically-based hydrologic and hydraulic models.
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; McCollum, J.
2017-12-01
The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak corresponds to the return period corresponding to the hazard map being produced (e.g. 100 years, 500 years). Each numerical simulation models one river reach, except for the longest reaches which are split in smaller parts. Here we show results for selected river basins worldwide.
NASA Technical Reports Server (NTRS)
Wang, Xinghua; Wang, Bin; Bos, Philip J.; Anderson, James E.; Kujawinska, Malgorzata; Pouch, John; Miranda, Feliz
2004-01-01
In a 3-D display system based on an opto-electronic reconstruction of a digitally recorded hologram, the field of view of such a system is limited by the spatial resolution of the liquid crystal on silicon (LCOS) spatial light modular (SLM) used to perform the opto-electronic reconstruction. In this article, the special resolution limitation of LCOS SLM associated with the fringe field effect and interpixel coupling is determined by the liquid crystal detector simulation and the Finite Difference Time Domain (FDTD) simulation. The diffraction efficiency loss associated with the imperfection in the phase profile is studied with an example of opto-electronic reconstruction of an amplitude object. A high spatial resolution LCOS SLM with a wide reconstruction angle is proposed.
Impact Of Resolving Submesoscale Features On Modeling The Gulf Stream System
NASA Astrophysics Data System (ADS)
Chassignet, E.; Xu, X.
2016-02-01
Despite being one the best-known circulation pattern of the world ocean, the representation of the Gulf Stream, especially its energetic extension east of the New England Seamounts Chains in the western North Atlantic Ocean, has been a major challenge for ocean general circulation models even at eddy-rich resolutions. Here we show that, for the first time, a simulation of the North Atlantic circulation at 1/50° resolution realistically represents the narrow, energetic jet near 55°W when compared to observations, whereas similarly configured simulations at 1/25° and 1/12° resolution do not. This result highlights the importance of submesoscale features in driving the energetic Gulf Stream extension in the western North Atlantic. The results are discussed in terms of mesoscale and submesoscale energy power spectra.
NASA Astrophysics Data System (ADS)
Viebahn, Jan; von der Heydt, Anna S.; Dijkstra, Henk A.
2014-05-01
During the past 65 Million (Ma) years, Earth's climate has undergone a major change from warm 'greenhouse' to colder 'icehouse' conditions with extensive ice sheets in the polar regions of both hemispheres. The Eocene-Oligocene (~34 Ma) and Oligocene-Miocene (~23 Ma) boundaries reflect major transitions in Cenozoic global climate change. Proposed mechanisms of these transitions include reorganization of ocean circulation due to critical gateway opening/deepening, changes in atmospheric CO2-concentration, and feedback mechanisms related to land-ice formation. A long-standing hypothesis is that the formation of the Antarctic Circumpolar Current due to opening/deepening of Southern Ocean gateways led to glaciation of the Antarctic continent. However, while this hypothesis remains controversial, its assessment via coupled climate model simulations depends crucially on the spatial resolution in the ocean component. More precisely, only high-resolution modeling of the turbulent ocean circulation is capable of adequately describing reorganizations in the ocean flow field and related changes in turbulent heat transport. In this study, for the first time results of a high-resolution (0.1° horizontally) realistic global ocean model simulation with a closed Drake Passage are presented. Changes in global ocean temperatures, heat transport, and ocean circulation (e.g., Meridional Overturning Circulation and Antarctic Coastal Current) are established by comparison with an open Drake Passage high-resolution reference simulation. Finally, corresponding low-resolution simulations are also analyzed. The results highlight the essential impact of the ocean eddy field in palaeoclimatic change.
NASA Astrophysics Data System (ADS)
Mizyuk, Artem; Senderov, Maxim; Korotaev, Gennady
2016-04-01
Large number of numerical ocean models were implemented for the Black Sea basin during last two decades. They reproduce rather similar structure of synoptical variability of the circulation. Since 00-s numerical studies of the mesoscale structure are carried out using high performance computing (HPC). With the growing capacity of computing resources it is now possible to reconstruct the Black Sea currents with spatial resolution of several hundreds meters. However, how realistic these results can be? In the proposed study an attempt is made to understand which spatial scales are reproduced by ocean model in the Black Sea. Simulations are made using parallel version of NEMO (Nucleus for European Modelling of the Ocean). A two regional configurations with spatial resolutions 5 km and 2.5 km are described. Comparison of the SST from simulations with two spatial resolutions shows rather qualitative difference of the spatial structures. Results of high resolution simulation are compared also with satellite observations and observation-based products from Copernicus using spatial correlation and spectral analysis. Spatial scales of correlations functions for simulated and observed SST are rather close and differs much from satellite SST reanalysis. Evolution of spectral density for modelled SST and reanalysis showed agreed time periods of small scales intensification. Using of the spectral analysis for satellite measurements is complicated due to gaps. The research leading to this results has received funding from Russian Science Foundation (project № 15-17-20020)
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2017-04-01
In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. http://dx.doi.org/10.1175/JCLI-D-15-0746.1
One-way coupling of an atmospheric and a hydrologic model in Colorado
Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.
2006-01-01
This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. ?? 2006 American Meteorological Society.
Dynamic x-ray imaging of laser-driven nanoplasmas
NASA Astrophysics Data System (ADS)
Fennel, Thomas
2016-05-01
A major promise of current x-ray science at free electron lasers is the realization of unprecedented imaging capabilities for resolving the structure and ultrafast dynamics of matter with nanometer spatial and femtosecond temporal resolution or even below via single-shot x-ray diffraction. Laser-driven atomic clusters and nanoparticles provide an ideal platform for developing and demonstrating the required technology to extract the ultrafast transient spatiotemporal dynamics from the diffraction images. In this talk, the perspectives and challenges of dynamic x-ray imaging will be discussed using complete self-consistent microscopic electromagnetic simulations of IR pump x-ray probe imaging for the example of clusters. The results of the microscopic particle-in-cell simulations (MicPIC) enable the simulation-assisted reconstruction of corresponding experimental data. This capability is demonstrated by converting recently measured LCLS data into a ultrahigh resolution movie of laser-induced plasma expansion. Finally, routes towards reaching attosecond time resolution in the visualization of complex dynamical processes in matter by x-ray diffraction will be discussed.
Resolution enhancement using simultaneous couple illumination
NASA Astrophysics Data System (ADS)
Hussain, Anwar; Martínez Fuentes, José Luis
2016-10-01
A super-resolution technique based on structured illumination created by a liquid crystal on silicon spatial light modulator (LCOS-SLM) is presented. Single and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated by using numerical simulations. The resulting intensity images are recorded at a charged couple device (CCD) and stored in the computer memory for further processing. One dimension enhancement can be performed with only 15 images. Two dimensional complete improvement requires 153 different images. The resolution of the optical system is extended three times compared to the band limited system.
NASA Astrophysics Data System (ADS)
Brunner, K. N.; Bitzer, P. M.
2017-12-01
The electrical energy dissipated by lightning is a fundamental question in lightning physics and may be used in severe weather applications. However, the electrical energy, flash area/extent and spectral energy density (radiance) are all influenced by the geometry of the lightning channel. We present details of a Monte Carlo based model simulating the optical emission from lightning and compare with observations. Using time-of-arrival techniques and the electric field change measurements from the Huntsville Alabama Marx Meter Array (HAMMA), the 4D lightning channel is reconstructed. The located sources and lightning channel emit optical emission, calibrated by the ground based electric field, that scatters until absorbed or a cloud boundary is reached within the model. At cloud top, the simulation is gridded as LIS pixels (events) and contiguous events (groups). The radiance is related via the LIS calibration and the estimated lightning electrical energy is calculated at the LIS/GLM time resolution. Previous Monte Carlo simulations have relied on a simplified lightning channel and scattering medium. This work considers the cloud a stratified medium of graupel/ice and inhomogeneous at flash scale. The impact of cloud inhomogeneity on the scattered optical emission at cloud top and at the time resolution of LIS and GLM are also considered. The simulation results and energy metrics provide an estimation of the electrical energy using GLM and LIS on the International Space Station (ISS-LIS).
Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki
2018-05-01
We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.
Next-Generation Climate Modeling Science Challenges for Simulation, Workflow and Analysis Systems
NASA Astrophysics Data System (ADS)
Koch, D. M.; Anantharaj, V. G.; Bader, D. C.; Krishnan, H.; Leung, L. R.; Ringler, T.; Taylor, M.; Wehner, M. F.; Williams, D. N.
2016-12-01
We will present two examples of current and future high-resolution climate-modeling research that are challenging existing simulation run-time I/O, model-data movement, storage and publishing, and analysis. In each case, we will consider lessons learned as current workflow systems are broken by these large-data science challenges, as well as strategies to repair or rebuild the systems. First we consider the science and workflow challenges to be posed by the CMIP6 multi-model HighResMIP, involving around a dozen modeling groups performing quarter-degree simulations, in 3-member ensembles for 100 years, with high-frequency (1-6 hourly) diagnostics, which is expected to generate over 4PB of data. An example of science derived from these experiments will be to study how resolution affects the ability of models to capture extreme-events such as hurricanes or atmospheric rivers. Expected methods to transfer (using parallel Globus) and analyze (using parallel "TECA" software tools) HighResMIP data for such feature-tracking by the DOE CASCADE project will be presented. A second example will be from the Accelerated Climate Modeling for Energy (ACME) project, which is currently addressing challenges involving multiple century-scale coupled high resolution (quarter-degree) climate simulations on DOE Leadership Class computers. ACME is anticipating production of over 5PB of data during the next 2 years of simulations, in order to investigate the drivers of water cycle changes, sea-level-rise, and carbon cycle evolution. The ACME workflow, from simulation to data transfer, storage, analysis and publication will be presented. Current and planned methods to accelerate the workflow, including implementing run-time diagnostics, and implementing server-side analysis to avoid moving large datasets will be presented.
LaFontaine, Jacob H.; Jones, L. Elliott; Painter, Jaime A.
2017-12-29
A suite of hydrologic models has been developed for the Apalachicola-Chattahoochee-Flint River Basin (ACFB) as part of the National Water Census, a U.S. Geological Survey research program that focuses on developing new water accounting tools and assessing water availability and use at the regional and national scales. Seven hydrologic models were developed using the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, process-based system that simulates the effects of precipitation, temperature, land cover, and water use on basin hydrology. A coarse-resolution PRMS model was developed for the entire ACFB, and six fine-resolution PRMS models were developed for six subbasins of the ACFB. The coarse-resolution model was loosely coupled with a groundwater model to better assess the effects of water use on streamflow in the lower ACFB, a complex geologic setting with karst features. The PRMS coarse-resolution model was used to provide inputs of recharge to the groundwater model, which in turn provide simulations of groundwater flow that were aggregated with PRMS-based simulations of surface runoff and shallow-subsurface flow. Simulations without the effects of water use were developed for each model for at least the calendar years 1982–2012 with longer periods for the Potato Creek subbasin (1942–2012) and the Spring Creek subbasin (1952–2012). Water-use-affected flows were simulated for 2008–12. Water budget simulations showed heterogeneous distributions of precipitation, actual evapotranspiration, recharge, runoff, and storage change across the ACFB. Streamflow volume differences between no-water-use and water-use simulations were largest along the main stem of the Apalachicola and Chattahoochee River Basins, with streamflow percentage differences largest in the upper Chattahoochee and Flint River Basins and Spring Creek in the lower Flint River Basin. Water-use information at a shorter time step and a fully coupled simulation in the lower ACFB may further improve water availability estimates and hydrologic simulations in the basin.
NASA Astrophysics Data System (ADS)
Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo
2016-04-01
Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at a high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, the current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables a generation of rain fields at 102 m and minute scales in a fast and computer-efficient way matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assessing the role of stochastic climate variability of rainfall in flow response and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Junqi; Byrum, Karen; Demarteau, Marcel
Planar microchannel plate-based photodetector with bialkali photocathode is capable of fast and accurate time and position resolutions. A new 6 cm x 6 cm photodetector production facility was designed and built at Argonne National Laboratory. Small form-factor MCP-based photodetectors completely constructed of glass were designed and prototypes were successfully fabricated. Knudsen effusion cells were incorporated in the photocathode growth chamber to achieve uniform and high quantum efficiency hotocathodes. The thin film uniformity distribution was simulated and measured for an antimony film deposition, showing uniformity of better than 10%. Several prototype devices with bialkali photocathodes have been fabricated with the describedmore » system and their characteristics were evaluated in the large signal (multi-PE) limit. A typical prototype device exhibits time-of-flight resolution of ~ 27 psec and differential time resolution of ~ 9 psec, corresponding to spatial resolution of ~ 0.65 mm.« less
High Resolution Forecasting System for Mountain area based on KLAPS-WRF
NASA Astrophysics Data System (ADS)
Chun, Ji Min; Rang Kim, Kyu; Lee, Seon-Yong; Kang, Wee Soo; Park, Jong Sun; Yi, Chae Yeon; Choi, Young-jean; Park, Eun Woo; Hong, Soon Sung; Jung, Hyun-Sook
2013-04-01
This paper reviews the results of recent observations and simulations on the thermal belt and cold air drainage, which are outstanding in local climatic phenomena in mountain areas. In a mountain valley, cold air pool and thermal belt were simulated with the Weather and Research Forecast (WRF) model and the Korea Local Analysis and Prediction System (KLAPS) to determine the impacts of planetary boundary layer (PBL) schemes and topography resolution on model performance. Using the KLAPS-WRF models, an information system was developed for 12 hour forecasting of cold air damage in orchard. This system was conducted on a three level nested grid from 1 km to 111 m horizontal resolution. Results of model runs were verified by the data from automated weather stations, which were installed at twelve sites in a valley at Yeonsuri, Yangpyeonggun, Gyeonggido to measure temperature and wind speed and direction during March to May 2012. The potential of the numerical model to simulate these local features was found to be dependent on the planetary boundary layer schemes. Statistical verification results indicate that Mellor-Yamada-Janjic (MYJ) PBL scheme was in good agreement with night time temperature, while the no-PBL scheme produced predictions similar to the day time temperature observation. Although the KLAPS-WRF system underestimates temperature in mountain areas and overestimates wind speed, it produced an accurate description of temperature, with an RMSE of 1.67 ˚C in clear daytime. Wind speed and direction were not forecasted well in precision (RMSE: 5.26 m/s and 10.12 degree). It might have been caused by the measurement uncertainty and spatial variability. Additionally, the performance of KLAPS-WRF was performed to evaluate for different terrain resolution: Topography data were improved from USGS (United States Geological Survey) 30" to NGII (National Geographic Information Institute) 10 m. The simulated results were quantitatively compared to observations and there was a significant improvement (RMSE: 2.06 ˚C -> 1.73 ˚C) in the temperature prediction in the study area. The results will provide useful guidance of grid size selection on high resolution simulation over the mountain regions in Korea.
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castello, Marco; DIBRIS, University of Genoa, Via Opera Pia 13, Genoa 16145; Diaspro, Alberto
2014-12-08
Time-gated detection, namely, only collecting the fluorescence photons after a time-delay from the excitation events, reduces complexity, cost, and illumination intensity of a stimulated emission depletion (STED) microscope. In the gated continuous-wave- (CW-) STED implementation, the spatial resolution improves with increased time-delay, but the signal-to-noise ratio (SNR) reduces. Thus, in sub-optimal conditions, such as a low photon-budget regime, the SNR reduction can cancel-out the expected gain in resolution. Here, we propose a method which does not discard photons, but instead collects all the photons in different time-gates and recombines them through a multi-image deconvolution. Our results, obtained on simulated andmore » experimental data, show that the SNR of the restored image improves relative to the gated image, thereby improving the effective resolution.« less
NASA Astrophysics Data System (ADS)
Mereuta, Loredana; Roy, Mahua; Asandei, Alina; Lee, Jong Kook; Park, Yoonkyung; Andricioaei, Ioan; Luchian, Tudor
2014-01-01
The microscopic details of how peptides translocate one at a time through nanopores are crucial determinants for transport through membrane pores and important in developing nano-technologies. To date, the translocation process has been too fast relative to the resolution of the single molecule techniques that sought to detect its milestones. Using pH-tuned single-molecule electrophysiology and molecular dynamics simulations, we demonstrate how peptide passage through the α-hemolysin protein can be sufficiently slowed down to observe intermediate single-peptide sub-states associated to distinct structural milestones along the pore, and how to control residence time, direction and the sequence of spatio-temporal state-to-state dynamics of a single peptide. Molecular dynamics simulations of peptide translocation reveal the time- dependent ordering of intermediate structures of the translocating peptide inside the pore at atomic resolution. Calculations of the expected current ratios of the different pore-blocking microstates and their time sequencing are in accord with the recorded current traces.
Real-time flight conflict detection and release based on Multi-Agent system
NASA Astrophysics Data System (ADS)
Zhang, Yifan; Zhang, Ming; Yu, Jue
2018-01-01
This paper defines two-aircrafts, multi-aircrafts and fleet conflict mode, sets up space-time conflict reservation on the basis of safety interval and conflict warning time in three-dimension. Detect real-time flight conflicts combined with predicted flight trajectory of other aircrafts in the same airspace, and put forward rescue resolutions for the three modes respectively. When accorded with the flight conflict conditions, determine the conflict situation, and enter the corresponding conflict resolution procedures, so as to avoid the conflict independently, as well as ensure the flight safety of aimed aircraft. Lastly, the correctness of model is verified with numerical simulation comparison.
Vizualization Challenges of a Subduction Simulation Using One Billion Markers
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Gerya, T. V.; Yuen, D. A.
2004-12-01
Recent advances in supercomputing technology have permitted us to study the multiscale, multicomponent fluid dynamics of subduction zones at unprecedented resolutions down to about the length of a football field. We have performed numerical simulations using one billion tracers over a grid of about 80 thousand points in two dimensions. These runs have been performed using a thermal-chemical simulation that accounts for hydration and partial melting in the thermal, mechanical, petrological, and rheological domains. From these runs, we have observed several geophysically interesting phenomena including the development of plumes with unmixed mantle composition as well as plumes with mixed mantle/crust components. Unmixed plumes form at depths greater than 100km (5-10 km above the upper interface of subducting slab) and consist of partially molten wet peridotite. Mixed plumes form at lesser depth directly from the subducting slab and contain partially molten hydrated oceanic crust and sediments. These high resolution simulations have also spurred the development of new visualization methods. We have created a new web-based interface to data from our subduction simulation and other high-resolution 2D data that uses an hierarchical data format to achieve response times of less than one second when accessing data files on the order of 3GB. This interface, WEB-IS4, uses a Javascript and HTML frontend coupled with a C and PHP backend and allows the user to perform region of interest zooming, real-time colormap selection, and can return relevant statistics relating to the data in the region of interest.
NASA Astrophysics Data System (ADS)
Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.
2015-12-01
The zone in which river water and groundwater mix plays an important role in natural ecosystems as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U. S. Department of Energy's Hanford Site. The region of interest is a 400 x 400 x 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are simulated using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.
3D visualization of ultra-fine ICON climate simulation data
NASA Astrophysics Data System (ADS)
Röber, Niklas; Spickermann, Dela; Böttinger, Michael
2016-04-01
Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well for high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy resolving (<10km) ocean simulations, as well as for ultra-fine cloud resolving (120m) atmospheric simulations. This results in very large 3D time dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the free available visualization software ParaView and Vapor, which allows us to read and handle that much data. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.
Fusing Unmanned Aerial Vehicle Imagery with High Resolution Hydrologic Modeling (Invited)
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Pierini, N.; Schreiner-McGraw, A.; Anderson, C.; Saripalli, S.; Rango, A.
2013-12-01
After decades of development and applications, high resolution hydrologic models are now common tools in research and increasingly used in practice. More recently, high resolution imagery from unmanned aerial vehicles (UAVs) that provide information on land surface properties have become available for civilian applications. Fusing the two approaches promises to significantly advance the state-of-the-art in terms of hydrologic modeling capabilities. This combination will also challenge assumptions on model processes, parameterizations and scale as land surface characteristics (~0.1 to 1 m) may now surpass traditional model resolutions (~10 to 100 m). Ultimately, predictions from high resolution hydrologic models need to be consistent with the observational data that can be collected from UAVs. This talk will describe our efforts to develop, utilize and test the impact of UAV-derived topographic and vegetation fields on the simulation of two small watersheds in the Sonoran and Chihuahuan Deserts at the Santa Rita Experimental Range (Green Valley, AZ) and the Jornada Experimental Range (Las Cruces, NM). High resolution digital terrain models, image orthomosaics and vegetation species classification were obtained from a fixed wing airplane and a rotary wing helicopter, and compared to coarser analyses and products, including Light Detection and Ranging (LiDAR). We focus the discussion on the relative improvements achieved with UAV-derived fields in terms of terrain-hydrologic-vegetation analyses and summer season simulations using the TIN-based Real-time Integrated Basin Simulator (tRIBS) model. Model simulations are evaluated at each site with respect to a high-resolution sensor network consisting of six rain gauges, forty soil moisture and temperature profiles, four channel runoff flumes, a cosmic-ray soil moisture sensor and an eddy covariance tower over multiple summer periods. We also discuss prospects for the fusion of high resolution models with novel observations from UAVs, including synthetic aperture radar and multispectral imagery.
Quantitative Evaluation of PET Respiratory Motion Correction Using MR Derived Simulated Data
NASA Astrophysics Data System (ADS)
Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.
2015-12-01
The impact of respiratory motion correction on quantitative accuracy in PET imaging is evaluated using simulations for variable patient specific characteristics such as tumor uptake and respiratory pattern. Respiratory patterns from real patients were acquired, with long quiescent motion periods (type-1) as commonly observed in most patients and with long-term amplitude variability as is expected under conditions of difficult breathing (type-2). The respiratory patterns were combined with an MR-derived motion model to simulate real-time 4-D PET-MR datasets. Lung and liver tumors were simulated with diameters of 10 and 12 mm and tumor-to-background ratio ranging from 3:1 to 6:1. Projection data for 6- and 3-mm PET resolution were generated for the Philips Gemini scanner and reconstructed without and with motion correction using OSEM (2 iterations, 23 subsets). Motion correction was incorporated into the reconstruction process based on MR-derived motion fields. Tumor peak standardized uptake values (SUVpeak) were calculated from 30 noise realizations. Respiratory motion correction improves the quantitative performance with the greatest benefit observed for patients of breathing type-2. For breathing type-1 after applying motion correction, SUVpeak of 12-mm liver tumor with 6:1 contrast was increased by 46% for a current PET resolution (i.e., 6 mm) and by 47% for a higher PET resolution (i.e., 3 mm). Furthermore, the results of this study indicate that the benefit of higher scanner resolution is small unless motion correction is applied. In particular, for large liver tumor (12 mm) with low contrast (3:1) after motion correction, the SUVpeak was increased by 34% for 6-mm resolution and by 50% for a higher PET resolution (i.e., 3-mm resolution. This investigation indicates that there is a high impact of respiratory motion correction on tumor quantitative accuracy and that motion correction is important in order to benefit from the increased resolution of future PET scanners.
Spatial heterogeneity of leaf area index across scales from simulation and remote sensing
NASA Astrophysics Data System (ADS)
Reichenau, Tim G.; Korres, Wolfgang; Montzka, Carsten; Schneider, Karl
2016-04-01
Leaf area index (LAI, single sided leaf area per ground area) influences mass and energy exchange of vegetated surfaces. Therefore LAI is an input variable for many land surface schemes of coupled large scale models, which do not simulate LAI. Since these models typically run on rather coarse resolution grids, LAI is often inferred from coarse resolution remote sensing. However, especially in agriculturally used areas, a grid cell of these products often covers more than a single land-use. In that case, the given LAI does not apply to any single land-use. Therefore, the overall spatial heterogeneity in these datasets differs from that on resolutions high enough to distinguish areas with differing land-use. Detailed process-based plant growth models simulate LAI for separate plant functional types or specific species. However, limited availability of observations causes reduced spatial heterogeneity of model input data (soil, weather, land-use). Since LAI is strongly heterogeneous in space and time and since processes depend on LAI in a nonlinear way, a correct representation of LAI spatial heterogeneity is also desirable on coarse resolutions. The current study assesses this issue by comparing the spatial heterogeneity of LAI from remote sensing (RapidEye) and process-based simulations (DANUBIA simulation system) across scales. Spatial heterogeneity is assessed by analyzing LAI frequency distributions (spatial variability) and semivariograms (spatial structure). Test case is the arable land in the fertile loess plain of the Rur catchment near the Germany-Netherlands border.
Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.
Dematté, Lorenzo
2012-01-01
Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space is becoming more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation could be time consuming, especially if we want to capture the systems behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver the promise done by systems biology to be able to understand a system as whole, we need to scale up the size of models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely diffused algorithm for stochastic simulation of chemical reactions with spatial resolution and single molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computational demanding steps (computation of diffusion, unimolecular, and bimolecular reaction, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel on each molecule of the system. The implementation offers good speed-ups and real time, high quality graphics output
Resolution requirements for numerical simulations of transition
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Krist, Steven E.; Hussaini, M. Yousuff
1989-01-01
The resolution requirements for direct numerical simulations of transition to turbulence are investigated. A reliable resolution criterion is determined from the results of several detailed simulations of channel and boundary-layer transition.
Sarrigiannis, Ptolemaios G; Zhao, Yifan; Wei, Hua-Liang; Billings, Stephen A; Fotheringham, Jayne; Hadjivassiliou, Marios
2014-01-01
To introduce a new method of quantitative EEG analysis in the time domain, the error reduction ratio (ERR)-causality test. To compare performance against cross-correlation and coherence with phase measures. A simulation example was used as a gold standard to assess the performance of ERR-causality, against cross-correlation and coherence. The methods were then applied to real EEG data. Analysis of both simulated and real EEG data demonstrates that ERR-causality successfully detects dynamically evolving changes between two signals, with very high time resolution, dependent on the sampling rate of the data. Our method can properly detect both linear and non-linear effects, encountered during analysis of focal and generalised seizures. We introduce a new quantitative EEG method of analysis. It detects real time levels of synchronisation in the linear and non-linear domains. It computes directionality of information flow with corresponding time lags. This novel dynamic real time EEG signal analysis unveils hidden neural network interactions with a very high time resolution. These interactions cannot be adequately resolved by the traditional methods of coherence and cross-correlation, which provide limited results in the presence of non-linear effects and lack fidelity for changes appearing over small periods of time. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
TDC-based readout electronics for real-time acquisition of high resolution PET bio-images
NASA Astrophysics Data System (ADS)
Marino, N.; Saponara, S.; Ambrosi, G.; Baronti, F.; Bisogni, M. G.; Cerello, P.,; Ciciriello, F.; Corsi, F.; Fanucci, L.; Ionica, M.; Licciulli, F.; Marzocca, C.; Morrocchi, M.; Pennazio, F.; Roncella, R.; Santoni, C.; Wheadon, R.; Del Guerra, A.
2013-02-01
Positron emission tomography (PET) is a clinical and research tool for in vivo metabolic imaging. The demand for better image quality entails continuous research to improve PET instrumentation. In clinical applications, PET image quality benefits from the time of flight (TOF) feature. Indeed, by measuring the photons arrival time on the detectors with a resolution less than 100 ps, the annihilation point can be estimated with centimeter resolution. This leads to better noise level, contrast and clarity of detail in the images either using analytical or iterative reconstruction algorithms. This work discusses a silicon photomultiplier (SiPM)-based magnetic-field compatible TOF-PET module with depth of interaction (DOI) correction. The detector features a 3D architecture with two tiles of SiPMs coupled to a single LYSO scintillator on both its faces. The real-time front-end electronics is based on a current-mode ASIC where a low input impedance, fast current buffer allows achieving the required time resolution. A pipelined time to digital converter (TDC) measures and digitizes the arrival time and the energy of the events with a timestamp of 100 ps and 400 ps, respectively. An FPGA clusters the data and evaluates the DOI, with a simulated z resolution of the PET image of 1.4 mm FWHM.
Wen, Qiuting; Kodiweera, Chandana; Dale, Brian M; Shivraman, Giri; Wu, Yu-Chien
2018-01-01
To accelerate high-resolution diffusion imaging, rotating single-shot acquisition (RoSA) with composite reconstruction is proposed. Acceleration was achieved by acquiring only one rotating single-shot blade per diffusion direction, and high-resolution diffusion-weighted (DW) images were reconstructed by using similarities of neighboring DW images. A parallel imaging technique was implemented in RoSA to further improve the image quality and acquisition speed. RoSA performance was evaluated by simulation and human experiments. A brain tensor phantom was developed to determine an optimal blade size and rotation angle by considering similarity in DW images, off-resonance effects, and k-space coverage. With the optimal parameters, RoSA MR pulse sequence and reconstruction algorithm were developed to acquire human brain data. For comparison, multishot echo planar imaging (EPI) and conventional single-shot EPI sequences were performed with matched scan time, resolution, field of view, and diffusion directions. The simulation indicated an optimal blade size of 48 × 256 and a 30 ° rotation angle. For 1 × 1 mm 2 in-plane resolution, RoSA was 12 times faster than the multishot acquisition with comparable image quality. With the same acquisition time as SS-EPI, RoSA provided superior image quality and minimum geometric distortion. RoSA offers fast, high-quality, high-resolution diffusion images. The composite image reconstruction is model-free and compatible with various diffusion computation approaches including parametric and nonparametric analyses. Magn Reson Med 79:264-275, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang
2018-01-25
Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult, and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic and real-time interactive simulator for training colonoscopy procedure is presented, which can even include polypectomy simulation. Our approach models the colonoscopy as thick flexible elastic rods with different resolutions which are dynamically adaptive to the curvature of the colon. More material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. We present a simulator for training colonoscopy procedure. In addition, we propose a set of key aspects of our simulator that give fast, high fidelity feedback to trainees. We also conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make a productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of an heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach capabilities, by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
NASA Astrophysics Data System (ADS)
Zhuo, Congshan; Zhong, Chengwen
2016-11-01
In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model was employed in the present FMLB-LES framework, which had been proved to be capable of predicting turbulent near-wall region accurately. The fully developed turbulent channel flows were performed at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including mean stream velocity profile, Reynolds stress profile and root-mean-square velocity fluctuations greed well with the LES results of multiple-relaxation-time (MRT) LB model, and some discrepancies in comparison with those direct numerical simulation (DNS) data of Kim et al. was also observed due to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES simulation, a DNS simulation on a finer gird was also implemented by present FMLB-D3Q19 model. Comparisons of detailed computed various turbulence statistics with available benchmark data of DNS showed quite well agreement.
NASA Astrophysics Data System (ADS)
Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.
2013-12-01
Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with similar features to Earth, it became clear that some solid planets and satellites once had water flows or pseudo hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activities, the valley and channels on the martian surface were investigated by a number of remote sensing and in-suit measurements. Among all available data sets, the stereo DTM and ortho from various successful orbital sensor, such as High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are being most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis, with stereo DTM and ortho images over fluvial areas, has some limitations, and so a quantitative modeling method utilizing various spatial resolution DTMs is required. Thus in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high accuracy simulation together with 150-1.2m DTMs over test sites including Athabasca and Bahram valles. For application to a martian surface, technically the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2 and the Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior compared with the other research cases using a single DTM source for hydraulics analysis. HRSC DTMs, covering 50-150m resolutions was used to trace rough routes of water flows for extensive target areas. After then, refinements through hydraulics simulations with CTX DTMs (~12-18m resolution) and HiRISE DTMs (~1- 4m resolution) were conducted by employing the output of HRSC simulations as the initial conditions. Thus even a few high and very high resolution stereo DTMs coverage enabled the performance of a high precision hydraulics analysis for reconstructing a whole fluvial event. In this manner, useful information to identify the characteristics of martian fluvial activities, such as water depth along the time line, flow direction, and travel time, were successfully retrieved with each target tributary. Together with all above useful outputs of hydraulics analysis, the local roughness and photogrammetric control of the stereo DTMs appeared to be crucial elements for accurate fluvial simulation. The potential of this study should be further explored for its application to the other extraterrestrial bodies where fluvial activity once existed, as well as the major martian channel and valleys.
Simulation of a 7.7 MW onshore wind farm with the Actuator Line Model
NASA Astrophysics Data System (ADS)
Guggeri, A.; Draper, M.; Usera, G.
2017-05-01
Recently, the Actuator Line Model (ALM) has been evaluated with coarser resolution and larger time steps than what is generally recommended, taking into account an atmospheric sheared and turbulent inflow condition. The aim of the present paper is to continue these studies, assessing the capability of the ALM to represent the wind turbines’ interactions in an onshore wind farm. The ‘Libertad’ wind farm, which consists of four 1.9MW Vestas V100 wind turbines, was simulated considering different wind directions, and the results were compared with the wind farm SCADA data, finding good agreement between them. A sensitivity analysis was performed to evaluate the influence of the spatial resolution, finding acceptable agreement, although some differences were found. It is believed that these differences are due to the characteristics of the different Atmospheric Boundary Layer (ABL) simulations taken as inflow condition (precursor simulations).
NASA Astrophysics Data System (ADS)
Lee, Huikyo; Waliser, Duane E.; Ferraro, Robert; Iguchi, Takamichi; Peters-Lidard, Christa D.; Tian, Baijun; Loikith, Paul C.; Wright, Daniel B.
2017-07-01
Accurate simulation of extreme precipitation events remains a challenge in climate models. This study utilizes hourly precipitation data from ground stations and satellite instruments to evaluate rainfall characteristics simulated by the NASA-Unified Weather Research and Forecasting (NU-WRF) regional climate model at horizontal resolutions of 4, 12, and 24 km over the Great Plains of the United States. We also examined the sensitivity of the simulated precipitation to different spectral nudging approaches and the cumulus parameterizations. The rainfall characteristics in the observations and simulations were defined as an hourly diurnal cycle of precipitation and a joint probability distribution function (JPDF) between duration and peak intensity of precipitation events over the Great Plains in summer. We calculated a JPDF for each data set and the overlapping area between observed and simulated JPDFs to measure the similarity between the two JPDFs. Comparison of the diurnal precipitation cycles between observations and simulations does not reveal the added value of high-resolution simulations. However, the performance of NU-WRF simulations measured by the JPDF metric strongly depends on horizontal resolution. The simulation with the highest resolution of 4 km shows the best agreement with the observations in simulating duration and intensity of wet spells. Spectral nudging does not affect the JPDF significantly. The effect of cumulus parameterizations on the JPDFs is considerable but smaller than that of horizontal resolution. The simulations with lower resolutions of 12 and 24 km show reasonable agreement but only with the high-resolution observational data that are aggregated into coarse resolution and spatially averaged.
Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.
NASA Astrophysics Data System (ADS)
Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric
2016-04-01
SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.
Exploring connectivity with large-scale Granger causality on resting-state functional MRI.
DSouza, Adora M; Abidin, Anas Z; Leistritz, Lutz; Wismüller, Axel
2017-08-01
Large-scale Granger causality (lsGC) is a recently developed, resting-state functional MRI (fMRI) connectivity analysis approach that estimates multivariate voxel-resolution connectivity. Unlike most commonly used multivariate approaches, which establish coarse-resolution connectivity by aggregating voxel time-series avoiding an underdetermined problem, lsGC estimates voxel-resolution, fine-grained connectivity by incorporating an embedded dimension reduction. We investigate application of lsGC on realistic fMRI simulations, modeling smoothing of neuronal activity by the hemodynamic response function and repetition time (TR), and empirical resting-state fMRI data. Subsequently, functional subnetworks are extracted from lsGC connectivity measures for both datasets and validated quantitatively. We also provide guidelines to select lsGC free parameters. Results indicate that lsGC reliably recovers underlying network structure with area under receiver operator characteristic curve (AUC) of 0.93 at TR=1.5s for a 10-min session of fMRI simulations. Furthermore, subnetworks of closely interacting modules are recovered from the aforementioned lsGC networks. Results on empirical resting-state fMRI data demonstrate recovery of visual and motor cortex in close agreement with spatial maps obtained from (i) visuo-motor fMRI stimulation task-sequence (Accuracy=0.76) and (ii) independent component analysis (ICA) of resting-state fMRI (Accuracy=0.86). Compared with conventional Granger causality approach (AUC=0.75), lsGC produces better network recovery on fMRI simulations. Furthermore, it cannot recover functional subnetworks from empirical fMRI data, since quantifying voxel-resolution connectivity is not possible as consequence of encountering an underdetermined problem. Functional network recovery from fMRI data suggests that lsGC gives useful insight into connectivity patterns from resting-state fMRI at a multivariate voxel-resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
Resolution Enhancement In Ultrasonic Imaging By A Time-Varying Filter
NASA Astrophysics Data System (ADS)
Ching, N. H.; Rosenfeld, D.; Braun, M.
1987-09-01
The study reported here investigates the use of a time-varying filter to compensate for the spreading of ultrasonic pulses due to the frequency dependence of attenuation by tissues. The effect of this pulse spreading is to degrade progressively the axial resolution with increasing depth. The form of compensation required to correct for this effect is impossible to realize exactly. A novel time-varying filter utilizing a bank of bandpass filters is proposed as a realizable approximation of the required compensation. The performance of this filter is evaluated by means of a computer simulation. The limits of its application are discussed. Apart from improving the axial resolution, and hence the accuracy of axial measurements, the compensating filter could be used in implementing tissue characterization algorithms based on attenuation data.
NASA Astrophysics Data System (ADS)
Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan
2017-10-01
Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist.These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
Effects of Finite Element Resolution in the Simulation of Magnetospheric Particle Motion
NASA Technical Reports Server (NTRS)
Hansen, Richard
2006-01-01
This document describes research done in conjunction with a degree program. The purpose of the research was to compare particle trajectories in a specified set of global electric and magnetic fields; to study the effect of mesh spacing, resulting in an evaluation of adequate spacing resolution; and to study time-dependent fields in the context of substorm dipolarizations of the magnetospheric tail.
NASA Astrophysics Data System (ADS)
Zarzycki, C. M.; Gettelman, A.; Callaghan, P.
2017-12-01
Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full physics configurations to evaluate variable mesh simulations against uniform high and uniform low resolution simulations at resolutions down to 15km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to sensitivity to timestep and interactions between deep convection and large scale condensation, expected from the closure methods. The resulting full physics model produces a comparable climate to the global low resolution mesh and similar high frequency statistics in the high resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g. summertime JJA surface Temp). The simulations are able to reproduce uniform high resolution results, making them an effective tool for regional climate studies and are available in CESM2.
Improved spatial resolution in PET scanners using sampling techniques
Surti, Suleman; Scheuermann, Ryan; Werner, Matthew E.; Karp, Joel S.
2009-01-01
Increased focus towards improved detector spatial resolution in PET has led to the use of smaller crystals in some form of light sharing detector design. In this work we evaluate two sampling techniques that can be applied during calibrations for pixelated detector designs in order to improve the reconstructed spatial resolution. The inter-crystal positioning technique utilizes sub-sampling in the crystal flood map to better sample the Compton scatter events in the detector. The Compton scatter rejection technique, on the other hand, rejects those events that are located further from individual crystal centers in the flood map. We performed Monte Carlo simulations followed by measurements on two whole-body scanners for point source data. The simulations and measurements were performed for scanners using scintillators with Zeff ranging from 46.9 to 63 for LaBr3 and LYSO, respectively. Our results show that near the center of the scanner, inter-crystal positioning technique leads to a gain of about 0.5-mm in reconstructed spatial resolution (FWHM) for both scanner designs. In a small animal LYSO scanner the resolution improves from 1.9-mm to 1.6-mm with the inter-crystal technique. The Compton scatter rejection technique shows higher gains in spatial resolution but at the cost of reduction in scanner sensitivity. The inter-crystal positioning technique represents a modest acquisition software modification for an improvement in spatial resolution, but at a cost of potentially longer data correction and reconstruction times. The Compton scatter rejection technique, while also requiring a modest acquisition software change with no increased data correction and reconstruction times, will be useful in applications where the scanner sensitivity is very high and larger improvements in spatial resolution are desirable. PMID:19779586
NASA Astrophysics Data System (ADS)
Bastin, Sophie; Champollion, Cédric; Bock, Olivier; Drobinski, Philippe; Masson, Frédéric
2005-03-01
Global Positioning System (GPS) tomography analyses of water vapor, complemented by high-resolution numerical simulations are used to investigate a Mistral/sea breeze event in the region of Marseille, France, during the ESCOMPTE experiment. This is the first time GPS tomography has been used to validate the three-dimensional water vapor concentration from numerical simulation, and to analyze a small-scale meteorological event. The high spatial and temporal resolution of GPS analyses provides a unique insight into the evolution of the vertical and horizontal distribution of water vapor during the Mistral/sea-breeze transition.
NASA Astrophysics Data System (ADS)
Pradhan, Aniruddhe; Akhavan, Rayhaneh
2017-11-01
Effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to Δ+ <= 2 required in DNS. At larger grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds Number of 7860 (Reτ 442) with Δ+ = 4 and no-model, DSM and WALE were 1.7%, 2.6%, 3.1% with SRT, and 8.3% 7.5% 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydrationmore » shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.« less
Retrieved Products from Simulated Hyperspectral Observations of a Hurricane
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John
2015-01-01
Demonstrate via Observing System Simulation Experiments (OSSEs) the potential utility of flying high spatial resolution AIRS class IR sounders on future LEO and GEO missions.The study simulates and analyzes radiances for 3 sounders with AIRS spectral and radiometric properties on different orbits with different spatial resolutions: 1) Control run 13 kilometers AIRS spatial resolution at nadir on LEO in Aqua orbit; 2) 2 kilometer spatial resolution LEO sounder at nadir ARIES; 3) 5 kilometers spatial resolution sounder on a GEO orbit, radiances simulated every 72 minutes.
Spectral decomposition of internal gravity wave sea surface height in global models
NASA Astrophysics Data System (ADS)
Savage, Anna C.; Arbic, Brian K.; Alford, Matthew H.; Ansong, Joseph K.; Farrar, J. Thomas; Menemenlis, Dimitris; O'Rourke, Amanda K.; Richman, James G.; Shriver, Jay F.; Voet, Gunnar; Wallcraft, Alan J.; Zamudio, Luis
2017-10-01
Two global ocean models ranging in horizontal resolution from 1/12° to 1/48° are used to study the space and time scales of sea surface height (SSH) signals associated with internal gravity waves (IGWs). Frequency-horizontal wavenumber SSH spectral densities are computed over seven regions of the world ocean from two simulations of the HYbrid Coordinate Ocean Model (HYCOM) and three simulations of the Massachusetts Institute of Technology general circulation model (MITgcm). High wavenumber, high-frequency SSH variance follows the predicted IGW linear dispersion curves. The realism of high-frequency motions (>0.87 cpd) in the models is tested through comparison of the frequency spectral density of dynamic height variance computed from the highest-resolution runs of each model (1/25° HYCOM and 1/48° MITgcm) with dynamic height variance frequency spectral density computed from nine in situ profiling instruments. These high-frequency motions are of particular interest because of their contributions to the small-scale SSH variability that will be observed on a global scale in the upcoming Surface Water and Ocean Topography (SWOT) satellite altimetry mission. The variance at supertidal frequencies can be comparable to the tidal and low-frequency variance for high wavenumbers (length scales smaller than ˜50 km), especially in the higher-resolution simulations. In the highest-resolution simulations, the high-frequency variance can be greater than the low-frequency variance at these scales.
NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansteen, V.; Pontieu, B. De; Carlsson, M.
2015-10-01
Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When including a domain from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models, with different simulated areas, magnetic field strength and topology, and numerical resolution. These are compared in order to characterize the heating in the 3D-MHD simulations which self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that heating is episodic and highly structured in space, but occurs along loop-shaped structures, andmore » moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, this process occurring more readily in the higher-resolution model leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local, heating shows timescales of the order of two minutes in one model and one minutes in the other, higher-resolution, model.« less
Surface Dimming by the 2013 Rim Fire Simulated by a Sectional Aerosol Model
NASA Technical Reports Server (NTRS)
Yu, Pengfei; Toon, Owen B.; Bardeen, Charles G; Bucholtz, Anthony; Rosenlof, Karen; Saide, Pablo E.; Da Silva, Arlindo M.; Ziemba, Luke D.; Thornhill, Kenneth L.; Jimenez, Jose-Luis;
2016-01-01
The Rim Fire of 2013, the third largest area burned by fire recorded in California history, is simulated by a climate model coupled with a size-resolved aerosol model. Modeled aerosol mass, number and particle size distribution are within variability of data obtained from multiple airborne in-situ measurements. Simulations suggest Rim Fire smoke may block 4-6 of sunlight energy reaching the surface, with a dimming efficiency around 120-150 W m(exp -2) per unit aerosol optical depth in the mid-visible at 13:00-15:00 local time. Underestimation of simulated smoke single scattering albedo at mid-visible by 0.04 suggests the model overestimates either the particle size or the absorption due to black carbon. This study shows that exceptional events like the 2013 Rim Fire can be simulated by a climate model with one-degree resolution with overall good skill, though that resolution is still not sufficient to resolve the smoke peak near the source region.
Surface dimming by the 2013 Rim Fire simulated by a sectional aerosol model.
Yu, Pengfei; Toon, Owen B; Bardeen, Charles G; Bucholtz, Anthony; Rosenlof, Karen H; Saide, Pablo E; Da Silva, Arlindo; Ziemba, Luke D; Thornhill, Kenneth L; Jimenez, Jose-Luis; Campuzano-Jost, Pedro; Schwarz, Joshua P; Perring, Anne E; Froyd, Karl D; Wagner, N L; Mills, Michael J; Reid, Jeffrey S
2016-06-27
The Rim Fire of 2013, the third largest area burned by fire recorded in California history, is simulated by a climate model coupled with a size-resolved aerosol model. Modeled aerosol mass, number, and particle size distribution are within variability of data obtained from multiple-airborne in situ measurements. Simulations suggest that Rim Fire smoke may block 4-6% of sunlight energy reaching the surface, with a dimming efficiency around 120-150 W m -2 per unit aerosol optical depth in the midvisible at 13:00-15:00 local time. Underestimation of simulated smoke single scattering albedo at midvisible by 0.04 suggests that the model overestimates either the particle size or the absorption due to black carbon. This study shows that exceptional events like the 2013 Rim Fire can be simulated by a climate model with 1° resolution with overall good skill, although that resolution is still not sufficient to resolve the smoke peak near the source region.
FASTPM: a new scheme for fast simulations of dark matter and haloes
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-12-01
We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposing scheme, FASTPM scales extremely well with a very large number of CPUs. In contrast to Comoving-Lagrangian (COLA) approach, we do not require to split the force or track separately the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different number of steps (Ns) and force resolution factor (B) against three benchmarks: halo mass function from friends-of-friends halo finder; halo and dark matter power spectrum; and cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, Ns = 10 and B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
Super-Resolution Algorithm in Cumulative Virtual Blanking
NASA Astrophysics Data System (ADS)
Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.
2008-11-01
The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base-Stations herability is solved using Cumulative Virtual Blanking. A simple simulator is presented using DS-SS signal. The results show that MUSIC algorithm improves the time delay estimation in both the cases whether or not Cumulative Virtual Blanking was carried out.
Improving the resolution for Lamb wave testing via a smoothed Capon algorithm
NASA Astrophysics Data System (ADS)
Cao, Xuwei; Zeng, Liang; Lin, Jing; Hua, Jiadong
2018-04-01
Lamb wave testing is promising for damage detection and evaluation in large-area structures. The dispersion of Lamb waves is often unavoidable, restricting testing resolution and making the signal hard to interpret. A smoothed Capon algorithm is proposed in this paper to estimate the accurate path length of each wave packet. In the algorithm, frequency domain whitening is firstly used to obtain the transfer function in the bandwidth of the excitation pulse. Subsequently, wavenumber domain smoothing is employed to reduce the correlation between wave packets. Finally, the path lengths are determined by distance domain searching based on the Capon algorithm. Simulations are applied to optimize the number of smoothing times. Experiments are performed on an aluminum plate consisting of two simulated defects. The results demonstrate that spatial resolution is improved significantly by the proposed algorithm.
Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge
2016-01-15
The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly consuming in term of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing the electrical propagation in neuronal tissue, using parareal algorithm, coupling with parallelization space using CUDA in graphical processing unit (GPU). We applied the method of resolution to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster, using message-passing interface (MPI), where the spatial scale was parallelized in order to reach a comparable calculation time than that of the presented method using GPU. A gain of a factor 100 in term of computational time between sequential results and those obtained using the GPU has been obtained, in the case of 3-D geometry. Given the structure of the GPU, this factor increases according to the fineness of the geometry used in the computation. To the best of our knowledge, it is the first time such a method is used, even in the case of neuroscience. Parallelization time coupled with GPU parallelization space allows for drastically reducing computational time with a fine resolution of the model describing the propagation of the electrical signal in a neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
Evaluating scale and roughness effects in urban flood modelling using terrestrial LIDAR data
NASA Astrophysics Data System (ADS)
Ozdemir, H.; Sampson, C. C.; de Almeida, G. A. M.; Bates, P. D.
2013-10-01
This paper evaluates the results of benchmark testing a new inertial formulation of the St. Venant equations, implemented within the LISFLOOD-FP hydraulic model, using different high resolution terrestrial LiDAR data (10 cm, 50 cm and 1 m) and roughness conditions (distributed and composite) in an urban area. To examine these effects, the model is applied to a hypothetical flooding scenario in Alcester, UK, which experienced surface water flooding during summer 2007. The sensitivities of simulated water depth, extent, arrival time and velocity to grid resolutions and different roughness conditions are analysed. The results indicate that increasing the terrain resolution from 1 m to 10 cm significantly affects modelled water depth, extent, arrival time and velocity. This is because hydraulically relevant small scale topography that is accurately captured by the terrestrial LIDAR system, such as road cambers and street kerbs, is better represented on the higher resolution DEM. It is shown that altering surface friction values within a wide range has only a limited effect and is not sufficient to recover the results of the 10 cm simulation at 1 m resolution. Alternating between a uniform composite surface friction value (n = 0.013) or a variable distributed value based on land use has a greater effect on flow velocities and arrival times than on water depths and inundation extent. We conclude that the use of extra detail inherent in terrestrial laser scanning data compared to airborne sensors will be advantageous for urban flood modelling related to surface water, risk analysis and planning for Sustainable Urban Drainage Systems (SUDS) to attenuate flow.
Evaluating scale and roughness effects in urban flood modelling using terrestrial LIDAR data
NASA Astrophysics Data System (ADS)
Ozdemir, H.; Sampson, C. C.; de Almeida, G. A. M.; Bates, P. D.
2013-05-01
This paper evaluates the results of benchmark testing a new inertial formulation of the de St. Venant equations, implemented within the LISFLOOD-FP hydraulic model, using different high resolution terrestrial LiDAR data (10 cm, 50 cm and 1 m) and roughness conditions (distributed and composite) in an urban area. To examine these effects, the model is applied to a hypothetical flooding scenario in Alcester, UK, which experienced surface water flooding during summer 2007. The sensitivities of simulated water depth, extent, arrival time and velocity to grid resolutions and different roughness conditions are analysed. The results indicate that increasing the terrain resolution from 1 m to 10 cm significantly affects modelled water depth, extent, arrival time and velocity. This is because hydraulically relevant small scale topography that is accurately captured by the terrestrial LIDAR system, such as road cambers and street kerbs, is better represented on the higher resolution DEM. It is shown that altering surface friction values within a wide range has only a limited effect and is not sufficient to recover the results of the 10 cm simulation at 1 m resolution. Alternating between a uniform composite surface friction value (n = 0.013) or a variable distributed value based on land use has a greater effect on flow velocities and arrival times than on water depths and inundation extent. We conclude that the use of extra detail inherent in terrestrial laser scanning data compared to airborne sensors will be advantageous for urban flood modelling related to surface water, risk analysis and planning for Sustainable Urban Drainage Systems (SUDS) to attenuate flow.
NASA Astrophysics Data System (ADS)
Wilcox, William Edward, Jr.
1995-01-01
A computer program (LIDAR-PC) and associated atmospheric spectral databases have been developed which accurately simulate the laser remote sensing of the atmosphere and the system performance of a direct-detection Lidar or tunable Differential Absorption Lidar (DIAL) system. This simulation program allows, for the first time, the use of several different large atmospheric spectral databases to be coupled with Lidar parameter simulations on the same computer platform to provide a real-time, interactive, and easy to use design tool for atmospheric Lidar simulation and modeling. LIDAR -PC has been used for a range of different Lidar simulations and compared to experimental Lidar data. In general, the simulations agreed very well with the experimental measurements. In addition, the simulation offered, for the first time, the analysis and comparison of experimental Lidar data to easily determine the range-resolved attenuation coefficient of the atmosphere and the effect of telescope overlap factor. The software and databases operate on an IBM-PC or compatible computer platform, and thus are very useful to the research community for Lidar analysis. The complete Lidar and atmospheric spectral transmission modeling program uses the HITRAN database for high-resolution molecular absorption lines of the atmosphere, the BACKSCAT/LOWTRAN computer databases and models for the effects of aerosol and cloud backscatter and attenuation, and the range-resolved Lidar equation. The program can calculate the Lidar backscattered signal-to-noise for a slant path geometry from space and simulate the effect of high resolution, tunable, single frequency, and moderate line width lasers on the Lidar/DIAL signal. The program was used to model and analyze the experimental Lidar data obtained from several measurements. A fixed wavelength, Ho:YSGG aerosol Lidar (Sugimoto, 1990) developed at USF and a tunable Ho:YSGG DIAL system (Cha, 1991) for measuring atmospheric water vapor at 2.1 μm were analyzed. The simulations agreed very well with the measurements, and also yielded, for the first time, the ability to easily deduce the atmospheric attentuation coefficient, alpha, from the Lidar data. Simulations and analysis of other Lidar measurements included that of a 1.57 mu m OPO aerosol Lidar system developed at USF (Harrell, 1995) and of the NASA LITE (Laser-in-Space Technology Experiment) Lidar recently flown in the Space shuttle. Finally, an extensive series of laboratory experiments were made with the 1.57 μm OPO Lidar system to test calculations of the telescope/laser overlap and the effect of different telescope sizes and designs. The simulations agreed well with the experimental data for the telescope diameter and central obscuration test cases. The LIDAR-PC programs are available on the Internet from the USAF Lidar Home Page Web site, http://www.cas.usf.edu/physics/lidar.html/.
How supernovae launch galactic winds?
NASA Astrophysics Data System (ADS)
Fielding, Drummond; Quataert, Eliot; Martizzi, Davide; Faucher-Giguère, Claude-André
2017-09-01
We use idealized three-dimensional hydrodynamic simulations of global galactic discs to study the launching of galactic winds by supernovae (SNe). The simulations resolve the cooling radii of the majority of supernova remnants (SNRs) and thus self-consistently capture how SNe drive galactic winds. We find that SNe launch highly supersonic winds with properties that agree reasonably well with expectations from analytic models. The energy loading (η _E= \\dot{E}_wind/ \\dot{E}_SN) of the winds in our simulations are well converged with spatial resolution while the wind mass loading (η _M= \\dot{M}_wind/\\dot{M}_\\star) decreases with resolution at the resolutions we achieve. We present a simple analytic model based on the concept that SNRs with cooling radii greater than the local scaleheight break out of the disc and power the wind. This model successfully explains the dependence (or lack thereof) of ηE (and by extension ηM) on the gas surface density, star formation efficiency, disc radius and the clustering of SNe. The winds our simulations are weaker than expected in reality, likely due to the fact that we seed SNe preferentially at density peaks. Clustering SNe in time and space substantially increases the wind power.
High Resolution Integrated Hohlraum-Capsule Simulations for Virtual NIF Ignition Campaign
NASA Astrophysics Data System (ADS)
Jones, O. S.; Marinak, M. M.; Cerjan, C. J.; Clark, D. S.; Edwards, M. J.; Haan, S. W.; Langer, S. H.; Salmonson, J. D.
2009-11-01
We have undertaken a virtual campaign to assess the viability of the sequence of NIF experiments planned for 2010 that will experimentally tune the shock timing, symmetry, and ablator thickness of a cryogenic ignition capsule prior to the first ignition attempt. The virtual campaign consists of two teams. The ``red team'' creates realistic simulated diagnostic data for a given experiment from the output of a detailed radiation hydrodynamics calculation that has physics models that have been altered in a way that is consistent with probable physics uncertainties. The ``blue team'' executes a series of virtual experiments and interprets the simulated diagnostic data from those virtual experiments. To support this effort we have developed a capability to do very high spatial resolution integrated hohlraum-capsule simulations using the Hydra code. Surface perturbations for all ablator layer surfaces and the DT ice layer are calculated explicitly through mode 30. The effects of the fill tube, cracks in the ice layer, and defects in the ablator are included in models extracted from higher resolution calculations. Very high wave number mix is included through a mix model. We will show results from these calculations in the context of the ongoing virtual campaign.
NASA Astrophysics Data System (ADS)
Michael, Scott A.; Steiman-Cameron, T.; Durisen, R.; Boley, A.
2008-05-01
Using 3D simulations of a cooling disk undergoing gravitational instabilities (GIs), we compute the effective Shakura and Sunyaev (1973) alphas due to gravitational torques and compare them to predictions from an analytic local theory for thin disks by Gammie (2001). Our goal is to determine how accurately a locally defined alpha can characterize mass and angular momentum transport by GIs in disks. Cases are considered both with cooling by an imposed constant global cooling time (Mejia et al. 2005) and with realistic radiative transfer (Boley et al. 2007). Grid spacing in the azimuthal direction is varied to investigate how the computed alpha is affected by numerical resolution. The azimuthal direction is particularly important, because higher resolution in azimuth allows GI power to spread to higher-order (multi-armed) modes that behave more locally. We find that, in many important respects, the transport of mass and angular momentum by GIs is an intrinsically global phenomenon. Effective alphas are variable on a dynamic time scale over global spatial scales. Nevertheless, preliminary results at the highest resolutions for an imposed cooling time show that our computed alphas, though systematically higher, tend on average to follow Gammie's prediction to within perhaps a factor of two. Our computed alphas include only gravitational stresses, while in Gammie's treatment the effective alpha is due equally to hydrodynamic (Reynolds) and gravitational stresses. So Gammie's prediction may significantly underestimate the true average stresses in a GI-active disk. Our effective alphas appear to be reasonably well converged for 256 and 512 azimuthal zones. We also have a high-resolution simulation under way to test the extent of radial mixing by GIs of gas and its entrained dust for comparison with Stardust observations. Results will be presented if available at the time of the meeting.
NASA Astrophysics Data System (ADS)
Gochis, D. J.; Dugger, A. L.; Karsten, L. R.; Barlage, M. J.; Sampson, K. M.; Yu, W.; Pan, L.; McCreight, J. L.; Howard, K.; Busto, J.; Deems, J. S.
2017-12-01
Hydrometeorological processes vary over comparatively short length scales in regions of complex terrain such as the southern Rocky Mountains. Changes in temperature, precipitation, wind and solar radiation can vary significantly across elevation gradients, terrain landform and land cover conditions throughout the region. Capturing such variability in hydrologic models can necessitate the utilization of so-called `hyper-resolution' spatial meshes with effective element spacings of less than 100m. However, it is often difficult to obtain meteorological forcings of high quality in such regions at those resolutions which can result in significant uncertainty in fundamental in hydrologic model inputs. In this study we examine the comparative influences of meteorological forcing data fidelity and spatial resolution on seasonal simulations of snowpack evolution, runoff and streamflow in a set of high mountain watersheds in southern Colorado. We utilize the operational, NOAA National Water Model configuration of the community WRF-Hydro system as a baseline and compare against it, additional model scenarios with differing specifications of meteorological forcing data, with and without topographic downscaling adjustments applied, with and without experimental high resolution radar derived precipitation estimates and with WRF-Hydro configurations of progressively finer spatial resolution. The results suggest significant influence from and importance of meteorological downscaling techniques in controlling spatial distributions of meltout and runoff timing. The use of radar derived precipitation exhibits clear sensitivity on hydrologic simulation skill compared with the use of coarser resolution, background precipitation analyses. Advantages and disadvantages of the utilization of progressively higher resolution model configurations both in terms of computational requirements and model fidelity are also discussed.
Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror
NASA Astrophysics Data System (ADS)
Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng
2016-07-01
To balance conflicts for high-resolution, large-field-of-view and real-time imaging, a retina-like imaging method based on time-of flight (TOF) is proposed. Mathematical models of 3D imaging based on MOEMS are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information and rotation and scaling invariance. To validate the theory, we develop a prototype and conduct relevant experiments. The preliminary results agree well with the simulations.
Research and Simulation in Support of Near Real Time/Real Time Reconnaissance RPV Systems
1977-06-01
Image 4,5.2 Raster Lines Across Image 4.5.3 Angle Projected by Displayed Image 4.6 Optical Defocusing SIMULATION CONSIDERATIONS PAGE 162 162 162...television and infrared, there are a finite number of resolution elements across the format. As a consequence, selection of a shorter optical focal...light that is scanned across and down the CRT to form a raster similar to that seen in a standard television tube. The light is optically projected
NASA Astrophysics Data System (ADS)
Fenech, Sara; Doherty, Ruth M.; Heaviside, Clare; Vardoulakis, Sotiris; Macintyre, Helen L.; O'Connor, Fiona M.
2018-04-01
We examine the impact of model horizontal resolution on simulated concentrations of surface ozone (O3) and particulate matter less than 2.5 µm in diameter (PM2.5), and the associated health impacts over Europe, using the HadGEM3-UKCA chemistry-climate model to simulate pollutant concentrations at a coarse (˜ 140 km) and a finer (˜ 50 km) resolution. The attributable fraction (AF) of total mortality due to long-term exposure to warm season daily maximum 8 h running mean (MDA8) O3 and annual-average PM2.5 concentrations is then calculated for each European country using pollutant concentrations simulated at each resolution. Our results highlight a seasonal variation in simulated O3 and PM2.5 differences between the two model resolutions in Europe. Compared to the finer resolution results, simulated European O3 concentrations at the coarse resolution are higher on average in winter and spring (˜ 10 and ˜ 6 %, respectively). In contrast, simulated O3 concentrations at the coarse resolution are lower in summer and autumn (˜ -1 and ˜ -4 %, respectively). These differences may be partly explained by differences in nitrogen dioxide (NO2) concentrations simulated at the two resolutions. Compared to O3, we find the opposite seasonality in simulated PM2.5 differences between the two resolutions. In winter and spring, simulated PM2.5 concentrations are lower at the coarse compared to the finer resolution (˜ -8 and ˜ -6 %, respectively) but higher in summer and autumn (˜ 29 and ˜ 8 %, respectively). Simulated PM2.5 values are also mostly related to differences in convective rainfall between the two resolutions for all seasons. These differences between the two resolutions exhibit clear spatial patterns for both pollutants that vary by season, and exert a strong influence on country to country variations in estimated AF for the two resolutions. Warm season MDA8 O3 levels are higher in most of southern Europe, but lower in areas of northern and eastern Europe when simulated at the coarse resolution compared to the finer resolution. Annual-average PM2.5 concentrations are higher across most of northern and eastern Europe but lower over parts of southwest Europe at the coarse compared to the finer resolution. Across Europe, differences in the AF associated with long-term exposure to population-weighted MDA8 O3 range between -0.9 and +2.6 % (largest positive differences in southern Europe), while differences in the AF associated with long-term exposure to population-weighted annual mean PM2.5 range from -4.7 to +2.8 % (largest positive differences in eastern Europe) of the total mortality. Therefore this study, with its unique focus on Europe, demonstrates that health impact assessments calculated using modelled pollutant concentrations, are sensitive to a change in model resolution by up to ˜ ±5 % of the total mortality across Europe.
Learning to love the rain in Bergen (Norway) and other lessons from a Climate Services neophyte
NASA Astrophysics Data System (ADS)
Sobolowski, Stefan; Wakker, Joyce
2014-05-01
A question that is often asked of regional climate modelers generally, and Climate Service providers specifically, is: "What is the added-value of regional climate simulations and how can I use this information?" The answer is, unsurprisingly, not straightforward and depends greatly on what one needs to know. In particular it is important for scientist to communicate directly with the users of this information to determine what kind of information is important for them to do their jobs. This study is part of the ECLISE project (Enabling Climate Information Services for Europe) and involves a user at the municipality of Bergen's (Norway) water and drainage administration and a provider from Uni Research and the Bjerknes Center for Climate Research. The water and drain administration is responsible for communicating potential future changes in extreme precipitation, particularly short-term high-intensity rainfall, which is common in Bergen and making recommendations to the engineering department for changes in design criteria. Thus, information that enables better decision-making is crucial. This study then actually has two relevant components for climate services: 1) is a scientific exercise to evaluate the performance of high resolution regional climate simulations and their ability to reproduce high intensity short duration precipitation and 2) an exercise in communication between a provider community and user community with different concerns, mandates, methodological approaches and even vocabularies. A set of Weather Research and Forecasting (WRF) simulations was run at high resolution (8km) over a large domain covering much of Scandinavia and Northern Europe. One simulation was driven by so-called "perfect" boundary conditions taken from reanalysis data (ERA-interim, 1989-2010) the second and third simulations used Norway's global climate model as boundary forcing (NorESM) and were run for a historical period (1950-2005) and a 30yr. end of the century time slice under the rcp4.5 "middle of the road" emissions scenario (2071-2100). A unique feature of the WRF modeling system is the ability to write data for selected locations at every time step, thus creating time series of very high temporal resolution which can be compared to observations. This high temporal resolution also allowed us to directly calculate intensity-duration-frequency (IDF) curves for intense precipitation of short to long duration (5 minutes - 1 day) for a number of return periods (2-100 years) with out resorting to factors to calculate rainfall intensities at higher temporal resolutions, as is commonly done. We investigated the IDF curves using a number of parametric and non-parametric approaches. Given the relatively short time periods of the modeled data the standard Gumble approach is presented here. This is also done to maintain consistency with previous calculations by the water and drain administration. Curves were also generated from observed time series at two locations in Bergen. Both the historical, GCM-driven simulation and the ERA-interim driven simulation closely match the observed IDF curves for all return periods up to durations of about 10 minutes where WRF then fails to reproduce the very short, very high intensity events. IDF curves under future conditions were also generated and the changes were compared with the current standard approach of applying climate change-factors to observed extreme precipitation in order to account for structural errors in global and regional climate models. Our investigation suggests that high-resolution regional simulations can capture many of the topographic features and dynamical processes necessary to accurately model extreme rainfall, even in at highly local scales and over complex terrain such as Bergen, Norway. The exercise also produced many lessons for climate service providers and users alike.
Adaptive hyperspectral imager: design, modeling, and control
NASA Astrophysics Data System (ADS)
McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine
2015-08-01
An adaptive, hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution which is independent of spectral resolution. The system yields the possibility to define a variety of acquisition schemes, and in particular near snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach to achieve near snapshot spectral acquisitions without resorting to any computationally heavy post-processing, nor cumbersome calibration
A novel super-resolution camera model
NASA Astrophysics Data System (ADS)
Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli
2015-05-01
Aiming to realize super resolution(SR) to single image and video reconstruction, a super resolution camera model is proposed for the problem that the resolution of the images obtained by traditional cameras behave comparatively low. To achieve this function we put a certain driving device such as piezoelectric ceramics in the camera. By controlling the driving device, a set of continuous low resolution(LR) images can be obtained and stored instantaneity, which reflect the randomness of the displacements and the real-time performance of the storage very well. The low resolution image sequences have different redundant information and some particular priori information, thus it is possible to restore super resolution image factually and effectively. The sample method is used to derive the reconstruction principle of super resolution, which analyzes the possible improvement degree of the resolution in theory. The super resolution algorithm based on learning is used to reconstruct single image and the variational Bayesian algorithm is simulated to reconstruct the low resolution images with random displacements, which models the unknown high resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing sub-pixel registration method, a super resolution image of the scene can be reconstructed. The results of 16 images reconstruction show that this camera model can increase the image resolution to 2 times, obtaining images with higher resolution in currently available hardware levels.
NASA Astrophysics Data System (ADS)
Wengel, C.; Latif, M.; Park, W.; Harlaß, J.; Bayr, T.
2018-05-01
A long-standing difficulty of climate models is to capture the annual cycle (AC) of eastern equatorial Pacific (EEP) sea surface temperature (SST). In this study, we first examine the EEP SST AC in a set of integrations of the coupled Kiel Climate Model, in which only atmosphere model resolution differs. When employing coarse horizontal and vertical atmospheric resolution, significant biases in the EEP SST AC are observed. These are reflected in an erroneous timing of the cold tongue's onset and termination as well as in an underestimation of the boreal spring warming amplitude. A large portion of these biases are linked to a wrong simulation of zonal surface winds, which can be traced back to precipitation biases on both sides of the equator and an erroneous low-level atmospheric circulation over land. Part of the SST biases also is related to shortwave radiation biases related to cloud cover biases. Both wind and cloud cover biases are inherent to the atmospheric component, as shown by companion uncoupled atmosphere model integrations forced by observed SSTs. Enhancing atmosphere model resolution, horizontal and vertical, markedly reduces zonal wind and cloud cover biases in coupled as well as uncoupled mode and generally improves simulation of the EEP SST AC. Enhanced atmospheric resolution reduces convection biases and improves simulation of surface winds over land. Analysis of a subset of models from the Coupled Model Intercomparison Project phase 5 (CMIP5) reveals that in these models, very similar mechanisms are at work in driving EEP SST AC biases.
The evolution of extreme precipitations in high resolution scenarios over France
NASA Astrophysics Data System (ADS)
Colin, J.; Déqué, M.; Somot, S.
2009-09-01
Over the past years, improving the modelling of extreme events and their variability at climatic time scales has become one of the challenging issue raised in the regional climate research field. This study shows the results of a high resolution (12 km) scenario run over France with the limited area model (LAM) ALADIN-Climat, regarding the representation of extreme precipitations. The runs were conducted in the framework of the ANR-SCAMPEI national project on high resolution scenarios over French mountains. As a first step, we attempt to quantify one of the uncertainties implied by the use of LAM : the size of the area on which the model is run. In particular, we address the issue of whether a relatively small domain allows the model to create its small scale process. Indeed, high resolution scenarios cannot be run on large domains because of the computation time. Therefore one needs to answer this preliminary question before producing and analyzing such scenarios. To do so, we worked in the framework of a « big brother » experiment. We performed a 23-year long global simulation in present-day climate (1979-2001) with the ARPEGE-Climat GCM, at a resolution of approximately 50 km over Europe (stretched grid). This first simulation, named ARP50, constitutes the « big brother » reference of our experiment. It has been validated in comparison with the CRU climatology. Then we filtered the short waves (up to 200 km) from ARP50 in order to obtain the equivalent of coarse resolution lateral boundary conditions (LBC). We have carried out three ALADIN-Climat simulations at a 50 km resolution with these LBC, using different configurations of the model : * FRA50, run over a small domain (2000 x 2000 km, centered over France), * EUR50, run over a larger domain (5000 x 5000 km, centered over France as well), * EUR50-SN, run over the large domain (using spectral nudging). Considering the facts that ARPEGE-Climat and ALADIN-Climat models share the same physics and dynamics and that both regional and global simulations were run at the same resolution, ARP50 can be regarded as a reference with which FRA50, EUR50 and EUR50-SN should each be compared. After an analysis of the differences between the regional simulations and ARP50 in annual and seasonal mean, we focus on the representation of rainfall extremes comparing two dimensional fields of various index inspired from STARDEX and quantile-quantile plots. The results show a good agreement with the ARP50 reference for all three regional simulations and little differences are found between them. This result indicates that the use of small domains is not significantly detrimental to the modelling of extreme precipitation events. It also shows that the spectral nudging technique has no detrimental effect on the extreme precipitation. Therefore, high resolution scenarios performed on a relatively small domain such as the ones run for SCAMPEI, can be regarded as good tools to explore their possible evolution in the future climate. Preliminary results on the response of precipitation extremes over South-East France are given.
NASA Astrophysics Data System (ADS)
Zhang, Fan; Szilágyi, Béla
2013-10-01
At the beginning of binary black hole simulations, there is a pulse of spurious radiation (or junk radiation) resulting from the initial data not matching astrophysical quasi-equilibrium inspiral exactly. One traditionally waits for the junk radiation to exit the computational domain before taking physical readings, at the expense of throwing away a segment of the evolution, and with the hope that junk radiation exits cleanly. We argue that this hope does not necessarily pan out, as junk radiation could excite long-lived constraint violation. Another complication with the initial data is that they contain orbital eccentricity that needs to be removed, usually by evolving the early part of the inspiral multiple times with gradually improved input parameters. We show that this procedure is also adversely impacted by junk radiation. In this paper, we do not attempt to eliminate junk radiation directly, but instead tackle the much simpler problem of ameliorating its long-lasting effects. We report on the success of a method that achieves this goal by combining the removal of junk radiation and eccentricity into a single procedure. Namely, we periodically stop a low resolution simulation; take the numerically evolved metric data and overlay it with eccentricity adjustments; run it through an initial data solver (i.e. the solver receives as free data the numerical output of the previous iteration); restart the simulation; repeat until eccentricity becomes sufficiently low; and then launch the high resolution “production run” simulation. This approach has the following benefits: (1) We do not have to contend with the influence of junk radiation on eccentricity measurements for later iterations of the eccentricity reduction procedure. (2) We reenforce constraints every time the initial data solver is invoked, removing the constraint violation excited by junk radiation previously. (3) The wasted simulation segment associated with the junk radiation’s evolution is absorbed into the eccentricity reduction iterations. Furthermore, (1) and (2) together allow us to carry out our joint-elimination procedure at low resolution, even when the subsequent “production run” is intended as a high resolution simulation.
NASA Astrophysics Data System (ADS)
Martins, J. H. C.; Figueira, P.; Santos, N. C.; Melo, C.; Garcia Muñoz, A.; Faria, J.; Pepe, F.; Lovis, C.
2018-05-01
The characterization of planetary atmospheres is a daunting task, pushing current observing facilities to their limits. The next generation of high-resolution spectrographs mounted on large telescopes - such as ESPRESSO@VLT and HIRES@ELT - will allow us to probe and characterize exoplanetary atmospheres in greater detail than possible to this point. We present a method that permits the recovery of the colour-dependent reflectivity of exoplanets from high-resolution spectroscopic observations. Determining the wavelength-dependent albedo will provide insight into the chemical properties and weather of the exoplanet atmospheres. For this work, we simulated ESPRESSO@VLT and HIRES@ELT high-resolution observations of known planetary systems with several albedo configurations. We demonstrate how the cross correlation technique applied to theses simulated observations can be used to successfully recover the geometric albedo of exoplanets over a range of wavelengths. In all cases, we were able to recover the wavelength dependent albedo of the simulated exoplanets and distinguish between several atmospheric models representing different atmospheric configurations. In brief, we demonstrate that the cross correlation technique allows for the recovery of exoplanetary albedo functions from optical observations with the next generation of high-resolution spectrographs that will be mounted on large telescopes with reasonable exposure times. Its recovery will permit the characterization of exoplanetary atmospheres in terms of composition and dynamics and consolidates the cross correlation technique as a powerful tool for exoplanet characterization.
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario - based studies , quasi - static time - series (QSTS) simulation s can realistically model time - dependent voltage controllers and the diversity of potential impacts that can occur at different times of year . However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1 - second resolution is often required , which could take conventional computers a computational time of 10more » to 120 hours when an actual unbalanced distribution feeder is modeled . This computational burden is a clear l imitation to the adoption of QSTS simulation s in interconnection studies and for determining optimal control solutions for utility operations . Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report , the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.« less
Time-resolved High Spectral Resolution Observation of 2MASSW J0746425+200032AB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ji; Mawet, Dimitri; Prato, Lisa, E-mail: ji.wang@caltech.edu
Many brown dwarfs (BDs) exhibit photometric variability at levels from tenths to tens of percents. The photometric variability is related to magnetic activity or patchy cloud coverage, characteristic of BDs near the L–T transition. Time-resolved spectral monitoring of BDs provides diagnostics of cloud distribution and condensate properties. However, current time-resolved spectral studies of BDs are limited to low spectral resolution ( R ∼ 100) with the exception of the study of Luhman 16 AB at a resolution of 100,000 using the VLT+CRIRES. This work yielded the first map of BD surface inhomogeneity, highlighting the importance and unique contribution of highmore » spectral resolution observations. Here, we report on the time-resolved high spectral resolution observations of a nearby BD binary, 2MASSW J0746425+200032AB. We find no coherent spectral variability that is modulated with rotation. Based on simulations, we conclude that the coverage of a single spot on 2MASSW J0746425+200032AB is smaller than 1% or 6.25% if spot contrast is 50% or 80% of its surrounding flux, respectively. Future high spectral resolution observations aided by adaptive optics systems can put tighter constraints on the spectral variability of 2MASSW J0746425+200032AB and other nearby BDs.« less
NASA Astrophysics Data System (ADS)
Wekerle, C.; Wang, Q.; Danilov, S.; Jung, T.; Schourup-Kristensen, V.
2016-02-01
Atlantic Water (AW) passes through the Nordic Seas and enters the Arctic Ocean through the shallow Barents Sea and the deep Fram Strait. Since the 1990's, observations indicate a series of anomalously warm pulses of Atlantic Water that entered the Arctic Ocean. In fact, poleward oceanic heat transport may even increase in the future, which might have implications for the heat uptake in the Arctic Ocean as well as for the sea ice cover. The ability of models to faithfully simulate the pathway of the AW and accompanying dynamics is thus of high climate relevance. In this study, we explore the potential of a global multi-resolution sea ice-ocean model with a locally eddy-permitting resolution (around 4.5 km) in the Nordic seas region and Arctic Ocean in improving the representation of Atlantic Water inflow, and more broadly, the dynamics of the circulation in the Northern North Atlantic and Arctic. The simulation covers the time period 1969-2009. We find that locally increased resolution improves the localization and thickness of the Atlantic Water layer in the Nordic seas, compared with a 20 km resolution reference simulation. In particular, the inflow of Atlantic Waters through the Greenland Scotland Ridge and the narrow branches of the Norwegian Atlantic Current can be realistically represented. Lateral spreading due to simulated eddies essentially reduces the bias in the surface temperature. In addition, a qualitatively good agreement of the simulated eddy kinetic energy field with observations can be achieved. This study indicates that a substantial improvement in representing local ocean dynamics can be reached through the local refinement, which requires a rather moderate computational effort. The successful model assessment allows us to further investigate the variability and mechanisms behind Atlantic Water transport into the Arctic Ocean.
Using Adaptive Mesh Refinment to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and wide spread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed such as in design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. The simulation of many different applications have only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may rise synergy effects between hardware developers and neuroscientists.
Pfeil, Thomas; Potjans, Tobias C.; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may rise synergy effects between hardware developers and neuroscientists. PMID:22822388
Picosecond Resolution Time-to-Digital Converter Using Gm-C Integrator and SAR-ADC
NASA Astrophysics Data System (ADS)
Xu, Zule; Miyahara, Masaya; Matsuzawa, Akira
2014-04-01
A picosecond resolution time-to-digital converter (TDC) is presented. The resolution of a conventional delay chain TDC is limited by the delay of a logic buffer. Various types of recent TDCs are successful in breaking this limitation, but they require a significant calibration effort to achieve picosecond resolution with a sufficient linear range. To address these issues, we propose a simple method to break the resolution limitation without any calibration: a Gm-C integrator followed by a successive approximation register analog-to-digital converter (SAR-ADC). This translates the time interval into charge, and then the charge is quantized. A prototype chip was fabricated in 90 nm CMOS. The measurement results reveal a 1 ps resolution, a -0.6/0.7 LSB differential nonlinearity (DNL), a -1.1/2.3 LSB integral nonlinearity (INL), and a 9-bit range. The measured 11.74 ps single-shot precision is caused by the noise of the integrator. We analyze the noise of the integrator and propose an improved front-end circuit to reduce this noise. The proposal is verified by simulations showing the maximum single-shot precision is less than 1 ps. The proposed front-end circuit can also diminish the mismatch effects.
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals, it can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution was sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
High-resolution RCMs as pioneers for future GCMs
NASA Astrophysics Data System (ADS)
Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.
2017-12-01
Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data sets, the assessment of regional-scale climate feedback processes, and the development of alternative output analysis methodologies.
NASA Astrophysics Data System (ADS)
Yang, Ben; Zhou, Yang; Zhang, Yaocun; Huang, Anning; Qian, Yun; Zhang, Lujun
2018-03-01
Closure assumption in convection parameterization is critical for reasonably modeling the precipitation diurnal variation in climate models. This study evaluates the precipitation diurnal cycles over East Asia during the summer of 2008 simulated with three convective available potential energy (CAPE) based closure assumptions, i.e. CAPE-relaxing (CR), quasi-equilibrium (QE), and free-troposphere QE (FTQE) and investigates the impacts of planetary boundary layer (PBL) mixing, advection, and radiation on the simulation by using the weather research and forecasting model. The sensitivity of precipitation diurnal cycle to PBL vertical resolution is also examined. Results show that the precipitation diurnal cycles simulated with different closures all exhibit large biases over land and the simulation with FTQE closure agrees best with observation. In the simulation with QE closure, the intensified PBL mixing after sunrise is responsible for the late-morning peak of convective precipitation, while in the simulation with FTQE closure, convective precipitation is mainly controlled by advection cooling. The relative contributions of different processes to precipitation formation are functions of rainfall intensity. In the simulation with CR closure, the dynamical equilibrium in the free troposphere still can be reached, implying the complex cause-effect relationship between atmospheric motion and convection. For simulations in which total CAPE is consumed for the closures, daytime precipitation decreases with increased PBL resolution because thinner model layer produces lower convection starting layer, leading to stronger downdraft cooling and CAPE consumption. The sensitivity of the diurnal peak time of precipitation to closure assumption can also be modulated by changes in PBL vertical resolution. The results of this study help us better understand the impacts of various processes on the precipitation diurnal cycle simulation.
Capabilities of stochastic rainfall models as data providers for urban hydrology
NASA Astrophysics Data System (ADS)
Haberlandt, Uwe
2017-04-01
For planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5 minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single site and multi-site generators. The models are applied with regionalised parameters assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised for evaluation of the model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, good capabilities for single site simulations but low skills for multi-site simulations. Remarkably, there is no significant difference in simulation performance comparing the tasks flood protection with pollution reduction, so the models are finally able to simulate both the extremes and the long term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G., 2013. High resolution regional climate model simulations for Germany: part I — validation. Climate Dynamics, 40(1): 401-414. Haberlandt, U., Ebner von Eschenbach, A.-D., Buchwald, I., 2008. A space-time hybrid hourly rainfall model for derived flood frequency analysis. Hydrol. Earth Syst. Sci., 12: 1353-1367.
Simulations of a micro-PET system based on liquid xenon
NASA Astrophysics Data System (ADS)
Miceli, A.; Glister, J.; Andreyev, A.; Bryman, D.; Kurchaninov, L.; Lu, P.; Muennich, A.; Retiere, F.; Sossi, V.
2012-03-01
The imaging performance of a high-resolution preclinical micro-positron emission tomography (micro-PET) system employing liquid xenon (LXe) as the gamma-ray detection medium was simulated. The arrangement comprises a ring of detectors consisting of trapezoidal LXe time projection ionization chambers and two arrays of large area avalanche photodiodes for the measurement of ionization charge and scintillation light. A key feature of the LXePET system is the ability to identify individual photon interactions with high energy resolution and high spatial resolution in three dimensions and determine the correct interaction sequence using Compton reconstruction algorithms. The simulated LXePET imaging performance was evaluated by computing the noise equivalent count rate, the sensitivity and point spread function for a point source according to the NEMA-NU4 standard. The image quality was studied with a micro-Derenzo phantom. Results of these simulation studies included noise equivalent count rate peaking at 1326 kcps at 188 MBq (705 kcps at 184 MBq) for an energy window of 450-600 keV and a coincidence window of 1 ns for mouse (rat) phantoms. The absolute sensitivity at the center of the field of view was 12.6%. Radial, tangential and axial resolutions of 22Na point sources reconstructed with a list-mode maximum likelihood expectation maximization algorithm were ⩽0.8 mm (full-width at half-maximum) throughout the field of view. Hot-rod inserts of <0.8 mm diameter were resolvable in the transaxial image of a micro-Derenzo phantom. The simulations show that a LXe system would provide new capabilities for significantly enhancing PET images.
NASA Astrophysics Data System (ADS)
Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.
2012-12-01
This study compares two grid refinement approaches using global variable resolution model and nesting for high-resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.
Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis
Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2017-01-01
Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066
Introducing CGOLS: The Cholla Galactic Outflow Simulation Suite
NASA Astrophysics Data System (ADS)
Schneider, Evan E.; Robertson, Brant E.
2018-06-01
We present the Cholla Galactic OutfLow Simulations (CGOLS) suite, a set of extremely high resolution global simulations of isolated disk galaxies designed to clarify the nature of multiphase structure in galactic winds. Using the GPU-based code Cholla, we achieve unprecedented resolution in these simulations, modeling galaxies over a 20 kpc region at a constant resolution of 5 pc. The simulations include a feedback model designed to test the effects of different mass- and energy-loading factors on galactic outflows over kiloparsec scales. In addition to describing the simulation methodology in detail, we also present the results from an adiabatic simulation that tests the frequently adopted analytic galactic wind model of Chevalier & Clegg. Our results indicate that the Chevalier & Clegg model is a good fit to nuclear starburst winds in the nonradiative region of parameter space. Finally, we investigate the role of resolution and convergence in large-scale simulations of multiphase galactic winds. While our largest-scale simulations show convergence of observable features like soft X-ray emission, our tests demonstrate that simulations of this kind with resolutions greater than 10 pc are not yet converged, confirming the need for extreme resolution in order to study the structure of winds and their effects on the circumgalactic medium.
NASA Astrophysics Data System (ADS)
Austin, D. E.; Ahrens, T. J.; Beauchamp, J. L.
2000-10-01
We have developed and tested a small impact-ionization time-of-flight mass spectrometer for analysis of cosmic dust, suitable for use on deep space missions. This mass spectrometer, named Dustbuster, incorporates a large target area and a reflectron, simultaneously optimizing mass resolution, sensitivity, and collection efficiency. Dust particles hitting the 65-cm2 target plate are partially ionized. The resulting ions are accelerated through a modified reflectron that focuses the ions in space and time to produce high-resolution spectra. The instrument, shown below, measures 10 x 10 x 20 cm, has a mass of 500 g, and consumes little power. Laser desorption ionization of metal and mineral samples (embedded in the impact plate) simulates particle impacts for instrument performance tests. Mass resolution in these experiments is near 200, permitting resolution of isotopes. The mass spectrometer can be combined with other instrument components to determine dust particle trajectories and sizes. This project was funded by NASA's Planetary Instrument Definition and Development Program.
Peng, Hao; Levin, Craig S
2013-01-01
We studied the performance of a dual-panel positron emission tomography (PET) camera dedicated to breast cancer imaging using Monte Carlo simulation. The proposed system consists of two 4 cm thick 12 × 15 cm2 area cadmium zinc telluride (CZT) panels with adjustable separation, which can be put in close proximity to the breast and/or axillary nodes. Unique characteristics distinguishing the proposed system from previous efforts in breast-dedicated PET instrumentation are the deployment of CZT detectors with superior spatial and energy resolution, using a cross-strip electrode readout scheme to enable 3D positioning of individual photon interaction coordinates in the CZT, which includes directly measured photon depth-of-interaction (DOI), and arranging the detector slabs edge-on with respect to incoming 511 keV photons for high photon sensitivity. The simulation results show that the proposed CZT dual-panel PET system is able to achieve superior performance in terms of photon sensitivity, noise equivalent count rate, spatial resolution and lesion visualization. The proposed system is expected to achieve ~32% photon sensitivity for a point source at the center and a 4 cm panel separation. For a simplified breast phantom adjacent to heart and torso compartments, the peak noise equivalent count (NEC) rate is predicted to be ~94.2 kcts s−1 (breast volume: 720 cm3 and activity concentration: 3.7 kBq cm−3) for a ~10% energy window around 511 keV and ~8 ns coincidence time window. The system achieves 1 mm intrinsic spatial resolution anywhere between the two panels with a 4 cm panel separation if the detectors have DOI resolution less than 2 mm. For a 3 mm DOI resolution, the system exhibits excellent sphere resolution uniformity (σrms/mean) ≤ 10%) across a 4 cm width FOV. Simulation results indicate that the system exhibits superior hot sphere visualization and is expected to visualize 2 mm diameter spheres with a 5:1 activity concentration ratio within roughly 7 min imaging time. Furthermore, we observe that the degree of spatial resolution degradation along the direction orthogonal to the two panels that is typical of a limited angle tomography configuration is mitigated by having high-resolution DOI capabilities that enable more accurate positioning of oblique response lines. PMID:20400807
UWB Tracking System Design for Free-Flyers
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Phan, Chan; Ngo, Phong; Gross, Julia; Dusl, John
2004-01-01
This paper discusses an ultra-wideband (UWB) tracking system design effort for Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center for aid in surveillance around the International Space Station (ISS). UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A tracking algorithm TDOA (Time Difference of Arrival) that operates cooperatively with the UWB system is developed in this research effort. Matlab simulations show that the tracking algorithm can achieve fine tracking resolution with low noise TDOA data. Lab experiments demonstrate the UWB tracking capability with fine resolution.
Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT
NASA Astrophysics Data System (ADS)
Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee
2014-03-01
State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. But, a short time frame in dynamic study causes many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated by simulated image, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to validate a reconstructed performance of dynamic data between PDS-OSEM and OP-OSEM algorithms. According to simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and less average sum of square error than those of OP-OSEM. The presented algorithm is useful to provide quality images under the condition of low count rates in dynamic studies with a short scan time.
Towards a Fine-Resolution Global Coupled Climate System for Prediction on Decadal/Centennial Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Julie L.
The over-arching goal of this project was to contribute to the realization of a fully coupled fine resolution Earth System Model simulation in which a weather-scale atmosphere is coupled to an ocean in which mesoscale eddies are largely resolved. Both a prototype fine-resolution fully coupled ESM simulation and a first-ever multi-decadal forced fine-resolution global coupled ocean/ice simulation were configured, tested, run, and analyzed as part of this grant. Science questions focused on the gains from the use of high horizontal resolution, particularly in the ocean and sea-ice, with respect to climatically important processes. Both these fine resolution coupled ocean/sea icemore » and fully-coupled simulations and precedent stand-alone eddy-resolving ocean and eddy-permitting coupled ocean/ice simulations were used to explore the high resolution regime. Overall, these studies showed that the presence of mesoscale eddies significantly impacted mixing processes and the global meridional overturning circulation in the ocean simulations. Fourteen refereed publications and a Ph.D. dissertation resulted from this grant.« less
Zhou, Hao; Lei, Guo Ping; Yang, Xue Xin; Zhao, Yu Hui; Zhang, Ji Xin
2018-04-01
Under the scenarios of climate change, balancing the land and water resources is one of the key problems needed to be solved in land development. To reveal the water dynamics of the cultivated land in Naoli River Basin, we simulated the future scenarios by using the future land use simulation model based on Landsat Satellite images, the DEM data and the meteorological data. Results showed that the growth rate of cultivated land gradually decreased. It showed different changing characteristics in different time periods, which led to different balancing effect between land and water resources. In 1990, the water dynamics of the cultivated land resources was in good state, At the same time, the adjustment of crops structure caused the paddy fields increased dramatically. During 2002 to 2014, the cultivated land that in moderate and serious moisture shortage state increased slightly, the water deficit was deteriorating to a certain degree, and maintained sound development of water profit and loss situation gradually. By comparing the simulation accuracy with different spatial resolutions and time scales, we selected 200 m as the spatial resolution of the simulation, and simulated the land use status in 2038. The simulation results showed that the cultivated land's water profit and loss degree in the river basin showed significant polarization characteristic, in that the water profit and loss degree of the cultivated land would be further intensified, the area with the higher grades of moisture profit and loss degree would distribute more centralized, and partially high evaluated grades for the moisture shortage would expand. It is needed to develop the cultivated land irrigation schemes and adjust the cultivated land in Naoli River Basin to balance soil and water resources.
Monte Carlo simulations of neutron-scattering instruments using McStas
NASA Astrophysics Data System (ADS)
Nielsen, K.; Lefmann, K.
2000-06-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby; ...
2016-10-22
Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations,more » precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby
Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations,more » precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.« less
NASA Astrophysics Data System (ADS)
Dallal, Ahmed H.
Safety is an essential requirement for air traffic management and control systems. Aircraft are not allowed to get closer to each other than a specified safety distance, to avoid any conflicts and collisions between aircraft. Forecast analysis predicts a tremendous increase in the number of flights. Subsequently, automated tools are needed to help air traffic controllers resolve air born conflicts. In this dissertation, we consider the problem of conflict resolution of aircraft flows with the assumption that aircraft are flowing through a fixed specified control volume at a constant speed. In this regard, several centralized and decentralized resolution rules have been proposed for path planning and conflict avoidance. For the case of two intersecting flows, we introduce the concept of conflict touches, and a collaborative decentralized conflict resolution rule is then proposed and analyzed for two intersecting flows. The proposed rule is also able to resolved airborne conflicts that resulted from resolving another conflict via the domino effect. We study the safety conditions under the proposed conflict resolution and collision avoidance rule. Then, we use Lyapunov analysis to analytically prove the convergence of conflict resolution dynamics under the proposed rule. The analysis show that, under the proposed conflict resolution rule, the system of intersecting aircraft flows is guaranteed to converge to safe, conflict free, trajectories within a bounded time. Simulations are provided to verify the analytically derived conclusions and study the convergence of the conflict resolution dynamics at different encounter angles. Simulation results show that lateral deviations taken by aircraft in each flow, to resolve conflicts, are bounded, and aircraft converged to safe and conflict free trajectories, within a finite time.
A closed-loop time-alignment system for baseband combining
NASA Technical Reports Server (NTRS)
Feria, Y.
1994-01-01
In baseband combining, the key element is the time alignment of the baseband signals. This article describes a closed-loop time-alignment system that estimates and adjusts the relative delay between two baseband signals received from two different antennas for the signals to be coherently combined. This system automatically determines which signal is advanced and delays it accordingly with a resolution of a sample period. The performance of the loop is analyzed, and the analysis is verified through simulation. The variance of the delay estimates and the signal-to-noise ratio degradation in the simulations agree with the theoretical calculations.
Design and construction of an Offner spectrometer based on geometrical analysis of ring fields.
Kim, Seo Hyun; Kong, Hong Jin; Lee, Jong Ung; Lee, Jun Ho; Lee, Jai Hoon
2014-08-01
A method to obtain an aberration-corrected Offner spectrometer without ray obstruction is proposed. A new, more efficient spectrometer optics design is suggested in order to increase its spectral resolution. The derivation of a new ring equation to eliminate ray obstruction is based on geometrical analysis of the ring fields for various numerical apertures. The analytical design applying this equation was demonstrated using the optical design software Code V in order to manufacture a spectrometer working in wavelengths of 900-1700 nm. The simulation results show that the new concept offers an analytical initial design taking the least time of calculation. The simulated spectrometer exhibited a modulation transfer function over 80% at Nyquist frequency, root-mean-square spot diameters under 8.6 μm, and a spectral resolution of 3.2 nm. The final design and its realization of a high resolution Offner spectrometer was demonstrated based on the simulation result. The equation and analytical design procedure shown here can be applied to most Offner systems regardless of the wavelength range.
A fast mass spring model solver for high-resolution elastic objects
NASA Astrophysics Data System (ADS)
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells as cages through *cages mean value coordinate method to reflect its internal structure and mechanics properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which can make the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual reality and physical fidelity, which has a great potential for applications in computer animation.
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two layer quasi-geostrophic model on the beta-plane driven at its boundaries by the « global » version with periodic boundary condition. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a « global » high resolution two layer quasi-geostrophic model driven by a low resolution two layer quasi-geostrophic model. (2) similar simulations are conducted with the two layer quasi-geostrophic Limited Area Model (LAM) where the size of the LAM domain comes into play in addition to the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is reached for a nudging time close to the predictability time, for spectral nudging, the optimum nudging time should tend to zero since the best large scale dynamics is supposed to be given by the driving large-scale fields are generally given at much lower frequency than the model time step(e,g, 6-hourly analysis) with a basic interpolation between the fields, the optimum nudging time differs from zero, however remaining smaller than the predictability time.
NASA Astrophysics Data System (ADS)
Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.
2017-11-01
Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed for assimilating high-speed high-resolution measurements obtained for an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen as it provides both high spacial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for two data assimilation techniques. The third objective is to investigate how the frequency at which the data is assimilated affects the overall predictions. Graduate Research Assistant, Department of Mechanical Engineering.
Agent-based large-scale emergency evacuation using real-time open government data.
DOT National Transportation Integrated Search
2014-01-01
The open government initiatives have provided tremendous data resources for the : transportation system and emergency services in urban areas. This paper proposes : a traffic simulation framework using high temporal resolution demographic data : and ...
Performance Modeling of an Airborne Raman Water Vapor Lidar
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Schwemmer, G.; Berkoff, T.; Plotkin, H.; Ramos-Izquierdo, L.; Pappalardo, G.
2000-01-01
A sophisticated Raman lidar numerical model had been developed. The model has been used to simulate the performance of two ground-based Raman water vapor lidar systems. After tuning the model using these ground-based measurements, the model is used to simulate the water vapor measurement capability of an airborne Raman lidar under both day-and night-time conditions for a wide range of water vapor conditions. The results indicate that, under many circumstances, the daytime measurements possess comparable resolution to an existing airborne differential absorption water vapor lidar while the nighttime measurement have higher resolution. In addition, a Raman lidar is capable of measurements not possible using a differential absorption system.
The Extended Pulsar Magnetosphere
NASA Technical Reports Server (NTRS)
Constantinos, Kalapotharakos; Demosthenes, Kazanas; Ioannis, Contopoulos
2012-01-01
We present the structure of the 3D ideal MHD pulsar magnetosphere to a radius ten times that of the light cylinder, a distance about an order of magnitude larger than any previous such numerical treatment. Its overall structure exhibits a stable, smooth, well-defined undulating current sheet which approaches the kinematic split monopole solution of Bogovalov 1999 only after a careful introduction of diffusivity even in the highest resolution simulations. It also exhibits an intriguing spiral region at the crossing of two zero charge surfaces on the current sheet, which shows a destabilizing behavior more prominent in higher resolution simulations. We discuss the possibility that this region is physically (and not numerically) unstable. Finally, we present the spiral pulsar antenna radiation pattern.
The Mesoscale Ionospheric Simulation Testbed (MIST) Regional Data Assimilation Model (Invited)
NASA Astrophysics Data System (ADS)
Comberiate, J.; Kelly, M. A.; Miller, E.; Paxton, L.
2013-12-01
The Mesoscale Ionospheric Simulation Testbed (MIST) provides a regional nowcast and forecast of electron density values and has sufficient resolution to include equatorial plasma bubbles. The SSUSI instrument on the DMSP F18 satellite has high-resolution nightly observations of plasma bubbles at 8 PM local time throughout the current solar maximum. MIST can assimilate SSUSI UV observations, GPS TEC measurements, and SCINDA S4 readings simultaneously into a single scintillation map over a region of interest. MIST also models ionospheric physics to provide a short-term UHF scintillation forecast based on assimilated data. We will present examples of electron density and scintillation maps from MIST. We will also discuss the potential to predict scintillation occurrence up to 6 hours in advance using observations of the equatorial arcs from SSUSI observations at 5:30 PM local time on the DMSP F17 satellite.
NASA Astrophysics Data System (ADS)
Prince, Alyssa; Trout, Joseph; di Mercurio, Alexis
2017-01-01
The Weather Research and Forecasting (WRF) Model is a nested-grid, mesoscale numerical weather prediction system maintained by the Developmental Testbed Center. The model simulates the atmosphere by integrating partial differential equations, which use the conservation of horizontal momentum, conservation of thermal energy, and conservation of mass along with the ideal gas law. This research investigated the possible use of WRF in investigating the effects of weather on wing tip wake turbulence. This poster shows the results of an investigation into the accuracy of WRF using different grid resolutions. Several atmospheric conditions were modeled using different grid resolutions. In general, the higher the grid resolution, the better the simulation, but the longer the model run time. This research was supported by Dr. Manuel A. Rios, Ph.D. (FAA) and the grant ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA'' (13-G-006). Dr. Manuel A. Rios, Ph.D. (FAA), and the grant ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA''
NASA Astrophysics Data System (ADS)
Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens
2017-04-01
Currently, an integrated approach to simulating the earth system is evolving where several compartment models are coupled to achieve the best possible physically consistent representation. We used the model TerrSysMP, which fully couples subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual reality run at a high resolution of 400m for the land surface and subsurface and 1.1km for the atmosphere was made. Ensemble runs at a lower resolution (800m for the land surface and subsurface) were also made. The ensemble was generated by varying soil and vegetation parameters and lateral atmospheric forcing among the different ensemble members in a systematic way. It was found that the ensemble runs deviated for some variables and some time periods largely from the virtual reality reference run (the reference run was not covered by the ensemble), which could be related to the different model resolutions. This was for example the case for river discharge in the summer. We also analyzed the spread of model states as function of time and found clear relations between the spread and the time of the year and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more influenced by uncertain soil and vegetation parameters under conditions of low air pressure gradients (in summer) than under conditions with larger air pressure gradients in winter. The analysis of the ensemble of fully coupled model simulations provided valuable insights in the dynamics of land-atmosphere feedbacks which we will further highlight in the presentation.
NASA Astrophysics Data System (ADS)
Feeney, Christopher; Smith, Hugh; Chiverrell, Richard; Hooke, Janet; Cooper, James
2017-04-01
Sediment residence time represents the duration of particle storage, from initial deposition to remobilisation, within reservoirs such as floodplains. Residence time influences rates of downstream redistribution of sediment and associated contaminants and is a useful indicator of landform stability and hence, preservation potential of alluvial archives of environmental change. River channel change controls residence times, reworking sediments via lateral migration, avulsion and incision through floodplain deposits. As reworking progresses, the floodplain age distribution is 'updated', reflecting the time since 'older' sediments were removed and replaced with 'younger' ones. The relationship between ages and the spatial extents they occupy can be used to estimate the average floodplain sediment residence times. While dating techniques, historic maps and remote sensing can reconstruct age distributions from historic reworking, modelling provides advantages, including: i) capturing detailed river channel changes and resulting floodplain ages over longer timescales and higher resolutions than from historic mapping, and ii) control over inputs to simulate hypothetical scenarios to investigate the effects of different environmental drivers on residence times. CAESAR-Lisflood is a landform evolution model capable of simulating variable channel width, divergent flow, and both braided and meandering planforms. However, the model's ability to accurately simulate channel changes requires evaluation if it is to be useful for quantitative evaluation of floodplain sediment residence times. This study aims to simulate recent historic river channel changes along ten 1 km reaches in northern England. Simulation periods were defined by available overlapping historic map and mean daily flow datasets, ranging 27-39 years. LiDAR-derived 2 m DEMs were modified to smooth out present-day channels and burn in historic channel locations. To reduce run times, DEMs were resampled to coarser resolutions based on the size of the channel and historic rate of lateral channel migration. Separate pre-defined coarse and finer channel bed and floodplain grain size distributions were used, respectively, in combination with constructed reach DEMs for model simulations. Calibration was performed by modifying selected parameters to obtain best fits between observed and modelled channel planforms. Initial simulations suggest the model can broadly reproduce observed planform change and is comparable in terms of channel sinuosities and the mean radius of curvature. As such, CAESAR-Lisflood may provide a useful tool for evaluating floodplain sediment residence times under environmental change scenarios.
Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
2012-01-01
Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.
NASA Technical Reports Server (NTRS)
Wolf, Bart J.; Johnson, D. R.
1995-01-01
A kinetic energy (KE) analysis of the forcing of a mesoscale upper-tropospheric jet streak by organized diabatic processes within the simulated convective system (SCS) that was discussed in Part 1 is presented in this study. The relative contributions of the ageostrophic components of motion to the generation of KE of the convectively generated jet streak are compared, along with the KE generation by the rotational (nondivergent) and irrotational (divergent) mass transport. The sensitivity of the numerical simulations of SCS development to resolution is also briefly examined. Analysis within isentropic coordinates provides for an explicit determination of the influence of the diabatic processes on the generation of KE. The upper-level production of specific KE is due predominatly to the inertial advective ageostrophic component (IAD), and as such represents the primary process through which the KE of the convectively generated jet streak is realized. A secondary contribution by the inertial diabatic (IDI) term is observed. Partitioning the KE generation into its rotational and irrotational components reveals that the latter, which is directly linked to the diabatic heating within the SCS through isentropic continuity requirements, is the ultimate source of KE generation as the global area integral of generation by the rotational component vanishes. Comparison with an identical dry simulation reveals that the net generation of KE must be attributed to latent heating. Both the IAD and IDI ageostrophic components play important roles in this regard. Examination of results from simulations conducted at several resolutions supports the previous findings in that the effects of diabatic processes and ageostrophic motion on KE generation remain consistent. Resolution does impact the location and timing of SCS development, a result that has important implications in forecasting the onset of convection that develops from evolution of the large-scale flow and moisture transport. Marked differences are observed in the momentum field aloft subsequent to the life cycle of the SCS in the 1 deg, 30-level base case (MP130) simulation discussed in Part 1 versus its 2 deg counterparts in that the MP130 simulation with higher spatial resolution contains from 14% to 30% more total KE.
NASA Astrophysics Data System (ADS)
Kolstein, M.; Chmeissani, M.
2016-01-01
The Voxel Imaging PET (VIP) Pathfinder project presents a novel design using pixelated semiconductor detectors for nuclear medicine applications to achieve the intrinsic image quality limits set by physics. The conceptual design can be extended to a Compton gamma camera. The use of a pixelated CdTe detector with voxel sizes of 1 × 1 × 2 mm3 guarantees optimal energy and spatial resolution. However, the limited time resolution of semiconductor detectors makes it impossible to use Time Of Flight (TOF) with VIP PET. TOF is used in order to improve the signal to noise ratio (SNR) by using only the most probable portion of the Line-Of-Response (LOR) instead of its entire length. To overcome the limitation of CdTe time resolution, we present in this article a simulation study using β+-γ emitting isotopes with a Compton-PET scanner. When the β+ annihilates with an electron it produces two gammas which produce a LOR in the PET scanner, while the additional gamma, when scattered in the scatter detector, provides a Compton cone that intersects with the aforementioned LOR. The intersection indicates, within a few mm of uncertainty along the LOR, the origin of the beta-gamma decay. Hence, one can limit the part of the LOR used by the image reconstruction algorithm.
NASA Technical Reports Server (NTRS)
Ross, Kenton W.; Russell, Jeffrey; Ryan, Robert E.
2006-01-01
The success of MODIS (the Moderate Resolution Imaging Spectrometer) in creating unprecedented, timely, high-quality data for vegetation and other studies has created great anticipation for data from VIIRS (the Visible/Infrared Imager Radiometer Suite). VIIRS will be carried onboard the joint NASA/Department of Defense/National Oceanic and Atmospheric Administration NPP (NPOESS (National Polar-orbiting Operational Environmental Satellite System) Preparatory Project). Because the VIIRS instruments will have lower spatial resolution than the current MODIS instruments 400 m versus 250 m at nadir for the channels used to generate Normalized Difference Vegetation Index data, scientists need the answer to this question: how will the change in resolution affect vegetation studies? By using simulated VIIRS measurements, this question may be answered before the VIIRS instruments are deployed in space. Using simulated VIIRS products, the U.S. Department of Agriculture and other operational agencies can then modify their decision support systems appropriately in preparation for receipt of actual VIIRS data. VIIRS simulations and validations will be based on the ART (Application Research Toolbox), an integrated set of algorithms and models developed in MATLAB(Registerd TradeMark) that enables users to perform a suite of simulations and statistical trade studies on remote sensing systems. Specifically, the ART provides the capability to generate simulated multispectral image products, at various scales, from high spatial hyperspectral and/or multispectral image products. The ART uses acquired ( real ) or synthetic datasets, along with sensor specifications, to create simulated datasets. For existing multispectral sensor systems, the simulated data products are used for comparison, verification, and validation of the simulated system s actual products. VIIRS simulations will be performed using Hyperion and MODIS datasets. The hyperspectral and hyperspatial properties of Hyperion data will be used to produce simulated MODIS and VIIRS products. Hyperion-derived MODIS data will be compared with near-coincident MODIS collects to validate both spectral and spatial synthesis, which will ascertain the accuracy of converting from MODIS to VIIRS. MODIS-derived VIIRS data is needed for global coverage and for the generation of time series for regional and global investigations. These types of simulations will have errors associated with aliasing for some scene types. This study will help quantify these errors and will identify cases where high-quality, MODIS-derived VIIRS data will be available.
Time-reversal transcranial ultrasound beam focusing using a k-space method
Jing, Yun; Meral, F. Can; Clement, Greg. T.
2012-01-01
This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, A synthetic point source at the focal point is numerically excited, and propagated through the skull, using acoustic properties acquired from registered computed tomograpy of the skull being studied. The received data outside the skull contains the correction information and can be phase conjugated (time reversed) and then physically generated to achieve a tight focusing inside the skull, by assuming quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it will be shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that, precise focusing can be realized using the k-space method with a spatial resolution as low as only 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make a productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of an heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach and paper, the theoretical modeling and scalingmore » laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach capabilities, by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. Finally, these two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.« less
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...
2017-11-27
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make a productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of an heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach and paper, the theoretical modeling and scalingmore » laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach capabilities, by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. Finally, these two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.« less
How Perturbing Ocean Floor Disturbs Tsunami Waves
NASA Astrophysics Data System (ADS)
Salaree, A.; Okal, E.
2017-12-01
Bathymetry maps play, perhaps the most crucial role in optimal tsunami simulations. Regardless of the simulation method, on one hand, it is desirable to include every detailed bathymetry feature in the simulation grids in order to predict tsunami amplitudes as accurately as possible, but on the other hand, large grids result in long simulation times. It is therefore, of interest to investigate a "sufficiency" level - if any - for the amount of details in bathymetry grids needed to reconstruct the most important features in tsunami simulations, as obtained from the actual bathymetry. In this context, we use a spherical harmonics series approach to decompose the bathymetry of the Pacific ocean into its components down to a resolution of 4 degrees (l=100) and create bathymetry grids by accumulating the resulting terms. We then use these grids to simulate the tsunami behavior from pure thrust events around the Pacific through the MOST algorithm (e.g. Titov & Synolakis, 1995; Titov & Synolakis, 1998). Our preliminary results reveal that one would only need to consider the sum of the first 40 coefficients (equivalent to a resolution of 1000 km) to reproduce the main components of the "real" results. This would result in simpler simulations, and potentially allowing for more efficient tsunami warning algorithms.
Specification and Analysis of Parallel Machine Architecture
1990-03-17
Parallel Machine Architeture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: The overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: Rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some of the examples are as follow: (1) When the simulation clock of
UWB Tracking System Design with TDOA Algorithm
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan
2006-01-01
This presentation discusses an ultra-wideband (UWB) tracking system design effort using a tracking algorithm TDOA (Time Difference of Arrival). UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least square method is chosen to solve the TDOA non-linear equations. Matlab simulations in both two-dimensional space and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWBTDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center for aid in surveillance around the International Space Station (ISS).
The timing resolution of scintillation-detector systems: Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Choong, Woon-Seng
2009-11-01
Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and decreasing transit time spread. However, only substantial improvement in the timing resolution is obtained with improved transit time spread if the first photoelectron timing is less than the transit time spread. While the calculated timing performance does not seem to be affected by the pixel size of the crystal, it improves for an etched crystal compared to a polished crystal. In addition, the calculated timing resolution degrades with increasing crystal length. These observations can be explained by studying the initial photoelectron rate. Experimental measurements provide reasonably good agreement with the calculated timing resolution. The Monte Carlo analysis developed in this work will allow us to optimize the scintillation detectors for timing and to understand the physical factors limiting their performance.
The timing resolution of scintillation-detector systems: Monte Carlo analysis.
Choong, Woon-Seng
2009-11-07
Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and decreasing transit time spread. However, only substantial improvement in the timing resolution is obtained with improved transit time spread if the first photoelectron timing is less than the transit time spread. While the calculated timing performance does not seem to be affected by the pixel size of the crystal, it improves for an etched crystal compared to a polished crystal. In addition, the calculated timing resolution degrades with increasing crystal length. These observations can be explained by studying the initial photoelectron rate. Experimental measurements provide reasonably good agreement with the calculated timing resolution. The Monte Carlo analysis developed in this work will allow us to optimize the scintillation detectors for timing and to understand the physical factors limiting their performance.
Cyclone Simulation via Action Minimization
NASA Astrophysics Data System (ADS)
Plotkin, D. A.; Weare, J.; Abbot, D. S.
2016-12-01
A postulated impact of climate change is an increase in intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a nearly identical temperature as the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus results in more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for transitions between TC-free and TC states; and 3) an increase in horizontal resolution due to computational savings achieved by reducing time spent simulating TC-free states. This increase in resolution, coupled with a decrease in simulation time, allows for prediction of the change in TC frequency and intensity distributions resulting from climate change.
Time-Frequency Approach for Stochastic Signal Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Ripul; Akula, Aparna; Kumar, Satish
2011-10-20
The detection of events in a stochastic signal has been a subject of great interest. One of the oldest signal processing technique, Fourier Transform of a signal contains information regarding frequency content, but it cannot resolve the exact onset of changes in the frequency, all temporal information is contained in the phase of the transform. On the other hand, Spectrogram is better able to resolve temporal evolution of frequency content, but has a trade-off in time resolution versus frequency resolution in accordance with the uncertainty principle. Therefore, time-frequency representations are considered for energetic characterisation of the non-stationary signals. Wigner Villemore » Distribution (WVD) is the most prominent quadratic time-frequency signal representation and used for analysing frequency variations in signals.WVD allows for instantaneous frequency estimation at each data point, for a typical temporal resolution of fractions of a second. This paper through simulations describes the way time frequency models are applied for the detection of event in a stochastic signal.« less
Time-Frequency Approach for Stochastic Signal Detection
NASA Astrophysics Data System (ADS)
Ghosh, Ripul; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2011-10-01
The detection of events in a stochastic signal has been a subject of great interest. One of the oldest signal processing technique, Fourier Transform of a signal contains information regarding frequency content, but it cannot resolve the exact onset of changes in the frequency, all temporal information is contained in the phase of the transform. On the other hand, Spectrogram is better able to resolve temporal evolution of frequency content, but has a trade-off in time resolution versus frequency resolution in accordance with the uncertainty principle. Therefore, time-frequency representations are considered for energetic characterisation of the non-stationary signals. Wigner Ville Distribution (WVD) is the most prominent quadratic time-frequency signal representation and used for analysing frequency variations in signals.WVD allows for instantaneous frequency estimation at each data point, for a typical temporal resolution of fractions of a second. This paper through simulations describes the way time frequency models are applied for the detection of event in a stochastic signal.
A Specialized Multi-Transmit Head Coil for High Resolution fMRI of the Human Visual Cortex at 7T.
Sengupta, Shubharthi; Roebroeck, Alard; Kemper, Valentin G; Poser, Benedikt A; Zimmermann, Jan; Goebel, Rainer; Adriany, Gregor
2016-01-01
To design, construct and validate radiofrequency (RF) transmit and receive phased array coils for high-resolution visual cortex imaging at 7 Tesla. A 4 channel transmit and 16 channel receive array was constructed on a conformal polycarbonate former. Transmit field efficiency and homogeneity were simulated and validated, along with the Specific Absorption Rate, using [Formula: see text] mapping techniques and electromagnetic simulations. Receiver signal-to-noise ratio (SNR), temporal SNR (tSNR) across EPI time series, g-factors for accelerated imaging and noise correlations were evaluated and compared with a commercial 32 channel whole head coil. The performance of the coil was further evaluated with human subjects through functional MRI (fMRI) studies at standard and submillimeter resolutions of upto 0.8mm isotropic. The transmit and receive sections were characterized using bench tests and showed good interelement decoupling, preamplifier decoupling and sample loading. SNR for the 16 channel coil was ∼ 1.5 times that of the commercial coil in the human occipital lobe, and showed better g-factor values for accelerated imaging. fMRI tests conducted showed better response to Blood Oxygen Level Dependent (BOLD) activation, at resolutions of 1.2mm and 0.8mm isotropic. The 4 channel phased array transmit coil provides homogeneous excitation across the visual cortex, which, in combination with the dual row 16 channel receive array, makes for a valuable research tool for high resolution anatomical and functional imaging of the visual cortex at 7T.
Recent advances in a linear micromirror array for high-resolution projection
NASA Astrophysics Data System (ADS)
Picard, Francis; Doucet, Michel; Niall, Keith K.; Larouche, Carl; Savard, Maxime; Crisan, Silviu; Thibault, Simon; Jerominek, Hubert
2004-05-01
The visual displays of contemporary military flight simulators lack adequate definition to represent scenes in basic fast-jet fighter tasks. For example, air-to-air and air-to-ground targets are not projected with sufficient contrast and resolution for a pilot to perceive aspect, aspect rate and object detail at real world slant ranges. Simulator display geometries require the development of ultra-high resolution projectors with greater than 20 megapixel resolution at 60 Hz frame rate. A new micromirror device has been developed to address this requirement; it is able to modulate light intensity in an analog fashion with switching times shorter than 5 μs. When combined with a scanner, a laser and Schlieren optics, a linear array of these flexible micromirrors can display images composed of thousands of lines at a frame rate of 60 Hz. Recent results related to evaluation of this technology for high resolution projection are presented. Alternate operation modes for light modulation with flexible micromirrors are proposed. The related importance of controlling the residual micromirror curvature is discussed and results of experiments investigating the use of the deposition pressure to achieve such control are reported. Moreover, activities aiming at minimizing the micromirror response time and, so doing, maximizing the number of image columns per image frame are discussed. Finally, contrast measurement and estimate of the contrast limit achievable with the flexible micromirror technology are presented. All reported activities support the development of a fully addressable 2000-element micromirror array.
DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs
2015-12-04
for crowd dynamics modeling by incorporating multi-resolution data, where a grid-based method is used to model crowd motion with UAVs’ low -resolution...information and more computational intensive (and time-consuming). Given that the deployment of fidelity selection results in simulation faces computational... low fidelity information FOV y (A) DR x (A) DR y (A) Not detected high fidelity information Table 1: Parameters for UAV and UGV for their detection
The validity of flow approximations when simulating catchment-integrated flash floods
NASA Astrophysics Data System (ADS)
Bout, B.; Jetten, V. G.
2018-01-01
Within hydrological models, flow approximations are commonly used to reduce computation time. The validity of these approximations is strongly determined by flow height, flow velocity and the spatial resolution of the model. In this presentation, the validity and performance of the kinematic, diffusive and dynamic flow approximations are investigated for use in a catchment-based flood model. Particularly, the validity during flood events and for varying spatial resolutions is investigated. The OpenLISEM hydrological model is extended to implement both these flow approximations and channel flooding based on dynamic flow. The flow approximations are used to recreate measured discharge in three catchments, among which is the hydrograph of the 2003 flood event in the Fella river basin. Furthermore, spatial resolutions are varied for the flood simulation in order to investigate the influence of spatial resolution on these flow approximations. Results show that the kinematic, diffusive and dynamic flow approximation provide least to highest accuracy, respectively, in recreating measured discharge. Kinematic flow, which is commonly used in hydrological modelling, substantially over-estimates hydrological connectivity in the simulations with a spatial resolution of below 30 m. Since spatial resolutions of models have strongly increased over the past decades, usage of routed kinematic flow should be reconsidered. The combination of diffusive or dynamic overland flow and dynamic channel flooding provides high accuracy in recreating the 2003 Fella river flood event. Finally, in the case of flood events, spatial modelling of kinematic flow substantially over-estimates hydrological connectivity and flow concentration since pressure forces are removed, leading to significant errors.
Single-view 3D reconstruction of correlated gamma-neutron sources
Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.
2017-01-05
We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf- 252, Am-Be). This category includes a vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source this technique relies on the source’s intrinsic property of coincidence gamma and neutron emission. As a result only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutronmore » double scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma we further constrain the source location in three-dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques on measurements taken with the the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with average radial distance relative resolution of 26%. To demonstrate the technique’s potential with an optimized system we simulated the measurement in MCNPX-PoliMi assuming timing resolution of 200 ps (from 2 ns in the current system) and source interaction location resolution of 5 mm (from 3 cm). Furthermore, these simulated improvements in scatter camera performance resulted in radial distance relative resolution decreasing to an average of 11%.« less
Single-view 3D reconstruction of correlated gamma-neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.
We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf- 252, Am-Be). This category includes a vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source this technique relies on the source’s intrinsic property of coincidence gamma and neutron emission. As a result only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutronmore » double scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma we further constrain the source location in three-dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques on measurements taken with the the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with average radial distance relative resolution of 26%. To demonstrate the technique’s potential with an optimized system we simulated the measurement in MCNPX-PoliMi assuming timing resolution of 200 ps (from 2 ns in the current system) and source interaction location resolution of 5 mm (from 3 cm). Furthermore, these simulated improvements in scatter camera performance resulted in radial distance relative resolution decreasing to an average of 11%.« less
Optimization of Collision Detection in Surgical Simulations
NASA Astrophysics Data System (ADS)
Custură-Crăciun, Dan; Cochior, Daniel; Neagu, Corneliu
2014-11-01
Just like flight and spaceship simulators already represent a standard, we expect that soon enough, surgical simulators should become a standard in medical applications. A simulations quality is strongly related to the image quality as well as the degree of realism of the simulation. Increased quality requires increased resolution, increased representation speed but more important, a larger amount of mathematical equations. To make it possible, not only that we need more efficient computers, but especially more calculation process optimizations. A simulator executes one of the most complex sets of calculations each time it detects a contact between the virtual objects, therefore optimization of collision detection is fatal for the work-speed of a simulator and hence in its quality
NASA Astrophysics Data System (ADS)
Surti, S.; Karp, J. S.
2015-07-01
Current generation of commercial time-of-flight (TOF) PET scanners utilize 20-25 mm thick LSO or LYSO crystals and have an axial FOV (AFOV) in the range of 16-22 mm. Longer AFOV scanners would provide increased intrinsic sensitivity and require fewer bed positions for whole-body imaging. Recent simulation work has investigated the sensitivity gains that can be achieved with these long AFOV scanners, and has motivated new areas of investigation such as imaging with a very low dose of injected activity as well as providing whole-body dynamic imaging capability in one bed position. In this simulation work we model a 72 cm long scanner and prioritize the detector design choices in terms of timing resolution, crystal size (spatial resolution), crystal thickness (detector sensitivity), and depth-of-interaction (DOI) measurement capability. The generated list data are reconstructed with a list-mode OSEM algorithm using a Gaussian TOF kernel that depends on the timing resolution and blob basis functions for regularization. We use lesion phantoms and clinically relevant metrics for lesion detectability and contrast measurement. The scan time was fixed at 10 min for imaging a 100 cm long object assuming a 50% overlap between adjacent bed positions. Results show that a 72 cm long scanner can provide a factor of ten reduction in injected activity compared to an identical 18 cm long scanner to get equivalent lesion detectability. While improved timing resolution leads to further gains, using 3 mm (as opposed to 4 mm) wide crystals does not show any significant benefits for lesion detectability. A detector providing 2-level DOI information with equal crystal thickness also does not show significant gains. Finally, a 15 mm thick crystal leads to lower lesion detectability than a 20 mm thick crystal when keeping all other detector parameters (crystal width, timing resolution, and DOI capability) the same. However, improved timing performance with 15 mm thick crystals can provide similar or better performance than that achieved by a detector using 20 mm thick crystals.
Uncertainty estimates of altimetric Global Mean Sea Level timeseries
NASA Astrophysics Data System (ADS)
Scharffenberg, Martin; Hemming, Michael; Stammer, Detlef
2016-04-01
An attempt is being presented concerned with providing uncertainty measures for global mean sea level time series. For this purpose sea surface height (SSH) fields, simulated by the high resolution STORM/NCEP model for the period 1993 - 2010, were subsampled along altimeter tracks and processed similar to techniques used by five working groups to estimate GMSL. Results suggest that the spatial and temporal resolution have a substantial impact on GMSL estimates. Major impacts can especially result from the interpolation technique or the treatment of SSH outliers and easily lead to artificial temporal variability in the resulting time series.
Using Empirical Orthogonal Teleconnections to Analyze Interannual Precipitation Variability in China
NASA Astrophysics Data System (ADS)
Stephan, C.; Klingaman, N. P.; Vidale, P. L.; Turner, A. G.; Demory, M. E.; Guo, L.
2017-12-01
Interannual rainfall variability in China affects agriculture, infrastructure and water resource management. A consistent and objective method, Empirical Orthogonal Teleconnection (EOT) analysis, is applied to precipitation observations over China in all seasons. Instead of maximizing the explained space-time variance, the method identifies regions in China that best explain the temporal variability in domain-averaged rainfall. It produces known teleconnections, that include high positive correlations with ENSO in eastern China in winter, along the Yangtze River in summer, and in southeast China during spring. New findings include that variability along the southeast coast in winter, in the Yangtze valley in spring, and in eastern China in autumn, are associated with extratropical Rossby wave trains. The same analysis is applied to six climate simulations of the Met Office Unified Model with and without air-sea coupling and at various horizontal resolutions of 40, 90 and 200 km. All simulations reproduce the observed patterns of interannual rainfall variability in winter, spring and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are all patterns associated with the observed physical mechanism. Coupled simulations capture more observed patterns of variability and associate more of them with the correct physical mechanism, compared to atmosphere-only simulations at the same resolution. Finer resolution does not improve the fidelity of these patterns or their associated mechanisms. Evaluating climate models by only geographical distribution of mean precipitation and its interannual variance is insufficient; attention must be paid to associated mechanisms.
The influence of model resolution on ozone in industrial volatile organic compound plumes.
Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G
2010-09-01
Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than typical urban conditions. Modeling studies found that simulating high O3 productions requires superfine (1-km) horizontal grid cell size. Compared with fine modeling (4-kmin), the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) in both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration 3% but only decreases the maximum Ox concentration 0.2%. The two grid sizes produce approximately equal Ox mass but by different reaction pathways. Derived sensitivity to oxides of nitrogen (NOx) and VOC emissions suggests resolution-specific sensitivity to NOx and VOC emissions. Different sensitivity to emissions will result in different O3 responses to subsequently encountered emissions (within the city or downwind). Sensitivity of O3 to emission changes also results in different simulated O3 responses to the same control strategies. Sensitivity of O3 to NOx and VOC emission changes is attributed to finer resolved Eulerian grid and finer resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with Plume-in-Grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dynamics of VOC plume dependence on grid size is the first step toward providing more detailed guidance for resolution. These results underscore VOC and NOx resolution interdependencies best addressed by finer resolution. On the basis of these results, the authors suggest a need for quantitative metrics for horizontal grid resolution in future model guidance.
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim
2016-04-01
The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-years climate integrations as a function of model resolution (from 80km up to 16km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total amount of 20 million core hours will be used at end of the project (March 2016) and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic following resolution increase. More specifically, the well-known atmospheric blocking negative bias over Europe is definitely resolved. High resolution runs also show improved fidelity in representation of tropical variability - such as the MJO and its propagation - over the low resolution simulations. It is shown that including stochastic parameterization in the low resolution runs help to improve some of the aspects of the MJO propagation further. These findings show the importance of representing the impact of small scale processes on the large scale climate variability either explicitly (with high resolution simulations) or stochastically (in low resolution simulations).
Alternative techniques for high-resolution spectral estimation of spectrally encoded endoscopy
NASA Astrophysics Data System (ADS)
Mousavi, Mahta; Duan, Lian; Javidi, Tara; Ellerbee, Audrey K.
2015-09-01
Spectrally encoded endoscopy (SEE) is a minimally invasive optical imaging modality capable of fast confocal imaging of internal tissue structures. Modern SEE systems use coherent sources to image deep within the tissue and data are processed similar to optical coherence tomography (OCT); however, standard processing of SEE data via the Fast Fourier Transform (FFT) leads to degradation of the axial resolution as the bandwidth of the source shrinks, resulting in a well-known trade-off between speed and axial resolution. Recognizing the limitation of FFT as a general spectral estimation algorithm to only take into account samples collected by the detector, in this work we investigate alternative high-resolution spectral estimation algorithms that exploit information such as sparsity and the general region position of the bulk sample to improve the axial resolution of processed SEE data. We validate the performance of these algorithms using bothMATLAB simulations and analysis of experimental results generated from a home-built OCT system to simulate an SEE system with variable scan rates. Our results open a new door towards using non-FFT algorithms to generate higher quality (i.e., higher resolution) SEE images at correspondingly fast scan rates, resulting in systems that are more accurate and more comfortable for patients due to the reduced image time.
Evacuee Compliance Behavior Analysis using High Resolution Demographic Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei; Han, Lee; Liu, Cheng
2014-01-01
The purpose of this study is to examine whether evacuee compliance behavior with route assignments from different resolutions of demographic data would impact the evacuation performance. Most existing evacuation strategies assume that travelers will follow evacuation instructions, while in reality a certain percent of evacuees do not comply with prescribed instructions. In this paper, a comparison study of evacuation assignment based on Traffic Analysis Zones (TAZ) and high resolution LandScan USA Population Cells (LPC) were conducted for the detailed road network representing Alexandria, Virginia. A revised platform for evacuation modeling built on high resolution demographic data and activity-based microscopic trafficmore » simulation is proposed. The results indicate that evacuee compliance behavior affects evacuation efficiency with traditional TAZ assignment, but it does not significantly compromise the efficiency with high resolution LPC assignment. The TAZ assignment also underestimates the real travel time during evacuation, especially for high compliance simulations. This suggests that conventional evacuation studies based on TAZ assignment might not be effective at providing efficient guidance to evacuees. From the high resolution data perspective, traveler compliance behavior is an important factor but it does not impact the system performance significantly. The highlight of evacuee compliance behavior analysis should be emphasized on individual evacuee level route/shelter assignments, rather than the whole system performance.« less
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
1994-01-01
Simulated data from the UCLA cumulus ensemble model are used to investigate the quasi-universal validity of closure assumptions used in existing cumulus parameterizations. A closure assumption is quasi-universally valid if it is sensitive neither to convective cloud regimes nor to horizontal resolutions of large-scale/mesoscale models. The dependency of three types of closure assumptions, as classified by Arakawa and Chen, on the horizontal resolution is addressed in this study. Type I is the constraint on the coupling of the time tendencies of large-scale temperature and water vapor mixing ratio. Type II is the constraint on the coupling of cumulus heating and cumulus drying. Type III is a direct constraint on the intensity of a cumulus ensemble. The macroscopic behavior of simulated cumulus convection is first compared with the observed behavior in view of Type I and Type II closure assumptions using 'quick-look' and canonical correlation analyses. It is found that they are statistically similar to each other. The three types of closure assumptions are further examined with simulated data averaged over selected subdomain sizes ranging from 64 to 512 km. It is found that the dependency of Type I and Type II closure assumptions on the horizontal resolution is very weak and that Type III closure assumption is somewhat dependent upon the horizontal resolution. The influences of convective and mesoscale processes on the closure assumptions are also addressed by comparing the structures of canonical components with the corresponding vertical profiles in the convective and stratiform regions of cumulus ensembles analyzed directly from simulated data. The implication of these results for cumulus parameterization is discussed.
NASA Astrophysics Data System (ADS)
Wagenbrenner, N. S.; Forthofer, J.; Gibson, C.; Lamb, B. K.
2017-12-01
Frequent strong gap winds were measured in a deep, steep, wildfire-prone river canyon of central Idaho, USA during July-September 2013. Analysis of archived surface pressure data indicate that the gap wind events were driven by regional scale surface pressure gradients. The events always occurred between 0400 and 1200 LT and typically lasted 3-4 hours. The timing makes these events particularly hazardous for wildland firefighting applications since the morning is typically a period of reduced fire activity and unsuspecting firefighters could be easily endangered by the onset of strong downcanyon winds. The gap wind events were not explicitly forecast by operational numerical weather prediction (NWP) models due to the small spatial scale of the canyon ( 1-2 km wide) compared to the horizontal resolution of operational NWP models (3 km or greater). Custom WRF simulations initialized with NARR data were run at 1 km horizontal resolution to assess whether higher resolution NWP could accurately simulate the observed gap winds. Here, we show that the 1 km WRF simulations captured many of the observed gap wind events, although the strength of the events was underpredicted. We also present evidence from these WRF simulations which suggests that the Salmon River Canyon is near the threshold of WRF-resolvable terrain features when the standard WRF coordinate system and discretization schemes are used. Finally, we show that the strength of the gap wind events can be predicted reasonably well as a function of the surface pressure gradient across the gap, which could be useful in the absence of high-resolution NWP. These are important findings for wildland firefighting applications in narrow gaps where routine forecasts may not provide warning for wind effects induced by high-resolution terrain features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sotomayor, Marcos
Hair cell mechanotransduction happens in tens of microseconds, involves forces of a few picoNewtons, and is mediated by nanometer-scale molecular conformational changes. As proteins involved in this process become identified and their high resolution structures become available, multiple tools are being used to explore their “single-molecule responses” to force. Optical tweezers and atomic force microscopy offer exquisite force and extension resolution, but cannot reach the high loading rates expected for high frequency auditory stimuli. Molecular dynamics (MD) simulations can reach these fast time scales, and also provide a unique view of the molecular events underlying protein mechanics, but its predictionsmore » must be experimentally verified. Thus a combination of simulations and experiments might be appropriate to study the molecular mechanics of hearing. Here I review the basics of MD simulations and the different methods used to apply force and study protein mechanics in silico. Simulations of tip link proteins are used to illustrate the advantages and limitations of this method.« less
Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2014-01-01
Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69deg delta wing model and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. Data provided includes both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.
Simulation and optimization of a dc SQUID with finite capacitance
NASA Astrophysics Data System (ADS)
de Waal, V. J.; Schrijner, P.; Llurba, R.
1984-02-01
This paper deals with the calculations of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now noise calculations of dc SQUIDs were performed using a model without parasitic capacitances across the Josephson junctions. As the capacitances limit the performance of the SQUID, for a good optimization one must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit. We implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for products of inductance and temperature of LT=1.2 and 5 nH K. Within a range of β and β c between 1 and 2, which is optimum, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 mH K.
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography such as the capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase system under microgravity condition in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM simulated and actual measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, it enforces the known permittivity of two phases to the unknown pixels which exceed the reasonable range of permittivity in each iteration. This strategy does not only stabilize the converging process, but also produces sharper images. Simulations show that resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate nonlinear forward model and thus increase the temporal resolution.
Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies
NASA Astrophysics Data System (ADS)
Hutchings, L. J.; Ryan, J.
2010-12-01
Accuracy and resolution are complimentary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is the how close an answer is to the “real world”, and resolution is who small of node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to obtain these models. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. “Real” travel times are perturbed with noise and hypocenters are perturbed to replicate a starting location away from the “true” location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1km, then was decreased to 500m, 100m, 50m and finally 10m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could obtained by using 120 stations with 500 events, bu this is our current limit. The limiting factors are the size of computers needed for the large arrays in the inversion and a realistic number of stations and events needed to provide the data.
NASA Astrophysics Data System (ADS)
Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo
2016-08-01
This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations use a conventional 10 km and a high-resolution 2 km spatial resolutions. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km2) and applied with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m3 s-1, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with similar amplitude to observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account of the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall of the Kasahori dam maximizes. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.
NASA Astrophysics Data System (ADS)
Clark, E.; Lettenmaier, D. P.
2014-12-01
Satellite radar altimetry is widely used for measuring global sea level variations and, increasingly, water height variations of inland water bodies. Existing satellite radar altimeters measure water surfaces directly below the spacecraft (approximately at nadir). Over the ocean, most of these satellites use radiometry to measure the delay of radar signals caused by water vapor in the atmosphere (also known as the wet troposphere delay (WTD)). However, radiometry can only be used to estimate this delay over the largest inland water bodies, such as the Great Lakes, due to spatial resolution issues. As a result, atmospheric models are typically used to simulate and correct for the WTD at the time of observations. The resolutions of these models are quite coarse, at best about 5000 km2 at 30˚N. The upcoming NASA- and CNES-led Surface Water and Ocean Topography (SWOT) mission, on the other hand, will use interferometric synthetic aperture radar (InSAR) techniques to measure a 120-km-wide swath of the Earth's surface. SWOT is expected to make useful measurements of water surface elevation and extent (and storage change) for inland water bodies at spatial scales as small as 250 m, which is much smaller than current altimetry targets and several orders of magnitude smaller than the models used for wet troposphere corrections. Here, we calculate WTD from very high-resolution (4/3-km to 4-km) simulations of the Weather Research and Forecasting (WRF) regional climate model, and use the results to evaluate spatial variations in WTD. We focus on six U.S. reservoirs: Lake Elwell (MT), Lake Pend Oreille (ID), Upper Klamath Lake (OR), Elephant Butte (NM), Ray Hubbard (TX), and Sam Rayburn (TX). The reservoirs vary in climate, shape, use, and size. Because evaporation from open water impacts local water vapor content, we compare time series of WTD over land and water in the vicinity of each reservoir. To account for resolution effects, we examine the difference in WRF-simulated WTD averaged over ECMWF and NCEP-NCAR resolution grid cells and compare the magnitudes of each over reservoirs. Finally, we also test the degree to which, if uncorrected, the WTD would dampen or strengthen measured changes in water levels (and storage) at each reservoir.
Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.
Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H
2013-05-01
In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.
Kalman Filter Techniques for Accelerated Cartesian Dynamic Cardiac Imaging
Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.
2012-01-01
In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories, because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and SNR. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. PMID:22926804
Effect of DEM mesh size on AnnAGNPS simulation and slope correction.
Wang, Xiaoyan; Lin, Q
2011-08-01
The objective of this paper is to study the impact of the mesh size of the digital elevation model (DEM) on terrain attributes within an Annualized AGricultural NonPoint Source pollution (AnnAGNPS) Model simulation at watershed scale and provide a correction of slope gradient for low resolution DEMs. The effect of different grid sizes of DEMs on terrain attributes was examined by comparing eight DEMs (30, 40, 50, 60, 70, 80, 90, and 100 m). The accuracy of the AnnAGNPS stimulation on runoff, sediments, and nutrient loads is evaluated. The results are as follows: (1) Rnoff does not vary much with decrease of DEM resolution whereas soil erosion and total nitrogen (TN) load change prominently. There is little effect on runoff simulation of AnnAGNPS modeling by the amended slope using an adjusted 50 m DEM. (2) A decrease of sediment yield and TN load is observed with an increase of DEM mesh size from 30 to 60 m; a slight decrease of sediment and TN load with the DEM mesh size bigger than 60 m. There is similar trend for total phosphorus (TP) variation, but with less range of variation, the simulation of sediment, TN, and TP increase, in which sediment increase up to 1.75 times compared to the model using unadjusted 50 m DEM. In all, the amended simulation still has a large difference relative to the results using 30 m DEM. AnnAGNPS is less reliable for sediment loading prediction in a small hilly watershed. (3) Resolution of DEM has significant impact on slope gradient. The average, minimum, maximum of slope from the various DEMs reduced obviously with the decrease of DEM precision. For the grade of 0∼15°, the slopes at lower resolution DEM are generally bigger than those at higher resolution DEM. But for the grade bigger than 15°, the slopes at lower resolution DEM are generally smaller than those at higher resolution DEM. So it is necessary to adjust the slope with a fitting equation. A cubic model is used for correction of slope gradient from lower resolution to that from higher resolution. Results for Dage watershed showed that fine meshes are desired to avoid large underestimates of sediment and total nitrogen loads and moderate underestimates of total phosphorus loads even with the slopes for the 50 m DEM adjusted to be more similar to the slopes from the 30 m DEM. Decreasing the mesh size beyond this threshold does not substantially affect the computed runoff flux but generated prediction errors for nitrogen and sediment yields. So the appropriate DEM will control error and make simulation at acceptable level.
Halo abundance matching: accuracy and conditions for numerical convergence
NASA Astrophysics Data System (ADS)
Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan
2015-03-01
Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies- the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large volume N-body simulations so that they provide accurate estimates for correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy, results converge for ˜50 particles for haloes and ˜150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ˜150. We also demonstrate that the two-body scattering plays a minor role for the accuracy of N-body simulations thanks to the relatively small number of crossing-times of dark matter in haloes, and the limited force resolution of cosmological simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkin, E. R.; Bicknell, G. V., E-mail: parkin@mso.anu.edu.au
Global three-dimensional magnetohydrodynamic (MHD) simulations of turbulent accretion disks are presented which start from fully equilibrium initial conditions in which the magnetic forces are accounted for and the induction equation is satisfied. The local linear theory of the magnetorotational instability (MRI) is used as a predictor of the growth of magnetic field perturbations in the global simulations. The linear growth estimates and global simulations diverge when nonlinear motions-perhaps triggered by the onset of turbulence-upset the velocity perturbations used to excite the MRI. The saturated state is found to be independent of the initially excited MRI mode, showing that once themore » disk has expelled the initially net flux field and settled into quasi-periodic oscillations in the toroidal magnetic flux, the dynamo cycle regulates the global saturation stress level. Furthermore, time-averaged measures of converged turbulence, such as the ratio of magnetic energies, are found to be in agreement with previous works. In particular, the globally averaged stress normalized to the gas pressure <{alpha}{sub P}>bar = 0.034, with notably higher values achieved for simulations with higher azimuthal resolution. Supplementary tests are performed using different numerical algorithms and resolutions. Convergence with resolution during the initial linear MRI growth phase is found for 23-35 cells per scale height (in the vertical direction).« less
NASA Astrophysics Data System (ADS)
Lebeaupin Brossier, Cindy; Arsouze, Thomas; Béranger, Karine; Bouin, Marie-Noëlle; Bresson, Emilie; Ducrocq, Véronique; Giordani, Hervé; Nuret, Mathieu; Rainaud, Romain; Taupier-Letage, Isabelle
2014-12-01
The western Mediterranean Sea is a source of heat and humidity for the atmospheric low-levels in autumn. Large exchanges take place at the air-sea interface, especially during intense meteorological events, such as heavy precipitation and/or strong winds. The Ocean Mixed Layer (OML), which is quite thin at this time of year (∼ 20 m-depth), evolves rapidly under such intense fluxes. This study investigates the ocean responses under intense meteorological events that occurred during HyMeX SOP1 (5 September-6 November 2012). The OML conditions and tendencies are derived from a high-resolution ocean simulation using the sub-regional eddy-resolving NEMO-WMED36 model (1/36°-resolution), driven at the surface by hourly air-sea fluxes from the AROME-WMED forecasts (2.5 km-resolution). The high space-time resolution of the atmospheric forcing allows the highly variable surface fluxes, which induce rapid changes in the OML, to be well represented and linked to small-scale atmospheric processes. First, the simulation results are compared to ocean profiles from several platforms obtained during the campaign. Then, this study focuses on the short-term OML evolution during three events. In particular, we examine the OML cooling and mixing under strong wind events, potentially associated with upwelling, as well as the surface freshening under heavy precipitation events, producing low-salinity lenses. Tendencies demonstrate the major role of the surface forcing in the temperature and/or salinity anomaly formation. At the same time, mixing [restratification] rapidly occurs. As expected, the sign of this tendency term is very dependent on the local vertical stratification which varies at fine scale in the Mediterranean. It also controls [disables] the vertical propagation. In the Alboran Sea, the strong dynamics redistribute the OML anomalies, sometimes up to 7 days after their formation. Elsewhere, despite local amplitude modulations due to internal wave excitation by strong winds, the integrated effect of the horizontal advection is almost null on the anomalies' spread and decay. Finally, diffusion has a small contribution.
NASA Technical Reports Server (NTRS)
Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.
1989-01-01
The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.
Development of ALARO-Climate regional climate model for a very high resolution
NASA Astrophysics Data System (ADS)
Skalak, Petr; Farda, Ales; Brozkova, Radmila; Masek, Jan
2013-04-01
ALARO-Climate is a new regional climate model (RCM) derived from the ALADIN LAM model family. It is based on the numerical weather prediction model ALARO and developed at the Czech Hydrometeorological Institute. The model is expected to able to work in the so called "grey zone" physics (horizontal resolution of 4 - 7 km) and at the same time retain its ability to be operated in resolutions in between 20 and 50 km, which are typical for contemporary generation of regional climate models. Here we present the main features of the RCM ALARO-Climate and results of the first model simulations on longer time-scales (1961-1990). The model was driven by the ERA-40/Interim re-analyses and run on the large pan-European integration domain ("ENSEMBLES / Euro-Cordex domain") with spatial resolution of 25 km. The simulated model climate was compared with the gridded observation of air temperature (mean, maximum, minimum) and precipitation from the E-OBS version 7 dataset. The validation of the first ERA-40 simulation has revealed significant cold biases in all seasons (between -4 and -2 °C) and overestimation of precipitation on 20% to 60% in the selected Central Europe target area (0° - 30° eastern longitude ; 40° - 60° northern latitude). The consequent adaptations in the model and their effect on the simulated properties of climate variables are illustrated. Acknowledgements: This study was performed within the frame of projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation) and CzechGlobe Centre (CZ.1.05/1.1.00/02.0073). The partial support was also provided under the projects P209-11-0956 of the Czech Science Foundation and CZ.1.07/2.4.00/31.0056 (Operational Programme of Education for Competitiveness of Ministry of Education, Youth and Sports of the Czech Republic).
Ultra-Scale Computing for Emergency Evacuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng
2010-01-01
Emergency evacuations are carried out in anticipation of a disaster such as hurricane landfall or flooding, and in response to a disaster that strikes without a warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, itmore » is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulations demand computational capacity beyond the desktop scales and can be supported by high performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high resolution emergency evacuation simulations.« less
Ortiz-Rascón, E; Bruce, N C; Rodríguez-Rosales, A A; Garduño-Mejía, J
2016-03-01
We describe the behavior of linearity in diffuse imaging by evaluating the differences between time-resolved images produced by photons arriving at the detector at different times. Two approaches are considered: Monte Carlo simulations and experimental results. The images of two complete opaque bars embedded in a transparent or in a turbid medium with a slab geometry are analyzed; the optical properties of the turbid medium sample are close to those of breast tissue. A simple linearity test was designed involving a direct comparison between the intensity profile produced by two bars scanned at the same time and the intensity profile obtained by adding two profiles of each bar scanned one at a time. It is shown that the linearity improves substantially when short time of flight photons are used in the imaging process, but even then the nonlinear behavior prevails. As the edge response function (ERF) has been used widely for testing the spatial resolution in imaging systems, the main implication of a time dependent linearity is the weakness of the linearity assumption when evaluating the spatial resolution through the ERF in diffuse imaging systems, and the need to evaluate the spatial resolution by other methods.
Regional model simulations of New Zealand climate
NASA Astrophysics Data System (ADS)
Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.
1998-03-01
Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.
Ultrafast compression of graphite observed with sub-ps time resolution diffraction on LCLS
NASA Astrophysics Data System (ADS)
Armstrong, Michael; Goncharov, A.; Crowhurst, J.; Zaug, J.; Radousky, H.; Grivickas, P.; Bastea, S.; Goldman, N.; Stavrou, E.; Belof, J.; Gleason, A.; Lee, H. J.; Nagler, R.; Holtgrewe, N.; Walter, P.; Pakaprenka, V.; Nam, I.; Granados, E.; Presher, C.; Koroglu, B.
2017-06-01
We will present ps time resolution pulsed x-ray diffraction measurements of rapidly compressed highly oriented pyrolytic graphite along its basal plane at the Materials under Extreme Conditions (MEC) sector of the Linac Coherent Light Source (LCLS). These experiments explore the possibility of rapid (<100 ps time scale) material transformations occurring under very highly anisotropic compression conditions. Under such conditions, non-equilibrium mechanisms may play a role in the transformation process. We will present experimental results and simulations which explore this possibility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Security, LLC under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.
2012-04-01
Physics parameterizations and the model vertical and horizontal resolutions, for example, can significantly contribute to the uncertainty in the numerical weather predictions, especially at regions with complex topography. The objective of this study is to assess the influences of model precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model . The model was run in hydrostatic mode at 3- and 5-km grid sizes, the vertical resolution was set to 50 layers, and the time steps to 6 and 10 s, respectively. The initial and boundary conditions were taken from ERA-Interim reanalysis. Over the sea the 0.25-deg sea surface temperature from NOAA was used. The model was setup to run for each resolution over Angra dos Reis, located in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 01 de January 2010, the model simulation range was 48 hours. In one set of runs the cumulus parameterization was switched off, in this case the model precipitation was fully simulated by cloud microphysics scheme, and in the other set the model was run with weak cumulus convection. The results show that as the model horizontal resolution increases from 5 to 3 km, the spatial pattern of the precipitation hardly changed, although the maximum precipitation core increased in magnitude. Daily data from automatic station data was used to evaluate the runs and shows that the diurnal cycle of temperature and precipitation were better simulated for 3 km when compared against observations. The model configuration results without cumulus convection shows a small contraction in the precipitating area and an increase in the simulated maximum values. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher at weak and moderate precipitation rates.
The UPSCALE project: a large simulation campaign
NASA Astrophysics Data System (ADS)
Mizielinski, Matthew; Roberts, Malcolm; Vidale, Pier Luigi; Schiemann, Reinhard; Demory, Marie-Estelle; Strachan, Jane
2014-05-01
The development of a traceable hierarchy of HadGEM3 global climate models, based upon the Met Office Unified Model, at resolutions from 135 km to 25 km, now allows the impact of resolution on the mean state, variability and extremes of climate to be studied in a robust fashion. In 2011 we successfully obtained a single-year grant of 144 million core hours of supercomputing time from the PRACE organization to run ensembles of 27 year atmosphere-only (HadGEM3-A GA3.0) climate simulations at 25km resolution, as used in present global weather forecasting, on HERMIT at HLRS. Through 2012 the UPSCALE project (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) ran over 650 years of simulation at resolutions of 25 km (N512), 60 km (N216) and 135 km (N96) to look at the value of high resolution climate models in the study of both present climate and a potential future climate scenario based on RCP8.5. Over 400 TB of data was produced using HERMIT, with additional simulations run on HECToR (UK supercomputer) and MONSooN (Met Office NERC Supercomputing Node). The data generated was transferred to the JASMIN super-data cluster, hosted by STFC CEDA in the UK, where analysis facilities are allowing rapid scientific exploitation of the data set. Many groups across the UK and Europe are already taking advantage of these facilities and we welcome approaches from other interested scientists. This presentation will briefly cover the following points; Purpose and requirements of the UPSCALE project and facilities used. Technical implementation and hurdles (model porting and optimisation, automation, numerical failures, data transfer). Ensemble specification. Current analysis projects and access to the data set. A full description of UPSCALE and the data set generated has been submitted to Geoscientific Model development, with overview information available from http://proj.badc.rl.ac.uk/upscale .
NASA Astrophysics Data System (ADS)
Mikhaylova, Ekaterina; Tabacchini, Valerio; Borghi, Giacomo; Mollet, Pieter; D'Hoe, Ester; Schaart, Dennis R.; Vandenberghe, Stefaan
2017-11-01
The goal of this simulation study is the performance evaluation and comparison of six potential designs for a time-of-flight PET scanner for pediatric patients of up to about 12 years of age. It is designed to have a high sensitivity and provide high-contrast and high-resolution images. The simulated pediatric PET is a full ring scanner, consisting of 32 × 32 mm2 monolithic LYSO:Ce crystals coupled to digital silicon photomultiplier arrays. The six considered designs differ in axial lengths (27.2 cm, 54.4 cm and 102 cm) and crystal thicknesses (22 mm and 11 mm). The simulations are based on measured detector response data. We study two possible detector arrangements: 22 mm-thick crystals with dual-sided readout and 11 mm-thick crystals with back-sided readout. The six designs are simulated by means of the GEANT4 application for tomographic emission software, using the measured spatial, energy and time response of the monolithic scintillator detectors as input. The performance of the six designs is compared on the basis of four studies: (1) spatial resolution; (2) NEMA NU2-2012 sensitivity and scatter fraction (SF) tests; (3) non-prewhitening signal-to-noise ratio observer study; and (4) receiver operating characteristics analysis. Based on the results, two designs are identified as cost-effective solutions for fast and efficient imaging of children: one with 54.4 cm axial field-of-view (FOV) and 22 mm-thick crystals, and another one with 102 cm axial FOV and 11 cm-thick crystals. The first one has a higher center point sensitivity than the second one, but requires dual-sided readout. The second design has the advantage of allowing a whole-body scan in a single bed position acquisition. Both designs have the potential to provide an excellent spatial resolution (˜2 mm) and an ultra-high sensitivity (>100 cps kBq-1 ).
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen
2016-11-01
In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
High speed imaging of dynamic processes with a switched source x-ray CT system
NASA Astrophysics Data System (ADS)
Thompson, William M.; Lionheart, William R. B.; Morton, Edward J.; Cunningham, Mike; Luggar, Russell D.
2015-05-01
Conventional x-ray computed tomography (CT) scanners are limited in their scanning speed by the mechanical constraints of their rotating gantries and as such do not provide the necessary temporal resolution for imaging of fast-moving dynamic processes, such as moving fluid flows. The Real Time Tomography (RTT) system is a family of fast cone beam CT scanners which instead use multiple fixed discrete sources and complete rings of detectors in an offset geometry. We demonstrate the potential of this system for use in the imaging of such high speed dynamic processes and give results using simulated and real experimental data. The unusual scanning geometry results in some challenges in image reconstruction, which are overcome using algebraic iterative reconstruction techniques and explicit regularisation. Through the use of a simple temporal regularisation term and by optimising the source firing pattern, we show that temporal resolution of the system may be increased at the expense of spatial resolution, which may be advantageous in some situations. Results are given showing temporal resolution of approximately 500 µs with simulated data and 3 ms with real experimental data.
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions but at the expense of reduced per plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then the mapped images are smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequency=0.5, 0.7, 1.0/spl times/Nyquist frequency and Ramp filter were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR) and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image feature preservation (i.e., smaller EGGR) than by the conventional method.
NASA Astrophysics Data System (ADS)
KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally-consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4km grids with the temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23 retrospective run of the NWM (version 1.0) over CONUS based on 1km grids. We choose 279 USGS stations which are relatively less affected by dams or reservoirs, in the domains of six different RFCs. We use the daily average values of simulations and observations for the convenience of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time-series of observations and both simulations, and calculate the error values using a variety of error functions. Using these plots and error values, we evaluate the performances of HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
Design study of an in situ PET scanner for use in proton beam therapy
NASA Astrophysics Data System (ADS)
Surti, S.; Zou, W.; Daube-Witherspoon, M. E.; McDonough, J.; Karp, J. S.
2011-05-01
Proton beam therapy can deliver a high radiation dose to a tumor without significant damage to surrounding healthy tissue or organs. One way of verifying the delivered dose distribution is to image the short-lived positron emitters produced by the proton beam as it travels through the patient. A potential solution to the limitations of PET imaging in proton beam therapy is the development of a high sensitivity, in situ PET scanner that starts PET imaging almost immediately after patient irradiation while the patient is still lying on the treatment bed. A partial ring PET design is needed for this application in order to avoid interference between the PET detectors and the proton beam, as well as restrictions on patient positioning on the couch. A partial ring also allows us to optimize the detector separation (and hence the sensitivity) for different patient sizes. Our goal in this investigation is to evaluate an in situ PET scanner design for use in proton therapy that provides tomographic imaging in a partial ring scanner design using time-of-flight (TOF) information and an iterative reconstruction algorithm. GEANT4 simulation of an incident proton beam was used to produce a positron emitter distribution, which was parameterized and then used as the source distribution inside a water-filled cylinder for EGS4 simulations of a PET system. Design optimization studies were performed as a function of crystal type and size, system timing resolution, scanner angular coverage and number of positron emitter decays. Data analysis was performed to measure the accuracy of the reconstructed positron emitter distribution as well as the range of the positron emitter distribution. We simulated scanners with varying crystal sizes (2-4 mm) and type (LYSO and LaBr3) and our results indicate that 4 mm wide LYSO or LaBr3 crystals (resulting in 4-5 mm spatial resolution) are adequate; for a full-ring, non-TOF scanner we predict a low bias (<0.6 mm) and a good precision (<1 mm) in the estimated range relative to the simulated positron distribution. We then varied the angular acceptance of the scanner ranging from 1/2 to 2/3 of 2π a partial ring TOF imaging with good timing resolution (<=600 ps) is necessary to produce accurate tomographic images. A two-third ring scanner with 300 ps timing resolution leads to a bias of 1.0 mm and a precision of 1.4 mm in the range estimate. With a timing resolution of 600 ps, the bias increases to 2.0 mm while the precision in the range estimate is similar. For a half-ring scanner design, more distortions are present in the image, which is characterized by the increased error in the profile difference estimate. We varied the number of positron decays imaged by the PET scanner by an order of magnitude and we observe some decrease in the precision of the range estimate for lower number of decays, but all partial ring scanner designs studied have a precision <=1.5 mm. The largest number tested, 150 M total positron decays, is considered realistic for a clinical fraction of delivered dose, while the range of positron decays investigated in this work covers a variable number of situations corresponding to delays in scan start time and the total scan time. Thus, we conclude that for partial ring systems, an angular acceptance of at least 1/2 (of 2π) together with timing resolution of 300 ps is needed to achieve accurate and precise range estimates. With 600 ps timing resolution an angular acceptance of 2/3 (of 2π) is required to achieve satisfactory range estimates. These results indicate that it would be feasible to develop a partial-ring dedicated PET scanner based on either LaBr3 or LYSO to accurately characterize the proton dose for therapy planning.
The SCEC TeraShake Earthquake Simulation
NASA Astrophysics Data System (ADS)
Minster, J.; Olsen, K. B.; Moore, R.; Day, S.; Maechling, P.; Jordan, T.; Faerman, M.; Cui, Y.; Ely, G.; Hu, Y.; Shkoller, B.; Marcinkovich, C.; Bielak, J.; Okaya, D.; Archuleta, R.; Wilkins-Diehr, N.; Cutchin, S.; Chourasia, A.; Kremenek, G.; Jagatheesan, A.; Brieger, L.; Majundar, A.; Chukkapalli, G.; Xin, Q.; Moore, R.; Banister, B.; Thorp, D.; Kovatch, P.; Diegel, L.; Sherwin, T.; Jordan, C.; Thiebaux, M.; Lopez, J.
2004-12-01
The southern portion of the San Andreas fault, between Cajon Creek and Bombay Beach has not seen a major event since 1690, and has therefore accumulated a slip deficit of 5-6 m. The potential for this portion of the fault to rupture in a single M7.7 event is a major component of seismic hazard in southern California and northern Mexico. TeraShake is a large-scale finite-difference (fourth-order) simulation of such an event based on Olsen's Anelastic Wave Propagation Model (AWM) code, and conducted in the context of the Southern California Earthquake Center Community Modeling Environment (CME). The fault geometry is taken from the 2002 USGS National Hazard Maps. The kinematic slip function is transported and scaled from published inversions for the 2002 Denali (M7.9) earthquake. The three-dimensional crustal structure is the SCEC Community Velocity model. The 600km x 300km x 80km simulation domain extends from the Ventura Basin and Tehachapi region to the north and to Mexicali and Tijuana to the south. It includes all major population centers in southern California, and is modeled at 200m resolution using a rectangular, 1.8 giganode, 3000 x 1500 x 400 mesh. The simulated duration is 200 seconds, with a temporal resolution of 0.01seconds, maximum frequency of 0.5Hz, for a total of 20,000 time steps. The simulation is planned to run at the San Diego Supercomputer Center (SDSC) on 240 processors of the IBM Power4, DataStar machine. Validation runs conducted at one sixteenth (4D) resolution have shown that this is the optimal configuration in the trade-off between computational and I/O demands. The full run will consume about 18,000 CPU.hours. Each time step produces a 21.6GByte mesh snapshot of the entire ground motion velocity vectors. A 4D wavefield containing 2,000 time steps, amounting to 43 Tbytes of data, will be stored at SDSC. Surface data will be archived for every time step for synthetic seismogram engineering analysis, totaling 1 Tbyte. The data will be registered with the SCEC Digital Library supported by the SDSC Storage Resource Broker (SRB). Data collections will be annotated with simulation metadata, which will allow data discovery operations on metadata-based queries. The binary output will be described using HDF5 headers. Each file will be fingerprinted with MD5 checksums to preserve and validate data integrity. Data access, management and data product derivation will be provided through a set of SRB APIs, including java, C, web service and data grid workflow interfaces. High resolution visualizations of the wave propagation phenomena will be produced under diverse camera views. The surface data will be analyzed online by remote web clients plotting synthetic seismograms. Data mining operations, spectral analysis and data subsetting are planned as future work. The TeraShake simulation project has provided some insights about the cyberinfrastructure needed to advance computational geoscience, which we will discuss.
NASA Astrophysics Data System (ADS)
Popota, F. D.; Aguiar, P.; España, S.; Lois, C.; Udias, J. M.; Ros, D.; Pavia, J.; Gispert, J. D.
2015-01-01
In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system’s sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system’s dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.
Popota, F D; Aguiar, P; España, S; Lois, C; Udias, J M; Ros, D; Pavia, J; Gispert, J D
2015-01-07
In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system's sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system's dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.
Resolving the Small-Scale Structure of the Circumgalactic Medium in Cosmological Simulations
NASA Astrophysics Data System (ADS)
Corlies, Lauren
2017-08-01
We propose to resolve the circumgalactic medium (CGM) of L* galaxies down to 100 Msun (250 pc) in a full cosmological simulation to examine how mixing and cooling shape the physical nature of this gas on the scales expected from observations. COS has provided the best characterization of the low-z CGM to date, revealing the extent and amount of low- and high-ions and hinting at the kinematic relations between them. Yet cosmological galaxy simulations that can reproduce the stellar properties of galaxies have all struggled to reproduce these results even qualitatively. However, while the COS data imply that the low-ion absorption is occurring on sub-kpc scales, such scales can not be traced by simulations with resolution between 1-5 kpc in the CGM. Our proposed simulations will, for the first time, reach the resolution required to resolve these structures in the outer halo of L* galaxies. Using the adaptive mesh refinement code enzo, we will experiment with the size, shape, and resolution of an enforced high refinement region extending from the disk into the CGM to identify the best configuration for probing the flows of gas throughout the CGM. Our test case has found that increasing the resolution alone can have dramatic consequences for the density, temperature, and kinematics along a line of sight. Coupling this technique with an independent feedback study already underway will help disentangle the roles of global and small scale physics in setting the physical state of the CGM. Finally, we will use the MISTY pipeline to generate realistic mock spectra for direct comparison with COS data which will be made available through MAST.
Molecular dynamics simulations of field emission from a prolate spheroidal tip
NASA Astrophysics Data System (ADS)
Torfason, Kristinn; Valfells, Agust; Manolescu, Andrei
2016-12-01
High resolution molecular dynamics simulations with full Coulomb interactions of electrons are used to investigate field emission from a prolate spheroidal tip. The space charge limited current is several times lower than the current calculated with the Fowler-Nordheim formula. The image-charge is taken into account with a spherical approximation, which is good around the top of the tip, i.e., region where the current is generated.
NASA Astrophysics Data System (ADS)
Hong, Xiaodong; Reynolds, Carolyn A.; Doyle, James D.; May, Paul; O'Neill, Larry
2017-06-01
Atmosphere-ocean interaction, particular the ocean response to strong atmospheric forcing, is a fundamental component of the Madden-Julian Oscillation (MJO). In this paper, we examine how model errors in previous Madden-Julian Oscillation (MJO) events can affect the simulation of subsequent MJO events due to increased errors that develop in the upper-ocean before the MJO initiation stage. Two fully coupled numerical simulations with 45-km and 27-km horizontal resolutions were integrated for a two-month period from November to December 2011 using the Navy's limited area Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®). There are three MJO events that occurred subsequently in early November, mid-November, and mid-December during the simulations. The 45-km simulation shows an excessive warming of the SSTs during the suppressed phase that occurs before the initiation of the second MJO event due to erroneously strong surface net heat fluxes. The simulated second MJO event stalls over the Maritime Continent which prevents the recovery of the deep mixed layer and associated barrier layer. Cross-wavelet analysis of solar radiation and SSTs reveals that the diurnal warming is absent during the second suppressed phase after the second MJO event. The mixed layer heat budget indicates that the cooling is primarily caused by horizontal advection associated with the stalling of the second MJO event and the cool SSTs fail to initiate the third MJO event. When the horizontal resolution is increased to 27-km, three MJOs are simulated and compare well with observations on multi-month timescales. The higher-resolution simulation of the second MJO event and more-realistic upper-ocean response promote the onset of the third MJO event. Simulations performed with analyzed SSTs indicate that the stalling of the second MJO in the 45-km run is a robust feature, regardless of ocean forcing, while the diurnal cycle analysis indicates that both 45-km and 27-km ocean resolutions respond realistically when provided with realistic atmospheric forcing. Thus, the problem in the 45-km simulation appears to originate in the atmosphere. Additional simulations show that while the details of the simulations are sensitive to small changes in the initial integration time, the large differences between the 45-km and 27-km runs during the suppressed phase in early December are robust.
Analyzing Transient Turbuelnce in a Stenosed Carotid Artery by Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Grinberg, Leopold; Yakhot, Alexander; Karniadakis, George
2009-11-01
High resolution 3D simulation (involving 100M degrees of freedom) were employed to study transient turbulent flow in a carotid arterial bifurcation with a stenosed internal carotid artery (ICA). In the performed simulation an intermittent (in space and time) laminar-turbulent-laminar regime was observed. The simulation reveals the mechanism of the onset of turbulent flow in the stenosed ICA where the narrowing in the artery generates a strong jet flow. Time- and space-window Proper Orthogonal Decomposition (POD) was applied to quantify the different flow regimes in the occluded artery. A simplified version of the POD analysis that utilizes 2D slices only - more appropriate in the clinical setting - was also investigated.
Can we improve streamflow simulation by using higher resolution rainfall information?
NASA Astrophysics Data System (ADS)
Lobligeois, Florent; Andréassian, Vazken; Perrin, Charles
2013-04-01
The catchment response to rainfall is the interplay between space-time variability of precipitation, catchment characteristics and antecedent hydrological conditions. Precipitation dominates the high frequency hydrological response, and its simulation is thus dependent on the way rainfall is represented. One of the characteristics which distinguishes distributed from lumped models is their ability to represent explicitly the spatial variability of precipitation and catchment characteristics. The sensitivity of runoff hydrographs to the spatial variability of forcing data has been a major concern of researchers over the last three decades. However, although the literature on the relationship between spatial rainfall and runoff response is abundant, results are contrasted and sometimes contradictory. Several studies concluded that including information on rainfall spatial distribution improves discharge simulation (e.g. Ajami et al., 2004, among others) whereas other studies showed the lack of significant improvement in simulations with better information on rainfall spatial pattern (e.g. Andréassian et al., 2004, among others). The difficulties to reach a clear consensus is mainly due to the fact that each modeling study is implemented only on a few catchments whereas the impact of the spatial distribution of rainfall on runoff is known to be catchment and event characteristics-dependent. Many studies are virtual experiments and only compare flow simulations, which makes it difficult to reach conclusions transposable to real-life case studies. Moreover, the hydrological rainfall-runoff models differ between the studies and the parameterization strategies sometimes tend to advantage the distributed approach (or the lumped one). Recently, Météo-France developed a rainfall reanalysis over the whole French territory at the 1-kilometer resolution and the hourly time step over a 10-year period combining radar data and raingauge measurements: weather radar data were corrected and adjusted with both hourly and daily raingauge data. Based on this new high resolution product, we propose a framework to evaluate the improvements in streamflow simulation by using higher resolution rainfall information. Semi-distributed modelling is performed for different spatial resolution of precipitation forcing: from lumped to semi-distributed simulations. Here we do not work on synthetic (simulated) streamflow, but with actual measurements, on a large set of 181 French catchments representing a variety of size and climate. The rainfall-runoff model is re-calibrated for each resolution of rainfall spatial distribution over a 5-year sub-period and evaluated on the complementary sub-period in validation mode. The results are analysed by catchment classes based on catchment area and for various types of rainfall events based on the spatial variability of precipitation. References Ajami, N. K., Gupta, H. V, Wagener, T. & Sorooshian, S. (2004) Calibration of a semi-distributed hydrologic model for streamflow estimation along a river system. Journal of Hydrology 298(1-4), 112-135. Andréassian, V., Oddos, A., Michel, C., Anctil, F., Perrin, C. & Loumagne, C. (2004) Impact of spatial aggregation of inputs and parameters on the efficiency of rainfall-runoff models: A theoretical study using chimera watersheds. Water Resources Research 40(5), 1-9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Fuyu; Collins, William D.; Wehner, Michael F.
High-resolution climate models have been shown to improve the statistics of tropical storms and hurricanes compared to low-resolution models. The impact of increasing horizontal resolution in the tropical storm simulation is investigated exclusively using a series of Atmospheric Global Climate Model (AGCM) runs with idealized aquaplanet steady-state boundary conditions and a fixed operational storm-tracking algorithm. The results show that increasing horizontal resolution helps to detect more hurricanes, simulate stronger extreme rainfall, and emulate better storm structures in the models. However, increasing model resolution does not necessarily produce stronger hurricanes in terms of maximum wind speed, minimum sea level pressure, andmore » mean precipitation, as the increased number of storms simulated by high-resolution models is mainly associated with weaker storms. The spatial scale at which the analyses are conducted appears to have more important control on these meteorological statistics compared to horizontal resolution of the model grid. When the simulations are analyzed on common low-resolution grids, the statistics of the hurricanes, particularly the hurricane counts, show reduced sensitivity to the horizontal grid resolution and signs of scale invariant.« less
Requirement of spatiotemporal resolution for imaging intracellular temperature distribution
NASA Astrophysics Data System (ADS)
Hiroi, Noriko; Tanimoto, Ryuichi; , Kaito, Ii; Ozeki, Mitsunori; Mashimo, Kota; Funahashi, Akira
2017-04-01
Intracellular temperature distribution is an emerging target in biology nowadays. Because thermal diffusion is rapid dynamics in comparison with molecular diffusion, we need a spatiotemporally high-resolution imaging technology to catch this phenomenon. We demonstrate that time-lapse imaging which consists of single-shot 3D volume images acquired at high-speed camera rate is desired for the imaging of intracellular thermal diffusion based on the simulation results of thermal diffusion from a nucleus to cytosol.
A Real-Time MODIS Vegetation Composite for Land Surface Models and Short-Term Forecasting
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; LaFontaine, Frank J.; Kumar, Sujay V.; Jedlovec, Gary J.
2011-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center is producing real-time, 1- km resolution Normalized Difference Vegetation Index (NDVI) gridded composites over a Continental U.S. domain. These composites are updated daily based on swath data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor aboard the polar orbiting NASA Aqua and Terra satellites, with a product time lag of about one day. A simple time-weighting algorithm is applied to the NDVI swath data that queries the previous 20 days of data to ensure a continuous grid of data populated at all pixels. The daily composites exhibited good continuity both spatially and temporally during June and July 2010. The composites also nicely depicted high greenness anomalies that resulted from significant rainfall over southwestern Texas, Mexico, and New Mexico during July due to early-season tropical cyclone activity. The SPoRT Center is in the process of computing greenness vegetation fraction (GVF) composites from the MODIS NDVI data at the same spatial and temporal resolution for use in the NASA Land Information System (LIS). The new daily GVF dataset would replace the monthly climatological GVF database (based on Advanced Very High Resolution Radiometer [AVHRR] observations from 1992-93) currently available to the Noah land surface model (LSM) in both LIS and the public version of the Weather Research and Forecasting (WRF) model. The much higher spatial resolution (1 km versus 0.15 degree) and daily updates based on real-time satellite observations have the capability to greatly improve the simulation of the surface energy budget in the Noah LSM within LIS and WRF. Once code is developed in LIS to incorporate the daily updated GVFs, the SPoRT Center will conduct simulation sensitivity experiments to quantify the impacts and improvements realized by the MODIS real-time GVF data. This presentation will describe the methodology used to develop the 1-km MODIS NDVI composites and show sample output from summer 2010, compare the MODIS GVF data to the AVHRR monthly climatology, and illustrate the sensitivity of the Noah LSM within LIS and/or the coupled LIS/WRF system to the new MODIS GVF dataset.
The effect of thermal velocities on structure formation in N-body simulations of warm dark matter
NASA Astrophysics Data System (ADS)
Leo, Matteo; Baugh, Carlton M.; Li, Baojiu; Pascoli, Silvia
2017-11-01
We investigate the impact of thermal velocities in N-body simulations of structure formation in warm dark matter models. Adopting the commonly used approach of adding thermal velocities, randomly selected from a Fermi-Dirac distribution, to the gravitationally-induced velocities of the simulation particles, we compare the matter and velocity power spectra measured from CDM and WDM simulations, in the latter case with and without thermal velocities. This prescription for adding thermal velocities introduces numerical noise into the initial conditions, which influences structure formation. At early times, the noise affects dramatically the power spectra measured from simulations with thermal velocities, with deviations of the order of ~ Script O(10) (in the matter power spectra) and of the order of ~ Script O(102) (in the velocity power spectra) compared to those extracted from simulations without thermal velocities. At late times, these effects are less pronounced with deviations of less than a few percent. Increasing the resolution of the N-body simulation shifts these discrepancies to higher wavenumbers. We also find that spurious haloes start to appear in simulations which include thermal velocities at a mass that is ~3 times larger than in simulations without thermal velocities.
Calibration of the Large Area X-Ray Proportional Counter (LAXPC) Instrument on board AstroSat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antia, H. M.; Yadav, J. S.; Chauhan, Jai Verdhan
We present the calibration and background model for the Large Area X-ray Proportional Counter (LAXPC) detectors on board AstroSat . The LAXPC instrument has three nominally identical detectors to achieve a large collecting area. These detectors are independent of each other, and in the event analysis mode they record the arrival time and energy of each photon that is detected. The detectors have a time resolution of 10 μ s and a dead-time of about 42 μ s. This makes LAXPC ideal for timing studies. The energy resolution and peak channel-to-energy mapping were obtained from calibration on the ground usingmore » radioactive sources coupled with GEANT4 simulations of the detectors. The response matrix was further refined from observations of the Crab after launch. At around 20 keV the energy resolution of the detectors is 10%–15%, while the combined effective area of the three detectors is about 6000 cm{sup 2}.« less
Huang, Shih-Ying; Savic, Dragana; Yang, Jaewon; Shrestha, Uttam; Seo, Youngho
2014-11-01
Simultaneous imaging systems combining positron emission tomography (PET) and magnetic resonance imaging (MRI) have been actively investigated. A PET/MR imaging system (GE Healthcare) comprised of a time-of-flight (TOF) PET system utilizing silicon photomultipliers (SiPMs) and 3-tesla (3T) MRI was recently installed at our institution. The small-ring (60 cm diameter) TOF PET subsystem of this PET/MRI system can generate images with higher spatial resolution compared with conventional PET systems. We have examined theoretically and experimentally the effect of uniform magnetic fields on the spatial resolution for high-energy positron emitters. Positron emitters including 18 F, 124 I, and 68 Ga were simulated in water using the Geant4 Monte Carlo toolkit in the presence of a uniform magnetic field (0, 3, and 7 Tesla). The positron annihilation position was tracked to determine the 3D spatial distribution of the 511-keV gammy ray emission. The full-width at tenth maximum (FWTM) of the positron point spread function (PSF) was determined. Experimentally, 18 F and 68 Ga line source phantoms in air and water were imaged with an investigational PET/MRI system and a PET/CT system to investigate the effect of magnetic field on the spatial resolution of PET. The full-width half maximum (FWHM) of the line spread function (LSF) from the line source was determined as the system spatial resolution. Simulations and experimental results show that the in-plane spatial resolution was slightly improved at field strength as low as 3 Tesla, especially when resolving signal from high-energy positron emitters in the air-tissue boundary.
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
Adaptive temporal refinement in injection molding
NASA Astrophysics Data System (ADS)
Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek
2018-05-01
Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is the temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to a higher simulation precision, while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used for verifying our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves for providing reliable timing measurements and the efficiency aspects of the filling simulation of complex 3D molds while applying adaptive temporal refinement.
NASA Technical Reports Server (NTRS)
Yasunari, Teppei J.; Colarco, Peter R.; Lau, William K. M.; Osada, Kazuo; Kido, Mizuka; Mahanama, Sarith P. P.; Kim, Kyu-Myong; Da Silva, Arlindo M.
2015-01-01
We compared the observed total dust deposition fluxes during precipitation (TDP) mainly at Toyama in Japan during the period January - April 2009 with results available from four NASA GEOS-5 global model experiments. The modeled results were obtained from three previous experiments and carried out in one experiment, which were all driven by assimilated meteorology and simulating aerosol distributions for the time period. We focus mainly on the observations of two distinct TDP events, which were reported in Osada et al. (2011), at Toyama, Japan, in February (Event B) and March 2009 (Event C). Although all of our GEOS-5 simulations captured aspects of the observed TDP, we found that our low horizontal spatial resolution control experiment performed generally the worst. The other three experiments were run at a higher spatial resolution, with the first differing only in that respect from the control, the second adding imposed a prescribed corrected precipitation product, and the final experiment adding as well assimilation of aerosol optical depth based on MODIS observations. During Event C, the increased horizontal resolution could increase TDP with precipitation increase. There was no significant improvement, however, due to the imposition of the corrected precipitation product. The simulation that incorporated aerosol data assimilation performed was by far the best for this event, but even so could only reproduce less than half of the observed TDP despite the significantly increased atmospheric dust mass concentrations. All three of the high spatial resolution experiments had higher simulated precipitation at Toyama than was observed and that in the lower resolution control run. During Event B, the aerosol data assimilation run did not perform appreciably better than the other higher resolution simulations, suggesting that upstream conditions (i.e., upstream cloudiness), or vertical or horizontal misplacement of the dust plume did not allow for significant improvement in the simulated aerosol distributions. Furthermore, a detailed comparison of observed hourly precipitation and surface particulate mass concentration data suggests that the observed TDP during Event B was highly dependent on short periods of weak precipitation correlated with elevated dust surface concentrations, important details possibly not captured well in a current global model.
Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool
NASA Astrophysics Data System (ADS)
Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.
2015-03-01
Matching soil grid unit resolution with polygon unit map scale is important to minimize uncertainty of regional soil organic carbon (SOC) pool simulation as their strong influences on the uncertainty. A series of soil grid units at varying cell sizes were derived from soil polygon units at the six map scales of 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), respectively, in the Tai lake region of China. Both format soil units were used for regional SOC pool simulation with DeNitrification-DeComposition (DNDC) process-based model, which runs span the time period 1982 to 2000 at the six map scales, respectively. Four indices, soil type number (STN) and area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were attributed from all these soil polygon and grid units, respectively. Subjecting to the four index values (IV) from the parent polygon units, the variation of an index value (VIV, %) from the grid units was used to assess its dataset accuracy and redundancy, which reflects uncertainty in the simulation of SOC. Optimal soil grid unit resolutions were generated and suggested for the DNDC simulation of regional SOC pool, matching with soil polygon units map scales, respectively. With the optimal raster resolution the soil grid units dataset can hold the same accuracy as its parent polygon units dataset without any redundancy, when VIV < 1% of all the four indices was assumed as criteria to the assessment. An quadratic curve regression model y = -8.0 × 10-6x2 + 0.228x + 0.211 (R2 = 0.9994, p < 0.05) was revealed, which describes the relationship between optimal soil grid unit resolution (y, km) and soil polygon unit map scale (1:x). The knowledge may serve for grid partitioning of regions focused on the investigation and simulation of SOC pool dynamics at certain map scale.
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, intelligent design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different approaches of precipitation models regarding their ability to reproduce rainfall and runoff characteristics. These include one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event based statistics like mean dry spell and wet spell duration, wet spell amount and intensity, long term means of precipitation sum and number of events, and extreme value distributions for different durations. Then they are compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show a principal suitability of all rainfall models but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
Particle Number Dependence of the N-body Simulations of Moon Formation
NASA Astrophysics Data System (ADS)
Sasaki, Takanori; Hosono, Natsuki
2018-04-01
The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited from 104 to 105. We develop an N-body simulation code on multiple Pezy-SC processors and deploy Framework for Developing Particle Simulators to deal with large number of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 107, in which 1 particle corresponds to a 10 km sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 105) and high-resolution simulations (N ≥ 106). According to this difference, angular momentum fluxes, which determine the accretion timescale of the Moon also depend on the numerical resolution.
NASA Astrophysics Data System (ADS)
Soares, P. M. M.; Cardoso, R. M.
2017-12-01
Regional climate models (RCM) are used with increasing resolutions pursuing to represent in an improved way regional to local scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacing approaching the convective-permitting regimes are representative examples. The climate runs are computationally very demanding and do not always show improvements. These depend on the region, variable and object of study. The gains or losses associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different resolution RCM simulations, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer resolution simulations, in comparison to its forcing data or coarser resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between diverse resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAVs metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are here presented. The EURO-CORDEX simulations at both resolutions (0.44o,0.11o) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons, for precipitation. When both RCM resolutions are directly compared the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. Alps and Iberia. When looking at the extreme precipitation PDF tail, the higher resolution improvement is generally greater than the low resolution for seasons and regions. For temperature, the added value is smaller. AcknowledgmentsThe authors wish to acknowledge SOLAR (PTDC/GEOMET/7078/2014) and FCT UID/GEO/50019/ 2013 (Instituto Dom Luiz) projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vickery, A.; Niels Bohr Institute, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen; Deen, P. P.
In recent years the use of repetition rate multiplication (RRM) on direct geometry neutron spectrometers has been established and is the common mode of operation on a growing number of instruments. However, the chopper configurations are not ideally optimised for RRM with a resultant 100 fold flux difference across a broad wavelength band. This paper presents chopper configurations that will produce a relative constant (RC) energy resolution and a relative variable (RV) energy resolution for optimised use of RRM. The RC configuration provides an almost uniform ΔE/E for all incident wavelengths and enables an efficient use of time as themore » entire dynamic range is probed with equivalent statistics, ideal for single shot measurements of transient phenomena. The RV energy configuration provides an almost uniform opening time at the sample for all incident wavelengths with three orders of magnitude in time resolution probed for a single European Spallation Source (ESS) period, which is ideal to probe complex relaxational behaviour. These two chopper configurations have been simulated for the Versatile Optimal Resolution direct geometry spectrometer, VOR, that will be built at ESS.« less
NASA Astrophysics Data System (ADS)
Pankatz, Klaus; Kerkweg, Astrid
2015-04-01
The work presented is part of the joint project "DecReg" ("Regional decadal predictability") which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on a global and regional scale. In MiKlip, one big question is if regional climate modeling shows "added value", i.e. to evaluate, if regional climate models (RCM) produce better results than the driving models. However, the scope of this study is to look more closely at the setup specific details of regional climate modeling. As regional models only simulate a small domain, they have to inherit information about the state of the atmosphere at their lateral boundaries from external data sets. There are many unresolved questions concerning the setup of lateral boundary conditions (LBC). External data sets come from global models or from global reanalysis data-sets. A temporal resolution of six hours is common for this kind of data. This is mainly due to the fact, that storage space is a limiting factor, especially for climate simulations. However, theoretically, the coupling frequency could be as high as the time step of the driving model. Meanwhile, it is unclear if a more frequent update of the LBCs has a significant effect on the climate in the domain of the RCM. The first study examines how the RCM reacts to a higher update frequency. The study is based on a 30 year time slice experiment for three update frequencies of the LBC, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small deviations, some statistically significant though, of 2m temperature, sea level pressure and precipitation. The second part of the first study assesses parameters linked to cyclone activity, which is affected by the LBC update frequency. Differences in track density and strength are found when comparing the simulations. Theoretically, regional down-scaling should act like a magnifying glass. It should reveal details on small scales which a global model cannot resolve, but it should not affect the large scale flow. As the development of the small scale features takes some time, it is important that the air stays long enough within the regional domain. The spin-up time of the small scale features is, of course, dependent on the resolution of the LBC and the resolution of the RCM. The second study examines the quality of decadal hind-casts over Europe of the decade 2001-2010 when the horizontal resolution of the driving model, namely 2.8°, 1.8°, 1.4°, 1.1°, from which the LBC are calculated, is altered. The study shows, that a smaller resolution gap between LBC resolution and RCM resolution might be beneficial.
Fast Plasma Instrument for MMS: Simulation Results
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.
2008-01-01
Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers. Each analyzer has a 6 deg. x 11.25 deg. Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis using as a seed re-processed Cluster/PEACE electron measurements. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements, with the results obtained by the Cluster/PEACE electron spectrometers. The data analyzed was selected because it represented a potential reconnection event as currently published.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qing; Leung, Lai-Yung R.; Rauscher, Sara
This study investigates the resolution dependency of precipitation extremes in an aqua-planet framework. Strong resolution dependency of precipitation extremes is seen over both tropics and extra-tropics, and the magnitude of this dependency also varies with dynamical cores. Moisture budget analyses based on aqua-planet simulations with the Community Atmosphere Model (CAM) using the Model for Prediction Across Scales (MPAS) and High Order Method Modeling Environment (HOMME) dynamical cores but the same physics parameterizations suggest that during precipitation extremes moisture supply for surface precipitation is mainly derived from advective moisture convergence. The resolution dependency of precipitation extremes mainly originates from advective moisturemore » transport in the vertical direction. At most vertical levels over the tropics and in the lower atmosphere over the subtropics, the vertical eddy transport of mean moisture field dominates the contribution to precipitation extremes and its resolution dependency. Over the subtropics, the source of moisture, its associated energy, and the resolution dependency during extremes are dominated by eddy transport of eddies moisture at the mid- and upper-troposphere. With both MPAS and HOMME dynamical cores, the resolution dependency of the vertical advective moisture convergence is mainly explained by dynamical changes (related to vertical velocity or omega), although the vertical gradients of moisture act like averaging kernels to determine the sensitivity of the overall resolution dependency to the changes in omega at different vertical levels. The natural reduction of variability with coarser resolution, represented by areal data averaging (aggregation) effect, largely explains the resolution dependency in omega. The thermodynamic changes, which likely result from non-linear feedback in response to the large dynamical changes, are small compared to the overall changes in dynamics (omega). However, after excluding the data aggregation effect in omega, thermodynamic changes become relatively significant in offsetting the effect of dynamics leading to reduce differences between the simulated and aggregated results. Compared to MPAS, the simulated stronger vertical motion with HOMME also results in larger resolution dependency. Compared to the simulation at fine resolution, the vertical motion during extremes is insufficiently resolved/parameterized at the coarser resolution even after accounting for the natural reduction in variability with coarser resolution, and this is more distinct in the simulation with HOMME. To reduce uncertainties in simulated precipitation extremes, future development in cloud parameterizations must address their sensitivity to spatial resolution as well as dynamical cores.« less
NASA Astrophysics Data System (ADS)
Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.
2017-12-01
Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of and data from the Animas River watershed, an alpine mountainous area in Colorado, USA, using an unstructured distributed physically based hydrological model developed for a parallel computing environment, ADHydro. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolution resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin-scale and distributed across the model mesh.
Weather extremes in very large, high-resolution ensembles: the weatherathome experiment
NASA Astrophysics Data System (ADS)
Allen, M. R.; Rosier, S.; Massey, N.; Rye, C.; Bowery, A.; Miller, J.; Otto, F.; Jones, R.; Wilson, S.; Mote, P.; Stone, D. A.; Yamazaki, Y. H.; Carrington, D.
2011-12-01
Resolution and ensemble size are often seen as alternatives in climate modelling. Models with sufficient resolution to simulate many classes of extreme weather cannot normally be run often enough to assess the statistics of rare events, still less how these statistics may be changing. As a result, assessments of the impact of external forcing on regional climate extremes must be based either on statistical downscaling from relatively coarse-resolution models, or statistical extrapolation from 10-year to 100-year events. Under the weatherathome experiment, part of the climateprediction.net initiative, we have compiled the Met Office Regional Climate Model HadRM3P to run on personal computer volunteered by the general public at 25 and 50km resolution, embedded within the HadAM3P global atmosphere model. With a global network of about 50,000 volunteers, this allows us to run time-slice ensembles of essentially unlimited size, exploring the statistics of extreme weather under a range of scenarios for surface forcing and atmospheric composition, allowing for uncertainty in both boundary conditions and model parameters. Current experiments, developed with the support of Microsoft Research, focus on three regions, the Western USA, Europe and Southern Africa. We initially simulate the period 1959-2010 to establish which variables are realistically simulated by the model and on what scales. Our next experiments are focussing on the Event Attribution problem, exploring how the probability of various types of extreme weather would have been different over the recent past in a world unaffected by human influence, following the design of Pall et al (2011), but extended to a longer period and higher spatial resolution. We will present the first results of the unique, global, participatory experiment and discuss the implications for the attribution of recent weather events to anthropogenic influence on climate.
Unstructured mesh adaptivity for urban flooding modelling
NASA Astrophysics Data System (ADS)
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, the high-resolution meshes around the buildings and steep regions are placed when the flooding water reaches these regions. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and then results have been compared with those published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should possibly take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. However, trying to summarize, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
NASA Astrophysics Data System (ADS)
BéRanger, Karine; Drillet, Yann; Houssais, Marie-NoëLle; Testor, Pierre; Bourdallé-Badie, Romain; Alhammoud, Bahjat; Bozec, Alexandra; Mortier, Laurent; Bouruet-Aubertot, Pascale; CréPon, Michel
2010-12-01
The impact of the atmospheric forcing on the winter ocean convection in the Mediterranean Sea was studied with a high-resolution ocean general circulation model. The major areas of focus are the Levantine basin, the Aegean-Cretan Sea, the Adriatic Sea, and the Gulf of Lion. Two companion simulations differing by the horizontal resolution of the atmospheric forcing were compared. The first simulation (MED16-ERA40) was forced by air-sea fields from ERA40, which is the ECMWF reanalysis. The second simulation (MED16-ECMWF) was forced by the ECMWF-analyzed surface fields that have a horizontal resolution twice as high as those of ERA40. The analysis of the standard deviations of the atmospheric fields shows that increasing the resolution of the atmospheric forcing leads in all regions to a better channeling of the winds by mountains and to the generation of atmospheric mesoscale patterns. Comparing the companion ocean simulation results with available observations in the Adriatic Sea and in the Gulf of Lion shows that MED16-ECMWF is more realistic than MED16-ERA40. In the eastern Mediterranean, although deep water formation occurs in the two experiments, the depth reached by the convection is deeper in MED16-ECMWF. In the Gulf of Lion, deep water formation occurs only in MED16-ECMWF. This larger sensitivity of the western Mediterranean convection to the forcing resolution is investigated by running a set of sensitivity experiments to analyze the impact of different time-space resolutions of the forcing on the intense winter convection event in winter 1998-1999. The sensitivity to the forcing appears to be mainly related to the effect of wind channeling by the land orography, which can only be reproduced in atmospheric models of sufficient resolution. Thus, well-positioned patterns of enhanced wind stress and ocean surface heat loss are able to maintain a vigorous gyre circulation favoring efficient preconditioning of the area at the beginning of winter and to drive realistic buoyancy loss and mixing responsible for strong convection at the end of winter.
NASA Technical Reports Server (NTRS)
Ott, L.; Putman, B.; Collatz, J.; Gregg, W.
2012-01-01
Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers while NASA s future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions are typically hundreds of kilometers wide and often cover areas that include combinations of land, ocean and coastal areas and areas of significant topographic, land cover, and population density variations. To improve understanding of scales of atmospheric CO2 variability and representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA s GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase in resolution over typical global simulations of atmospheric composition allowing new insight into small scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA s Carbon Monitoring System Flux Pilot Project at half degree resolution that have been down-scaled to 10-km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small scale features. Additionally, model output are sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement concepts to create realistic pseudo-datasets. Pseudo-data are averaged over coarse model grid cell areas to better understand the ability of measurements to characterize CO2 distributions and spatial gradients on both short (daily to weekly) and long (monthly to seasonal) time scales
Investigation of OPET Performance Using GATE, a Geant4-Based Simulation Software.
Rannou, Fernando R; Kohli, Vandana; Prout, David L; Chatziioannou, Arion F
2004-10-01
A combined optical positron emission tomography (OPET) system is capable of both optical and PET imaging in the same setting, and it can provide information/interpretation not possible in single-mode imaging. The scintillator array here serves the dual function of coupling the optical signal from bioluminescence/fluorescence to the photodetector and also of channeling optical scintillations from the gamma rays. We report simulation results of the PET part of OPET using GATE, a Geant4 simulation package. The purpose of this investigation is the definition of the geometric parameters of the OPET tomograph. OPET is composed of six detector blocks arranged in a hexagonal ring-shaped pattern with an inner radius of 15.6 mm. Each detector consists of a two-dimensional array of 8 × 8 scintillator crystals each measuring 2 × 2 × 10 mm(3). Monte Carlo simulations were performed using the GATE software to measure absolute sensitivity, depth of interaction, and spatial resolution for two ring configurations, with and without gantry rotations, two crystal materials, and several crystal lengths. Images were reconstructed with filtered backprojection after angular interleaving and transverse one-dimensional interpolation of the sinogram. We report absolute sensitivities nearly seven times that of the prototype microPET at the center of field of view and 2.0 mm tangential and 2.3 mm radial resolutions with gantry rotations up to an 8.0 mm radial offset. These performance parameters indicate that the imaging spatial resolution and sensitivity of the OPET system will be suitable for high-resolution and high-sensitivity small-animal PET imaging.
Detecting population-environmental interactions with mismatched time series data.
Ferguson, Jake M; Reichert, Brian E; Fletcher, Robert J; Jager, Henriëtte I
2017-11-01
Time series analysis is an essential method for decomposing the influences of density and exogenous factors such as weather and climate on population regulation. However, there has been little work focused on understanding how well commonly collected data can reconstruct the effects of environmental factors on population dynamics. We show that, analogous to similar scale issues in spatial data analysis, coarsely sampled temporal data can fail to detect covariate effects when interactions occur on timescales that are fast relative to the survey period. We propose a method for modeling mismatched time series data that couples high-resolution environmental data to low-resolution abundance data. We illustrate our approach with simulations and by applying it to Florida's southern Snail kite population. Our simulation results show that our method can reliably detect linear environmental effects and that detecting nonlinear effects requires high-resolution covariate data even when the population turnover rate is slow. In the Snail kite analysis, our approach performed among the best in a suite of previously used environmental covariates explaining Snail kite dynamics and was able to detect a potential phenological shift in the environmental dependence of Snail kites. Our work provides a statistical framework for reliably detecting population-environment interactions from coarsely surveyed time series. An important implication of this work is that the low predictability of animal population growth by weather variables found in previous studies may be due, in part, to how these data are utilized as covariates. © 2017 by the Ecological Society of America.
Detecting population–environmental interactions with mismatched time series data
Ferguson, Jake M.; Reichert, Brian E.; Fletcher, Robert J.; Jager, Henriëtte I.
2017-01-01
Time series analysis is an essential method for decomposing the influences of density and exogenous factors such as weather and climate on population regulation. However, there has been little work focused on understanding how well commonly collected data can reconstruct the effects of environmental factors on population dynamics. We show that, analogous to similar scale issues in spatial data analysis, coarsely sampled temporal data can fail to detect covariate effects when interactions occur on timescales that are fast relative to the survey period. We propose a method for modeling mismatched time series data that couples high-resolution environmental data to low-resolution abundance data. We illustrate our approach with simulations and by applying it to Florida’s southern Snail kite population. Our simulation results show that our method can reliably detect linear environmental effects and that detecting nonlinear effects requires high-resolution covariate data even when the population turnover rate is slow. In the Snail kite analysis, our approach performed among the best in a suite of previously used environmental covariates explaining Snail kite dynamics and was able to detect a potential phenological shift in the environmental dependence of Snail kites. Our work provides a statistical framework for reliably detecting population–environment interactions from coarsely surveyed time series. An important implication of this work is that the low predictability of animal population growth by weather variables found in previous studies may be due, in part, to how these data are utilized as covariates. PMID:28759123
NASA Astrophysics Data System (ADS)
Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian
2016-11-01
Impact-focused studies of extreme weather require coupling of accurate simulations of weather and climate systems and impact-measuring hydrological models which themselves demand larger computer resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, which is aimed at utilizing and maximizing HPC power resources, to support the study on extreme weather impact due to climate change. Here, four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with the conventional gauge- or radar-driving forecast, but with the added value of much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to that of radar rainfall products in the first 2-3 days of tested flood events, but the uncertainties particularly increased as the forecast horizon goes beyond 3 days. This study takes a step forward to identify how the online mode approach can be used, where both numerical weather prediction and the hydrological model are executed, either simultaneously or on the same hardware infrastructures, so that more effective interaction and communication can be achieved and maintained between the models. But the concluding comments are that running the entire system on a reasonably powerful HPC platform does not yet allow for real-time simulations, even without the most complex and demanding data simulation part.
Influence of imaging resolution on color fidelity in digital archiving.
Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari
2015-11-01
Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
A Specialized Multi-Transmit Head Coil for High Resolution fMRI of the Human Visual Cortex at 7T
Sengupta, Shubharthi; Roebroeck, Alard; Kemper, Valentin G.; Poser, Benedikt A.; Zimmermann, Jan; Goebel, Rainer; Adriany, Gregor
2016-01-01
Purpose To design, construct and validate radiofrequency (RF) transmit and receive phased array coils for high-resolution visual cortex imaging at 7 Tesla. Methods A 4 channel transmit and 16 channel receive array was constructed on a conformal polycarbonate former. Transmit field efficiency and homogeneity were simulated and validated, along with the Specific Absorption Rate, using B1+ mapping techniques and electromagnetic simulations. Receiver signal-to-noise ratio (SNR), temporal SNR (tSNR) across EPI time series, g-factors for accelerated imaging and noise correlations were evaluated and compared with a commercial 32 channel whole head coil. The performance of the coil was further evaluated with human subjects through functional MRI (fMRI) studies at standard and submillimeter resolutions of upto 0.8mm isotropic. Results The transmit and receive sections were characterized using bench tests and showed good interelement decoupling, preamplifier decoupling and sample loading. SNR for the 16 channel coil was ∼ 1.5 times that of the commercial coil in the human occipital lobe, and showed better g-factor values for accelerated imaging. fMRI tests conducted showed better response to Blood Oxygen Level Dependent (BOLD) activation, at resolutions of 1.2mm and 0.8mm isotropic. Conclusion The 4 channel phased array transmit coil provides homogeneous excitation across the visual cortex, which, in combination with the dual row 16 channel receive array, makes for a valuable research tool for high resolution anatomical and functional imaging of the visual cortex at 7T. PMID:27911950
NASA Astrophysics Data System (ADS)
Nardo, A.; Li, B.; Teunissen, P. J. G.
2016-01-01
Integer Ambiguity Resolution (IAR) is the key to fast and precise GNSS positioning. The proper diagnostic metric for successful IAR is provided by the ambiguity success rate being the probability of correct integer estimation. In this contribution we analyse the performance of different GPS+Galileo models in terms of number of epochs needed to reach a pre-determined success rate, for various ground and space-based applications. The simulation-based controlled model environment enables us to gain insight into the factors contributing to the ambiguity resolution strength of the different GPS+Galileo models. Different scenarios of modernized GPS+Galileo are studied, encompassing the long baseline ground case as well as the medium dynamics case (airplane) and the space-based Low Earth Orbiter (LEO) case. In our analyses of these models the capabilities of partial ambiguity resolution (PAR) are demonstrated and compared to the limitations of full ambiguity resolution (FAR). The results show that PAR is generally a more efficient way than FAR to reduce the time needed to achieve centimetre-level positioning precision. For long single baselines, PAR can achieve time reductions of fifty percent to achieve such precision levels, while for multiple baselines it even becomes more effective, reaching reductions up to eighty percent for four station networks. For a LEO, the rapidly changing observation geometry does not even allow FAR, while PAR is then still possible for both dual- and triple-frequency scenarios. With the triple-frequency GPS+Galileo model the availability of precise positioning improves by fifteen percent with respect to the dual-frequency scenario.
Air Quality Science and Regulatory Efforts Require Geostationary Satellite Measurements
NASA Technical Reports Server (NTRS)
Pickering, Kenneth E.; Allen, D. J.; Stehr, J. W.
2006-01-01
Air quality scientists and regulatory agencies would benefit from the high spatial and temporal resolution trace gas and aerosol data that could be provided by instruments on a geostationary platform. More detailed time-resolved data from a geostationary platform could be used in tracking regional transport and in evaluating mesoscale air quality model performance in terms of photochemical evolution throughout the day. The diurnal cycle of photochemical pollutants is currently missing from the data provided by the current generation of atmospheric chemistry satellites which provide only one measurement per day. Often peak surface ozone mixing ratios are reached much earlier in the day during major regional pollution episodes than during local episodes due to downward mixing of ozone that had been transported above the boundary layer overnight. The regional air quality models often do not simulate this downward mixing well enough and underestimate surface ozone in regional episodes. Having high time-resolution geostationary data will make it possible to determine the magnitude of this lower-and mid-tropospheric transport that contributes to peak eight-hour average ozone and 24-hour average PM2.5 concentrations. We will show ozone and PM(sub 2.5) episodes from the CMAQ model and suggest ways in which geostationary satellite data would improve air quality forecasting. Current regulatory modeling is typically being performed at 12 km horizontal resolution. State and regional air quality regulators in regions with complex topography and/or land-sea breezes are anxious to move to 4-km or finer resolution simulations. Geostationary data at these or finer resolutions will be useful in evaluating such models.
Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks
Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav
2017-01-01
Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach where co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
Capturing, using, and managing quality assurance knowledge for shuttle post-MECO flight design
NASA Technical Reports Server (NTRS)
Peters, H. L.; Fussell, L. R.; Goodwin, M. A.; Schultz, Roger D.
1991-01-01
Ascent initialization values used by the Shuttle's onboard computer for nominal and abort mission scenarios are verified by a six degrees of freedom computer simulation. The procedure that the Ascent Post Main Engine Cutoff (Post-MECO) group uses to perform quality assurance (QA) of the simulation is time consuming. Also, the QA data, checklists and associated rationale, though known by the group members, is not sufficiently documented, hindering transfer of knowledge and problem resolution. A new QA procedure which retains the current high level of integrity while reducing the time required to perform QA is needed to support the increasing Shuttle flight rate. Documenting the knowledge is also needed to increase its availability for training and problem resolution. To meet these needs, a knowledge capture process, embedded into the group activities, was initiated to verify the existing QA checks, define new ones, and document all rationale. The resulting checks were automated in a conventional software program to achieve the desired standardization, integrity, and time reduction. A prototype electronic knowledge base was developed with Macintosh's HyperCard to serve as a knowledge capture tool and data storage.
Wacker, M; Witte, H
2013-01-01
This review outlines the methodological fundamentals of the most frequently used non-parametric time-frequency analysis techniques in biomedicine and their main properties, as well as providing decision aids concerning their applications. The short-term Fourier transform (STFT), the Gabor transform (GT), the S-transform (ST), the continuous Morlet wavelet transform (CMWT), and the Hilbert transform (HT) are introduced as linear transforms by using a unified concept of the time-frequency representation which is based on a standardized analytic signal. The Wigner-Ville distribution (WVD) serves as an example of the 'quadratic transforms' class. The combination of WVD and GT with the matching pursuit (MP) decomposition and that of the HT with the empirical mode decomposition (EMD) are explained; these belong to the class of signal-adaptive approaches. Similarities between linear transforms are demonstrated and differences with regard to the time-frequency resolution and interference (cross) terms are presented in detail. By means of simulated signals the effects of different time-frequency resolutions of the GT, CMWT, and WVD as well as the resolution-related properties of the interference (cross) terms are shown. The method-inherent drawbacks and their consequences for the application of the time-frequency techniques are demonstrated by instantaneous amplitude, frequency and phase measures and related time-frequency representations (spectrogram, scalogram, time-frequency distribution, phase-locking maps) of measured magnetoencephalographic (MEG) signals. The appropriate selection of a method and its parameter settings will ensure readability of the time-frequency representations and reliability of results. When the time-frequency characteristics of a signal strongly correspond with the time-frequency resolution of the analysis then a method may be considered 'optimal'. The MP-based signal-adaptive approaches are preferred as these provide an appropriate time-frequency resolution for all frequencies while simultaneously reducing interference (cross) terms.
Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granroth, Garrett E; Chen, Meili; Kohl, James Arthur
2007-01-01
Simulation of an inelastic scattering experiment, with a sample and a large pixilated detector, usually requires days of time because of finite processor speeds. We report simulations on an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~ 144,000 detector pixels covering 1.6 Sr of solid angle. The full spectrometer, including a 1-D dispersivemore » sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for and results from these simulations. In addition, limitations of and proposed improvements to current analysis software will be discussed.« less
Effective description of a 3D object for photon transportation in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Suganuma, R.; Ogawa, K.
2000-06-01
Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in the simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method became about 50% of that with the VD method and about 70% of that with the octree method for a high resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with the VB and octree methods.
Combining Coarse-Grained Protein Models with Replica-Exchange All-Atom Molecular Dynamics
Wabik, Jacek; Kmiecik, Sebastian; Gront, Dominik; Kouza, Maksim; Koliński, Andrzej
2013-01-01
We describe a combination of all-atom simulations with CABS, a well-established coarse-grained protein modeling tool, into a single multiscale protocol. The simulation method has been tested on the C-terminal beta hairpin of protein G, a model system of protein folding. After reconstructing atomistic details, conformations derived from the CABS simulation were subjected to replica-exchange molecular dynamics simulations with OPLS-AA and AMBER99sb force fields in explicit solvent. Such a combination accelerates system convergence several times in comparison with all-atom simulations starting from the extended chain conformation, demonstrated by the analysis of melting curves, the number of native-like conformations as a function of time and secondary structure propagation. The results strongly suggest that the proposed multiscale method could be an efficient and accurate tool for high-resolution studies of protein folding dynamics in larger systems. PMID:23665897
Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit
NASA Astrophysics Data System (ADS)
Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi
2017-02-01
In this study, we aimed to develop a GATE model for the simulation of Ray-Scan 64 PET scanner and model its performance characteristics. A detailed implementation of system geometry and physical process were included in the simulation model. Then we modeled the performance characteristics of Ray-Scan 64 PET system for the first time, based on National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols and validated the model against experimental measurement, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead time module was investigated to simulate the counting rate performance. Overall results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model to evaluate major performance of Ray-Scan 64 PET system. It provided a useful tool for a wide range of research applications.
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60.more » Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.« less
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.; ...
2012-05-29
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60.more » Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.« less
NASA Astrophysics Data System (ADS)
Lee, Y. H.; Min, K. H.
2017-12-01
We investigated the ability of high-resolution numerical weather prediction (NWP) model (nested grid spacing at 500 m) in simulating convective precipitation event over the Seoul metropolitan area on 16 August 2015. Intense rainfall occurred from 0930 UTC to 1030 UTC and subsequent trailing precipitation lasted until 1400 UTC. The synoptic condition for the convective event was characterized by a large value of convective available potential energy (CAPE) at the outer edge of a meso-high pressure system. Observational analysis showed that triggering mechanism for convective rainfall was provided by the convergence of northeasterly wind which was driven by a cold pool in the northeastern Kyonggi province. The cold pool formed after heavy rain occurred in northeastern Kyonggi province at 0500UTC. Several experiments were performed in order to evaluate the sensitivity of different initial conditions (IC12, IC18, IC00, IC06) and the impact of data assimilation (IC06A) on simulating the convective event. The quantitative precipitation forecasts (QPF) appeared to vary widely among the experiments, depending on the timing of ICs that were chosen. QPF amount was underestimated in all experiments when data assimilation was not performed. Among the four experiments, QPF amounts and locations were better simulated in the 1200 UTC 15 August (IC12) run due to large values of CAPE in late afternoon and the presence of low-level convergence zone in the metropolitan area. Although 0600 UTC 16 August (IC06) run simulated the largest CAPE in late afternoon, the location and amount of heavy rainfall were significantly different from observations. IC06 did not simulate the convergence of low-level wind associated with the mesoscale cold pool. However, when assimilation of surface observations and radar data at 0600 UTC was performed (IC06A), the simulation reproduced the location and amount of rainfall reasonably well, indicating that high-resolution NWP model with data assimilation can predict the local convective precipitation event with a short-life time (1 3 hours) effectively within 6 hours.
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, as well as the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour duration are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as, the future time period for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results of the 6-hour, 10-year return level adjusted with the simple change factor method using four climate model simulations of two different spatial resolutions show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
SummaryA major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
NASA Technical Reports Server (NTRS)
Ford, J. P.
1982-01-01
A survey conducted to evaluate user preference for resolution versus speckle relative to the geologic interpretability of spaceborne radar images is discussed. Thirteen different resolution/looks combinations are simulated from Seasat synthetic-aperture radar data of each of three test sites. The SAR images were distributed with questionnaires for analysis to 85 earth scientists. The relative discriminability of geologic targets at each test site for each simulation of resolution and speckle on the images is determined on the basis of a survey of the evaluations. A large majority of the analysts respond that for most targets a two-look image at the highest simulated resolution is best. For a constant data rate, a higher resolution is more important for target discrimination than a higher number of looks. It is noted that sand dunes require more looks than other geologic targets. At all resolutions, multiple-look images are preferred over the corresponding single-look image. In general, the number of multiple looks that is optimal for discriminating geologic targets is inversely related to the simulated resolution.
TDC Array Tradeoffs in Current and Upcoming Digital SiPM Detectors for Time-of-Flight PET
NASA Astrophysics Data System (ADS)
Tétrault, Marc-André; Therrien, Audrey Corbeil; Lemaire, William; Fontaine, Réjean; Pratte, Jean-François
2017-03-01
Radiation detection used in positron emission tomography (PET) exploits the timing information to remove background noise and refine position measurement through time-of-flight information. Fine time resolution in the order of 10 ps full-width at half-maximum (FWHM) would not only improve contrast in the image, but would also enable direct image reconstruction without iterative or back-projected algorithms. Currently, PET experimental setups based on silicon photomultipliers (SiPMs) reach 73 ps FWHM, where the scintillation process plays the larger role in spreading the timing resolution. This will change with the optimization of faster light emission mechanisms (prompt photons), where readout optoelectronics will once more have a noticeable contribution to the timing resolution limit. In addition to reducing electronic jitter as much as possible, other aspects of the design space must also explored, especially for digital SiPMs. Unlike traditional SiPMs, digital SiPMs can integrate circuits like time-to-digital converters (TDCs) directly with individual or groups of light sensing cells. Designers should consider the number of TDCs to integrate, the area they occupy, their power consumption, their resolution, and the impact of signal processing algorithms and find a compromise with the figure of merit and the coincidence timing resolution (CTR). This paper presents a parametric simulation flow for digital SiPM microsystems that evaluates CTR based on these aspects and on the best linear unbiased estimator (BLUE) in order to guide their design for present and future PET systems. For a small 1.1 × 1.1 × 3.0 mm3 LYSO crystal, the simulations indicate that for a low jitter digital SiPM microsystem with 18.2% photon detection efficiency, fewer than four timestamps with any multi-TDC configuration scheme nearly obtain the optimal CTR with BLUE (just below 100 ps FWHM), but with limited 5% improvement over only using the first observed photon. On the other hand, if a similar crystal but with 2.5% prompt photon fraction is considered, BLUE provides an improvement between 80% and 200% (depending on electronic jitter) over using only the first observed photon. In this case, a few tens of timestamps are required, yielding very different design guidelines than for standard LYSO scintillators.
NASA Astrophysics Data System (ADS)
Fountoukis, Christos; Megaritis, Athanasios G.; Skyllakou, Ksakousti; Charalampidis, Panagiotis E.; Denier van der Gon, Hugo A. C.; Crippa, Monica; Prévôt, André S. H.; Fachinger, Friederike; Wiedensohler, Alfred; Pilinis, Christodoulos; Pandis, Spyros N.
2016-03-01
We use a three-dimensional regional chemical transport model (PMCAMx) with high grid resolution and high-resolution emissions (4 × 4 km2) over the Paris greater area to simulate the formation of carbonaceous aerosol during a summer (July 2009) and a winter (January/February 2010) period as part of the MEGAPOLI (megacities: emissions, urban, regional, and global atmospheric pollution and climate effects, and Integrated tools for assessment and mitigation) campaigns. Model predictions of carbonaceous aerosol are compared against Aerodyne aerosol mass spectrometer and black carbon (BC) high time resolution measurements from three ground sites. PMCAMx predicts BC concentrations reasonably well reproducing the majority (70 %) of the hourly data within a factor of two during both periods. The agreement for the summertime secondary organic aerosol (OA) concentrations is also encouraging (mean bias = 0.1 µg m-3) during a photochemically intense period. The model tends to underpredict the summertime primary OA concentrations in the Paris greater area (by approximately 0.8 µg m-3) mainly due to missing primary OA emissions from cooking activities. The total cooking emissions are estimated to be approximately 80 mg d-1 per capita and have a distinct diurnal profile in which 50 % of the daily cooking OA is emitted during lunch time (12:00-14:00 LT) and 20 % during dinner time (20:00-22:00 LT). Results also show a large underestimation of secondary OA in the Paris greater area during wintertime (mean bias = -2.3 µg m-3) pointing towards a secondary OA formation process during low photochemical activity periods that is not simulated in the model.
NASA Astrophysics Data System (ADS)
Fountoukis, C.; Megaritis, A. G.; Skyllakou, K.; Charalampidis, P. E.; Denier van der Gon, H. A. C.; Crippa, M.; Prévôt, A. S. H.; Freutel, F.; Wiedensohler, A.; Pilinis, C.; Pandis, S. N.
2015-09-01
We use a three dimensional regional chemical transport model (PMCAMx) with high grid resolution and high resolution emissions (4 km × 4 km) over the Paris greater area to simulate the formation of carbonaceous aerosol during a summer (July 2009) and a winter (January/February 2010) period as part of the MEGAPOLI (Megacities: Emissions, urban, regional, and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation) campaigns. Model predictions of carbonaceous aerosol are compared against Aerodyne aerosol mass spectrometer and black carbon (BC) high time resolution measurements from three ground sites. PMCAMx predicts BC concentrations reasonably well reproducing the majority (70 %) of the hourly data within a factor of two during both periods. The agreement for the summertime secondary organic aerosol (OA) concentrations is also encouraging (mean bias = 0.1 μg m-3) during a photochemically intense period. The model tends to underpredict the summertime primary OA concentrations in the Paris greater area (by approximately 0.8 μg m-3) mainly due to missing primary OA emissions from cooking activities. The total cooking emissions are estimated to be approximately 80 mg d-1 per capita and have a distinct diurnal profile in which 50 % of the daily cooking OA is emitted during lunch time (12:00-14:00 LT) and 20 % during dinner time (20:00-22:00 LT). Results also show a large underestimation of secondary OA in the Paris greater area during wintertime (mean bias = -2.3 μg m-3) pointing towards a secondary OA formation process during low photochemical activity periods that is not simulated in the model.
A priori motion models for four-dimensional reconstruction in gated cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalush, D.S.; Tsui, B.M.W.; Cui, Lin
1996-12-31
We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these {open_quotes}most likely{close_quotes} motion vectors.more » To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.« less
NASA Astrophysics Data System (ADS)
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) are evolving rapidly over the last few decades, their extensive applications are still challenged by the computational expenses. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack schemes exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full-version outperforms the semi-version in terms of simulation accuracy when a same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model was assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one day and a half by the original DHSVM model) without noticeable sacrifice of the accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PHDMs for watershed routing to either significantly improve their computational efficiency or to make the kinematic wave routing for high resolution modeling computational feasible.
Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic
NASA Astrophysics Data System (ADS)
Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie
2018-02-01
As one of the typical wide band-gap semiconductor materials, the CdZnTe material has high detection efficiency and excellent energy resolution for the hard X-ray and the Gamma ray. The generated signal of the CdZnTe detector needs to be transformed to the pseudo-Gaussian pulse with a small impulse-width to remove noise and improve the energy resolution by the following nuclear spectrometry data acquisition system. In this paper, the multi-stage pseudo-Gaussian shaping-filter has been investigated based on the nuclear electronic principle. The optimized circuit parameters were also obtained based on the analysis of the characteristics of the pseudo-Gaussian shaping-filter in our following simulations. Based on the simulation results, the falling-time of the output pulse was decreased and faster response time can be obtained with decreasing shaping-time τs-k. And the undershoot was also removed when the ratio of input resistors was set to 1 to 2.5. Moreover, a two stage sallen-key Gaussian shaping-filter was designed and fabricated by using a low-noise voltage feedback operation amplifier LMH6628. A detection experiment platform had been built by using the precise pulse generator CAKE831 as the imitated radiation pulse which was equivalent signal of the semiconductor CdZnTe detector. Experiment results show that the output pulse of the two stage pseudo-Gaussian shaping filter has minimum 200ns pulse width (FWHM), and the output pulse of each stage was well consistent with the simulation results. Based on the performance in our experiment, this multi-stage pseudo-Gaussian shaping-filter can reduce the event-lost caused by pile-up in the CdZnTe semiconductor detector and improve the energy resolution effectively.
NASA Astrophysics Data System (ADS)
Folch, Arnau; Barcons, Jordi; Kozono, Tomofumi; Costa, Antonio
2017-06-01
Atmospheric dispersal of a gas denser than air can threat the environment and surrounding communities if the terrain and meteorological conditions favour its accumulation in topographic depressions, thereby reaching toxic concentration levels. Numerical modelling of atmospheric gas dispersion constitutes a useful tool for gas hazard assessment studies, essential for planning risk mitigation actions. In complex terrains, microscale winds and local orographic features can have a strong influence on the gas cloud behaviour, potentially leading to inaccurate results if not captured by coarser-scale modelling. We introduce a methodology for microscale wind field characterisation based on transfer functions that couple a mesoscale numerical weather prediction model with a microscale computational fluid dynamics (CFD) model for the atmospheric boundary layer. The resulting time-dependent high-resolution microscale wind field is used as input for a shallow-layer gas dispersal model (TWODEE-2.1) to simulate the time evolution of CO2 gas concentration at different heights above the terrain. The strategy is applied to review simulations of the 1986 Lake Nyos event in Cameroon, where a huge CO2 cloud released by a limnic eruption spread downslopes from the lake, suffocating thousands of people and animals across the Nyos and adjacent secondary valleys. Besides several new features introduced in the new version of the gas dispersal code (TWODEE-2.1), we have also implemented a novel impact criterion based on the percentage of human fatalities depending on CO2 concentration and exposure time. New model results are quantitatively validated using the reported percentage of fatalities at several locations. The comparison with previous simulations that assumed coarser-scale steady winds and topography illustrates the importance of high-resolution modelling in complex terrains.
Simulating Complex Satellites and a Space-Based Surveillance Sensor Simulation
2009-09-01
high-resolution imagery (Fig. 1). Thus other means for characterizing satellites will need to be developed. Research into non- resolvable space object...computing power and time . The second way, which we are using here is to create simpler models of satellite bodies and use albedo-area calculations...their position, movement, size, and physical features. However, there are many satellites in orbit that are simply too small or too far away to resolve by
Evaluating climate models: Should we use weather or climate observations?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oglesby, Robert J; Erickson III, David J
2009-12-01
Calling the numerical models that we use for simulations of climate change 'climate models' is a bit of a misnomer. These 'general circulation models' (GCMs, AKA global climate models) and their cousins the 'regional climate models' (RCMs) are actually physically-based weather simulators. That is, these models simulate, either globally or locally, daily weather patterns in response to some change in forcing or boundary condition. These simulated weather patterns are then aggregated into climate statistics, very much as we aggregate observations into 'real climate statistics'. Traditionally, the output of GCMs has been evaluated using climate statistics, as opposed to their abilitymore » to simulate realistic daily weather observations. At the coarse global scale this may be a reasonable approach, however, as RCM's downscale to increasingly higher resolutions, the conjunction between weather and climate becomes more problematic. We present results from a series of present-day climate simulations using the WRF ARW for domains that cover North America, much of Latin America, and South Asia. The basic domains are at a 12 km resolution, but several inner domains at 4 km have also been simulated. These include regions of complex topography in Mexico, Colombia, Peru, and Sri Lanka, as well as a region of low topography and fairly homogeneous land surface type (the U.S. Great Plains). Model evaluations are performed using standard climate analyses (e.g., reanalyses; NCDC data) but also using time series of daily station observations. Preliminary results suggest little difference in the assessment of long-term mean quantities, but the variability on seasonal and interannual timescales is better described. Furthermore, the value-added by using daily weather observations as an evaluation tool increases with the model resolution.« less
NASA Technical Reports Server (NTRS)
Barre, Jerome; Edwards, David; Worden, Helen; Da Silva, Arlindo; Lahoz, William
2015-01-01
By the end of the current decade, there are plans to deploy several geostationary Earth orbit (GEO) satellite missions for atmospheric composition over North America, East Asia and Europe with additional missions proposed. Together, these present the possibility of a constellation of geostationary platforms to achieve continuous time-resolved high-density observations over continental domains for mapping pollutant sources and variability at diurnal and local scales. In this paper, we use a novel approach to sample a very high global resolution model (GEOS-5 at 7 km horizontal resolution) to produce a dataset of synthetic carbon monoxide pollution observations representative of those potentially obtainable from a GEO satellite constellation with predicted measurement sensitivities based on current remote sensing capabilities. Part 1 of this study focuses on the production of simulated synthetic measurements for air quality OSSEs (Observing System Simulation Experiments). We simulate carbon monoxide nadir retrievals using a technique that provides realistic measurements with very low computational cost. We discuss the sampling methodology: the projection of footprints and areas of regard for geostationary geometries over each of the North America, East Asia and Europe regions; the regression method to simulate measurement sensitivity; and the measurement error simulation. A detailed analysis of the simulated observation sensitivity is performed, and limitations of the method are discussed. We also describe impacts from clouds, showing that the efficiency of an instrument making atmospheric composition measurements on a geostationary platform is dependent on the dominant weather regime over a given region and the pixel size resolution. These results demonstrate the viability of the "instrument simulator" step for an OSSE to assess the performance of a constellation of geostationary satellites for air quality measurements.
NASA Astrophysics Data System (ADS)
Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; Morley, Steven K.; Ozturk, Dogacan Su
2017-12-01
We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, SYM-H, AL, and cross-polar cap potential (CPCP). We find that the model does an excellent job of predicting the SYM-H index, with a root-mean-square error (RMSE) of 17-18 nT. Kp is predicted well during storm time conditions but overpredicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT. However, the model reaches the largest negative AL values significantly less often than the observations. The model tended to overpredict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence for strongly negative AL values. The use of the inner magnetosphere component, however, affected results significantly, with all quantities except CPCP improved notably when the inner magnetosphere model was on.
Muon reconstruction with a geometrical model in JUNO
NASA Astrophysics Data System (ADS)
Genster, C.; Schever, M.; Ludhova, L.; Soiron, M.; Stahl, A.; Wiebusch, C.
2018-03-01
The Jiangmen Neutrino Underground Observatory (JUNO) is a 20 kton liquid scintillator detector currently under construction near Kaiping in China. The physics program focuses on the determination of the neutrino mass hierarchy with reactor anti-neutrinos. For this purpose, JUNO is located 650 m underground with a distance of 53 km to two nuclear power plants. As a result, it is exposed to a muon flux that requires a precise muon reconstruction to make a veto of cosmogenic backgrounds viable. Established muon tracking algorithms use time residuals to a track hypothesis. We developed an alternative muon tracking algorithm that utilizes the geometrical shape of the fastest light. It models the full shape of the first, direct light produced along the muon track. From the intersection with the spherical PMT array, the track parameters are extracted with a likelihood fit. The algorithm finds a selection of PMTs based on their first hit times and charges. Subsequently, it fits on timing information only. On a sample of through-going muons with a full simulation of readout electronics, we report a spatial resolution of 20 cm of distance from the detector's center and an angular resolution of 1.6o over the whole detector. Additionally, a dead time estimation is performed to measure the impact of the muon veto. Including the step of waveform reconstruction on top of the track reconstruction, a loss in exposure of only 4% can be achieved compared to the case of a perfect tracking algorithm. When including only the PMT time resolution, but no further electronics simulation and waveform reconstruction, the exposure loss is only 1%.
NASA Astrophysics Data System (ADS)
Li, Puxi; Zhou, Tianjun; Zou, Liwei
2016-04-01
The authors evaluated the performance of Meteorological Research Institute (MRI) AGCM3.2 models in the simulations of climatology and interannual variability of the Spring Persistent Rains (SPR) over southeastern China. The possible impacts of different horizontal resolutions were also investigated based on the experiments with three different horizontal resolutions (i.e., 120, 60, and 20km). The model could reasonably reproduce the main rainfall center over southeastern China in boreal spring under the three different resolutions. In comparison with 120 simulation, it revealed that 60km and 20km simulations show the superiority in simulating rainfall centers anchored by the Nanling-Wuyi Mountains, but overestimate rainfall intensity. Water vapor budget diagnosis showed that, the 60km and 20km simulations tended to overestimate the water vapor convergence over southeastern China, which leads to wet biases. In the aspect of interannual variability of SPR, the model could reasonably reproduce the anomalous lower-tropospheric anticyclone in the western North Pacific (WNPAC) and positive precipitation anomalies over southeastern China in El Niño decaying spring. Compared with the 120km resolution, the large positive biases are substantially reduced in the mid and high resolution models which evidently improve the simulation of horizontal moisture advection in El Niño decaying spring. We highlight the importance of developing high resolution climate model as it could potentially improve the climatology and interannual variability of SPR.
KINETIC ENERGY FROM SUPERNOVA FEEDBACK IN HIGH-RESOLUTION GALAXY SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, Christine M.; Bryan, Greg L.; Ostriker, Jeremiah P.
We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a cartesian grid by directly altering grid cells’ mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (∼10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10{sup 9} M{sub ⊙} dwarf halo. Wemore » find that in high-density media (≳50 cm{sup −3}) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.« less
Concept of a charged fusion product diagnostic for NSTX.
Boeglin, W U; Valenzuela Perez, R; Darrow, D S
2010-10-01
The concept of a new diagnostic for NSTX to determine the time dependent charged fusion product emission profile using an array of semiconductor detectors is presented. The expected time resolution of 1-2 ms should make it possible to study the effect of magnetohydrodynamics and other plasma activities (toroidal Alfvén eigenmodes (TAE), neoclassical tearing modes (NTM), edge localized modes (ELM), etc.) on the radial transport of neutral beam ions. First simulation results of deuterium-deuterium (DD) fusion proton yields for different detector arrangements and methods for inverting the simulated data to obtain the emission profile are discussed.
Predictive searching algorithm for Fourier ptychography
NASA Astrophysics Data System (ADS)
Li, Shunkai; Wang, Yifan; Wu, Weichen; Liang, Yanmei
2017-12-01
By capturing a set of low-resolution images under different illumination angles and stitching them together in the Fourier domain, Fourier ptychography (FP) is capable of providing high-resolution image with large field of view. Despite its validity, long acquisition time limits its real-time application. We proposed an incomplete sampling scheme in this paper, termed the predictive searching algorithm to shorten the acquisition and recovery time. Informative sub-regions of the sample’s spectrum are searched and the corresponding images of the most informative directions are captured for spectrum expansion. Its effectiveness is validated by both simulated and experimental results, whose data requirement is reduced by ˜64% to ˜90% without sacrificing image reconstruction quality compared with the conventional FP method.
Realistic mass ratio magnetic reconnection simulations with the Multi Level Multi Domain method
NASA Astrophysics Data System (ADS)
Innocenti, Maria Elena; Beck, Arnaud; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
Space physics simulations with the ambition of realistically representing both ion and electron dynamics have to be able to cope with the huge scale separation between the electron and ion parameters while respecting the stability constraints of the numerical method of choice. Explicit Particle In Cell (PIC) simulations with realistic mass ratio are limited in the size of the problems they can tackle by the restrictive stability constraints of the explicit method (Birdsall and Langdon, 2004). Many alternatives are available to reduce such computation costs. Reduced mass ratios can be used, with the caveats highlighted in Bret and Dieckmann (2010). Fully implicit (Chen et al., 2011a; Markidis and Lapenta, 2011) or semi implicit (Vu and Brackbill, 1992; Lapenta et al., 2006; Cohen et al., 1989) methods can bypass the strict stability constraints of explicit PIC codes. Adaptive Mesh Refinement (AMR) techniques (Vay et al., 2004; Fujimoto and Sydora, 2008) can be employed to change locally the simulation resolution. We focus here on the Multi Level Multi Domain (MLMD) method introduced in Innocenti et al. (2013) and Beck et al. (2013). The method combines the advantages of implicit algorithms and adaptivity. Two levels are fully simulated with fields and particles. The so called "refined level" simulates a fraction of the "coarse level" with a resolution RF times bigger than the coarse level resolution, where RF is the Refinement Factor between the levels. This method is particularly suitable for magnetic reconnection simulations (Biskamp, 2005), where the characteristic Ion and Electron Diffusion Regions (IDR and EDR) develop at the ion and electron scales respectively (Daughton et al., 2006). In Innocenti et al. (2013) we showed that basic wave and instability processes are correctly reproduced by MLMD simulations. In Beck et al. (2013) we applied the technique to plasma expansion and magnetic reconnection problems. We showed that notable computational time savings can be achieved. More importantly, we were able to correctly reproduce EDR features, such as the inversion layer of the electric field observed in Chen et al. (2011b), with a MLMD simulation at a significantly lower cost. Here, we present recent results on EDR dynamics achieved with the MLMD method and a realistic mass ratio.
FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data
NASA Astrophysics Data System (ADS)
Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael
2014-04-01
Super resolution microscopy such as STORM and (F)PALM is now a well known method for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization on continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times compared to previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with temporal resolution of 2.5 seconds by recovering fast ER dynamics.
FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data
Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael
2014-01-01
Super resolution microscopy such as STORM and (F)PALM is now a well known method for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization on continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times compared to previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with temporal resolution of 2.5 seconds by recovering fast ER dynamics. PMID:24694686
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert
2006-01-01
It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most of operational NWP center at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we have successfully implemented the 1/8 degree version, and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, and thereby promising intensity forecasts. In this study, we will further evaluate the model s performance on intensity forecasts of hurricanes Ivan, Jeanne, Karl in 2004. Suggestions for further model development will be made in the end.
Use of upscaled elevation and surface roughness data in two-dimensional surface water models
Hughes, J.D.; Decker, J.D.; Langevin, C.D.
2011-01-01
In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
Attribution of soil information associated with modeling background clutter
NASA Astrophysics Data System (ADS)
Mason, George; Melloh, Rae
2006-05-01
This paper examines the attribution of data fields required to generate high resolution soil profiles for support of Computational Test Bed (CTB) used for countermine research. The countermine computational test bed is designed to realistically simulate the geo-environment to support the evaluation of sensors used to locate unexploded ordnance. The goal of the CTB is to derive expected moisture, chemical compounds, and measure heat migration over time, from which we expect to optimize sensor performance. Several tests areas were considered for the collection of soils data to populate the CTB. Collection of bulk soil properties has inherent spatial resolution limits. Novel techniques are therefore required to populate a high resolution model. This paper presents correlations between spatial variability in texture as related to hydraulic permeability and heat transfer properties of the soil. The extracted physical properties are used to exercise models providing a signature of subsurface media and support the simulation of detection by various sensors of buried and surface ordnance.
Cherenkov radiation-based three-dimensional position-sensitive PET detector: A Monte Carlo study.
Ota, Ryosuke; Yamada, Ryoko; Moriya, Takahiro; Hasegawa, Tomoyuki
2018-05-01
Cherenkov radiation has recently received attention due to its prompt emission phenomenon, which has the potential to improve the timing performance of radiation detectors dedicated to positron emission tomography (PET). In this study, a Cherenkov-based three-dimensional (3D) position-sensitive radiation detector was proposed, which is composed of a monolithic lead fluoride (PbF 2 ) crystal and a photodetector array of which the signals can be readout independently. Monte Carlo simulations were performed to estimate the performance of the proposed detector. The position- and time resolution were evaluated under various practical conditions. The radiator size and various properties of the photodetector, e.g., readout pitch and single photon timing resolution (SPTR), were parameterized. The single photon time response of the photodetector was assumed to be a single Gaussian for the simplification. The photo detection efficiency of the photodetector was ideally 100% for all wavelengths. Compton scattering was included in simulations, but partly analyzed. To estimate the position at which a γ-ray interacted in the Cherenkov radiator, the center-of-gravity (COG) method was employed. In addition, to estimate the depth-of-interaction (DOI) principal component analysis (PCA), which is a multivariate analysis method and has been used to identify the patterns in data, was employed. The time-space distribution of Cherenkov photons was quantified to perform PCA. To evaluate coincidence time resolution (CTR), the time difference of two independent γ-ray events was calculated. The detection time was defined as the first photon time after the SPTR of the photodetector was taken into account. The position resolution on the photodetector plane could be estimated with high accuracy, by using a small number of Cherenkov photons. Moreover, PCA showed an ability to estimate the DOI. The position resolution heavily depends on the pitch of the photodetector array and the radiator thickness. If the readout pitch were ideally 0 and practically 3 mm, a full-width at half-maximum (FWHM) of 0.348 and 1.92 mm was achievable with a 10-mm-thick PbF 2 crystal, respectively. Furthermore, first-order correlation could be observed between the primary principal component and the true DOI. To obtain a coincidence timing resolution better than 100-ps FWHM with a 20-mm-thick PbF 2 crystal, a photodetector with SPTR of better than σ = 30 ps was necessary. From these results, the improvement of SPTR allows us to achieve CTR better than 100-ps FWHM, even in the case where a 20-mm-thick radiator is used. Our proposed detector has the potential to estimate the 3D interaction position of γ-rays in the radiator, using only time and space information of Cherenkov photons. © 2018 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from the stage where they are used to validate the final designs to the main stream development of products driven by the simulation. However, there are still niche areas of applications liking oiling simulations, where the traditional CFD simulation times are probative to use them in product development and have to rely on experimental methods, which are expensive. In this paper a unique example of Sprocket-Chain simulation will be presented using nanoFluidx a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The grid less nature of the of SPH method has inherent advantages in the areas of application with complex geometry which pose severe challenge to classical finite volume CFD methods due to complex moving geometries, moving meshes and high resolution requirements leading to long simulation times. The simulations times using nanoFluidx can be reduced from weeks to days allowing the flexibility to run more simulation and can be in used in main stream product development. The example problem under consideration is a classical Multiphysics problem and a sequentially coupled solution of Motion Solve and nanoFluidX will be presented. This abstract is replacing DFD16-2016-000045.
Kinematic Evolution of Simulated Star-Forming Galaxies
NASA Technical Reports Server (NTRS)
Kassin, Susan A.; Brooks, Alyson; Governato, Fabio; Weiner, Benjamin J.; Gardner, Jonathan P.
2014-01-01
Recent observations have shown that star-forming galaxies like our own Milky Way evolve kinematically into ordered thin disks over the last approximately 8 billion years since z = 1.2, undergoing a process of "disk settling." For the first time, we study the kinematic evolution of a suite of four state of the art "zoom in" hydrodynamic simulations of galaxy formation and evolution in a fully cosmological context and compare with these observations. Until now, robust measurements of the internal kinematics of simulated galaxies were lacking as the simulations suffered from low resolution, overproduction of stars, and overly massive bulges. The current generation of simulations has made great progress in overcoming these difficulties and is ready for a kinematic analysis. We show that simulated galaxies follow the same kinematic trends as real galaxies: they progressively decrease in disordered motions (sigma(sub g)) and increase in ordered rotation (V(sub rot)) with time. The slopes of the relations between both sigma(sub g) and V(sub rot) with redshift are consistent between the simulations and the observations. In addition, the morphologies of the simulated galaxies become less disturbed with time, also consistent with observations. This match between the simulated and observed trends is a significant success for the current generation of simulations, and a first step in determining the physical processes behind disk settling.
Role of Boundary Conditions in Monte Carlo Simulation of MEMS Devices
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Hash, David B.; Hassan, H. A.
1997-01-01
A study is made of the issues surrounding prediction of microchannel flows using the direct simulation Monte Carlo method. This investigation includes the introduction and use of new inflow and outflow boundary conditions suitable for subsonic flows. A series of test simulations for a moderate-size microchannel indicates that a high degree of grid under-resolution in the streamwise direction may be tolerated without loss of accuracy. In addition, the results demonstrate the importance of physically correct boundary conditions, as well as possibilities for reducing the time associated with the transient phase of a simulation. These results imply that simulations of longer ducts may be more feasible than previously envisioned.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
NASA Astrophysics Data System (ADS)
Martin, Gill; Levine, Richard; Klingaman, Nicholas; Bush, Stephanie; Turner, Andrew; Woolnough, Steven
2015-04-01
Despite considerable efforts worldwide to improve model simulations of the Asian summer monsoon, significant biases still remain in climatological seasonal mean rainfall distribution, timing of the onset, and northward and eastward extent of the monsoon domain (Sperber et al., 2013). Many modelling studies have shown sensitivity to convection and boundary layer parameterization, cloud microphysics and land surface properties, as well as model resolution. Here we examine the problems in representing short-timescale rainfall variability (related to convection parameterization), problems in representing synoptic-scale systems such as monsoon depressions (related to model resolution), and the relationship of each of these with longer-term systematic biases. Analysis of the spatial distribution of rainfall intensity on a range of timescales ranging from ~30 minutes to daily, in the MetUM and in observations (where available), highlights how rainfall biases in the South Asian monsoon region on different timescales in different regions can be achieved in models through a combination of the incorrect frequency and/or intensity of rainfall. Over the Indian land area, the typical dry bias is related to sub-daily rainfall events being too infrequent, despite being too intense when they occur. In contrast, the wet bias regions over the equatorial Indian Ocean are mainly related to too frequent occurrence of lower-than-observed 3-hourly rainfall accumulations which result in too frequent occurrence of higher-than-observed daily rainfall accumulations. This analysis sheds light on the model deficiencies behind the climatological seasonal mean rainfall biases that many models exhibit in this region. Changing physical parameterizations alters this behaviour, with associated adjustments in the climatological rainfall distribution, although the latter is not always improved (Bush et al., 2014). This suggests a more complex interaction between the diabatic heating and the large-scale circulation than is indicated by the intensity and frequency of rainfall alone. Monsoon depressions and low pressure systems are important contributors to monsoon rainfall over central and northern India, areas where MetUM climate simulations typically show deficient monsoon rainfall. Analysis of MetUM climate simulations at resolutions ranging from N96 (~135km) to N512 (~25km) suggests that at lower resolution the numbers and intensities of monsoon depressions and low pressure systems and their associated rainfall are very low compared with re-analyses/observations. We show that there are substantial increases with horizontal resolution, but resolution is not the only factor. Idealised simulations, either using nudged atmospheric winds or initialised coupled hindcasts, which improve (strengthen) the mean state monsoon and cyclonic circulation over the Indian peninsula, also result in a substantial increase in monsoon depressions and associated rainfall. This suggests that a more realistic representation of monsoon depressions is possible even at lower resolution if the larger-scale systematic error pattern in the monsoon is improved.
Forecasting Lightning Threat Using WRF Proxy Fields
NASA Technical Reports Server (NTRS)
McCaul, E. W., Jr.
2010-01-01
Objectives: Given that high-resolution WRF forecasts can capture the character of convective outbreaks, we seek to: 1. Create WRF forecasts of LTG threat (1-24 h), based on 2 proxy fields from explicitly simulated convection: - graupel flux near -15 C (captures LTG time variability) - vertically integrated ice (captures LTG threat area). 2. Calibrate each threat to yield accurate quantitative peak flash rate densities. 3. Also evaluate threats for areal coverage, time variability. 4. Blend threats to optimize results. 5. Examine sensitivity to model mesh, microphysics. Methods: 1. Use high-resolution 2-km WRF simulations to prognose convection for a diverse series of selected case studies. 2. Evaluate graupel fluxes; vertically integrated ice (VII). 3. Calibrate WRF LTG proxies using peak total LTG flash rate densities from NALMA; relationships look linear, with regression line passing through origin. 4. Truncate low threat values to make threat areal coverage match NALMA flash extent density obs. 5. Blend proxies to achieve optimal performance 6. Study CAPS 4-km ensembles to evaluate sensitivities.
Measurement of pulsatile motion with millisecond resolution by MRI.
Souchon, Rémi; Gennisson, Jean-Luc; Tanter, Mickael; Salomir, Rares; Chapelon, Jean-Yves; Rouvière, Olivier
2012-06-01
We investigated a technique based on phase-contrast cine MRI combined with deconvolution of the phase shift waveforms to measure rapidly varying pulsatile motion waveforms. The technique does not require steady-state displacement during motion encoding. Simulations and experiments were performed in porcine liver samples in view of a specific application, namely the observation of transient displacements induced by acoustic radiation force. Simulations illustrate the advantages and shortcomings of the methods. For experimental validation, the waveforms were acquired with an ultrafast ultrasound scanner (Supersonic Imagine Aixplorer), and the rates of decay of the waveforms (relaxation time) were compared. With bipolar motion-encoding gradient of 8.4 ms, the method was able to measure displacement waveforms with a temporal resolution of 1 ms over a time course of 40 ms. Reasonable agreement was found between the rate of decay of the waveforms measured in ultrasound (2.8 ms) and in MRI (2.7-3.3 ms). Copyright © 2011 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.
2015-01-01
A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed. These included an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date have been established for FUN3D simulations for SLS launch vehicle analysis. To the author's knowledge, this is the first time DDES has been used in a systematic approach and establish simulation time needed, to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
NASA Astrophysics Data System (ADS)
Deanes, L. N.; Ahmadov, R.; McKeen, S. A.; Manross, K.; Grell, G. A.; James, E.
2016-12-01
Wildfires are increasing in number and size in the western United States as climate change contributes to warmer and drier conditions in this region. These fires lead to poor air quality and diminished visibility. The High Resolution Rapid Refresh-Smoke modeling system (HRRR-Smoke) is designed to simulate fire emissions and smoke transport with high resolution. The model is based on the Weather Research and Forecasting model, coupled with chemistry (WRF-Chem) and uses fire detection data from the Visible Infrared and Imaging Radiometer Suite (VIIRS) satellite instrument to simulate wildfire emissions and their plume rise. HRRR-Smoke is used in both real-time applications and case studies. In this study, we evaluate the HRRR-Smoke for August 2015, during one of the worst wildfire seasons on record in the United States, by focusing on wildfires that occurred in the northwestern US. We compare HRRR-Smoke simulations with hourly fine particulate matter (PM2.5) observations from the Air Quality System (https://www.epa.gov/aqs) from multiple air quality monitoring sites in Washington state. PM2.5 data includes measurements from urban, suburban and remote sites in the state. We discuss the model performance in capturing large PM2.5 enhancements detected at surface sites due to wildfires. We present various statistical parameters to demonstrate HRRR-Smoke's performance in simulating surface PM2.5 levels.
Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling
NASA Astrophysics Data System (ADS)
Her, Y. G.
2017-12-01
Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to be common in the near future as global-scale remotely sensed data are emerging, and computing resources have been advanced rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution, to describe two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes from 3.5 km2 to 2,800 km2 at the spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs including the maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understandings, by using remotely sensed hydrological observations such as soil moisture and radar rainfall depth and by sharing the model and its codes in public domain, respectively.
Designing a compact high performance brain PET scanner—simulation study
NASA Astrophysics Data System (ADS)
Gong, Kuang; Majewski, Stan; Kinahan, Paul E.; Harrison, Robert L.; Elston, Brian F.; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V.; Brefczynski-Lewis, Julie A.; Qi, Jinyi
2016-05-01
The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be light weight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging.
Designing a compact high performance brain PET scanner—simulation study
Gong, Kuang; Majewski, Stan; Kinahan, Paul E; Harrison, Robert L; Elston, Brian F; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V; Brefczynski-Lewis, Julie A; Qi, Jinyi
2016-01-01
The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be light weight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér–Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of- interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging. PMID:27081753
Cartesian-Grid Simulations of a Canard-Controlled Missile with a Free-Spinning Tail
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Kwak, Dochan (Technical Monitor)
2002-01-01
The proposed paper presents a series of simulations of a geometrically complex, canard-controlled, supersonic missile with free-spinning tail fins. Time-dependent simulations were performed using an inviscid Cartesian-grid-based method with results compared to both experimental data and high-resolution Navier-Stokes computations. At fixed free stream conditions and canard deflections, the tail spin rate was iteratively determined such that the net rolling moment on the empennage is zero. This rate corresponds to the time-asymptotic rate of the free-to-spin fin system. After obtaining spin-averaged aerodynamic coefficients for the missile, the investigation seeks a fixed-tail approximation to the spin-averaged aerodynamic coefficients, and examines the validity of this approximation over a variety of freestream conditions.
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
Evaluating synoptic systems in the CMIP5 climate models over the Australian region
NASA Astrophysics Data System (ADS)
Gibson, Peter B.; Uotila, Petteri; Perkins-Kirkpatrick, Sarah E.; Alexander, Lisa V.; Pitman, Andrew J.
2016-10-01
Climate models are our principal tool for generating the projections used to inform climate change policy. Our confidence in projections depends, in part, on how realistically they simulate present day climate and associated variability over a range of time scales. Traditionally, climate models are less commonly assessed at time scales relevant to daily weather systems. Here we explore the utility of a self-organizing maps (SOMs) procedure for evaluating the frequency, persistence and transitions of daily synoptic systems in the Australian region simulated by state-of-the-art global climate models. In terms of skill in simulating the climatological frequency of synoptic systems, large spread was observed between models. A positive association between all metrics was found, implying that relative skill in simulating the persistence and transitions of systems is related to skill in simulating the climatological frequency. Considering all models and metrics collectively, model performance was found to be related to model horizontal resolution but unrelated to vertical resolution or representation of the stratosphere. In terms of the SOM procedure, the timespan over which evaluation was performed had some influence on model performance skill measures, as did the number of circulation types examined. These findings have implications for selecting models most useful for future projections over the Australian region, particularly for projections related to synoptic scale processes and phenomena. More broadly, this study has demonstrated the utility of the SOMs procedure in providing a process-based evaluation of climate models.
NASA Astrophysics Data System (ADS)
Grand, Robert
2016-09-01
Simulations are playing an increasingly important role in probing the formation history of the Milky Way, including the formation of the thick/thin disc and origin of the metal distribution and chemo-dynamical relations. We introduce the Auriga project, a suite of high resolution cosmological-zoom simulations of Milky Way-sized galaxies simulated with the state-of-the-art cosmological magneto-hydrodynamical code AREPO, and present an analysis of the formation and evolution of the stellar disc(s) from early times to present day. In particular, we show that 'thickened discs' are mainly driven by a bar (if present) and interactions with satellites of masses log10 (M/ Mo ) >= 10, whereas other potential heating mechanisms such as spiral arms, radial migration, and adiabatic heating from mid-plane density growth are all sub-dominant. Interestingly, we find that even in cases of violent satellite interactions the disc reforms quickly (within a few giga years), producing a well-defined disc-bulge system. In nearly all simulations the overall structure of the disc becomes gradually more radially extended and vertically thinner with time, in support of the inside-out, upside-down formation scenario, and without the presence of a thin/thick disc dichotomy. In addition, we comment on the mass distribution of mono-abundance populations and their relation to the bulge and disc components, which are readily comparable to observations from surveys such as APOGEE and Gaia.
Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications
Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2016-01-01
We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 μm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 μm in x,y-plane and ~0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 μm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 μm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods. PMID:23079763
Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications.
Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2012-11-21
We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 µm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 µm in x,y-plane and ∼0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 µm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 µm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods.
NASA Technical Reports Server (NTRS)
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions are detailed, and conclusions drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
The Sea Breeze in South-Iceland: Observations with an unmanned aircraft and numerical simulations
NASA Astrophysics Data System (ADS)
Opsanger Jonassen, Marius; Ólafsson, Haraldur; Rasol, Dubravka; Reuder, Joachim
2010-05-01
Sea breeze events, 19-20 July 2009, observed during the international field campaign MOSO, at the southcoast of Iceland, have been investigated using high resolution numerical simulations. Thanks to the use of a small unmanned aircraft system (UAS), SUMO, the wind and temperature aloft could be observed at a high resolution in both space and time. Simultaneously with the UAS operations, conventional platforms were used to obtain surface measurements. The observations show a distinct sea breeze circulation with an onset at around noon and a final decay around 19:00 UTC. At the maximum, the sea breeze layer reached a height of appr. 400 m, marked by a capping wind minimum. When compared to the flow aloft, the sea breeze layer was found to exhibit relatively low temperatures and an expected turn from an off-shore to an on-shore flow. Overall, the agreement between the observations and simulations are relatively good. The simulations suggest a horizontal extent of the circulation some 20-30 km off-shore, but only around 5 km on-shore.
NASA Astrophysics Data System (ADS)
Bubolz, K.; Schenk, H.; Hirsch, T.
2016-05-01
Concentrating solar field operation is affected by shadowing through cloud movement. For line focusing systems the impact of varying irradiance has been studied before by several authors with simulations of relevant thermodynamics assuming spatially homogeneous irradiance or using artificial test signals. While today's simulation capabilities allow more and more a higher spatiotemporal resolution of plant processes there are only few studies on influence of spatially distributed irradiance due to lack of available data. Based on recent work on generating real irradiance maps with high spatial resolution this paper demonstrates their influence on solar field thermodynamics. For a case study an irradiance time series is chosen. One solar field section with several loops and collecting header is modeled for simulation purpose of parabolic trough collectors and oil as heat transfer medium. Assuming homogeneous mass flow distribution among all loops we observe spatially varying temperature characteristics. They are analysed without and with mass flow control and their impact on solar field control design is discussed. Finally, the potential of distributed irradiance data is outlined.
Direct Large-Scale N-Body Simulations of Planetesimal Dynamics
NASA Astrophysics Data System (ADS)
Richardson, Derek C.; Quinn, Thomas; Stadel, Joachim; Lake, George
2000-01-01
We describe a new direct numerical method for simulating planetesimal dynamics in which N˜10 6 or more bodies can be evolved simultaneously in three spatial dimensions over hundreds of dynamical times. This represents several orders of magnitude improvement in resolution over previous studies. The advance is made possible through modification of a stable and tested cosmological code optimized for massively parallel computers. However, owing to the excellent scalability and portability of the code, modest clusters of workstations can treat problems with N˜10 5 particles in a practical fashion. The code features algorithms for detection and resolution of collisions and takes into account the strong central force field and flattened Keplerian disk geometry of planetesimal systems. We demonstrate the range of problems that can be addressed by presenting simulations that illustrate oligarchic growth of protoplanets, planet formation in the presence of giant planet perturbations, the formation of the jovian moons, and orbital migration via planetesimal scattering. We also describe methods under development for increasing the timescale of the simulations by several orders of magnitude.
Climate Change Impact on Air Quality in High Resolution Simulation for Central Europe
NASA Astrophysics Data System (ADS)
Halenka, T.; Huszar, P.; Belda, M.
2009-04-01
Recently the effects of climate change on air-quality and vice-versa are studied quite extensively. In fact, even at regional and local scale especially the impact of climate change on the atmospheric composition and photochemical smog formation conditions can be significant when expecting e.g. more frequent appearance of heat waves etc. For the purpose of qualifying and quantifying the magnitude of such effects and to study the potential of climate forcing due to atmospheric chemistry/aerosols on regional scale, chemistry-transport model was coupled to RegCM on the Department of Meteorology and Environmental Protection, Faculty of Mathematics and Physics, Charles University in Prague, for the simulations in framework of the EC FP6 Project CECILIA. Off-line one way coupling enables the simulation of distribution of pollutants over 1991-2001 in very high resolution of 10 km is compared to the EMEP observations for the area of Central Europe. Simulations driven by climate change boundary conditions for time slices 1991-2000, 2041-2050 and 2091-2100 are presented to show the effect of climate change on the air quality in the region.
The end-to-end simulator for the E-ELT HIRES high resolution spectrograph
NASA Astrophysics Data System (ADS)
Genoni, M.; Landoni, M.; Riva, M.; Pariani, G.; Mason, E.; Di Marcantonio, P.; Disseau, K.; Di Varano, I.; Gonzalez, O.; Huke, P.; Korhonen, H.; Li Causi, Gianluca
2017-06-01
We present the design, architecture and results of the End-to-End simulator model of the high resolution spectrograph HIRES for the European Extremely Large Telescope (E-ELT). This system can be used as a tool to characterize the spectrograph both by engineers and scientists. The model allows to simulate the behavior of photons starting from the scientific object (modeled bearing in mind the main science drivers) to the detector, considering also calibration light sources, and allowing to perform evaluation of the different parameters of the spectrograph design. In this paper, we will detail the architecture of the simulator and the computational model which are strongly characterized by modularity and flexibility that will be crucial in the next generation astronomical observation projects like E-ELT due to of the high complexity and long-time design and development. Finally, we present synthetic images obtained with the current version of the End-to-End simulator based on the E-ELT HIRES requirements (especially high radial velocity accuracy). Once ingested in the Data reduction Software (DRS), they will allow to verify that the instrument design can achieve the radial velocity accuracy needed by the HIRES science cases.
NASA Astrophysics Data System (ADS)
Edwards, J. D.; Dreike, P.; Smith, M. W.; Clemenson, M. D.; Zollweg, J. D.
2015-12-01
We describe developments to a 1-D cylindrical, radiation-hydrodynamics model of a lightning return stroke that simulates lighting spectra with 1 Angstrom resolution in photon wavelength. In previous calculations we assumed standard density air in the return stroke channel and the resulting optical spectrum was that of an optically thick emitter, unlike measured spectra that are optically thin. In this work, we improve our model by initializing our simulation assuming that the leader-heated channel is pre-expanded to a density of 0.01-0.05 ambient and near pressure equilibrium with the surrounding ambient air and by implementing a time-dependent, external heat source to incorporate the effects of continuing current. By doing so, our simulated spectra, illustrated in the attached figure, show strong spectral emission characteristics at wavelengths similar to spectra measured by Orville (1968). In this poster, we describe our model and compare our simulated results with spectra measured by Orville (1968) and Smith (2015). We also use spectroscopic methods to compute physical properties of the plasma channel, e.g. temperature, from Smith's measurements and compare these with our simulated results.
Evaluation of PeneloPET Simulations of Biograph PET/CT Scanners
NASA Astrophysics Data System (ADS)
Abushab, K. M.; Herraiz, J. L.; Vicente, E.; Cal-González, J.; España, S.; Vaquero, J. J.; Jakoby, B. W.; Udías, J. M.
2016-06-01
Monte Carlo (MC) simulations are widely used in positron emission tomography (PET) for optimizing detector design, acquisition protocols, and evaluating corrections and reconstruction methods. PeneloPET is a MC code based on PENELOPE, for PET simulations which considers detector geometry, acquisition electronics and materials, and source definitions. While PeneloPET has been successfully employed and validated with small animal PET scanners, it required a proper validation with clinical PET scanners including time-of-flight (TOF) information. For this purpose, we chose the family of Biograph PET/CT scanners: the Biograph True-Point (B-TP), Biograph True-Point with TrueV (B-TPTV) and the Biograph mCT. They have similar block detectors and electronics, but a different number of rings and configuration. Some effective parameters of the simulations, such as the dead-time and the size of the reflectors in the detectors, were adjusted to reproduce the sensitivity and noise equivalent count (NEC) rate of the B-TPTV scanner. These parameters were then used to make predictions of experimental results such as sensitivity, NEC rate, spatial resolution, and scatter fraction (SF), from all the Biograph scanners and some variations of them (energy windows and additional rings of detectors). Predictions agree with the measured values for the three scanners, within 7% (sensitivity and NEC rate) and 5% (SF). The resolution obtained for the B-TPTV is slightly better (10%) than the experimental values. In conclusion, we have shown that PeneloPET is suitable for simulating and investigating clinical systems with good accuracy and short computational time, though some effort tuning of a few parameters of the scanners modeled may be needed in case that the full details of the scanners studied are not available.
Toward 10-km mesh global climate simulations
NASA Astrophysics Data System (ADS)
Ohfuchi, W.; Enomoto, T.; Takaya, K.; Yoshioka, M. K.
2002-12-01
An atmospheric general circulation model (AGCM) that runs very efficiently on the Earth Simulator (ES) was developed. The ES is a gigantic vector-parallel computer with the peak performance of 40 Tflops. The AGCM, named AFES (AGCM for ES), was based on the version 5.4.02 of an AGCM developed jointly by the Center for Climate System Research, the University of Tokyo and the Japanese National Institute for Environmental Sciences. The AFES was, however, totally rewritten in FORTRAN90 and MPI while the original AGCM was written in FORTRAN77 and not capable of parallel computing. The AFES achieved 26 Tflops (about 65 % of the peak performance of the ES) at resolution of T1279L96 (10-km horizontal resolution and 500-m vertical resolution in middle troposphere to lower stratosphere). Some results of 10- to 20-day global simulations will be presented. At this moment, only short-term simulations are possible due to data storage limitation. As ten tera flops computing is achieved, peta byte data storage are necessary to conduct climate-type simulations at this super-high resolution global simulations. Some possibilities for future research topics in global super-high resolution climate simulations will be discussed. Some target topics are mesoscale structures and self-organization of the Baiu-Meiyu front over Japan, cyclogenecsis over the North Pacific and typhoons around the Japan area. Also improvement in local precipitation with increasing horizontal resolution will be demonstrated.
Effects of Real-Time NASA Vegetation Data on Model Forecasts of Severe Weather
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Bell, Jordan R.; LaFontaine, Frank J.; Peters-Lidard, Christa D.
2012-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA-EOS Aqua and Terra satellites. NASA SPoRT started generating daily real-time GVF composites at 1-km resolution over the Continental United States beginning 1 June 2010. A companion poster presentation (Bell et al.) primarily focuses on impact results in an offline configuration of the Noah land surface model (LSM) for the 2010 warm season, comparing the SPoRT/MODIS GVF dataset to the current operational monthly climatology GVF available within the National Centers for Environmental Prediction (NCEP) and Weather Research and Forecasting (WRF) models. This paper/presentation primarily focuses on individual case studies of severe weather events to determine the impacts and possible improvements by using the real-time, high-resolution SPoRT-MODIS GVFs in place of the coarser-resolution NCEP climatological GVFs in model simulations. The NASA-Unified WRF (NU-WRF) modeling system is employed to conduct the sensitivity simulations of individual events. The NU-WRF is an integrated modeling system based on the Advanced Research WRF dynamical core that is designed to represents aerosol, cloud, precipitation, and land processes at satellite-resolved scales in a coupled simulation environment. For this experiment, the coupling between the NASA Land Information System (LIS) and the WRF model is utilized to measure the impacts of the daily SPoRT/MODIS versus the monthly NCEP climatology GVFs. First, a spin-up run of the LIS is integrated for two years using the Noah LSM to ensure that the land surface fields reach an equilibrium state on the 4-km grid mesh used. Next, the spin-up LIS is run in two separate modes beginning on 1 June 2010, one continuing with the climatology GVFs while the other uses the daily SPoRT/MODIS GVFs. Finally, snapshots of the LIS land surface fields are used to initialize two different simulations of the NU-WRF, one running with climatology LIS and GVFs, and the other running with experimental LIS and NASA/SPoRT GVFs. In this paper/presentation, case study results will be highlighted in regions with significant differences in GVF between the NCEP climatology and SPoRT product during severe weather episodes.
NASA Astrophysics Data System (ADS)
Beamer, J. P.; Hill, D. F.; Liston, G. E.; Arendt, A. A.; Hood, E. W.
2013-12-01
In Prince William Sound (PWS), Alaska, there is a pressing need for accurate estimates of the spatial and temporal variations in coastal freshwater discharge (FWD). FWD into PWS originates from streamflow due to rainfall, annual snowmelt, and changes in stored glacier mass and is important because it helps establish spatial and temporal patterns in ocean salinity and temperature, and is a time-varying boundary condition for oceanographic circulation models. Previous efforts to model FWD into PWS have been heavily empirical, with many physical processes absorbed into calibration coefficients that, in many cases, were calibrated to streams and rivers not hydrologically similar to those discharging into PWS. In this work we adapted and validated a suite of high-resolution (in space and time), physically-based, distributed weather, snowmelt, and runoff-routing models designed specifically for snow melt- and glacier melt-dominated watersheds like PWS in order to: 1) provide high-resolution, real-time simulations of snowpack and FWD, and 2) provide a record of historical variations of FWD. SnowModel, driven with gridded topography, land cover, and 32 years (1979-2011) of 3-hourly North American Regional Reanalysis (NARR) atmospheric forcing data, was used to simulate snowpack accumulation and melt across a PWS model domain. SnowModel outputs of daily snow water equivalent (SWE) depth and grid-cell runoff volumes were then coupled with HydroFlow, a runoff-routing model which routed snowmelt, glacier-melt, and rainfall to each watershed outlet (PWS coastline) in the simulation domain. The end product was a continuous 32-year simulation of daily FWD into PWS. In order to validate the models, SWE and snow depths from SnowModel were compared with observed SWE and snow depths from SnoTel and snow survey data, and discharge from HydroFlow was compared with observed streamflow measurements. As a second phase of this research effort, the coupled models will be set-up to run in real-time, where daily measurements from weather stations in the PWS will be used to drive simulations of snow cover and streamflow. In addition, we will deploy a strategic array of instrumentation aimed at validating the simulated weather estimates and the calculations of freshwater discharge. Upon successful implementation and validation of the modeling system, it will join established and ongoing computational and observational efforts that have a common goal of establishing a comprehensive understanding of the physical behavior of PWS.
NASA Astrophysics Data System (ADS)
Lang, C.; Fettweis, X.; Kittel, C.; Erpicum, M.
2017-12-01
We present the results of high resolution simulations of the climate and SMB of Svalbard with the regional climate model MAR forced by ERA-40 then ERA-Interim, as well as an online downscaling method allowing us to model the SMB and its components at a resolution twice as high (2.5 vs 5 km here) using only about 25% more CPU time. Spitsbergen, the largest island in Svalbard, has a very hilly topography and a high spatial resolution is needed to correctly represent the local topography and the complex pattern of ice distribution and precipitation. However, high resolution runs with an RCM fully coupled to an energy balance module like MAR require a huge amount of computation time. The hydrostatic equilibrium hypothesis used in MAR also becomes less valid as the spatial resolution increases. We therefore developed in MAR a method to run the snow module at a resolution twice as high as the atmospheric module. Near-surface temperature and humidity are corrected on a grid with a resolution twice as high, as a function of their local gradients and the elevation difference between the corresponding pixels in the 2 grids. We compared the results of our runs at 5 km and with SMB downscaled at 2.5 km over 1960 — 2016 and compared those to previous 10 km runs. On Austfonna, where the slopes are gentle, the agreement between observations and the 5 km SMB is better than with the 10 km SMB. It is again improved at 2.5 km but the gain is relatively small, showing the interest of our method rather than running a time consuming classic 2.5 km resolution simulation. On Spitsbergen, we show that a spatial resolution of 2.5 km is still not enough to represent the complex pattern of topography, precipitation and SMB. Due to a change in the summer atmospheric circulation, from a westerly flow over Svalbard to a northwesterly flow bringing colder air, the SMB of Svalbard was stable between 2006 and 2012, while several melt records were broken in Greenland, due to conditions more anticyclonic than usual. In 2013, the reverse situation happened and a southwesterly atmospheric circulation brought warmer air over Svalbard. The SMB broke the last 55 years' record. In 2016, the temperature was higher than average and a new record melt was broken despite a northwesterly flow. The northerly flow still mitigated the warming over Svalbard, which was much lower than most regions of the Arctic.
The relative entropy is fundamental to adaptive resolution simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreis, Karsten; Graduate School Materials Science in Mainz, Staudingerweg 9, 55128 Mainz; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy withmore » respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results do not only shed light on the what and how of adaptive resolution techniques but will also help setting up such simulations in an optimal manner.« less
The relative entropy is fundamental to adaptive resolution simulations
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Potestio, Raffaello
2016-07-01
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results do not only shed light on the what and how of adaptive resolution techniques but will also help setting up such simulations in an optimal manner.
NASA Astrophysics Data System (ADS)
Nadeem, Imran; Formayer, Herbert
2016-11-01
A suite of high-resolution (10 km) simulations were performed with the International Centre for Theoretical Physics (ICTP) Regional Climate Model (RegCM3) to study the effect of various lateral boundary conditions (LBCs), domain size, and intermediate domains on simulated precipitation over the Great Alpine Region. The boundary conditions used were ECMWF ERA-Interim Reanalysis with grid spacing 0.75∘, the ECMWF ERA-40 Reanalysis with grid spacing 1.125 and 2.5∘, and finally the 2.5∘ NCEP/DOE AMIP-II Reanalysis. The model was run in one-way nesting mode with direct nesting of the high-resolution RCM (horizontal grid spacing Δx = 10 km) with driving reanalysis, with one intermediate resolution nest (Δx = 30 km) between high-resolution RCM and reanalysis forcings, and also with two intermediate resolution nests (Δx = 90 km and Δx = 30 km) for simulations forced with LBC of resolution 2.5∘. Additionally, the impact of domain size was investigated. The results of multiple simulations were evaluated using different analysis techniques, e.g., Taylor diagram and a newly defined useful statistical parameter, called Skill-Score, for evaluation of daily precipitation simulated by the model. It has been found that domain size has the major impact on the results, while different resolution and versions of LBCs, e.g., 1.125∘ ERA40 and 0.7∘ ERA-Interim, do not produce significantly different results. It is also noticed that direct nesting with reasonable domain size, seems to be the most adequate method for reproducing precipitation over complex terrain, while introducing intermediate resolution nests seems to deteriorate the results.
Unraveling the martian water cycle with high-resolution global climate simulations
NASA Astrophysics Data System (ADS)
Pottier, Alizée; Forget, François; Montmessin, Franck; Navarro, Thomas; Spiga, Aymeric; Millour, Ehouarn; Szantai, André; Madeleine, Jean-Baptiste
2017-07-01
Global climate modeling of the Mars water cycle is usually performed at relatively coarse resolution (200 - 300km), which may not be sufficient to properly represent the impact of waves, fronts, topography effects on the detailed structure of clouds and surface ice deposits. Here, we present new numerical simulations of the annual water cycle performed at a resolution of 1° × 1° (∼ 60 km in latitude). The model includes the radiative effects of clouds, whose influence on the thermal structure and atmospheric dynamics is significant, thus we also examine simulations with inactive clouds to distinguish the direct impact of resolution on circulation and winds from the indirect impact of resolution via water ice clouds. To first order, we find that the high resolution does not dramatically change the behavior of the system, and that simulations performed at ∼ 200 km resolution capture well the behavior of the simulated water cycle and Mars climate. Nevertheless, a detailed comparison between high and low resolution simulations, with reference to observations, reveal several significant changes that impact our understanding of the water cycle active today on Mars. The key northern cap edge dynamics are affected by an increase in baroclinic wave strength, with a complication of northern summer dynamics. South polar frost deposition is modified, with a westward longitudinal shift, since southern dynamics are also influenced. Baroclinic wave mode transitions are observed. New transient phenomena appear, like spiral and streak clouds, already documented in the observations. Atmospheric circulation cells in the polar region exhibit a large variability and are fine structured, with slope winds. Most modeled phenomena affected by high resolution give a picture of a more turbulent planet, inducing further variability. This is challenging for long-period climate studies.
Stochastic Models for Precipitable Water in Convection
NASA Astrophysics Data System (ADS)
Leung, Kimberly
Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV in the sky were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility made the first systematic and high-resolution observations of PWV at Darwin, Australia since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to weather research forecasting (WRF) models. This important observed data at high resolution can capture the fractal behavior of PWV and enables stochastic exploration into the next generation of climate models which considers scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends. Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends.
NASA Astrophysics Data System (ADS)
Park, Shinju; Berenguer, Marc; Sempere-Torres, Daniel; Baugh, Calum; Smith, Paul
2017-04-01
Flash floods induced by heavy rain are one of the hazardous natural events that significantly affect human lives. Because flash floods are characterized by their rapid onset, forecasting flash flood to lead an effective response requires accurate rainfall predictions with high spatial and temporal resolution and adequate representation of the hydrologic and hydraulic processes within a catchment that determine rainfall-runoff accumulations. We present extreme flash flood cases which occurred throughout Europe in 2015-2016 that were identified and forecasted by two real-time approaches: 1) the European Rainfall-Induced Hazard Assessment System (ERICHA) and 2) the European Runoff Index based on Climatology (ERIC). ERICHA is based on the nowcasts of accumulated precipitation generated from the pan-European radar composites produced by the EUMETNET project OPERA. It has the advantage of high-resolution precipitation inputs and rapidly updated forecasts (every 15 minutes), but limited forecast lead time (up to 8 hours). ERIC, on the other hand, provides 5-day forecasts based on the COSMO-LEPS NWP simulations updated 2 times a day but is only produced at a 7 km resolution. We compare the products from both systems and focus on showing the advantages, limitations and complementarities of ERICHA and ERIC for seamless high-resolution flash flood forecasting.
Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs
2014-02-01
The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow bore columns (50×2.1mm, packed with sub-2μm particles) providing short analysis times. The performance of commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE based predictions. We have demonstrated that the reliability of predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. In average, the retention time relative errors were <1.0%, while the predicted critical resolution errors were comprised between 6.9 and 17.2%. Because the simulated robustness testing requires significantly less experimental work than the DoE based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, the column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2μm particles. Again, thanks to modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4min), by proper adjustments of variables. Copyright © 2013 Elsevier B.V. All rights reserved.
Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.
2017-01-01
Computer simulations of cylindrical Compton Ar-Xe gamma camera are described in the current report. Detection efficiency of cylindrical Ar-Xe Compton camera with internal diameter of 40 cm is estimated as1-3%that is 10-100 times higher than collimated Anger’s camera. It is shown that cylindrical Compton camera can image Tc-99m radiotracer distribution with uniform spatial resolution of 20 mm through the whole field of view.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin
2015-05-15
Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The high resolution simulation better resolves precipitation changes than its coarse resolution forcing, which contributes dominantly to the improved P-E change in the regional simulation compared to the global reanalysis. Hence, the former may provide better insights about the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column integrated moisture flux convergence intomore » thermodynamic, dynamic, and transient eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP and feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.« less
A new synoptic scale resolving global climate simulation using the Community Earth System Model
NASA Astrophysics Data System (ADS)
Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana
2014-12-01
High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
Multimaterial 4D Printing with Tailorable Shape Memory Polymers
Ge, Qi; Sakhaei, Amir Hosein; Lee, Howon; Dunn, Conner K.; Fang, Nicholas X.; Dunn, Martin L.
2016-01-01
We present a new 4D printing approach that can create high resolution (up to a few microns), multimaterial shape memory polymer (SMP) architectures. The approach is based on high resolution projection microstereolithography (PμSL) and uses a family of photo-curable methacrylate based copolymer networks. We designed the constituents and compositions to exhibit desired thermomechanical behavior (including rubbery modulus, glass transition temperature and failure strain which is more than 300% and larger than any existing printable materials) to enable controlled shape memory behavior. We used a high resolution, high contrast digital micro display to ensure high resolution of photo-curing methacrylate based SMPs that requires higher exposure energy than more common acrylate based polymers. An automated material exchange process enables the manufacture of 3D composite architectures from multiple photo-curable SMPs. In order to understand the behavior of the 3D composite microarchitectures, we carry out high fidelity computational simulations of their complex nonlinear, time-dependent behavior and study important design considerations including local deformation, shape fixity and free recovery rate. Simulations are in good agreement with experiments for a series of single and multimaterial components and can be used to facilitate the design of SMP 3D structures. PMID:27499417
Resolution dependence of precipitation statistical fidelity in hindcast simulations
O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik; ...
2016-06-19
This article is a U.S. Government work and is in the public domain in the USA. Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high-resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are “realistic”; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental designmore » has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5 day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations, (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al. (), which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.« less
Resolution dependence of precipitation statistical fidelity in hindcast simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik
This article is a U.S. Government work and is in the public domain in the USA. Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high-resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are “realistic”; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental designmore » has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5 day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations, (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al. (), which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.« less
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Sato, Yushi; Nomoto, Ken'ichi; Maeda, Keiichi; Nakasato, Naohito; Hachisu, Izumi
2017-04-01
We investigate nucleosynthesis in tidal disruption events (TDEs) of white dwarfs (WDs) by intermediate-mass black holes. We consider various types of WDs with different masses and compositions by means of three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations. We model these WDs with different numbers of SPH particles, N, from a few 104 to a few 107 in order to check mass resolution convergence, where SPH simulations with N > 107 (or a space resolution of several 106 cm) have unprecedentedly high resolution in this kind of simulation. We find that nuclear reactions become less active with increasing N and that these nuclear reactions are excited by spurious heating due to low resolution. Moreover, we find no shock wave generation. In order to investigate the reason for the absence of a shock wave, we additionally perform one-dimensional (1D) SPH and mesh-based simulations with a space resolution ranging from 104 to 107 cm, using a characteristic flow structure extracted from the 3D SPH simulations. We find shock waves in these 1D high-resolution simulations, one of which triggers a detonation wave. However, we must be careful of the fact that, if the shock wave emerged in an outer region, it could not trigger the detonation wave due to low density. Note that the 1D initial conditions lack accuracy to precisely determine where a shock wave emerges. We need to perform 3D simulations with ≲106 cm space resolution in order to conclude that WD TDEs become optical transients powered by radioactive nuclei.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulationsmore » reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.« less
NASA Astrophysics Data System (ADS)
Rauser, F.
2013-12-01
We present results from the German BMBF initiative 'High Definition Cloud and Precipitation for advancing Climate Prediction -HD(CP)2'. This initiative addresses most of the problems that are discussed in this session in one, unified approach: cloud physics, convection, boundary layer development, radiation and subgrid variability are approached in one organizational framework. HD(CP)2 merges both observation and high performance computing / model development communities to tackle a shared problem: how to improve the understanding of the most important subgrid-scale processes of cloud and precipitation physics, and how to utilize this knowledge for improved climate predictions. HD(CP)2 is a coordinated initiative to: (i) realize; (ii) evaluate; and (iii) statistically characterize and exploit for the purpose of both parameterization development and cloud / precipitation feedback analysis; ultra-high resolution (100 m in the horizontal, 10-50 m in the vertical) regional hind-casts over time periods (3-15 y) and spatial scales (1000-1500 km) that are climatically meaningful. HD(CP)2 thus consists of three elements (the model development and simulations, their observational evaluation and exploitation/synthesis to advance CP prediction) and its first three-year phase has started on October 1st 2012. As a central part of HD(CP)2, the HD(CP)2 Observational Prototype Experiment (HOPE) has been carried out in spring 2013. In this campaign, high resolution measurements with a multitude of instruments from all major centers in Germany have been carried out in a limited domain, to allow for unprecedented resolution and precision in the observation of microphysics parameters on a resolution that will allow for evaluation and improvement of ultra-high resolution models. At the same time, a local area version of the new climate model ICON of the Max Planck Institute and the German weather service has been developed that allows for LES-type simulations on high resolutions on limited domains. The advantage of modifying an existing, evolving climate model is to share insights from high resolution runs directly with the large-scale modelers and to allow for easy intercomparison and evaluation later on. Within this presentation, we will give a short overview on HD(CP)2 , show results from the observation campaign HOPE and the LES simulations of the same domain and conditions and will discuss how these will lead to an improved understanding and evaluation background for the efforts to improve fast physics in our climate model.