Sample records for adaptive high-resolution simulation

  1. Unstructured mesh adaptivity for urban flooding modelling

    NASA Astrophysics Data System (ADS)

    Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.

    2018-05-01

    Over the past few decades, urban floods have gained attention due to their increasing frequency. To provide reliable flood predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, using high-resolution meshes across the whole computational domain imposes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process; for example, high-resolution meshes are placed around buildings and steep regions as the flood water reaches them. In this work a flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with published 2D and 3D results. The comparison shows that the 2D adaptive mesh model provides accurate results at a low computational cost.

  2. Communication: Adaptive boundaries in multiscale simulations

    NASA Astrophysics Data System (ADS)

    Wagoner, Jason A.; Pande, Vijay S.

    2018-04-01

    Combined-resolution simulations are an effective way to study molecular properties across a range of length and time scales. These simulations can benefit from adaptive boundaries that allow the high-resolution region to adapt (change size and/or shape) as the simulation progresses. The number of degrees of freedom required to accurately represent even a simple molecular process can vary by several orders of magnitude throughout the course of a simulation, and adaptive boundaries react to these changes to include an appropriate but not excessive amount of detail. Here, we derive the Hamiltonian and distribution function for such a molecular simulation. We also design an algorithm that can efficiently sample the boundary as a new coordinate of the system. We apply this framework to a mixed explicit/continuum simulation of a peptide in solvent. We use this example to discuss the conditions necessary for a successful implementation of adaptive boundaries that is both efficient and accurate in reproducing molecular properties.

  3. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented in a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows at high spatial resolution. The overset grid assembly (OGA) process, based on collision detection and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, the advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution of simulated turbulent wake eddies.

  4. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable, spatially and temporally adaptive numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited to this problem. The use of high-order methods is motivated by the high level of off-fault resolution required in the earlier low-order finite volume simulations; we believe this need for resolution results from the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library, and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  5. Adaptive hyperspectral imager: design, modeling, and control

    NASA Astrophysics Data System (ADS)

    McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine

    2015-08-01

    An adaptive hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution that is independent of spectral resolution. The system makes it possible to define a variety of acquisition schemes, in particular near-snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach, achieving near-snapshot spectral acquisitions without resorting to computationally heavy post-processing or cumbersome calibration.

  6. The relative entropy is fundamental to adaptive resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreis, Karsten; Potestio, Raffaello

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems, in which atomistic and coarse-grained (CG) force fields are employed simultaneously. In such simulations, two regions with different resolutions are coupled via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for understanding adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.

  7. The relative entropy is fundamental to adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems, in which atomistic and coarse-grained (CG) force fields are employed simultaneously. In such simulations, two regions with different resolutions are coupled via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for understanding adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
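On discrete distributions, the relative entropy that this record builds on reduces to the Kullback-Leibler divergence between the atomistic reference and the coarse-grained model. A minimal sketch on toy histograms (illustrative only, not the authors' molecular systems):

```python
import numpy as np

def relative_entropy(p, q):
    """Discrete relative entropy S_rel = sum_i p_i ln(p_i / q_i).

    p: probabilities from the reference (atomistic) model
    q: probabilities from the coarse-grained (CG) model
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two toy histograms: the closer the CG model q is to the reference p,
# the smaller the relative entropy.
p = np.array([0.1, 0.4, 0.4, 0.1])
q_good = np.array([0.15, 0.35, 0.35, 0.15])
q_poor = np.array([0.4, 0.1, 0.1, 0.4])
assert relative_entropy(p, q_good) < relative_entropy(p, q_poor)
```

A CG potential that minimizes this quantity, per the abstract, yields a smoother coupling across the hybrid region.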

  8. Adapting to life: ocean biogeochemical modelling and adaptive remeshing

    NASA Astrophysics Data System (ADS)

    Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.

    2014-05-01

    An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to the models' failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical-column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed-mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work, the adaptivity metric used is flexible, and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. More work is required to move this to fully 3-D simulations.

  9. Simulating observations with HARMONI: the integral field spectrograph for the European Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Zieleniewski, Simon; Thatte, Niranjan; Kendrew, Sarah; Houghton, Ryan; Tecza, Matthias; Clarke, Fraser; Fusco, Thierry; Swinbank, Mark

    2014-07-01

    With the next generation of extremely large telescopes commencing construction, there is an urgent need for detailed quantitative predictions of the scientific observations that these new telescopes will enable. Most of these new telescopes will have adaptive optics fully integrated with the telescope itself, allowing unprecedented spatial resolution combined with enormous sensitivity. However, the adaptive optics point spread function will be strongly wavelength dependent, requiring detailed simulations that accurately model these variations. We have developed a simulation pipeline for the HARMONI integral field spectrograph, a first light instrument for the European Extremely Large Telescope. The simulator takes high-resolution input data-cubes of astrophysical objects and processes them with accurate atmospheric, telescope and instrumental effects, to produce mock observed cubes for chosen observing parameters. The output cubes represent the result of a perfect data reduction process, enabling a detailed analysis and comparison between input and output, showcasing HARMONI's capabilities. The simulations utilise a detailed knowledge of the telescope's wavelength dependent adaptive optics point spread function. We discuss the simulation pipeline and present an early example of the pipeline functionality for simulating observations of high redshift galaxies.

  10. A Parallel, Multi-Scale Watershed-Hydrologic-Inundation Model with Adaptively Switching Mesh for Capturing Flooding and Lake Dynamics

    NASA Astrophysics Data System (ADS)

    Ji, X.; Shen, C.

    2017-12-01

    Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. This model applies a semi-implicit, semi-Lagrangian (SISL) scheme to solve the dynamic wave equations and, with the assistance of the multi-mesh method, adaptively applies the dynamic wave equation only in areas of deep inundation. The model thereby achieves a balance between accuracy and computational cost.
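The CFL restriction cited above ties the admissible explicit time step directly to the mesh spacing, which is why uniformly high resolution is so expensive. A minimal sketch of the constraint (the Courant number of 0.9 is an illustrative choice, not a value from the paper):

```python
def cfl_time_step(dx, velocity, courant=0.9):
    """Largest stable explicit time step allowed by the CFL condition,
    dt <= C * dx / |v|, for mesh spacing dx and flow speed velocity."""
    return courant * dx / abs(velocity)

# Refining the mesh 10x in a flooded cell forces a 10x smaller time step,
# which is why the model enables high-resolution meshes only where needed.
coarse = cfl_time_step(dx=100.0, velocity=2.0)  # 100 m cells
fine = cfl_time_step(dx=10.0, velocity=2.0)     # 10 m cells
assert fine == coarse / 10
```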

  11. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
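The conservative projection between meshes described above can be illustrated in one dimension, where projecting piecewise-constant control-volume averages reduces to an overlap-weighted remap. This is a 1-D piecewise-constant analogue for intuition, not the authors' higher-order CVFE Galerkin projection:

```python
import numpy as np

def conservative_remap(x_old, u_old, x_new):
    """Remap piecewise-constant cell averages u_old on mesh x_old (cell
    edges) onto mesh x_new, conserving the integral exactly: each new
    cell receives the overlap-weighted contribution of every old cell."""
    u_new = np.zeros(len(x_new) - 1)
    for j in range(len(x_new) - 1):
        a, b = x_new[j], x_new[j + 1]
        total = 0.0
        for i in range(len(x_old) - 1):
            overlap = min(b, x_old[i + 1]) - max(a, x_old[i])
            if overlap > 0:
                total += u_old[i] * overlap
        u_new[j] = total / (b - a)
    return u_new

x_old = np.array([0.0, 0.5, 1.0])
u_old = np.array([1.0, 3.0])
x_new = np.array([0.0, 0.25, 1.0])
u_new = conservative_remap(x_old, u_old, x_new)
# The integral 1*0.5 + 3*0.5 = 2.0 is preserved on the new mesh.
assert abs(np.sum(u_new * np.diff(x_new)) - 2.0) < 1e-12
```

Consistent pointwise interpolation, by contrast, offers no such guarantee on the integral, which is the non-conservation problem the abstract points out.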

  12. Adapting to life: ocean biogeochemical modelling and adaptive remeshing

    NASA Astrophysics Data System (ADS)

    Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.

    2013-11-01

    An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. As an example, state-of-the-art models give values of primary production approximately two orders of magnitude lower than those observed in the ocean's oligotrophic gyres, which cover a third of the Earth's surface. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi 1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The simulations capture both the seasonal and inter-annual variations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3, so reducing computational overhead. We then show the potential of this method in two case studies where we change the metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high spatial resolution whilst minimising computational cost.

  13. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

    We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with the adaptive mesh refinement (AMR) code SFUMATO. Recent ALMA observations have revealed the early phases of binary and multiple star formation at high spatial resolution, and these observations should be compared with theoretical models of comparably high spatial resolution. We present two theoretical models: (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core fragments during the collapse, and dynamical interaction between the fragments produces an arc-like structure, one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto a protobinary. The simulations exhibit asymmetry of the circumbinary disk; such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.

  14. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-02-15

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code, MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  15. Visible light high-resolution imaging system for large aperture telescope by liquid crystal adaptive optics with phase diversity technique.

    PubMed

    Xu, Zihao; Yang, Chengliang; Zhang, Peiguang; Zhang, Xingyun; Cao, Zhaoliang; Mu, Quanquan; Sun, Qiang; Xuan, Li

    2017-08-30

    To date, more than eight large-aperture telescopes (larger than eight meters) worldwide are equipped with adaptive optics systems. Due to limitations such as the difficulty of increasing the actuator count of deformable mirrors, most of them work in the infrared waveband. A novel two-step high-resolution optical imaging approach is proposed, applying the phase diversity (PD) technique to an open-loop liquid crystal adaptive optics system (LC AOS) for visible-light high-resolution adaptive imaging. Because the traditional PD technique is not suitable for an LC AOS, a novel PD strategy is proposed that reduces the wavefront estimation error caused by the non-modulated light generated by the liquid crystal spatial light modulator (LC SLM) and makes the residual distortions after open-loop correction smaller. Moreover, the LC SLM can introduce arbitrary aberrations, allowing free selection of the phase diversity. The estimation errors are greatly reduced in both simulations and experiments, and the resolution of the reconstructed image is greatly improved, both in subjective visual quality and in the highest discernible spatial resolution. This technique can be widely used in large-aperture telescopes for astronomical observations, such as of terrestrial planets and quasars, and in other applications involving wavefront correction.

  16. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Seun; Lin, Guang; Sun, Xin

    2013-01-01

    Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation, in order to create high-resolution images over large domains. Stochastic methods have been widely used in image reconstruction from the two-point correlation function; the main challenge is the efficiency of the reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation, and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
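The abstract does not give the exact "super fast" cooling schedule, so the sketch below substitutes a generic geometric schedule; the Metropolis acceptance rule and best-state tracking are the standard simulated annealing ingredients, here applied to a toy one-dimensional reconstruction target:

```python
import math
import random

def anneal(energy, state, neighbor, t0=1.0, alpha=0.995, steps=2000, seed=0):
    """Simulated annealing with a fast geometric cooling schedule
    T_k = t0 * alpha**k (a stand-in for the paper's schedule)."""
    rng = random.Random(seed)
    e = energy(state)
    best, best_e = state, e
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        ce = energy(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if ce < e or rng.random() < math.exp((e - ce) / max(t, 1e-12)):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= alpha
    return best, best_e

# Toy target: a 1-D binary "microstructure" with a prescribed volume fraction
# (a real reconstruction would match a two-point correlation function).
target = 0.25
def energy(s): return (sum(s) / len(s) - target) ** 2
def neighbor(s, rng):
    s = list(s)
    i = rng.randrange(len(s))
    s[i] = 1 - s[i]  # flip one randomly chosen pixel
    return s

best, best_e = anneal(energy, [1] * 32, neighbor)
assert best_e < 0.01
```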

  17. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    NASA Astrophysics Data System (ADS)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code, MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  18. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-10-01

    In adaptive resolution simulations the same system is concurrently modeled at different resolutions in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, the electrostatic interaction, whose accurate treatment is crucial in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of the Coulomb potential. In the present work we propose and validate the use of a short-range modification of the Coulomb potential, the damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, validated here on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and the customized version employed in the present work is made publicly available.
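The damped shifted force model referenced here has a standard closed form (due to Fennell and Gezelter) in which both the pair energy and its derivative are shifted to vanish at the cut-off. A sketch with illustrative parameter values (alpha and r_cut below are assumptions, not values from the paper):

```python
import math

def dsf_coulomb(r, qi, qj, alpha=0.2, r_cut=12.0, k_e=1.0):
    """Damped shifted force (DSF) pair potential: a short-ranged Coulomb
    model whose energy and force both go smoothly to zero at r_cut.
    alpha is the damping parameter; k_e is the Coulomb constant in the
    chosen unit system (set to 1 here)."""
    if r >= r_cut:
        return 0.0
    # Constant shifts that zero the energy and the force at the cut-off.
    shift = math.erfc(alpha * r_cut) / r_cut
    force_shift = (math.erfc(alpha * r_cut) / r_cut**2
                   + 2.0 * alpha / math.sqrt(math.pi)
                   * math.exp(-(alpha * r_cut)**2) / r_cut)
    return k_e * qi * qj * (math.erfc(alpha * r) / r
                            - shift + force_shift * (r - r_cut))

# The energy vanishes continuously at the cut-off:
assert abs(dsf_coulomb(12.0 - 1e-9, 1.0, -1.0)) < 1e-6
assert dsf_coulomb(12.0, 1.0, -1.0) == 0.0
```

Because the interaction is strictly short-ranged, it drops into the H-AdResS pairwise-force machinery without the long-range machinery (e.g. Ewald sums) that full Coulomb requires.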

  19. The WASCAL high-resolution climate projection ensemble for West Africa

    NASA Astrophysics Data System (ADS)

    Kunstmann, Harald; Heinzeller, Dominikus; Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Hamann, Ilse; Salack, Seyni

    2017-04-01

    With climate change being one of the most severe challenges to rural Africa in the 21st century, West Africa faces an urgent need to develop effective adaptation and mitigation measures to protect its constantly growing population. We perform ensemble-based regional climate simulations at a high resolution of 12 km for West Africa to allow a scientifically sound derivation of climate change adaptation measures. Based on the RCP4.5 scenario, our ensemble consists of three simulation experiments with the Weather Research and Forecasting (WRF) model and one additional experiment with the Consortium for Small-scale Modelling's COSMO model in Climate Mode (COSMO-CLM). We discuss the model performance over the validation period 1980-2010, including a novel, station-based precipitation database for West Africa obtained within the WASCAL (West African Science Service Centre for Climate Change and Adapted Land Use) program. Particular attention is paid to the representation of the dynamics of the West African Summer Monsoon and to the added value of our high-resolution models over existing data sets. We further present results on the climate change signal obtained for the two future periods 2020-2050 and 2070-2100 and compare them to current state-of-the-art projections from the CORDEX-Africa project. While the temperature change signal is similar to that obtained within CORDEX-Africa, our simulations predict a wetter future for the Coast of Guinea and the southern Soudano area and a slight drying in the northernmost part of the Sahel.

  20. Non-periodic multi-slit masking for a single counter rotating 2-disc chopper and channeling guides for high resolution and high intensity neutron TOF spectroscopy

    NASA Astrophysics Data System (ADS)

    Bartkowiak, M.; Hofmann, T.; Stüßer, N.

    2017-02-01

    Energy resolution is an important design goal for time-of-flight instruments and neutron spectroscopy. For high-resolution applications, it is required that the burst times of choppers be short, going down to the μs-range. To produce short pulses while maintaining high neutron flux, we propose beam masks with more than two slits on a counter-rotating 2-disc chopper, behind specially adapted focusing multi-channel guides. A novel non-regular arrangement of the slits ensures that the beam opens only once per chopper cycle, when the masks are congruently aligned. Additionally, beam splitting and intensity focusing by guides before and after the chopper position provide high intensities even for small samples. Phase-space analysis and Monte Carlo simulations on examples of four-slit masks with adapted guide geometries show the potential of the proposed setup.

  1. Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi

    Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation compared with equivalent fully coarse-grained and fully atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl's law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches in which a reduction of degrees of freedom occurs in a multiscale fashion.

  2. Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique

    DOE PAGES

    Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi

    2017-06-01

    Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation compared with equivalent fully coarse-grained and fully atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl's law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches in which a reduction of degrees of freedom occurs in a multiscale fashion.
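The abstract does not reproduce the adapted formula, so the following is only the generic Amdahl's-law speedup estimate under assumed quantities: a fraction of the work pinned at full resolution, and a per-particle speedup for the coarse-grained part:

```python
def adress_speedup(f_hi, s_cg):
    """Amdahl-style estimate of the speedup of an adaptive resolution run
    over a fully atomistic one.

    f_hi : fraction of the computational work that must stay at full
           (atomistic) resolution, including the hybrid region
    s_cg : per-particle speedup of the coarse-grained model over the
           atomistic one

    This generic form is an illustration; the paper derives a version
    adapted to the specific quantities of the adaptive resolution setup.
    """
    return 1.0 / (f_hi + (1.0 - f_hi) / s_cg)

# If 20% of the work stays atomistic and the CG model is 10x cheaper, the
# overall speedup is capped well below 10x, as Amdahl's law dictates.
assert round(adress_speedup(0.2, 10.0), 2) == 3.57
```

In the limit f_hi → 0 the estimate recovers the full coarse-grained speedup s_cg, and for f_hi → 1 no speedup at all, matching the intuition behind the law.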

  3. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions in cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, in order to explore the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of the ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in AMR simulations diverges strongly from the fixed-grid approach, with more power on small scales, even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because, before certain times, the upper grid levels are held back to maintain approximately fixed physical resolution and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolution, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.

  4. Evaluation of high-resolution climate simulations for West Africa using COSMO-CLM

    NASA Astrophysics Data System (ADS)

    Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Laux, Patrick; Heinzeller, Dominikus; Kunstmann, Harald; Sarr, Abdoulaye; Thierno Gaye, Amadou

    2017-04-01

    The climate change modeling activities within the WASCAL program (West African Science Service Center on Climate Change and Adapted Land Use) concentrate on providing future climate change scenario data of high spatial and temporal resolution and quality for West Africa. Such information is in high demand for impact studies in water resources and agriculture and for the development of reliable climate change adaptation and mitigation strategies. In this study, we present a detailed evaluation of high-resolution simulation runs based on the regional climate model COSMO model in CLimate Mode (COSMO-CLM). The model is applied over West Africa in a nested approach with two simulation domains at 0.44° and 0.11° resolution using reanalysis data from ERA-Interim (1979-2013). The model runs are compared to several state-of-the-art observational references (e.g., CRU, CHIRPS), including daily precipitation data provided by national meteorological services in West Africa. Special attention is paid to the reproduction of the dynamics of the West African Monsoon (WAM), its associated precipitation patterns, and crucial agro-climatological indices such as the onset of the rainy season. In addition, first outcomes of the regional climate change simulations driven by MPI-ESM-LR are presented for a historical period (1980 to 2010) and two future periods (2020 to 2050, 2070 to 2100). The evaluation of the reanalysis runs shows that COSMO-CLM is able to reproduce the observed major climate characteristics, including the West African Monsoon, within the range of comparable RCM evaluation studies. However, substantial uncertainties remain, especially in the Sahel zone. The added value of the higher resolution of the nested run is reflected in a smaller bias in extreme precipitation statistics with respect to the reference data.

  5. High resolution crop growth simulation for identification of potential adaptation strategies under climate change

    NASA Astrophysics Data System (ADS)

    Kim, K. S.; Yoo, B. H.

    2016-12-01

    Impact assessment of climate change on crop production facilitates the planning of adaptation strategies. Because socio-environmental conditions differ between local areas, it is advantageous to assess potential adaptation measures for a specific area. The objectives of this study were to develop a crop growth simulation system at a very high spatial resolution, e.g., 30 m, and to assess different adaptation options, including shifts of planting date and the use of different cultivars. The Decision Support System for Agrotechnology Transfer (DSSAT) model was used to predict yields of soybean and maize in Korea. Gridded climate and soil data were used to prepare input data for the DSSAT model. Weather input data were prepared at 30 m resolution using bilinear interpolation of gridded climate scenario data obtained from the Korea Meteorological Administration; the spatial resolution of temperature and precipitation was 1 km, whereas that of solar radiation was 12.5 km. Soil series data at 30 m resolution were obtained from the soil database operated by the Rural Development Administration, Korea. The SOL file, which is the soil input file for the DSSAT model, was prepared using the physical and chemical properties of a given soil series, which were available from the soil database. Crop yields were predicted for potential adaptation options based on planting date and cultivar. For example, 10 planting dates and three cultivars were used to identify ideal management options for soybean; for maize, a combination of 20 planting dates and two cultivars was used. Predicted crop yields differed by site even within a relatively small region. For example, the maximum of average yields for the 2001-2010 seasons differed by site within a county whose area is 520 km2 (Fig. 1). There was also spatial variation in the ideal management option across the region (Fig. 2). These results suggest that local assessment of climate change impacts on crop production would be useful for planning adaptation options.
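The 30 m weather-input step described above reduces to bilinear interpolation of the coarser (1 km or 12.5 km) gridded fields. A minimal NumPy sketch, with our own function name and fractional-grid-index convention (the study's actual downscaling pipeline and file formats are not shown):

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinearly interpolate a 2-D field at fractional index (x, y).

    grid : 2-D array on a regular grid (row index = y, column index = x)
    x, y : fractional grid coordinates; must lie strictly inside
           [0, ncols-1) and [0, nrows-1) so the 2x2 stencil fits
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * grid[y0, x0] +
            fx * (1 - fy) * grid[y0, x1] +
            (1 - fx) * fy * grid[y1, x0] +
            fx * fy * grid[y1, x1])

# Downscaling example: sample a 2x2 coarse cell at its centre
coarse = np.array([[0.0, 1.0],
                   [2.0, 3.0]])
print(bilinear(coarse, 0.5, 0.5))  # → 1.5
```

In a real downscaling workflow the target 30 m coordinates would first be converted to fractional indices of the source 1 km (or 12.5 km) grid before calling such a routine.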

  6. Adaptive resolution simulation of an atomistic protein in MARTINI water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J., E-mail: s.j.marrink@rug.nl

    2014-02-07

    We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.

  7. Adaptive resolution simulation of an atomistic protein in MARTINI water.

    PubMed

    Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J; Praprotnik, Matej

    2014-02-07

    We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.

  8. Development of the Fully Adaptive Storm Tide (FAST) Model for hurricane induced storm surges and associated inundation

    NASA Astrophysics Data System (ADS)

    Teng, Y. C.; Kelly, D.; Li, Y.; Zhang, K.

    2016-02-01

    A new state-of-the-art model (the Fully Adaptive Storm Tide model, FAST) for the prediction of storm surges over complex landscapes is presented. The FAST model is based on the conservation form of the full non-linear depth-averaged long wave equations. The equations are solved via an explicit finite volume scheme with interfacial fluxes computed via Osher's approximate Riemann solver. Geometric source terms are treated in a high-order, well-balanced manner. The numerical solution technique has been chosen to enable the accurate simulation of wetting and drying over complex topography. Another important feature of the FAST model is the use of a simple underlying Cartesian mesh with tree-based static and dynamic adaptive mesh refinement (AMR). This permits the simulation of unsteady flows over varying landscapes (including localized features such as canals) by locally increasing (or relaxing) grid resolution in a dynamic fashion. The use of (dynamic) AMR lowers the computational cost of the storm surge model whilst retaining high resolution (and thus accuracy) where and when it is required. In addition, the FAST model has been designed to execute in a parallel computational environment with localized time-stepping. The FAST model has already been carefully verified against a series of benchmark-type problems (Kelly et al. 2015). Here we present two simulations of the storm tide due to Hurricane Ike (2008) and Hurricane Sandy (2012). The model incorporates high resolution LIDAR data for the major portion of New York City. Results compare favorably with water elevations measured by NOAA tidal gauges and by deployed mobile sensors, and with high water marks collected by the USGS.

  9. An atmospheric turbulence and telescope simulator for the development of AOLI

    NASA Astrophysics Data System (ADS)

    Puga, Marta; López, Roberto; King, David; Oscoz, Alejandro

    2014-08-01

    AOLI, the Adaptive Optics Lucky Imager, is the next generation of extremely high resolution instruments in the optical range, combining two of the most promising techniques: adaptive optics and lucky imaging. Reaching fainter objects at maximum resolution requires making better use of the weak signal in each lucky image. AOLI aims to achieve this by using an adaptive optics system to reduce the dispersion that seeing causes on the spot, thereby increasing the number of optimal images to accumulate and maximizing the efficiency of the lucky imaging technique. The complexity of developing the hardware, control, and software for on-site telescope tests calls for a system that simulates the telescope performance. This paper outlines the requirements and a concept/preliminary design for the William Herschel Telescope (WHT) and atmospheric turbulence simulator. The design consists of a replica of the telescope pupil, a variable intensity point source, phase plates, and a focal plane mask to assist in the alignment, diagnostics, and calibration of the AOLI wavefront sensor, AO loop, and science detectors, as well as enabling stand-alone test operation of AOLI.

  10. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    PubMed

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    The difficulties and limitations of small target detection methods for high-resolution remote sensing data have been a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a characterized model of information perception that exploits that system's advantages of fast and accurate small target detection in complex, varied natural environments. The proposed model forms a theoretical basis for small target detection in high-resolution remote sensing data. After comparing prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual method of information processing for high-resolution remote sensing data. A small target detector and corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of pool cells, and the character of nonlinear self-adaption. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

  11. A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM

    NASA Astrophysics Data System (ADS)

    Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui

    2014-12-01

    Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that the source spacing has a marked influence on the investigation depth and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution of both low-resistivity and high-resistivity formations. The results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy for guiding geosteering, and that it is well suited to simulating the response of resistivity LWD tools.

  12. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V-22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V-22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  13. Climate Modeling: Ocean Cavities below Ice Shelves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petersen, Mark Roger

    The Accelerated Climate Model for Energy (ACME), a new initiative by the U.S. Department of Energy, includes unstructured-mesh ocean, land-ice, and sea-ice components using the Model for Prediction Across Scales (MPAS) framework. The ability to run coupled high-resolution global simulations efficiently on large, high-performance computers is a priority for ACME. Sub-ice shelf ocean cavities are a significant new capability in ACME, and will be used to better understand how changing ocean temperature and currents influence glacial melting and retreat. These simulations take advantage of the horizontal variable-resolution mesh and adaptive vertical coordinate in MPAS-Ocean, in order to place high resolution below ice shelves and near grounding lines.

  14. New methods and astrophysical applications of adaptive mesh fluid simulations

    NASA Astrophysics Data System (ADS)

    Wang, Peng

    The formation of stars, galaxies, and supermassive black holes is among the most interesting unsolved problems in astrophysics. These problems are highly nonlinear and involve enormous dynamic ranges, so numerical simulations with spatial adaptivity are crucial to understanding them. In this thesis, we discuss the development and application of adaptive mesh refinement (AMR) multi-physics fluid codes to simulate these nonlinear structure formation problems. To simulate the formation of star clusters, we have developed an AMR magnetohydrodynamics (MHD) code coupled with radiative cooling. We have also developed novel algorithms for sink particle creation, accretion, merging, and outflows, all of which are coupled with the fluid algorithms using operator splitting. With this code, we have been able to perform the first AMR-MHD simulation of star cluster formation over several dynamical times, including sink particle and protostellar outflow feedback. The results demonstrate that protostellar outflows can drive supersonic turbulence in dense clumps and explain the observed slow and inefficient star formation. We also suggest that the global collapse rate is the most important factor controlling the massive star accretion rate. On the topic of galaxy formation, we discuss the results of three projects. In the first project, using cosmological AMR hydrodynamics simulations, we found that isolated massive stars still form in cosmic string wakes even though the megaparsec-scale structure has been perturbed significantly by the cosmic strings. In the second project, we calculated the dynamical heating rate in galaxy formation; balancing this heating rate against the atomic cooling rate yields a critical halo mass that agrees with the results of numerical simulations. This demonstrates that the effect of dynamical heating should be incorporated into semi-analytic models in the future.
In the third project, using our AMR-MHD code coupled with the radiative cooling module, we performed the first MHD simulations of disk galaxy formation. We find that the initial magnetic fields are quickly amplified to Milky Way strength in a self-regulated way, with an amplification rate of roughly one e-folding per orbit. This suggests that Milky Way strength magnetic fields might be common in high redshift disk galaxies. We have also developed an AMR relativistic hydrodynamics code to simulate relativistic black hole jets. We discuss the coupling of the AMR framework with various relativistic solvers and conduct extensive algorithmic comparisons. Via various test problems, we emphasize the importance of resolution studies in relativistic flow simulations, because extremely high resolution is required, especially when shear flows are present. We then present the results of 3D simulations of supermassive black hole jet propagation and gamma-ray burst jet breakout. Resolution studies of the two 3D jet simulations further highlight the need for high resolution to calculate relativistic flow problems accurately. Finally, to push forward the kinds of simulations described above, we need faster codes with more physics included. We describe an implementation of compressible inviscid fluid solvers with AMR on Graphics Processing Units (GPUs) using NVIDIA's CUDA. We show that the class of high resolution shock-capturing schemes maps naturally onto this architecture. For both uniform and adaptive simulations, we achieve an overall speedup of approximately 10 times on one Quadro FX 5600 GPU compared to a single 3 GHz Intel core on the host computer. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock-capturing schemes. This is shown directly by an implementation of a magnetohydrodynamic solver and a comparison of its performance to the pure hydrodynamic case.

  15. Comments on "Adaptive resolution simulation in equilibrium and beyond" by H. Wang and A. Agarwal

    NASA Astrophysics Data System (ADS)

    Klein, R.

    2015-09-01

    Wang and Agarwal (Eur. Phys. J. Special Topics, this issue, 2015, doi: 10.1140/epjst/e2015-02411-2) discuss variants of Adaptive Resolution Molecular Dynamics Simulations (AdResS), and their applications. Here we comment on their report, addressing scaling properties of the method, artificial forcings implemented to ensure constant density across the full simulation despite changing thermodynamic properties of the simulated media, the possible relation between an AdResS system on the one hand and a phase transition phenomenon on the other, and peculiarities of the SPC/E water model.

  16. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    NASA Astrophysics Data System (ADS)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high resolution imaging. However, phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side-lobes. Moreover, the invisibility of directional objects that emit mainly parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
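The coherence factor mentioned above is commonly defined as the ratio of coherent power to N times the incoherent power of the delayed channel signals at an image point. A minimal sketch, with illustrative array sizes and data of our own choosing:

```python
import numpy as np

def coherence_factor(channel_data):
    """Coherence factor for one image point.

    channel_data : 1-D array of delayed (focused) samples, one per array
                   element. Ranges from ~0 (incoherent noise) to 1
                   (perfectly coherent wavefront).
    """
    n = len(channel_data)
    coherent = np.abs(np.sum(channel_data)) ** 2        # power of the coherent sum
    incoherent = n * np.sum(np.abs(channel_data) ** 2)  # N x total channel power
    return coherent / incoherent

aligned = np.ones(64)                # perfectly aligned wavefront
rng = np.random.default_rng(0)
noisy = rng.standard_normal(64)      # incoherent samples
print(coherence_factor(aligned))     # → 1.0
print(coherence_factor(noisy))       # small: little coherence in noise
```

In adaptive weighting, the CF is computed per pixel and multiplies the beamformed output, suppressing contributions where the delayed signals disagree (e.g., due to SoS errors).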

  17. Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2017-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a flexible UH-60A rotor in forward flight, where the rotor wake interacts with the rotor blades. These flow conditions involved blade vortex interaction and dynamic stall, two common conditions that occur as modern helicopter designs strive to achieve greater flight speeds and payload capacity. These numerical simulations utilized high-order spatial accuracy and delayed detached eddy simulation. Emphasis was placed on understanding how improved rotor wake resolution affects the prediction of the normal force, pitching moment, and chord force of the rotor. Adaptive mesh refinement was used to highly resolve the turbulent rotor wake in a computationally efficient manner. Moreover, blade vortex interaction was found to trigger dynamic stall. Time-dependent flow visualization was utilized to provide an improved understanding of the numerical and physical mechanisms involved with three-dimensional dynamic stall.

  18. How the Dynamics of a Supramolecular Polymer Determines Its Dynamic Adaptivity and Stimuli-Responsiveness: Structure-Dynamics-Property Relationships From Coarse-Grained Simulations.

    PubMed

    Torchi, Andrea; Bochicchio, Davide; Pavan, Giovanni M

    2018-04-12

    The rational design of supramolecular polymers that can adapt or respond in time to specific stimuli in a controlled way is interesting for many applications, but it requires understanding the molecular factors that make the material faster or slower in responding to the stimulus. To this end, it is necessary to study the dynamic adaptive properties at submolecular resolution, which is difficult at the experimental level. Here we show coarse-grained molecular dynamics simulations (<5 Å resolution) demonstrating how the dynamic adaptivity and stimuli-responsiveness of a supramolecular polymer are controlled by the intrinsic dynamics of the assembly, which is in turn determined by the structure of the monomers. As a representative case, we focus on a water-soluble 1,3,5-benzenetricarboxamide (BTA) supramolecular polymer incorporating (charged) receptor monomers, experimentally seen to undergo dynamic clustering following superselective binding to a multivalent recruiter. Our simulations show that the dynamic reorganization of the supramolecular structure proceeds via monomer diffusion on the dynamic fiber surface (exchange within the fiber). Rationally changing the structure of the monomers to make the fiber surface more or less dynamic allows tuning of the rate of response to the stimulus and of supramolecular reconfiguration. Simple in silico experiments draw a structure-dynamics-property relationship revealing the key factors underpinning the dynamic adaptivity and stimuli-responsiveness of these supramolecular polymers. We provide clear evidence that mastering the bioinspired properties of these fibers requires controlling their intrinsic dynamics, and the high resolution of our molecular models permits us to show how.

  19. Low-resolution simulations of vesicle suspensions in 2D

    NASA Astrophysics Data System (ADS)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by the rich and complex dynamics of vesicles due to their interaction with the bulk fluid and by their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often erroneous results. In this paper, we investigate the effect of a number of empirical algorithmic fixes (commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that integrates these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicle membranes, adaptive time stepping and a repulsion force for handling vesicle collisions, and correction of vesicle area and arc length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved counterparts. We observe that the LRCA enable efficient and statistically accurate low-resolution simulations of vesicle suspensions, while being 10× to 100× faster.
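Of the LRCA ingredients, adaptive time stepping is the easiest to illustrate generically. The sketch below is a textbook step-doubling error controller wrapped around an explicit Euler step, not the paper's spectral vesicle scheme; all names and tolerances are our own.

```python
import math

def adaptive_step(f, y, t, dt, tol):
    """Advance y' = f(t, y) by one accepted explicit-Euler step.

    The local error is estimated by comparing one full step against two
    half steps; the step is halved on rejection, and a larger step is
    suggested after acceptance (error scales as dt**2 for Euler).
    """
    while True:
        full = y + dt * f(t, y)                      # one step of size dt
        half = y + 0.5 * dt * f(t, y)                # two steps of size dt/2
        two_half = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(two_half - full)
        if err <= tol:
            dt_next = min(2.0 * dt, 0.9 * dt * math.sqrt(tol / max(err, 1e-16)))
            return two_half, t + dt, dt_next
        dt *= 0.5                                    # reject: step too ambitious

# Integrate y' = -y from y(0) = 1 up to t = 1; exact answer is exp(-1).
y, t, dt = 1.0, 0.0, 0.5
while t < 1.0:
    dt = min(dt, 1.0 - t)                            # do not overshoot t = 1
    y, t, dt = adaptive_step(lambda s, v: -v, y, t, dt, 1e-4)
print(abs(y - math.exp(-1.0)) < 1e-2)  # → True
```

The payoff in vesicle simulations is the same as here: the controller takes small steps only when the dynamics (e.g., a near-collision) demands it.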

  20. Numerical modeling of landslide-generated tsunami using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Wilson, Cian; Collins, Gareth; Desousa Costa, Patrick; Piggott, Matthew

    2010-05-01

    Landslides impacting into or occurring under water generate waves, which can have devastating environmental consequences. Depending on the characteristics of the landslide, the waves can have significant amplitude and potentially propagate over large distances. Linear models of classical earthquake-generated tsunamis cannot reproduce the highly nonlinear generation mechanisms required to accurately predict the consequences of landslide-generated tsunamis. Also, laboratory-scale experimental investigation is limited to simple geometries and short time-scales before wave reflections contaminate the data. Computational fluid dynamics models based on the nonlinear Navier-Stokes equations can simulate landslide-tsunami generation at realistic scales. However, traditional chessboard-like structured meshes introduce superfluous resolution and hence the computing power required for such a simulation can be prohibitively high, especially in three dimensions. Unstructured meshes allow the grid spacing to vary rapidly from high resolution in the vicinity of small scale features to much coarser, lower resolution in other areas. Combining this variable resolution with dynamic mesh adaptivity allows such high resolution zones to follow features like the interface between the landslide and the water whilst minimising the computational costs. Unstructured meshes are also better suited to representing complex geometries and bathymetries, allowing more realistic domains to be simulated. Modelling multiple materials, like water, air and a landslide, on an unstructured adaptive mesh poses significant numerical challenges. Novel methods of interface preservation must be considered and coupled to a flow model in such a way as to ensure conservation of the different materials. Furthermore, this conservation property must be maintained during successive stages of mesh optimisation and interpolation. 
In this paper we validate a new multi-material adaptive unstructured fluid dynamics model against the well-known Lituya Bay landslide-generated wave experiment and case study [1]. In addition, we explore the effect of physical parameters, such as the shape, velocity and viscosity of the landslide, on wave amplitude and run-up, to quantify their influence on the landslide-tsunami hazard. As well as reproducing the experimental results, the model is shown to have excellent conservation and bounding properties. It also requires fewer nodes than an equivalent-resolution fixed mesh simulation, therefore minimising at least one aspect of the computational cost. These computational savings are directly transferable to higher dimensions, and some initial three-dimensional results are also presented. These reproduce the experiments of DiRisio et al. [2], where an 80 cm long landslide analogue was released from the side of an 8.9 m diameter conical island in a 50 × 30 m tank of water. The resulting impact between the landslide and the water generated waves with an amplitude of 1 cm at wave gauges around the island. The range of scales that must be considered in any attempt to numerically reproduce this experiment makes it an ideal case study for our multi-material adaptive unstructured fluid dynamics model. [1] FRITZ, H. M., MOHAMMED, F., & YOO, J. 2009. Lituya Bay Landslide Impact Generated Mega-Tsunami 50th Anniversary. Pure and Applied Geophysics, 166(1), 153-175. [2] DIRISIO, M., DEGIROLAMO, P., BELLOTTI, G., PANIZZO, A., ARISTODEMO, F.,

  1. A cost-effective strategy for nonoscillatory convection without clipping

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1990-01-01

    Clipping of narrow extrema and distortion of smooth profiles are well-known problems associated with so-called high resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially-diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher order upwinding locally, in regions of rapidly changing gradients. This is highly cost effective because the wider stencil is used only where needed, in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
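Since the abstract builds on third-order upwinding, a minimal conservative sketch of an unlimited QUICKEST-style update for 1-D constant-velocity convection may help. The universal limiter, adaptive stencil expansion, and peak discriminator described above are all omitted, and the setup (grid size, Courant number, Gaussian profile) is our own illustration.

```python
import numpy as np

def quickest_step(u, c):
    """One flux-form step of u_t + a*u_x = 0 on a periodic grid for
    positive velocity, using a third-order upwind (QUICKEST-style)
    face value; c = a*dt/dx is the Courant number (0 <= c <= 1).
    """
    up1 = np.roll(u, -1)  # u[i+1]
    um1 = np.roll(u, 1)   # u[i-1]
    # right-face value of each cell: linear average, advective
    # correction, and upwind-biased curvature term
    face = (0.5 * (u + up1)
            - 0.5 * c * (up1 - u)
            - (1.0 - c * c) / 6.0 * (up1 - 2.0 * u + um1))
    return u - c * (face - np.roll(face, 1))

x = np.arange(200)
u = np.exp(-0.5 * ((x - 50.0) / 8.0) ** 2)  # well-resolved smooth pulse
mass0 = u.sum()
for _ in range(250):
    u = quickest_step(u, 0.4)               # pulse travels 250*0.4 = 100 cells
print(abs(u.sum() - mass0) < 1e-8)          # flux form conserves mass → True
```

On a smooth profile like this the unlimited scheme is already non-distorting; the limiter and adaptive stencil machinery of the paper matter near discontinuities and narrow extrema, which this sketch does not exercise.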

  2. Towards a new multiscale air quality transport model using the fully unstructured anisotropic adaptive mesh technology of Fluidity (version 4.1.9)

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-10-01

An integrated method combining advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems, based on a discontinuous Galerkin/control-volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the advantage of being able to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and efficiently. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.

  3. Detecting phase-amplitude coupling with high frequency resolution using adaptive decompositions

    PubMed Central

    Pittman-Polletta, Benjamin; Hsieh, Wan-Hsin; Kaur, Satvinder; Lo, Men-Tzung; Hu, Kun

    2014-01-01

Background: Phase-amplitude coupling (PAC) – the dependence of the amplitude of one rhythm on the phase of another, lower-frequency rhythm – has recently been used to illuminate cross-frequency coordination in neurophysiological activity. An essential step in measuring PAC is decomposing data to obtain rhythmic components of interest. Current methods of PAC assessment employ narrowband Fourier-based filters, which assume that biological rhythms are stationary, harmonic oscillations. However, biological signals frequently contain irregular and nonstationary features, which may contaminate rhythms of interest and complicate comodulogram interpretation, especially when frequency resolution is limited by short data segments. New method: To better account for nonstationarities while maintaining sharp frequency resolution in PAC measurement, even for short data segments, we introduce a new method of PAC assessment which utilizes adaptive and more generally broadband decomposition techniques – such as the empirical mode decomposition (EMD). To obtain high frequency resolution PAC measurements, our method distributes the PAC associated with pairs of broadband oscillations over frequency space according to the time-local frequencies of these oscillations. Comparison with existing methods: We compare our novel adaptive approach to a narrowband comodulogram approach on a variety of simulated signals of short duration, studying systematically how different types of nonstationarities affect these methods, as well as on EEG data. Conclusions: Our results show: (1) narrowband filtering can lead to poor PAC frequency resolution, and inaccuracy and false negatives in PAC assessment; (2) our adaptive approach attains better PAC frequency resolution and is more resistant to nonstationarities and artifacts than traditional comodulograms. PMID:24452055
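For contrast with the adaptive method, the conventional narrowband PAC building block that the paper compares against can be sketched as follows: band-pass both rhythms, take the phase of one and the amplitude envelope of the other via the Hilbert transform, and compute a mean-vector-length coupling statistic. The function name and parameter choices are illustrative, not from the paper.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def pac_mvl(x, fs, phase_band, amp_band, order=3):
    """Mean-vector-length PAC estimate between the phase of one
    frequency band and the amplitude envelope of another.

    This is the conventional narrowband comodulogram building block,
    not the paper's adaptive (EMD-based) method.
    """
    def bandpass(sig, band):
        b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)],
                      btype="band")
        return filtfilt(b, a, sig)

    phase = np.angle(hilbert(bandpass(x, phase_band)))   # slow rhythm phase
    amp = np.abs(hilbert(bandpass(x, amp_band)))         # fast rhythm envelope
    # Modulus of the amplitude-weighted mean phase vector: large when
    # the envelope is consistently high at a preferred phase.
    return np.abs(np.mean(amp * np.exp(1j * phase)))
```

A 6 Hz rhythm whose phase modulates the amplitude of a 60 Hz rhythm yields a clearly larger statistic than the same two rhythms without coupling.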

  4. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    NASA Astrophysics Data System (ADS)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is reduced further in situations with intermittent turbulence, or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational cost is expected relative to traditional solvers.

  5. High-resolution modeling of indirectly driven high-convergence layered inertial confinement fusion capsule implosions

    DOE PAGES

    Haines, Brian M.; Aldrich, C. H.; Campbell, J. M.; ...

    2017-04-24

In this study, we present the results of high-resolution simulations of the implosion of high-convergence layered indirect-drive inertial confinement fusion capsules of the type fielded on the National Ignition Facility using the xRAGE radiation-hydrodynamics code. In order to evaluate the suitability of xRAGE to model such experiments, we benchmark simulation results against available experimental data, including shock-timing, shock-velocity, and shell trajectory data, as well as hydrodynamic instability growth rates. We discuss the code improvements that were necessary in order to achieve favorable comparisons with these data. Due to its use of adaptive mesh refinement and Eulerian hydrodynamics, xRAGE is particularly well suited for high-resolution study of multi-scale engineering features such as the capsule support tent and fill tube, which are known to impact the performance of high-convergence capsule implosions. High-resolution two-dimensional (2D) simulations including accurate and well-resolved models for the capsule fill tube, support tent, drive asymmetry, and capsule surface roughness are presented. These asymmetry seeds are isolated in order to study their relative importance, and the resolution of the simulations enables the observation of details that have not been previously reported. We analyze simulation results to determine how the different asymmetries affect hotspot reactivity, confinement, and confinement time, and how these combine to degrade yield. Yield degradation associated with the tent occurs largely through decreased reactivity due to the escape of hot fuel mass from the hotspot. Drive asymmetries and the fill tube, however, degrade yield primarily via burn truncation, as associated instability growth accelerates the disassembly of the hotspot. Finally, modeling all of these asymmetries together in 2D leads to improved agreement with experiment but falls short of explaining the experimentally observed yield degradation, consistent with previous 2D simulations of such capsules.

  6. A sampling procedure to guide the collection of narrow-band, high-resolution spatially and spectrally representative reflectance data. [satellite imagery of earth resources

    NASA Technical Reports Server (NTRS)

    Brand, R. R.; Barker, J. L.

    1983-01-01

    A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.

  7. Study of the adaptive refinement on an open source 2D shallow-water flow solver using quadtree grid for flash flood simulations.

    NASA Astrophysics Data System (ADS)

    Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.

    2015-12-01

Fully resolving the shallow-water equations for modelling flash floods can have a high computational cost, so most flood simulation software used for flood forecasting relies on a simplification of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models with non-physical free parameters. These approximations save a great deal of computational time while sacrificing, in an unquantified way, the precision of the simulations. To drastically reduce the cost of such 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk [1], which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme, second order in time and space, and treats the friction and rain terms implicitly in a finite-volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK), which occurred in July 2007, as shown in Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best computational time / precision ratio is proposed. Finally, we present the power law giving the computational time with respect to the maximum resolution, and we show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
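The refinement criterion at the heart of such quadtree solvers can be illustrated in 1-D: estimate the local discretization error by restricting the field to a coarser level and reconstructing it, then refine cells whose error exceeds a tolerance and mark for coarsening those well below it. This sketch shows the idea only; Basilisk's actual wavelet-based criterion and its 2-D quadtree bookkeeping differ, and the function name and thresholds are illustrative.

```python
import numpy as np

def refinement_flags(field, tol, coarsen_factor=0.5):
    """Per-cell refine/coarsen flags from a wavelet-style error
    estimate on a 1-D field: the local error is the difference
    between the cell value and its reconstruction from a coarsened
    (pairwise-averaged) version of the field.

    Returns +1 (refine), -1 (may coarsen) or 0 (keep) per cell.
    """
    field = np.asarray(field, dtype=float)
    n = len(field) - len(field) % 2
    coarse = 0.5 * (field[0:n:2] + field[1:n:2])   # restriction
    recon = np.repeat(coarse, 2)                   # piecewise-constant prolongation
    err = np.abs(field[:n] - recon)
    flags = np.zeros(n, dtype=int)
    flags[err > tol] = 1                           # under-resolved: refine
    flags[err < coarsen_factor * tol] = -1         # over-resolved: may coarsen
    return flags
```

On a field that is flat except for a step, only the cells straddling the step are flagged for refinement, which is exactly why the adaptive solver's cost can track the 1-D fractal structure of the flooded topography rather than the full 2-D grid.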

  8. Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2017-01-01

High-resolution simulations of rotor/vortex-wake interaction for a UH-60A rotor under BVI and dynamic stall conditions were carried out with the OVERFLOW Navier-Stokes code. (a) The normal force and pitching moment variation with azimuth angle were in good overall agreement with flight-test data, similar to other CFD results reported in the literature. (b) The wake-grid resolution did not have a significant effect on the rotor-blade airloads. This surprising result indicates that a wake grid spacing of ΔS = 10% c_tip is sufficient for engineering airloads prediction for hover and forward flight. This assumes high-resolution body grids, high-order spatial accuracy, and a hybrid RANS/DDES turbulence model. (c) Three-dimensional dynamic stall was found to occur due to the presence of blade-tip vortices passing over a rotor blade on the retreating side. This changed the local airfoil angle of attack, causing stall, unlike the 2D perspective of pure pitch oscillation of the local airfoil section.

  9. High-resolution coupled ice sheet-ocean modeling using the POPSICLES model

    NASA Astrophysics Data System (ADS)

    Ng, E. G.; Martin, D. F.; Asay-Davis, X.; Price, S. F.; Collins, W.

    2014-12-01

    It is expected that a primary driver of future change of the Antarctic ice sheet will be changes in submarine melting driven by incursions of warm ocean water into sub-ice shelf cavities. Correctly modeling this response on a continental scale will require high-resolution modeling of the coupled ice-ocean system. We describe the computational and modeling challenges in our simulations of the full Southern Ocean coupled to a continental-scale Antarctic ice sheet model at unprecedented spatial resolutions (0.1 degree for the ocean model and adaptive mesh refinement down to 500m in the ice sheet model). The POPSICLES model couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), with the BISICLES ice-sheet model (Cornford et al., 2012) using a synchronous offline-coupling scheme. Part of the PISCEES SciDAC project and built on the Chombo framework, BISICLES makes use of adaptive mesh refinement to fully resolve dynamically-important regions like grounding lines and employs a momentum balance similar to the vertically-integrated formulation of Schoof and Hindmarsh (2009). Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests like MISMIP3D (Pattyn et al., 2013) and realistic configurations (Favier et al. 2014). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). For the POPSICLES Antarctic-Southern Ocean simulations, ice sheet and ocean models communicate at one-month coupling intervals.

  10. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-02-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  12. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition. [NEEDS Information Adaptive System

    NASA Technical Reports Server (NTRS)

    Daluge, D. R.; Ruedger, W. H.

    1981-01-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  13. Sharpening method of satellite thermal image based on the geographical statistical model

    NASA Astrophysics Data System (ADS)

    Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng

    2016-04-01

To improve the effectiveness of thermal sharpening in mountainous regions, paying closer attention to the laws of land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of land surface energy budget and thermal infrared electromagnetic radiation transmission, and high spatial resolution (57 m) raster layers were generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adaptive statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced sharpened results with low error and good visual quality. The method avoids a blind choice of explanatory variables and removes the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory-variable combination, the sampling method, and the residual error correction on the sharpening results were analyzed in detail, and their influence mechanisms are reported herein.
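The two-stage scheme (fit a statistical model at the coarse resolution, then apply it to the fine-resolution explanatory variables) can be sketched with ordinary least squares standing in for the MARS regression used in the paper; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def sharpen_bt(bt_coarse, X_coarse, X_fine):
    """Disaggregate coarse brightness-temperature pixels using a
    statistical model fitted against explanatory variables.

    bt_coarse: (n,) coarse-pixel BT values.
    X_coarse:  (n, k) explanatory variables at coarse resolution.
    X_fine:    (m, k) the same variables at fine resolution.

    Plain linear least squares stands in for the MARS model of the
    paper; a residual-correction step is omitted for brevity.
    """
    # Fit BT ~ explanatory variables at the coarse resolution.
    A = np.column_stack([np.ones(len(bt_coarse)), X_coarse])
    coef, *_ = np.linalg.lstsq(A, bt_coarse, rcond=None)
    # Apply the fitted relationship at the fine resolution.
    A_fine = np.column_stack([np.ones(len(X_fine)), X_fine])
    return A_fine @ coef
```

In the paper's residual-correction step, the coarse-scale model residual would additionally be redistributed onto the fine pixels so that aggregating the sharpened image reproduces the original coarse BT.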

  14. High Resolution Simulations of Arctic Sea Ice, 1979-1993

    DTIC Science & Technology

    2003-01-01

William H. Lipscomb. To evaluate improvements in modelling Arctic sea ice, we compare results from two regional models at 1/12° horizontal resolution. The first is a coupled ice-ocean model of the Arctic Ocean, consisting of an ocean model (adapted from the Parallel Ocean Program, Los Alamos National Laboratory [LANL]) and the "old" sea ice model. The second model uses the same grid but consists of an improved "new" sea ice model (LANL

  15. SCASim: A Flexible and Reusable Detector Simulator for the MIRI instrument of the JWST

    NASA Astrophysics Data System (ADS)

    Beard, S.; Morin, J.; Gastaud, R.; Azzollini, R.; Bouchet, P.; Chaintreuil, S.; Lahuis, F.; Littlejohns, O.; Nehme, C.; Pye, J.

    2012-09-01

The JWST Mid Infrared Instrument (MIRI) operates in the 5-28 μm wavelength range and can be configured for imaging, coronagraphic imaging, long-slit low-resolution spectroscopy, or medium-resolution spectroscopy with an integral field unit. SCASim is one of a suite of simulators which operate together to simulate all the different modes of the instrument. These simulators are essential for the efficient operation of MIRI, allowing more accurate planning of MIRI observations on sky or during the pre-launch testing of the instrument. The data generated by the simulators are essential for testing the data pipeline software. The simulators not only need to reproduce the behaviour of the instrument faithfully, they also need to be adaptable so that information learned about the instrument during pre-launch testing and in-orbit commissioning can be fed back into the simulation. SCASim simulates the behaviour of the MIRI detectors, taking into account cosmetic effects, quantum efficiency, shot noise, dark current, read noise, amplifier layout, cosmic ray hits, etc. The software has benefited from three major design choices. First, the development of a suite of MIRI simulators, rather than a single simulator, has allowed MIRI simulators to be developed in parallel by different teams, with each simulator able to concentrate on one particular area. SCASim provides a facility common to all the other simulators and saves duplication of effort. Second, SCASim has a Python-based object-oriented design which makes it easier to adapt as new information about the instrument is learned during testing. Third, all simulator parameters are maintained in external files, rather than being hard-coded in the software. These design choices have made SCASim highly reusable. In its present form it can be used to simulate any JWST detector, and it can be adapted for future instruments with similar, photon-counting detectors.

  16. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability in two dimensions. Island coalescence is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refined grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with statically refined grids roughly scaling linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.

  17. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging: RESOLUTION ADAPTABILITY OF ZM SCHEME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Yuxing; Fan, Jiwen; Xiao, Heng

Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires cumulus schemes to adapt to higher resolutions than those for which they were originally designed (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
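The averaging idea can be sketched simply: smooth the gridded CAPE tendency over a spatial neighbourhood and over a short window of recent time steps before handing it to the cumulus scheme. Window sizes, the class name, and the boundary treatment below are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from collections import deque

class CapeTendencySmoother:
    """Spatial plus running temporal averaging of a gridded
    large-scale CAPE tendency: a sketch of feeding a smoothed
    forcing to a cumulus scheme at fine grid spacing."""

    def __init__(self, radius=1, steps=4):
        self.radius = radius                 # spatial half-width in cells
        self.buffer = deque(maxlen=steps)    # recent spatially-averaged fields

    def spatial_mean(self, f):
        # Box average over a (2r+1) x (2r+1) neighbourhood,
        # with edge padding at the domain boundary.
        r = self.radius
        padded = np.pad(f, r, mode="edge")
        out = np.zeros_like(f, dtype=float)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += padded[r + dy: r + dy + f.shape[0],
                              r + dx: r + dx + f.shape[1]]
        return out / (2 * r + 1) ** 2

    def update(self, tendency):
        # Push the latest field, return the running temporal mean.
        self.buffer.append(self.spatial_mean(np.asarray(tendency, float)))
        return sum(self.buffer) / len(self.buffer)
```

A constant field passes through unchanged, while step-to-step fluctuations are damped by the temporal window, which is the behaviour that restores scale separation between the resolved flow and the parameterized convection.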

  18. A High Resolution Study of Black Sea Circulation and Hypothetical Oil Spills

    NASA Astrophysics Data System (ADS)

    Dietrich, D. E.; Bowman, M. J.; Korotenko, K. A.

    2008-12-01

A 1/24 deg resolution adaptation of the DieCAST ocean model simulates a realistically intense Rim Current and ubiquitous mesoscale coastal anticyclonic eddies that result from anticyclonic vorticity generation by laterally differential bottom drag forces, which are amplified near Black Sea coastal headlands. Climatological and synoptic surface forcings are compared. The effects of vertical momentum transfer by large-amplitude internal waves (known to fishermen of the Synop region, as reported in Ballard's National Geographic article) are parameterized by a large vertical viscosity, and sensitivity to the vertical viscosity is shown. Results of simulated hypothetical oil spills are shown. A simple method to nowcast/forecast the Black Sea currents is described and early results are shown.

  19. Compressible magma/mantle dynamics: 3-D, adaptive simulations in ASPECT

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo

    2016-12-01

Melt generation and migration are an important link between surface processes and the thermal and chemical evolution of the Earth's interior. However, their vastly different timescales make it difficult to study mantle convection and melt migration in a unified framework, especially for 3-D global models. And although experiments suggest an increase in melt volume of up to 20 per cent from the depth of melt generation to the surface, previous computations have neglected the individual compressibilities of the solid and the fluid phase. Here, we describe our extension of the finite element mantle convection code ASPECT that adds melt generation and migration. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in areas where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high-resolution, 3-D, compressible, global mantle convection simulations coupled with melt migration. We evaluate the functionality and potential of this method using a series of benchmarks and model setups, compare results of the compressible and incompressible formulations, and show the effectiveness of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful when applied to modelling the generation of komatiites or other melts originating at greater depths. The implementation is available in the open source ASPECT repository.

  20. Multi-Band Light Curves from Two-Dimensional Simulations of Gamma-Ray Burst Afterglows

    NASA Astrophysics Data System (ADS)

    MacFadyen, Andrew

    2010-01-01

The dynamics of gamma-ray burst outflows is inherently multi-dimensional. (1) We present high resolution two-dimensional relativistic hydrodynamics simulations of GRBs in the afterglow phase using adaptive mesh refinement (AMR). Using standard synchrotron radiation models, we compute multi-band light curves, from the radio to X-ray, directly from the 2D hydrodynamics simulation data. We will present on-axis light curves for both constant density and wind media. We will also present off-axis light curves relevant for searches for orphan afterglows. We find that jet breaks are smoothed due to both off-axis viewing and wind media effects. (2) Non-thermal radiation mechanisms in GRB afterglows require substantial magnetic field strengths. In turbulence driven by shear instabilities in relativistic magnetized gas, we demonstrate that magnetic field is naturally amplified to half a percent of the total energy (epsilon_B = 0.005). We will show high resolution three-dimensional relativistic MHD simulations of this process as well as particle-in-cell (PIC) simulations of mildly relativistic collisionless shocks.

  1. Adaptive finite-volume WENO schemes on dynamically redistributed grids for compressible Euler equations

    NASA Astrophysics Data System (ADS)

    Pathak, Harshavardhana S.; Shukla, Ratnesh K.

    2016-08-01

A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method, including the moving mesh equations and the compressible flow solver, is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce the discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth- and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients, thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive.
The high adaptivity combined with the fifth- and especially the ninth-order WENO reconstruction allows remarkably sharp capture of propagating discontinuous shocks with simultaneous resolution of smooth yet complex small-scale unsteady flow features in exceptional detail.
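As an illustration of the reconstruction step, the following sketch implements the classical fifth-order WENO interface reconstruction from five cell averages, using the standard Jiang-Shu smoothness indicators and linear weights (an illustrative sketch only; the paper's redistributed-grid scheme and rotated HLLC flux are not reproduced here):

```python
import numpy as np

def weno5_reconstruct(v):
    """Left-biased WENO5 reconstruction of the interface value v_{i+1/2}
    from the five cell averages v = (v_{i-2}, ..., v_{i+2})."""
    v0, v1, v2, v3, v4 = v
    eps = 1e-6
    # smoothness indicators for the three candidate stencils
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
    # third-order candidate reconstructions
    p0 = (2*v0 - 7*v1 + 11*v2) / 6
    p1 = ( -v1 + 5*v2 +  2*v3) / 6
    p2 = (2*v2 + 5*v3 -   v4) / 6
    # nonlinear weights: stencils crossing a discontinuity get tiny weight
    d = np.array([0.1, 0.6, 0.3])            # optimal linear weights
    a = d / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

On smooth data the nonlinear weights approach the linear weights, recovering fifth-order accuracy; near a discontinuity the weight of the crossing stencil collapses, falling back to an oscillation-free low-order approximation.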

  2. Dynamic Hybrid Simulation of the Lunar Wake During ARTEMIS Crossing

    NASA Astrophysics Data System (ADS)

    Wiehle, S.; Plaschke, F.; Angelopoulos, V.; Auster, H.; Glassmeier, K.; Kriegel, H.; Motschmann, U. M.; Mueller, J.

    2010-12-01

    The interaction of the highly dynamic solar wind with the Moon is simulated with the A.I.K.E.F. (Adaptive Ion Kinetic Electron Fluid) code for the ARTEMIS P1 flyby on February 13, 2010. The A.I.K.E.F. hybrid plasma simulation code is the improved version of the Braunschweig code. It is able to automatically increase simulation grid resolution in areas of interest during runtime, which greatly increases resolution as well as performance. As the Moon has no intrinsic magnetic field and no ionosphere, the solar wind particles are absorbed at its surface, resulting in the formation of the lunar wake on the nightside. The solar wind magnetic field is essentially convected through the Moon and the wake is slowly filled up with solar wind particles. However, this interaction is strongly influenced by the highly dynamic solar wind during the flyby. This is accounted for by dynamically varying the upstream conditions in the simulation using OMNI solar wind measurement data. By this method, a very good agreement between simulation and observations is achieved. The simulations show that the stationary structure of the lunar wake constitutes a tableau vivant in space, representing the well-known Friedrichs diagram for MHD waves.
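The dynamic variation of upstream conditions can be sketched as a simple time interpolation of measured solar-wind quantities into the inflow boundary at each step (illustrative Python; the sample times and values below are hypothetical placeholders, not actual OMNI data):

```python
import numpy as np

# Hypothetical upstream samples: time (s), density (cm^-3), speed (km/s)
t_omni = np.array([0.0, 600.0, 1200.0, 1800.0])
n_omni = np.array([5.0, 7.5, 6.0, 4.0])
v_omni = np.array([350.0, 420.0, 400.0, 380.0])

def upstream_state(t):
    """Time-varying inflow condition: linearly interpolate the measured
    upstream time series to the current simulation time t."""
    return np.interp(t, t_omni, n_omni), np.interp(t, t_omni, v_omni)
```

At each simulation step the inflow boundary would be reset from `upstream_state(t)`, so the simulated wake responds to the same upstream variability the spacecraft observed.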

  3. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablonowski, Christiane

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling.
The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.

  4. Advances in the U.S. Navy Non-hydrostatic Unified Model of the Atmosphere (NUMA): LES as a Stabilization Methodology for High-Order Spectral Elements in the Simulation of Deep Convection

    NASA Astrophysics Data System (ADS)

    Marras, Simone; Giraldo, Frank

    2015-04-01

    The prediction of extreme weather sufficiently ahead of its occurrence impacts society as a whole and coastal communities specifically (e.g. Hurricane Sandy that impacted the eastern seaboard of the U.S. in the fall of 2012). With the final goal of solving hurricanes at very high resolution and numerical accuracy, we have been developing the Non-hydrostatic Unified Model of the Atmosphere (NUMA) to solve the Euler and Navier-Stokes equations by arbitrary high-order element-based Galerkin methods on massively parallel computers. NUMA is a unified model with respect to the following criteria: (a) it is based on unified numerics in that element-based Galerkin methods allow the user to choose between continuous (spectral elements, CG) or discontinuous Galerkin (DG) methods and from a large spectrum of time integrators, (b) it is unified across scales in that it can solve flow in limited-area mode (flow in a box) or in global mode (flow on the sphere). NUMA is the dynamical core that powers the U.S. Naval Research Laboratory's next-generation global weather prediction system NEPTUNE (Navy's Environmental Prediction sysTem Utilizing the NUMA corE). Because the solution of the Euler equations by high order methods is prone to instabilities that must be damped in some way, we approach the problem of stabilization via an adaptive Large Eddy Simulation (LES) scheme meant to treat such instabilities by modeling the sub-grid scale features of the flow. The novelty of our effort lies in the extension to high order spectral elements for low Mach number stratified flows of a method that was originally designed for low order, adaptive finite elements in the high Mach number regime [1]. The Euler equations are regularized by means of a dynamically adaptive stress tensor that is proportional to the residual of the unperturbed equations. 
Its effect is close to none where the solution is sufficiently smooth, whereas it increases elsewhere, with a direct contribution to the stabilization of the otherwise oscillatory solution. As a first step toward the Large Eddy Simulation of a hurricane, we verify the model via a high-order and high resolution idealized simulation of deep convection on the sphere. References [1] M. Nazarov and J. Hoffman (2013) Residual-based artificial viscosity for simulation of turbulent compressible flow using adaptive finite element methods Int. J. Numer. Methods Fluids, 71:339-357
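A minimal sketch of the residual-based stabilization idea, here written for a 1D Burgers discretization (an assumption for illustration; NUMA applies the concept to high-order spectral elements for the Euler equations): the artificial viscosity is made proportional to the equation residual, capped by a first-order upper bound, so it vanishes where the solution is smooth and activates near oscillatory or discontinuous regions.

```python
import numpy as np

def residual_viscosity(u_new, u_old, dt, h, c_e=1.0, c_max=0.5):
    """Per-point artificial viscosity proportional to the PDE residual,
    sketched for 1D Burgers u_t + (u^2/2)_x = 0 (cf. Nazarov & Hoffman 2013).
    c_e and c_max are illustrative tuning constants."""
    f = 0.5 * u_new**2
    resid = (u_new - u_old) / dt + np.gradient(f, h)   # discrete residual
    norm = np.max(np.abs(u_new - u_new.mean())) + 1e-14
    nu_e = c_e * h**2 * np.abs(resid) / norm           # residual-based part
    nu_max = c_max * h * np.abs(u_new)                 # first-order cap
    return np.minimum(nu_e, nu_max)
```

For a constant state the residual is zero and no dissipation is added; across a jump the residual is large and the viscosity saturates at the first-order bound.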

  5. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-02-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
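The core of the GAPPARD idea can be sketched as a weighted mean of an undisturbed simulation over a patch-age distribution; an exponential age distribution with mean disturbance interval `t_dist` is assumed here for illustration:

```python
import numpy as np

def gappard_mean(ages, x_of_age, t_dist):
    """GAPPARD-style expectation (sketch): combine output x(age) from a
    single undisturbed run with an exponential patch-age distribution of
    mean disturbance interval t_dist, instead of simulating many
    stochastically disturbed patches."""
    w = np.exp(-ages / t_dist)   # exponential age distribution, discretized
    w /= w.sum()                 # normalize the discrete weights
    return np.sum(w * x_of_age)
```

One deterministic run plus a cheap weighted sum replaces an ensemble of disturbance realizations, which is where the reported speed-up comes from.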

  6. Using force-based adaptive resolution simulations to calculate solvation free energies of amino acid sidechain analogues

    NASA Astrophysics Data System (ADS)

    Fiorentini, Raffaele; Kremer, Kurt; Potestio, Raffaello; Fogarty, Aoife C.

    2017-06-01

    The calculation of free energy differences is a crucial step in the characterization and understanding of the physical properties of biological molecules. In the development of efficient methods to compute these quantities, a promising strategy is that of employing a dual-resolution representation of the solvent, specifically using an accurate model in the proximity of a molecule of interest and a simplified description elsewhere. One such concurrent multi-resolution simulation method is the Adaptive Resolution Scheme (AdResS), in which particles smoothly change their resolution on-the-fly as they move between different subregions. Before using this approach in the context of free energy calculations, however, it is necessary to make sure that the dual-resolution treatment of the solvent does not cause undesired effects on the computed quantities. Here, we show how AdResS can be used to calculate solvation free energies of small polar solutes using Thermodynamic Integration (TI). We discuss how the potential-energy-based TI approach combines with the force-based AdResS methodology, in which no global Hamiltonian is defined. The AdResS free energy values agree with those calculated from fully atomistic simulations to within a fraction of kBT. This is true even for small atomistic regions whose size is on the order of the correlation length, or when the properties of the coarse-grained region are extremely different from those of the atomistic region. These accurate free energy calculations are possible because AdResS allows the sampling of solvation shell configurations which are equivalent to those of fully atomistic simulations. The results of the present work thus demonstrate the viability of the use of adaptive resolution simulation methods to perform free energy calculations and pave the way for large-scale applications where a substantial computational gain can be attained.
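The potential-energy-based TI step itself reduces to a one-dimensional quadrature of the sampled ⟨∂H/∂λ⟩ curve; a minimal sketch follows (trapezoid rule, independent of whether the per-λ averages come from fully atomistic or AdResS runs):

```python
import numpy as np

def ti_free_energy(lambdas, dhdl):
    """Thermodynamic integration: ΔF = ∫₀¹ ⟨∂H/∂λ⟩ dλ, approximated by the
    trapezoid rule over the sampled λ-grid. `dhdl` holds the ensemble
    average of ∂H/∂λ at each λ point."""
    lambdas = np.asarray(lambdas)
    dhdl = np.asarray(dhdl)
    return float(np.sum(0.5 * (dhdl[1:] + dhdl[:-1]) * np.diff(lambdas)))
```

In practice each `dhdl` entry would be estimated from a separate simulation at fixed λ; the quadrature itself is exact for a linear ⟨∂H/∂λ⟩ profile.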

  7. High resolution tsunami modelling for the evaluation of potential risk areas in Setúbal (Portugal)

    NASA Astrophysics Data System (ADS)

    Ribeiro, J.; Silva, A.; Leitão, P.

    2011-08-01

    The use of high resolution hydrodynamic modelling to simulate the potential effects of tsunami events can provide relevant information about the most probable inundation areas. Moreover, the consideration of complementary data such as the type of buildings, location of priority equipment, type of roads, enables mapping of the most vulnerable zones, computing of the expected damage on man-made structures, constrain of the definition of rescue areas and escape routes, adaptation of emergency plans and proper evaluation of the vulnerability associated with different areas and/or equipment. Such an approach was used to evaluate the specific risks associated with a potential occurrence of a tsunami event in the region of Setúbal (Portugal), which was one of the areas most seriously affected by the 1755 tsunami. In order to perform an evaluation of the hazard associated with the occurrence of a similar event, high resolution wave propagation simulations were performed considering different potential earthquake sources with different magnitudes. Based on these simulations, detailed inundation maps associated with the different events were produced. These results were combined with the available information on the vulnerability of the local infrastructures (building types, roads and streets characteristics, priority buildings) in order to impose restrictions in the production of high-scale potential damage maps, escape routes and emergency routes maps.

  8. A new multiscale air quality transport model (Fluidity, 4.1.9) using fully unstructured anisotropic adaptive mesh technology

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-06-01

    A new anisotropic hr-adaptive mesh technique has been applied to the modelling of multiscale transport phenomena, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes.

  9. Hamiltonian adaptive resolution molecular dynamics simulation of infrared dielectric functions of liquids

    NASA Astrophysics Data System (ADS)

    Wang, C. C.; Tan, J. Y.; Liu, L. H.

    2018-05-01

    The Hamiltonian adaptive resolution scheme (H-AdResS), which allows one to simulate materials by treating different domains of the system at different levels of resolution, is a recently proposed atomistic/coarse-grained multiscale model. In this work, a scheme to calculate the dielectric functions of liquids based on H-AdResS is presented. In the proposed H-AdResS dielectric-function calculation scheme (DielectFunctCalS), the corrected molecular dipole moments are calculated by multiplying the molecular dipole moment by the weighting fraction of the molecular mapping point. As the widths of the all-atom and hybrid regions show different degrees of influence on the dielectric functions, a prefactor is applied to eliminate the effects of the all-atom and hybrid region widths. Since one goal of using the H-AdResS method is to reduce computational costs, the widths of the all-atom region and the hybrid region can be reduced, considering that the coarse-grained simulation is far less time-consuming than the atomistic simulation. Liquid water and ethanol are taken as test cases to validate the DielectFunctCalS. The H-AdResS DielectFunctCalS results are in good agreement with all-atom molecular dynamics simulations. The accuracy of the H-AdResS results, together with all-atom molecular dynamics results, depends heavily on the choice of the force field and force field parameters. The H-AdResS DielectFunctCalS allows us to calculate the dielectric functions of macromolecular systems with high efficiency and makes the dielectric function calculations of large biomolecular systems possible.
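The dipole correction can be sketched as follows, assuming a cos² switching function for the resolution weight w(x) across the hybrid layer (a common AdResS choice made here for illustration; the paper's width-dependent prefactor is not reproduced):

```python
import numpy as np

def adress_weight(x, x_at, d_hy):
    """Resolution weight w(x): 1 in the all-atom region (|x| < x_at),
    0 in the coarse-grained region, smooth cos^2 switching across the
    hybrid layer of width d_hy."""
    r = np.abs(x)
    w = np.cos(np.pi * (r - x_at) / (2.0 * d_hy))**2
    return np.where(r < x_at, 1.0, np.where(r > x_at + d_hy, 0.0, w))

def corrected_dipole(mu, x, x_at, d_hy):
    """DielectFunctCalS-style correction (sketch): scale each molecular
    dipole moment by the weight at its mapping point x."""
    return adress_weight(x, x_at, d_hy) * mu
```

Molecules deep in the coarse-grained region thus contribute nothing to the dielectric response, while hybrid-region molecules contribute in proportion to their degree of atomistic resolution.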

  10. Secure steganography designed for mobile platforms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Sifuentes, Ronnie R.

    2006-05-01

    Adaptive steganography, an intelligent approach to message hiding, integrated with matrix encoding and pn-sequences serves as a promising resolution to recent security assurance concerns. Incorporating the above data hiding concepts with established cryptographic protocols in wireless communication would greatly increase the security and privacy of transmitting sensitive information. We present an algorithm which will address the following problems: 1) low embedding capacity in mobile devices due to fixed image dimensions and memory constraints, 2) compatibility between mobile and land based desktop computers, and 3) detection of stego images by widely available steganalysis software [1-3]. Consistent with the smaller available memory, processor capabilities, and limited resolution associated with mobile devices, we propose a more magnified approach to steganography by focusing adaptive efforts at the pixel level. This deeper method, in comparison to the block processing techniques commonly found in existing adaptive methods, allows an increase in capacity while still offering a desired level of security. Based on computer simulations using high resolution, natural imagery and mobile device captured images, comparisons show that the proposed method securely allows an increased amount of embedding capacity but still avoids detection by varying steganalysis techniques.
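For illustration only, a plain pixel-level LSB embed/extract pair (the paper's scheme additionally uses adaptive pixel selection, matrix encoding and pn-sequences, none of which are reproduced here):

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Write one message bit into the least significant bit of each of the
    first len(bits) pixels. Illustrative sketch, not the paper's method."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.asarray(bits, dtype=out.dtype)
    return out

def extract_lsb(pixels, n):
    """Recover the first n embedded bits from the pixel LSBs."""
    return (pixels[:n] & 1).tolist()
```

Because only the LSB changes, the upper seven bits of every pixel, and hence the visible image content, are untouched.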

  11. Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust high-resolution spectrum estimation technique. Based on the theory of SAR imaging, the signal model of SAR imagery is shown to be amenable to data extrapolation methods for improving the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.
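The minimum variance (Capon) estimate at the heart of MVM can be sketched for a uniform linear array as P(ω) = 1 / (a(ω)ᴴ R⁻¹ a(ω)), where R is the sample covariance and a(ω) the steering vector (illustrative sketch; the paper's weighted-norm extrapolation step is not reproduced):

```python
import numpy as np

def capon_spectrum(R, omegas):
    """Minimum variance (Capon) spectrum P(w) = 1 / (a(w)^H R^-1 a(w))
    for an N-element uniform array with covariance matrix R."""
    n = R.shape[0]
    Rinv = np.linalg.inv(R)
    p = []
    for w in omegas:
        a = np.exp(1j * w * np.arange(n))      # steering vector at frequency w
        p.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(p)
```

Unlike the periodogram, the adaptive weighting suppresses leakage from other frequencies, which is what gives MVM its superresolution behavior.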

  12. First benchmark of the Unstructured Grid Adaptation Working Group

    NASA Technical Reports Server (NTRS)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

    Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries that satisfy a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.
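The notion of conforming to a metric field can be illustrated with edge lengths measured in the metric: adaptation targets unit-length edges, conventionally accepting lengths in [1/√2, √2] (a sketch of the general idea; the working group's benchmark formats are not reproduced here):

```python
import numpy as np

def metric_edge_length(e, M):
    """Length of edge vector e measured in the metric tensor M:
    L_M(e) = sqrt(e^T M e). A metric with eigenvalue 1/h^2 in some
    direction requests edge size h in that direction."""
    return float(np.sqrt(e @ M @ e))

def conforms(e, M):
    """Edge is metric-conforming if its metric length is near unity."""
    L = metric_edge_length(e, M)
    return 1.0 / np.sqrt(2.0) <= L <= np.sqrt(2.0)
```

Anisotropy enters through the eigenvalues of M: a metric diag(4, 1) requests edges of length 1/2 along x but length 1 along y.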

  13. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.

  14. LOOPREF: A Fluid Code for the Simulation of Coronal Loops

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda; Antiochos, Spiro; Spicer, Daniel

    1998-01-01

    This report documents the code LOOPREF. LOOPREF is a semi-one-dimensional finite element code that is especially well suited to simulating coronal-loop phenomena. It has a full implementation of adaptive mesh refinement (AMR), which is crucial for this type of simulation. The AMR routines are an improved version of AMR1D. LOOPREF's versatility makes it suitable for simulating a wide variety of problems. In addition to efficiently providing very high resolution in rapidly changing regions of the domain, it is equipped to treat loops of variable cross section, any non-linear form of heat conduction, shocks, gravitational effects, and radiative loss.

  15. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  16. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
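The reconstruction step can be sketched as a spectral division of the observed image by the model speckle transfer function (STF), with a floor to avoid amplifying frequencies the STF barely transmits (an illustrative simplification; actual speckle reconstruction estimates Fourier amplitudes statistically from a burst of short-exposure frames):

```python
import numpy as np

def deconvolve(observed, stf, eps=1e-3):
    """Amplitude-reconstruction sketch: divide the observed image spectrum
    by a model speckle transfer function. `eps` floors the STF so that
    poorly transmitted frequencies are not blown up."""
    O = np.fft.fft2(observed)
    return np.real(np.fft.ifft2(O / np.maximum(stf, eps)))
```

The photometric-precision question studied in the paper amounts to how errors in the model STF (e.g. a misestimated atmospheric seeing parameter) propagate through this division into the recovered intensities.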

  17. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  18. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace the evolution of the boundaries. Our AMR code is thus well suited to studying thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].

  19. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. Such computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in locations of fine-scale development and removing them in locations of smooth solution behavior. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. 
The method has been tested for a variety of benchmark problems including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present Global Chemical Transport Models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures due to high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm, applied to the traveling plume problem, accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
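A minimal sketch of wavelet-based refinement flagging, using first-level Haar detail coefficients as the error indicator (WAMR itself uses higher-order interpolating wavelets on multilevel grids; Haar is assumed here for brevity):

```python
import numpy as np

def refine_flags(u, threshold):
    """Flag pairs of samples for refinement where the magnitude of the
    first-level Haar detail coefficient exceeds the threshold. Small
    details mark smooth regions, which may instead be coarsened."""
    d = 0.5 * (u[0::2] - u[1::2])   # Haar detail per pair of samples
    return np.abs(d) > threshold
```

For a smooth field the details are uniformly tiny and the grid stays coarse; a sharp plume edge produces a large detail exactly at the crossing pair, so resolution is added only where the fine-scale structure lives.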

  20. Attention Modifies Spatial Resolution According to Task Demands.

    PubMed

    Barbot, Antoine; Carrasco, Marisa

    2017-03-01

    How does visual attention affect spatial resolution? In texture-segmentation tasks, exogenous (involuntary) attention automatically increases resolution at the attended location, which improves performance where resolution is too low (at the periphery) but impairs performance where resolution is already too high (at central locations). Conversely, endogenous (voluntary) attention improves performance at all eccentricities, which suggests a more flexible mechanism. Here, using selective adaptation to spatial frequency, we investigated the mechanism by which endogenous attention benefits performance in resolution tasks. Participants detected a texture target that could appear at several eccentricities. Adapting to high or low spatial frequencies selectively affected performance in a manner consistent with changes in resolution. Moreover, adapting to high, but not low, frequencies mitigated the attentional benefit at central locations where resolution was too high; this shows that attention can improve performance by decreasing resolution. Altogether, our results indicate that endogenous attention benefits performance by modulating the contribution of high-frequency information in order to flexibly adjust spatial resolution according to task demands.

  1. Attention Modifies Spatial Resolution According to Task Demands

    PubMed Central

    Barbot, Antoine; Carrasco, Marisa

    2017-01-01

    How does visual attention affect spatial resolution? In texture-segmentation tasks, exogenous (involuntary) attention automatically increases resolution at the attended location, which improves performance where resolution is too low (at the periphery) but impairs performance where resolution is already too high (at central locations). Conversely, endogenous (voluntary) attention improves performance at all eccentricities, which suggests a more flexible mechanism. Here, using selective adaptation to spatial frequency, we investigated the mechanism by which endogenous attention benefits performance in resolution tasks. Participants detected a texture target that could appear at several eccentricities. Adapting to high or low spatial frequencies selectively affected performance in a manner consistent with changes in resolution. Moreover, adapting to high, but not low, frequencies mitigated the attentional benefit at central locations where resolution was too high; this shows that attention can improve performance by decreasing resolution. Altogether, our results indicate that endogenous attention benefits performance by modulating the contribution of high-frequency information in order to flexibly adjust spatial resolution according to task demands. PMID:28118103

  2. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    NASA Astrophysics Data System (ADS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-05-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
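The on-the-fly resolution change in AdResS is governed by a smooth weighting function that equals 1 in the atomistic region, 0 in the coarse-grained reservoir, and interpolates across a hybrid shell; pair forces are blended using the product of the two particles' weights. A minimal sketch: the cos² crossover is the form commonly used with AdResS, but the spherical geometry and widths below are illustrative assumptions, not this paper's setup.

```python
import math

def adress_weight(r, r_at, d_hy):
    """AdResS-style resolution function: 1 inside the atomistic sphere
    (r < r_at), 0 in the coarse-grained reservoir (r > r_at + d_hy),
    cos^2 crossover inside the hybrid shell."""
    if r < r_at:
        return 1.0
    if r > r_at + d_hy:
        return 0.0
    return math.cos(math.pi * (r - r_at) / (2.0 * d_hy)) ** 2

def pair_force(w_i, w_j, f_atomistic, f_cg):
    """Force interpolation between resolutions, weighted by both particles."""
    lam = w_i * w_j
    return lam * f_atomistic + (1.0 - lam) * f_cg

# Halfway through the hybrid shell the weight is 0.5.
print(round(adress_weight(1.5, r_at=1.0, d_hy=1.0), 3))  # 0.5
```

Two fully atomistic particles (weights 1) thus interact with the full atomistic force, two reservoir particles with the coarse-grained one.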

  3. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria for adaptation are most suitable. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters.
Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task, fundamentally limited by the turnaround times required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints, and then minimizing it through numerical optimization, can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.
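The automated parameter-identification strategy described above, specifying a misfit functional and minimizing it numerically, can be sketched with a toy forward model. Here the gradient comes from finite differences; the point of an adjoint solver is to supply this same gradient at a cost independent of the number of parameters. The forward model, step size, and iteration count below are illustrative.

```python
def misfit(params, forward, observations):
    """Objective functional: half the squared mismatch between the
    simulation outcome and the observed constraints."""
    sim = forward(params)
    return 0.5 * sum((s - o) ** 2 for s, o in zip(sim, observations))

def identify_parameters(forward, observations, p0, lr=0.1, steps=2000, h=1e-6):
    """Minimize the misfit by gradient descent with finite-difference
    gradients (an adjoint solve would provide this gradient cheaply)."""
    p = list(p0)
    for _ in range(steps):
        base = misfit(p, forward, observations)
        grad = []
        for i in range(len(p)):
            q = list(p)
            q[i] += h
            grad.append((misfit(q, forward, observations) - base) / h)
        p = [pi - lr * gi for pi, gi in zip(p, grad)]
    return p

# Toy linear forward model with known solution (1, 1).
obs = [2.0, 3.0]
model = lambda p: [p[0] + p[1], p[0] + 2.0 * p[1]]
estimated = identify_parameters(model, obs, [0.0, 0.0])
```

For a real mantle-convection or wave-propagation model, each misfit evaluation is a full forward simulation, which is exactly why the adjoint formulation matters.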

  4. High-Resolution Adaptive Optics Test-Bed for Vision Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilks, S C; Thompson, C A; Olivier, S S

    2001-09-27

    We discuss the design and implementation of a low-cost, high-resolution adaptive optics test-bed for vision research. It is well known that high-order aberrations in the human eye reduce optical resolution and limit visual acuity. However, the effects of aberration-free eyesight on vision are only now beginning to be studied using adaptive optics to sense and correct the aberrations in the eye. We are developing a high-resolution adaptive optics system for this purpose using a Hamamatsu Parallel Aligned Nematic Liquid Crystal Spatial Light Modulator. Phase-wrapping is used to extend the effective stroke of the device, and the wavefront sensing and wavefront correction are done at different wavelengths. Issues associated with these techniques will be discussed.
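The phase-wrapping trick mentioned above rests on the fact that, at a single wavelength, a commanded phase of φ and of φ modulo 2π are optically equivalent, so a modulator with roughly one wave of stroke can apply a much larger correction. A minimal sketch of the idea (the aberration values are arbitrary):

```python
import math

def wrapped_phase(phi):
    """Command the phase modulo 2*pi: optically equivalent to phi at the
    design wavelength, but always within the modulator's limited stroke."""
    return phi % (2.0 * math.pi)

# A multi-wave aberration is commanded with less than one wave of stroke.
large_aberration = [0.0, 2.0, 7.0, 12.0]   # radians of desired correction
commands = [wrapped_phase(p) for p in large_aberration]
```

Because the equivalence holds only at one wavelength, sensing and correcting at different wavelengths (as the test-bed does) requires rescaling the wrapped commands accordingly.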

  5. Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo

    2015-01-01

    Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that these numerical issues are ultimately due to a drop in the spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited for massive particles, able to solve in a fast and precise way for their orbits in highly dynamical backgrounds. The new refinement criterion enforces the region around each massive particle to remain at the maximum allowed resolution, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation does not alter the physical evolution of the MBHs, while accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
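The proposed criterion, forcing maximum refinement inside a region that follows each massive particle regardless of gas density, can be sketched as below. The guard radius, density thresholds, and per-cell evaluation are illustrative choices for the sketch, not the paper's actual parameters or code.

```python
def refinement_level(cell_center, gas_density, particles,
                     rho_thresholds, max_level, guard_radius):
    """Refinement-criterion sketch: a cell gets the usual density-based
    level, but any cell within guard_radius of a massive particle is
    forced to the maximum level, independent of the local gas density."""
    # Density-based refinement: one extra level per threshold exceeded.
    level = sum(1 for rho_t in rho_thresholds if gas_density > rho_t)
    for p in particles:
        dist = sum((c - pc) ** 2 for c, pc in zip(cell_center, p)) ** 0.5
        if dist <= guard_radius:
            return max_level  # maximally resolved bubble around the MBH
    return min(level, max_level)
```

Because the maximally refined bubble moves with the particle, the force felt by the MBH never degrades when it crosses a coarse/fine mesh boundary.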

  6. Challenges to Computational Aerothermodynamic Simulation and Validation for Planetary Entry Vehicle Analysis

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2010-01-01

    Challenges to computational aerothermodynamic (CA) simulation and validation of hypersonic flow over planetary entry vehicles are discussed. Entry, descent, and landing (EDL) of high mass to Mars is a significant driver of new simulation requirements. These requirements include simulation of large deployable, flexible structures and interactions with reaction control system (RCS) and retro-thruster jets. Simulation of radiation and ablation coupled to the flow solver continues to be a high priority for planetary entry analyses, especially for return to Earth and outer planet missions. Three research areas addressing these challenges are emphasized. The first addresses the need to obtain accurate heating on unstructured tetrahedral grid systems to take advantage of flexibility in grid generation and grid adaptation. A multi-dimensional inviscid flux reconstruction algorithm is defined that is oriented with local flow topology as opposed to grid. The second addresses coupling of radiation and ablation to the hypersonic flow solver - flight- and ground-based data are used to provide limited validation of these multi-physics simulations. The third addresses the challenges of retro-propulsion simulation and the criticality of grid adaptation in this application. The evolution of CA to become a tool for innovation of EDL systems requires a successful resolution of these challenges.

  7. ADAPTIVE REAL-TIME CARDIAC MRI USING PARADISE: VALIDATION BY THE PHYSIOLOGICALLY IMPROVED NCAT PHANTOM

    PubMed Central

    Sharif, Behzad; Bresler, Yoram

    2013-01-01

    Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding (PARADISE) is a dynamic MR imaging scheme that optimally combines parallel imaging and model-based adaptive acquisition. In this work, we propose the application of PARADISE to real-time cardiac MRI. We introduce a physiologically improved version of a realistic four-dimensional cardiac-torso (NCAT) phantom, which incorporates natural beat-to-beat heart rate and motion variations. Cardiac cine imaging using PARADISE is simulated and its performance is analyzed using the improved phantom. Results verify the effectiveness of PARADISE for high-resolution un-gated real-time cardiac MRI and its superiority over conventional acquisition methods. PMID:24398475

  8. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
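The tile-clustering idea referenced above can be sketched in a few lines: cover the index space with fixed-size tiles and turn every tile that contains a flagged (to-be-refined) cell into a patch. This is a minimal 2D illustration of the concept, not SAMRAI's parallel implementation.

```python
def tile_cluster(flags, tile):
    """Tile-clustering sketch: cover the index space with fixed-size tiles
    and emit a patch (box) for every tile that contains a flagged cell."""
    ny, nx = len(flags), len(flags[0])
    patches = []
    for j0 in range(0, ny, tile):
        for i0 in range(0, nx, tile):
            j1, i1 = min(j0 + tile, ny), min(i0 + tile, nx)
            if any(flags[j][i] for j in range(j0, j1) for i in range(i0, i1)):
                patches.append(((j0, i0), (j1, i1)))
    return patches

# Two flagged cells in opposite corners produce two tile-aligned patches.
flags = [[j == i == 0 or (j == 3 and i == 3) for i in range(4)] for j in range(4)]
print(tile_cluster(flags, 2))  # [((0, 0), (2, 2)), ((2, 2), (4, 4))]
```

Because each tile is examined independently, the loop parallelizes trivially, which is what makes the approach attractive at large MPI task counts.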

  9. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunney, Brian T.N.; Anderson, Robert W.

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  10. Investigating the scale-adaptivity of a shallow cumulus parameterization scheme with LES

    NASA Astrophysics Data System (ADS)

    Brast, Maren; Schemann, Vera; Neggers, Roel

    2017-04-01

    In this study we investigate the scale-adaptivity of a new parameterization scheme for shallow cumulus clouds in the gray zone. The Eddy-Diffusivity Multiple Mass-Flux (or ED(MF)n ) scheme is a bin-macrophysics scheme, in which subgrid transport is formulated in terms of discretized size densities. While scale-adaptivity in the ED-component is achieved using a pragmatic blending approach, the MF-component is filtered such that only the transport by plumes smaller than the grid size is maintained. For testing, ED(MF)n is implemented in a large-eddy simulation (LES) model, replacing the original subgrid-scheme for turbulent transport. LES thus plays the role of a non-hydrostatic testing ground, which can be run at different resolutions to study the behavior of the parameterization scheme in the boundary-layer gray zone. In this range convective cumulus clouds are partially resolved. We find that at high resolutions the clouds and the turbulent transport are predominantly resolved by the LES, and the transport represented by ED(MF)n is small. This partitioning changes towards coarser resolutions, with the representation of shallow cumulus clouds becoming exclusively carried by the ED(MF)n. The way the partitioning changes with grid-spacing matches the results of previous LES studies, suggesting some scale-adaptivity is captured. Sensitivity studies show that a scale-inadaptive ED component stays too active at high resolutions, and that the results are fairly insensitive to the number of transporting updrafts in the ED(MF)n scheme. Other assumptions in the scheme, such as the distribution of updrafts across sizes and the value of the area fraction covered by updrafts, are found to affect the location of the gray zone.
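The two scale-adaptive ingredients described above, a blended ED component and an MF component filtered to sub-grid plume sizes, can be caricatured as follows. Both functional forms here are illustrative assumptions for the sketch; the actual ED(MF)n blending and size-density filtering are more elaborate.

```python
def ed_blend_weight(dx, zi):
    """Illustrative blending weight for the ED component: negligible when
    the grid spacing dx is much finer than the boundary-layer depth zi
    (transport mostly resolved), approaching one at mesoscale grids."""
    return dx ** 2 / (dx ** 2 + zi ** 2)

def mf_filtered_flux(plumes, dx):
    """Filtered MF component: keep only transport by plumes smaller than
    the grid spacing; larger plumes are left to the resolved dynamics.
    `plumes` is a list of (plume_size_m, mass_flux) bins."""
    return sum(flux for size, flux in plumes if size < dx)

# Plume size bins: at dx = 1 km only the two sub-kilometre bins contribute.
plumes = [(100.0, 0.25), (500.0, 0.25), (2000.0, 0.5)]
print(mf_filtered_flux(plumes, 1000.0))  # 0.5
```

As dx shrinks into the gray zone, both the ED weight and the filtered mass flux decay toward zero, reproducing the resolved/parameterized partitioning behavior the study examines.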

  11. Simulations of Astrophysical Jets in Dense Environments

    NASA Astrophysics Data System (ADS)

    Krause, Martin; Gaibler, Volker; Camenzind, Max

    We have simulated the interaction of jets with a galactic wind at high resolution using the magnetohydrodynamics code NIRVANA on the NEC SX-6 at the HLRS. This setup may describe a typical situation for the starbursting radio galaxies of the early universe. The results show a clear resolution dependence in the expected way, but the formed clumps are denser than expected from linear extrapolation. We also report our recent progress in the adaptation of the magnetic part of NIRVANA to the SX-6. The code is now fully tuned to the machine and reached more than 3 Gflops. We plan to use this new code version to extend our study of magnetized jets down to very low jet densities. This should be especially applicable to the conditions in the young universe.

  12. Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masters, N D; Kaiser, T B; Anderson, R W

    2009-09-28

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray-tracing in ALE-AMR. We present the equations of laser ray tracing, our approach to efficient traversal of the adaptive mesh hierarchy in which we propagate computational rays through a virtual composite mesh consisting of the finest resolution representation of the modeled space, and anticipate simulations that will be compared to experiments for code validation.

  13. Resolved spectroscopy of adolescent and infant galaxies (1 < z < 10)

    NASA Astrophysics Data System (ADS)

    Wright, Shelley; IRIS Science Team

    2014-07-01

    The combination of integral field spectroscopy (IFS) and adaptive optics (AO) on TMT will be revolutionary in studying the distant universe. The high angular resolution exploited by an AO system with this large aperture will be essential for studying the kinematics and chemical abundance histories of high-redshift (1 < z < 5) galaxies. At even greater distances, TMT will be essential for conducting follow-up spectroscopy of Ly-alpha emission from first light galaxies (6 < z < 10) and determining their kinematics and morphologies. I will present simulations and sensitivity calculations for high-z and first light galaxies using the diffraction-limited instrument IRIS coupled with NFIRAOS. I will put these simulations in context with current IFS+AO high-z observations and future capabilities with JWST.

  14. Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows

    NASA Astrophysics Data System (ADS)

    Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.

    2017-12-01

    Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden of requiring such high-resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.
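Refinement criteria like those assessed above often tag cells where a flow diagnostic, for instance the relative vorticity of a developing cyclone-like vortex, exceeds a threshold, after which the AMR library builds refined patches over the tagged cells. A minimal sketch; the grid-scaled criterion form and threshold are illustrative assumptions, not the paper's tested criteria.

```python
def tag_cells(vorticity, dx, threshold):
    """Tag cells for refinement where |relative vorticity| * dx exceeds a
    threshold, concentrating resolution on strong, compact vortices."""
    return [[abs(v) * dx > threshold for v in row] for row in vorticity]

# 2x2 patch of vorticity values (1/s) on a 100 km grid: one cell is tagged.
vort = [[1e-5, 8e-4], [2e-5, 1e-6]]
print(tag_cells(vort, 1.0e5, 10.0))  # [[False, True], [False, False]]
```

Scaling by dx makes the criterion resolution-aware: the same physical vortex triggers refinement on a coarse grid but not once it is already well resolved.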

  15. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    NASA Technical Reports Server (NTRS)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
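The proposed solver, conjugate gradients with a Jacobi (diagonal) preconditioner, is attractive for FPGA-style hardware because the preconditioner application is embarrassingly parallel. A small dense-matrix sketch of the algorithm (real haptics codes would use the reduced, surface-decomposed sparse system instead):

```python
def pcg_jacobi(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradients for a symmetric
    positive-definite matrix A (given as nested lists). The preconditioner
    is just the inverse diagonal, applied element-wise."""
    n = len(b)
    x = [0.0] * n
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [bi - ri for bi, ri in zip(b, matvec(x))]
    z = [ri / A[i][i] for i, ri in enumerate(r)]  # Jacobi preconditioner
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [ri / A[i][i] for i, ri in enumerate(r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Tiny SPD stiffness-like system; exact solution is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg_jacobi(A, b)
```

Every operation in the loop is a dot product, an axpy, or a diagonal scaling, which is precisely the mix that maps well onto reconfigurable hardware.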

  16. Adaptation of a multi-resolution adversarial model for asymmetric warfare

    NASA Astrophysics Data System (ADS)

    Rosenberg, Brad; Gonsalves, Paul G.

    2006-05-01

    Recent military operations have demonstrated the use by adversaries of non-traditional or asymmetric military tactics to offset US military might. Rogue nations with links to trans-national terrorists have created a highly unpredictable and potentially dangerous environment for US military operations. Several characteristics of these threats include extremism in beliefs, global reach, non-state orientation, and highly networked, adaptive organization, making these adversaries less vulnerable to conventional military approaches. Additionally, US forces must also contend with more traditional state-based threats that are further evolving their military fighting strategies and capabilities. What are needed are solutions to assist our forces in the prosecution of operations against these diverse threat types and their atypical strategies and tactics. To address this issue, we present a system that allows for the adaptation of a multi-resolution adversarial model. The developed model can then be used to support both training and simulation-based acquisition requirements to effectively respond to such an adversary. The described system produces a combined adversarial model by merging behavior modeling at the individual level with aspects at the group and organizational level via network analysis. Adaptation of this adversarial model is performed by means of an evolutionary algorithm to build a suitable model for the chosen adversary.
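The evolutionary adaptation step mentioned above can be caricatured with the simplest elitist scheme: mutate the model's parameter vector and keep the mutant whenever it matches the target behavior at least as well. This (1+1) sketch, with a toy fitness function, is an illustration of the general mechanism only, not the paper's algorithm.

```python
import random

def evolve_model(fitness, genome, generations=500, sigma=0.1, seed=1):
    """(1+1) evolutionary-algorithm sketch: Gaussian mutation with
    elitist selection on a real-valued parameter vector."""
    rng = random.Random(seed)
    best = list(genome)
    best_fit = fitness(best)
    for _ in range(generations):
        cand = [g + rng.gauss(0.0, sigma) for g in best]
        f = fitness(cand)
        if f >= best_fit:  # keep the mutant only if it is at least as good
            best, best_fit = cand, f
    return best

# Toy target behavior: parameters should approach (0.3, 0.7).
target = (0.3, 0.7)
fit = lambda g: -sum((gi - ti) ** 2 for gi, ti in zip(g, target))
tuned = evolve_model(fit, [0.0, 0.0])
```

In the adversarial-modeling setting, the fitness would instead score how well the multi-resolution model reproduces the chosen adversary's observed behavior.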

  17. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in this solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  18. Driven and decaying turbulence simulations of low–mass star formation: From clumps to cores to protostars

    DOE PAGES

    Offner, Stella S. R.; Klein, Richard I.; McKee, Christopher F.

    2008-10-20

    Molecular clouds are observed to be turbulent, but the origin of this turbulence is not well understood. As a result, there are two different approaches to simulating molecular clouds, one in which the turbulence is allowed to decay after it is initialized, and one in which it is driven. We use the adaptive mesh refinement (AMR) code, Orion, to perform high-resolution simulations of molecular cloud cores and protostars in environments with both driven and decaying turbulence. We include self-gravity, use a barotropic equation of state, and represent regions exceeding the maximum grid resolution with sink particles. We analyze the properties of bound cores such as size, shape, line width, and rotational energy, and we find reasonable agreement with observation. At high resolution the different rates of core accretion in the two cases have a significant effect on protostellar system development. Clumps forming in a decaying turbulence environment produce high-multiplicity protostellar systems with Toomre Q unstable disks that exhibit characteristics of the competitive accretion model for star formation. In contrast, cores forming in the context of continuously driven turbulence and virial equilibrium form smaller protostellar systems with fewer low-mass members. Furthermore, our simulations of driven and decaying turbulence show some statistically significant differences, particularly in the production of brown dwarfs and core rotation, but the uncertainties are large enough that we are not able to conclude whether observations favor one or the other.
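The barotropic equation of state mentioned above closes the hydrodynamics without an energy equation: pressure depends on density alone, isothermal at low densities and stiffening to an adiabatic power law above a critical density where the gas becomes opaque to its own cooling radiation. A sketch with illustrative constants (not the paper's values):

```python
def barotropic_pressure(rho, cs_iso=2.0e4, rho_crit=1.0e-13, gamma=5.0 / 3.0):
    """Barotropic EOS sketch: P = cs^2 * rho below rho_crit (isothermal),
    switching continuously to an adiabatic power law above it.
    Illustrative CGS-like values: cs_iso in cm/s, rho in g/cm^3."""
    if rho <= rho_crit:
        return cs_iso ** 2 * rho
    return cs_iso ** 2 * rho_crit * (rho / rho_crit) ** gamma
```

The two branches match at rho_crit by construction, so the pressure (and hence the sound speed felt by the solver) varies continuously across the transition.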

  19. Addressing spatial scales and new mechanisms in climate impact ecosystem modeling

    NASA Astrophysics Data System (ADS)

    Poulter, B.; Joetzjer, E.; Renwick, K.; Ogunkoya, G.; Emmett, K.

    2015-12-01

    Climate change impacts on vegetation distributions are typically addressed using either an empirical approach, such as a species distribution model (SDM), or with process-based methods, for example, dynamic global vegetation models (DGVMs). Each approach has its own benefits and disadvantages. For example, an SDM is constrained by data and few parameters, but does not include adaptation or acclimation processes or other ecosystem feedbacks that may act to mitigate or enhance climate effects. Alternatively, a DGVM includes many mechanisms relating plant growth and disturbance to climate, but simulations are costly to perform at high spatial resolution and large uncertainty remains in a variety of fundamental physical processes. To address these issues, we present two DGVM-based case studies in which i) high-resolution (1 km) simulations are performed for vegetation in the Greater Yellowstone Ecosystem using a biogeochemical, forest gap model, LPJ-GUESS, and ii) new mechanisms for simulating tropical tree mortality are introduced. High-resolution DGVM simulations require not only computing and reorganizing code but also a consideration of scaling issues on vegetation dynamics and stochasticity, as well as on disturbance and migration. New mechanisms for simulating forest mortality must consider hydraulic limitations and carbon reserves and their interactions on source-sink dynamics and in controlling water potentials. Improving DGVM approaches by addressing spatial scale challenges and integrating new approaches for estimating forest mortality will provide new insights more relevant for land management, and may reduce uncertainty via physical processes that are more directly comparable to experimental and observational evidence.

  20. High-resolution retinal imaging using adaptive optics and Fourier-domain optical coherence tomography

    DOEpatents

    Olivier, Scot S.; Werner, John S.; Zawadzki, Robert J.; Laut, Sophie P.; Jones, Steven M.

    2010-09-07

    This invention permits retinal images to be acquired at high speed and with unprecedented resolution in three dimensions (4×4×6 μm). The instrument achieves high lateral resolution by using adaptive optics to correct optical aberrations of the human eye in real time. High axial resolution and high speed are made possible by the use of Fourier-domain optical coherence tomography. Using this system, we have demonstrated the ability to image microscopic blood vessels and the cone photoreceptor mosaic.

  1. Assessing the impact of future climate extremes on the US corn and soybean production

    NASA Astrophysics Data System (ADS)

    Jin, Z.

    2015-12-01

    Future climate changes will pose major challenges to the US agricultural system, among which increasing heat stress and precipitation variability are the two leading concerns. Reliable prediction of crop production in response to increasingly frequent and severe extreme climate is a prerequisite for developing adaptive strategies for agricultural risk management. However, progress has been slow in quantifying the uncertainty of computational predictions at high spatial resolutions. Here we assessed the risks of future climate extremes on US corn and soybean production using the Agricultural Production Systems sIMulator (APSIM) model under different climate scenarios. To quantify the uncertainty due to conceptual representations of heat, drought and flooding stress in crop models, we proposed a new strategy of algorithm ensembles, in which different methods for simulating crop responses to those extreme climatic events were incorporated into APSIM. This strategy allowed us to set aside irrelevant structural differences among existing crop models and focus only on the process of interest. Future climate inputs were derived from high-spatial-resolution (12 km × 12 km) Weather Research and Forecasting (WRF) simulations under Representative Concentration Pathways 4.5 (RCP 4.5) and 8.5 (RCP 8.5). Based on crop model simulations, we analyzed the magnitude and frequency of heat, drought and flooding stress for the 21st century. We also evaluated the water use efficiency and water deficit on regional scales if farmers were to boost their yield by applying more fertilizers. Finally, we proposed spatially explicit adaptation strategies of irrigation and fertilizing for different management zones.

  2. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied to the simulation of segmental bioimpedance measurements using a high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions are obtained for a conventional tetrapolar configuration, as well as for eight- and ten-electrode measurement configurations. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  3. High-resolution precipitation data derived from dynamical downscaling using the WRF model for the Heihe River Basin, northwest China

    NASA Astrophysics Data System (ADS)

    Zhang, Xuezhen; Xiong, Zhe; Zheng, Jingyun; Ge, Quansheng

    2018-02-01

    The climate change impact assessment and adaptation research community needs regional high-spatial-resolution meteorological data. This study produced two downscaled precipitation datasets with spatial resolutions as high as 3 km × 3 km for the Heihe River Basin (HRB) from 2011 to 2014 using the Weather Research and Forecasting (WRF) model nested with Final Analysis (FNL) data from the National Centers for Environmental Prediction (NCEP) and ERA-Interim from the European Centre for Medium-Range Weather Forecasts (ECMWF) (hereafter referred to as FNLexp and ERAexp, respectively). Both downscaling simulations generally reproduced the observed spatial patterns of precipitation. However, users should keep in mind that the two downscaled datasets do not exactly match the observations. In comparison to remote sensing-based estimates, FNLexp produced a bias in the heavy precipitation centers. In comparison to ground gauge-based measurements for the warm season (May to September), ERAexp produced more precipitation (root-mean-square error (RMSE) = 295.4 mm across the 43 sites) and more heavy rainfall days, while FNLexp produced less precipitation (RMSE = 115.6 mm) and fewer heavy rainfall days. Both ERAexp and FNLexp produced considerably more precipitation for the cold season (October to April), with RMSE values of 119.5 and 32.2 mm, respectively, and more heavy precipitation days. Along with simulating a higher number of heavy precipitation days, both FNLexp and ERAexp also simulated stronger extreme precipitation. Sensitivity experiments show that the bias of these simulations is much more sensitive to microphysical parameterizations than to the spatial resolution of the topography data. For the HRB, application of the WSM3 scheme may improve the performance of the WRF model.

  4. Can Asteroid Airbursts Cause Dangerous Tsunami?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boslough, Mark B.

    I have performed a series of high-resolution hydrocode simulations to generate “source functions” for tsunami simulations as part of a proof-of-principle effort to determine whether or not the downward momentum from an asteroid airburst can couple energy into a dangerous tsunami in deep water. My new CTH simulations show enhanced momentum multiplication relative to a nuclear explosion of the same yield. Extensive sensitivity and convergence analyses demonstrate that results are robust and repeatable for simulations with sufficiently high resolution using adaptive mesh refinement. I have provided surface overpressure and wind velocity fields to tsunami modelers to use as time-dependent boundary conditions and to test the hypothesis that this mechanism can enhance the strength of the resulting shallow-water wave. The enhanced momentum result suggests that coupling from an over-water plume-forming airburst could be a more efficient tsunami source mechanism than a collapsing impact cavity or direct air blast alone, but not necessarily due to the originally-proposed mechanism. This result has significant implications for asteroid impact risk assessment, and airburst-generated tsunami will be the focus of a NASA-sponsored workshop at the Ames Research Center next summer, with follow-on funding expected.

  5. Resolving the Small-Scale Structure of the Circumgalactic Medium in Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren

    2017-08-01

    We propose to resolve the circumgalactic medium (CGM) of L* galaxies down to 100 Msun (250 pc) in a full cosmological simulation to examine how mixing and cooling shape the physical nature of this gas on the scales expected from observations. COS has provided the best characterization of the low-z CGM to date, revealing the extent and amount of low and high ions and hinting at the kinematic relations between them. Yet cosmological galaxy simulations that can reproduce the stellar properties of galaxies have all struggled to reproduce these results even qualitatively. While the COS data imply that the low-ion absorption occurs on sub-kpc scales, such scales cannot be traced by simulations with resolutions of 1-5 kpc in the CGM. Our proposed simulations will, for the first time, reach the resolution required to resolve these structures in the outer halo of L* galaxies. Using the adaptive mesh refinement code Enzo, we will experiment with the size, shape, and resolution of an enforced high-refinement region extending from the disk into the CGM to identify the best configuration for probing the flows of gas throughout the CGM. Our test case has found that increasing the resolution alone can have dramatic consequences for the density, temperature, and kinematics along a line of sight. Coupling this technique with an independent feedback study already underway will help disentangle the roles of global and small-scale physics in setting the physical state of the CGM. Finally, we will use the MISTY pipeline to generate realistic mock spectra for direct comparison with COS data, which will be made available through MAST.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrnstein, Aaron R.

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model, with some components incorporated from other well-known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine-scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested, despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
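
    The leapfrog scheme named above keeps three time levels and is typically paired with a Robert-Asselin filter to damp its computational mode. A minimal sketch on an inertial-oscillation test problem; the filter coefficient and parameters are illustrative, not the model's actual configuration:

```python
import numpy as np

def leapfrog(f, y0, dt, nsteps, asselin=0.1):
    """Leapfrog (three-time-level) integration of dy/dt = f(y), with a
    Robert-Asselin filter damping the 2*dt computational mode.
    The first step is Euler forward, as in many ocean codes."""
    y_prev = np.asarray(y0, dtype=float)
    y_curr = y_prev + dt * f(y_prev)            # Euler start-up step
    for _ in range(nsteps - 1):
        y_next = y_prev + 2.0 * dt * f(y_curr)
        # Asselin filter applied to the middle time level
        y_prev = y_curr + asselin * (y_next - 2.0 * y_curr + y_prev)
        y_curr = y_next
    return y_curr

# Test on inertial rotation du/dt = f (v, -u); the exact solution conserves speed.
fcor = 1.0e-4                                   # Coriolis parameter (1/s)
rhs = lambda u: np.array([fcor * u[1], -fcor * u[0]])
u = leapfrog(rhs, [1.0, 0.0], dt=60.0, nsteps=1000)
speed = np.hypot(u[0], u[1])                    # stays close to 1
```

    For a neutrally stable oscillation like this, plain leapfrog conserves amplitude exactly; the small residual drift comes from the start-up step and the deliberate filter damping.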

  7. Arctic storms simulated in atmospheric general circulation models under uniform high, uniform low, and variable resolutions

    NASA Astrophysics Data System (ADS)

    Roesler, E. L.; Bosler, P. A.; Taylor, M.

    2016-12-01

    The impact of strong extratropical storms on coastal communities is large, and the extent to which storms will change with a warming Arctic is unknown. Understanding storms in reanalyses and in climate models is important for future predictions. The number of detected Arctic storms in reanalysis data is known to be sensitive to grid resolution. To understand Arctic storm sensitivity to resolution in climate models, we describe simulations designed to identify and compare Arctic storms at uniform low resolution (1 degree), uniform high resolution (1/8 degree), and variable resolution (1 degree to 1/8 degree). High-resolution simulations resolve more fine-scale structure and extremes, such as storms, than a uniform low-resolution simulation. However, the computational cost of running a globally uniform high-resolution simulation is often prohibitive. The variable-resolution capability in atmospheric general circulation models permits regional high-resolution solutions at a fraction of the computational cost. The storms are identified using the open-source search algorithm Stride Search. The uniform high-resolution simulation has over 50% more storms than the uniform low-resolution simulation and over 25% more storms than the variable-resolution simulation. Storm statistics from each of the simulations are presented and compared with reanalysis. We propose variable resolution as a cost-effective means of investigating physics/dynamics coupling in the Arctic environment. Future work will include comparisons with observed storms to investigate tuning parameters for high-resolution models. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2016-7402 A
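
    A detection pass of the kind Stride Search performs can be caricatured as a scan for strict local minima of sea-level pressure below a cyclone threshold. This toy version works on a flat array; the real algorithm strides over overlapping circular sectors on the sphere and merges duplicate detections:

```python
import numpy as np

def detect_storms(slp, threshold=99000.0):
    """Flag grid points that are strict local minima of sea-level pressure (Pa)
    below a cyclone threshold; a toy stand-in for a Stride Search-style scan."""
    ny, nx = slp.shape
    hits = []
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            window = slp[j - 1:j + 2, i - 1:i + 2]
            if slp[j, i] < threshold and slp[j, i] == window.min() \
                    and np.count_nonzero(window == window.min()) == 1:
                hits.append((j, i))
    return hits

# Synthetic field: uniform 101 kPa background with two pressure lows
slp = np.full((40, 80), 101000.0)
jj, ii = np.mgrid[0:40, 0:80]
for (cj, ci, depth) in [(10, 20, 4000.0), (25, 60, 3000.0)]:
    slp -= depth * np.exp(-((jj - cj) ** 2 + (ii - ci) ** 2) / 20.0)

storms = detect_storms(slp)                     # finds both synthetic lows
```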

  8. OFDM and PAM comparison using a high baudrate low resolution IM/DD interface for 400G Ethernet access.

    PubMed

    André, Nuno Sequeira; Louchet, Hadrien; Filsinger, Volker; Hansen, Erik; Richter, André

    2016-05-30

    We compare OFDM and PAM for 400G Ethernet based on a 3-bit high-baudrate IM/DD interface at 1550 nm. We demonstrate 27 Gb/s and 32 Gb/s transmission over 10 km of SSMF using OFDM and PAM, respectively. We show that capacity can be improved through adaptation/equalization to achieve 42 Gb/s and 64 Gb/s for OFDM and PAM, respectively. Experimental results are used to create realistic simulations to extrapolate the performance of both modulation formats under varied conditions. For the considered interface we find that PAM has the best performance, while OFDM is impaired by quantization noise. When the resolution limitation is relaxed, OFDM shows better performance.

  9. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.
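
    The pseudo-force idea, particles pushing apart until the sampling has no holes or clusters, can be illustrated in one dimension. The force law and constants here are invented for illustration; the paper's pseudo-forces also adapt the particle sizes locally:

```python
import numpy as np

def relax_particles(x, h=0.1, steps=300, eta=0.02):
    """Move particles along pseudo-forces that push near neighbours apart until
    no pair is closer than the kernel size h; a 1D caricature of the
    self-organization step."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        force = np.zeros_like(x)
        for i in range(len(x)):
            d = x[i] - x                                   # signed distances
            mask = (np.abs(d) < h) & (np.abs(d) > 0.0)
            # short-range repulsion ~ (h - |d|), directed away from the neighbour
            force[i] = np.sum(np.sign(d[mask]) * (h - np.abs(d[mask])))
        x += eta * force
    return np.sort(x)

# Start from a tight cluster plus two isolated particles
x0 = np.concatenate([np.linspace(0.0, 0.02, 6), [0.5, 1.0]])
x_final = relax_particles(x0)
gaps = np.diff(x_final)       # the cluster dissolves into near-even spacing
```

    Because the repulsion vanishes once neighbours are farther apart than h, isolated particles are left untouched and the relaxation terminates on its own.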

  10. Analyzing the Adaptive Mesh Refinement (AMR) Characteristics of a High-Order 2D Cubed-Sphere Shallow-Water Model

    DOE PAGES

    Ferguson, Jared O.; Jablonowski, Christiane; Johansen, Hans; ...

    2016-11-09

    Adaptive mesh refinement (AMR) is a technique that has been featured only sporadically in the atmospheric science literature. This study aims to demonstrate the utility of AMR for simulating atmospheric flows. Several test cases are implemented in a 2D shallow-water model on the sphere using the Chombo-AMR dynamical core. This high-order finite-volume model implements adaptive refinement in both space and time on a cubed-sphere grid using a mapped-multiblock mesh technique. The tests consist of the passive advection of a tracer around moving vortices, a steady-state geostrophic flow, an unsteady solid-body rotation, a gravity wave impinging on a mountain, and the interaction of binary vortices. Both static and dynamic refinements are analyzed to determine the strengths and weaknesses of AMR in both complex flows with small-scale features and large-scale smooth flows. The different test cases required different AMR criteria, such as vorticity- or height-gradient-based thresholds, in order to achieve the best accuracy for cost. The simulations show that the model can accurately resolve key local features without requiring global high-resolution grids. The adaptive grids are able to track features of interest reliably without inducing noise or visible distortions at the coarse-fine interfaces. Furthermore, the AMR grids keep any degradation of the large-scale smooth flows to a minimum.
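
    A height-gradient refinement criterion of the kind described reduces to tagging cells whose gradient magnitude exceeds a threshold; a sketch with an invented threshold and test field (a vorticity criterion would be built the same way):

```python
import numpy as np

def flag_for_refinement(h, dx, grad_thresh):
    """Return a boolean mask of cells to refine, using a height-gradient
    criterion: tag cells where |grad h| exceeds the threshold."""
    gy, gx = np.gradient(h, dx)
    return np.hypot(gx, gy) > grad_thresh

# Smooth large-scale flow plus one steep, localized feature
x = np.linspace(0.0, 1.0, 128)
X, Y = np.meshgrid(x, x)
h = 1.0 + 0.01 * np.sin(2 * np.pi * X)                          # smooth background
h += 0.5 * np.exp(-((X - 0.7) ** 2 + (Y - 0.3) ** 2) / 0.001)   # sharp feature

tags = flag_for_refinement(h, dx=x[1] - x[0], grad_thresh=1.0)
frac = tags.mean()     # only a small fraction of the domain gets tagged
```

    The smooth background falls below the threshold everywhere, so refinement concentrates in the small region around the sharp feature, which is the cost saving AMR delivers.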

  12. Projected changes over western Canada using convection-permitting regional climate model and the pseudo-global warming method

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kurkute, S.; Chen, L.

    2017-12-01

    Results from General Circulation Models (GCMs) suggest more frequent and more severe extreme rain events in a climate warmer than the present. However, current GCMs cannot accurately simulate extreme rainfall events of short duration because of their coarse resolutions and parameterizations. This limitation makes it difficult to provide the detailed quantitative information needed for the development of regional adaptation and mitigation strategies. Dynamical downscaling using nested Regional Climate Models (RCMs) is able to capture key regional and local climate processes at an affordable computational cost. Recent studies have demonstrated that downscaling GCM results with convection-permitting mesoscale models, for example via the pseudo-global warming (PGW) technique, could be a viable and economical approach to obtaining valuable climate change information on regional scales. We have conducted a regional climate simulation with the 4-km Weather Research and Forecasting (WRF) model on a single domain covering the whole of western Canada, for a historic run (2000-2015) and a 15-year future run to 2100 and beyond with the PGW forcing. The 4-km resolution allows direct use of microphysics and resolves convection explicitly, thus providing convincing spatial detail. With this high-resolution simulation, we are able to study the convective mechanisms, specifically the controls on convection over the Prairies, the projected changes in rainfall regimes, and the shift of the convective mechanisms in a warming climate, which has never before been examined numerically at such a large scale and such a high resolution.
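
    The PGW recipe itself is simple to state: future boundary conditions are present-day analyses plus the GCM's monthly-mean climate-change signal (the "delta"). A schematic with toy arrays; the shapes and the uniform 3 K signal are invented for illustration:

```python
import numpy as np

def pgw_boundary(reanalysis, gcm_future_clim, gcm_historic_clim, month):
    """Pseudo-global-warming forcing: today's weather plus the GCM's
    monthly-mean climate-change delta for the matching month.
    Climatology arrays are (month, level, lat, lon); a simplified sketch."""
    delta = gcm_future_clim[month] - gcm_historic_clim[month]
    return reanalysis + delta

# Toy fields: 12 months x 2 levels x 4 x 4
rng = np.random.default_rng(0)
hist = rng.normal(280.0, 5.0, (12, 2, 4, 4))    # historic GCM climatology (K)
futr = hist + 3.0                                # uniform 3 K warming signal
obs_t = rng.normal(281.0, 5.0, (2, 4, 4))        # reanalysis snapshot (K)

forced = pgw_boundary(obs_t, futr, hist, month=6)
```

    Because the day-to-day weather sequence is inherited from the reanalysis, PGW isolates the thermodynamic response to the mean warming signal rather than sampling new synoptic variability.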

  13. High Resolution Studies Of Lensed z ∼ 2 Galaxies: Kinematics And Metal Gradients

    NASA Astrophysics Data System (ADS)

    Leethochawalit, Nicha

    2016-09-01

    We use the OSIRIS integral field unit (IFU) spectrograph to secure spatially-resolved strong emission lines of 15 gravitationally-lensed star-forming galaxies at redshift z ∼ 2. With the aid of gravitational lensing and Keck laser-assisted adaptive optics, the spatial resolution achieved for these sub-luminous galaxies is a few hundred parsecs. First, we demonstrate that high spatial resolution is crucial in diagnosing the kinematic properties and dynamical maturity of z ∼ 2 galaxies. We observe a significantly lower fraction of rotationally-supported systems than has been claimed in lower spatial resolution surveys. Second, we find a much larger fraction of z ∼ 2 galaxies with weak metallicity gradients, contrary to the simple picture suggested by earlier studies in which well-ordered rotation develops concurrently with established steep metal gradients in all but merging systems. Comparing our observations with the predictions of hydrodynamical simulations, strong feedback is likely to play a key role in flattening metal gradients in early star-forming galaxies.

  14. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific problem deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasts over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters in WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
    The optimal values of the nine parameters were determined from only 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
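
    The ASMO loop can be sketched in one dimension: fit a cheap surrogate to every expensive-model evaluation so far, jump to the surrogate's optimum, evaluate the model there, and refit. This toy uses a quadratic surrogate; the real ASMO uses multivariate surrogates and more careful adaptive sampling:

```python
import numpy as np

def asmo_1d(expensive_model, lo, hi, n_init=5, n_iter=10):
    """Toy adaptive surrogate-based optimization in one dimension:
    alternate between refitting a quadratic surrogate to all evaluations
    and evaluating the expensive model at the surrogate's minimizer."""
    xs = list(np.linspace(lo, hi, n_init))
    ys = [expensive_model(x) for x in xs]
    for _ in range(n_iter):
        a, b, c = np.polyfit(xs, ys, 2)
        if a > 0:                                # convex surrogate: analytic minimum
            x_new = np.clip(-b / (2.0 * a), lo, hi)
        else:                                    # concave fit: fall back to a bound
            x_new = lo if expensive_model(lo) < expensive_model(hi) else hi
        xs.append(float(x_new))
        ys.append(expensive_model(x_new))
    best = int(np.argmin(ys))
    return xs[best], ys[best]

# Stand-in for an expensive model run: error score minimized at parameter = 0.7
model = lambda p: (p - 0.7) ** 2 + 0.1
x_best, y_best = asmo_1d(model, 0.0, 2.0)
```

    Each iteration costs one expensive-model run, which is why surrogate methods can converge with on the order of a hundred samples instead of the thousands a direct search would need.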

  15. Evaluation of local adaptation strategies to climate change of maize crop in Andalusia for the first half of 21st century

    NASA Astrophysics Data System (ADS)

    Gabaldón, Clara; Lorite, Ignacio J.; Inés Mínguez, M.; Dosio, Alessandro; Sánchez-Sánchez, Enrique; Ruiz-Ramos, Margarita

    2013-04-01

    The objective of this work is to generate and analyse adaptation strategies to cope with the impacts of climate change on cereal cropping systems in Andalusia (Southern Spain), a semi-arid environment, with a focus on extreme events. In Andalusia, located in the south of the Iberian Peninsula, cereal crops may be affected by the increase in average temperatures, precipitation variability, and possible extreme events. Those impacts may cause a decrease in both water availability and the pollination rate, resulting in a decrease in yield and in the farmer's profitability. Designing local and regional adaptation strategies to reduce these negative impacts is therefore necessary. This study is focused on irrigated maize at five locations in Andalusia. The Andalusia Network of Agricultural Trials (RAEA in Spanish) provided the experimental crop and soil data, and the observed climate data were obtained from the Agroclimatic Information Network of Andalusia and the Spanish National Meteorological Agency (AEMET in Spanish). The data for future climate scenarios (2013-2050) were generated by Dosio and Paruolo (2011) and Dosio et al. (2012), who corrected the bias of ENSEMBLES data for maximum and minimum temperatures and precipitation. ENSEMBLES data were the results of numerical simulations obtained from a group of regional climate models at high resolution (25 km) from the European Project ENSEMBLES (http://www.ensembles-eu.org/). The crop models considered were CERES-Maize (Jones and Kiniry, 1986) under the DSSAT platform, and CropSyst (Stockle et al., 2003). These crop models were applied only at locations where calibration and validation had been done. The effects of the adaptation strategies, such as changes in sowing dates or choice of cultivar, were evaluated with regard to water consumption; changes in phenological dates were also analysed for comparison with the occurrence of extreme maximum-temperature events.
    These events represent a threat to summer crops due to the reduction in the duration of the grain-filling period, with a consequent reduction in yield (Ruiz-Ramos et al., 2011), and due to supraoptimal temperatures during pollination. Finally, results of simulated impacts and adaptations were compared to previous studies done without bias correction of the climatic projections, at low resolution, and with previous versions of the crop models (Mínguez et al., 2007). This study will contribute to the MACSUR Knowledge Hub within the Joint Programming Initiative on Agriculture, Food Security and Climate Change (FACCE-JPI) of the EU and is financed by the MULCLIVAR project (CGL2012-38923-C02-02) and IFAPA project AGR6126 from the Junta de Andalucía, Spain. References: Dosio, A., and P. Paruolo, 2011. Bias correction of the ENSEMBLES high-resolution climate change projections for use by impact models: Evaluation on the present climate. Journal of Geophysical Research 116, D16106, doi:10.1029/2011JD015934. Dosio, A., P. Paruolo, and R. Rojas, 2012. Bias correction of the ENSEMBLES high resolution climate change projections for use by impact models: Analysis of the climate change signal. Journal of Geophysical Research 117, D17, doi:10.1029/2012JD017968. Jones, C.A., and J.R. Kiniry, 1986. CERES-Maize: A simulation model of maize growth and development. Texas A&M Univ. Press, College Station. Mínguez, M.I., M. Ruiz-Ramos, C.H. Díaz-Ambrona, and M. Quemada, 2007. First-order impacts on winter and summer crops assessed with various high-resolution climate models in the Iberian Peninsula. Climatic Change 81: 343-355. Ruiz-Ramos, M., E. Sanchez, C. Gallardo, and M.I. Minguez, 2011. Impacts of projected maximum temperature extremes for C21 by an ensemble of regional climate models on cereal cropping systems in the Iberian Peninsula. Natural Hazards and Earth System Science 11: 3275-3291. Stockle, C.O., M. Donatelli, and R. Nelson, 2003. CropSyst, a cropping systems simulation model. European Journal of Agronomy 18: 289-307.

  16. The End-to-end Demonstrator for improved decision making in the water sector in Europe (EDgE)

    NASA Astrophysics Data System (ADS)

    Wood, Eric; Wanders, Niko; Pan, Ming; Sheffield, Justin; Samaniego, Luis; Thober, Stephan; Kumar, Rohinni; Prudhomme, Christel; Houghton-Carr, Helen

    2017-04-01

    High-resolution simulations of water resources from hydrological models are vital to supporting important climate services. Apart from a high level of detail, both spatially and temporally, it is important to provide simulations that consistently cover a range of timescales, from historical reanalysis to seasonal forecasts and future projections. The new EDgE project commissioned by the ECMWF (C3S) aims to fulfill these requirements. EDgE is a proof-of-concept project that combines climate data and state-of-the-art hydrological modelling to demonstrate a water-oriented information system implemented through a web application. EDgE is working with key European stakeholders representative of the private and public sectors to jointly develop and tailor approaches and techniques. With these tools, stakeholders are assisted in using improved climate information in decision-making and supported in the development of climate change adaptation and mitigation policies. Here, we present the first results of the EDgE modelling chain, which is divided into three main processes: 1) pre-processing and downscaling; 2) hydrological modelling; 3) post-processing. Consistent downscaling and bias correction for historical simulations, seasonal forecasts, and climate projections ensure that the results are robust across scales. The daily temporal resolution and 5 km spatial resolution ensure locally relevant simulations. With the use of four hydrological models (PCR-GLOBWB, VIC, mHM, Noah-MP), uncertainty between models is properly addressed, while consistency is guaranteed by using identical input data for static land surface parameterizations. The forecast results are communicated to stakeholders via Sectoral Climate Impact Indicators (SCIIs) that have been created in collaboration with the end-user community of the EDgE project.
    The final product of this project comprises 15 years of seasonal forecasts and 10 climate change projections, all combined with the four hydrological models. These unique high-resolution simulations make the EDgE project an unprecedented information system for decision-making over Europe.

  17. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than the rest of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
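
    The pseudo-spectral operation referred to here is the split-step Fourier update of the modified diffusion equation dq/ds = lap(q) - w(r) q on a periodic grid; a minimal sketch with illustrative grid parameters:

```python
import numpy as np

def propagate(q, w, ds, ksq):
    """One pseudo-spectral (split-step Fourier) update of the modified
    diffusion equation dq/ds = laplacian(q) - w*q on a periodic grid:
    half a step of the potential, a full diffusion step in Fourier space,
    then the second half of the potential."""
    q = np.exp(-0.5 * ds * w) * q
    q = np.fft.ifftn(np.exp(-ds * ksq) * np.fft.fftn(q)).real
    return np.exp(-0.5 * ds * w) * q

n, L = 64, 10.0
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
ksq = k[:, None] ** 2 + k[None, :] ** 2          # |k|^2 on the 2D grid

w = np.zeros((n, n))                              # zero field: pure diffusion
q = np.ones((n, n))
for _ in range(100):
    q = propagate(q, w, ds=0.01, ksq=ksq)
# With w = 0 and a uniform initial condition, diffusion leaves q unchanged
```

    The uniform-grid FFT is exactly what breaks down in the confinement and adsorption problems mentioned above, since no part of the grid can be locally refined, which motivates the AMR-based solvers the abstract describes.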

  18. High Resolution DNS of Turbulent Flows using an Adaptive, Finite Volume Method

    NASA Astrophysics Data System (ADS)

    Trebotich, David

    2014-11-01

    We present a new computational capability for high-resolution simulation of incompressible viscous flows. Our approach is based on cut-cell methods, where an irregular geometry such as a bluff body is intersected with a rectangular Cartesian grid, resulting in cut cells near the boundary. In the cut cells we use a conservative discretization based on a discrete form of the divergence theorem to approximate fluxes for the elliptic and hyperbolic terms in the Navier-Stokes equations. Away from the boundary the method reduces to a finite difference method. The algorithm is implemented in the Chombo software framework, which supports adaptive mesh refinement and massively parallel computations. The code is scalable to 200,000+ processor cores on DOE supercomputers, enabling DNS studies at unprecedented scale and resolution. For flow past a cylinder in transition (Re = 300) we observe a number of secondary structures in the far wake in 2D, where the wake is over 120 cylinder diameters in length. These are compared with the more regularized wake structures in 3D at the same scale. For flow past a sphere (Re = 600) we resolve an arrowhead structure in the velocity in the near wake. The effectiveness of AMR is further highlighted in a simulation of turbulent flow (Re = 6000) in the contraction of an oil well blowout preventer. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Contract Number DE-AC02-05-CH11231.

  19. Evaluation of a high-resolution patient-specific model of the electrically stimulated cochlea

    NASA Astrophysics Data System (ADS)

    Cakir, Ahmet; Dwyer, Robert T.; Noble, Jack H.

    2017-03-01

    Cochlear implants (CIs) are considered standard treatment for patients who experience sensorineural hearing loss. Although these devices have been remarkably successful at restoring hearing, it is rare to achieve natural fidelity, and many patients experience poor outcomes. Our group has developed the first image-guided CI programming (IGCIP) technique, in which the positions of the electrodes are found in CT images and used to estimate neural activation patterns, providing unique information that audiologists can use to define patient-specific processor settings. In our current system, neural activation is estimated using only the distance from each electrode to the neural activation sites. This approach might be less accurate than using a high-resolution electro-anatomical model (EAM) of the electrically stimulated cochlea to perform physics-based estimation of neural activation. In this work, we propose a patient-customized EAM approach in which the EAM is spatially and electrically adapted to a patient-specific configuration. Spatial adaptation is done through non-rigid registration of the model with the patient CT image. Electrical adaptation is done by adjusting tissue resistivity parameters so that the intra-cochlear voltage distributions predicted by the model best match those directly measured for the patient via their implant. We demonstrated our approach for N = 7 patients. We found that it results in a mean percent difference between directly measured and simulated voltage distributions of 11%. In addition, visual comparison shows the simulated and measured voltage distributions are qualitatively in good agreement. This represents a crucial step toward developing and validating the first in vivo patient-specific cochlear EAMs.
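
    The electrical-adaptation step, tuning resistivity until simulated voltages match the patient's measurements, can be caricatured as a one-parameter least-squares fit. The paper tunes several tissue resistivities inside a full EAM; the numbers below are invented:

```python
import numpy as np

def fit_resistivity(v_measured, v_simulated_unit):
    """Least-squares scale factor mapping unit-resistivity simulated voltages
    onto the patient's measured voltages; a one-parameter caricature of the
    electrical adaptation described in the abstract."""
    v_m = np.asarray(v_measured, float)
    v_s = np.asarray(v_simulated_unit, float)
    rho = float(v_m @ v_s / (v_s @ v_s))        # closed-form least squares
    residual = np.abs(v_m - rho * v_s).mean() / np.abs(v_m).mean()
    return rho, residual

# Toy data: measurements equal the unit-model prediction scaled by 300, plus noise
rng = np.random.default_rng(1)
v_unit = np.linspace(1.0, 0.2, 16)              # simulated mV per unit resistivity
v_meas = 300.0 * v_unit * (1.0 + 0.02 * rng.standard_normal(16))
rho, rel_err = fit_resistivity(v_meas, v_unit)  # recovers a value near 300
```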

  20. Increasing circular synthetic aperture sonar resolution via adapted wave atoms deconvolution.

    PubMed

    Pailhas, Yan; Petillot, Yvan; Mulgrew, Bernard

    2017-04-01

    Circular Synthetic Aperture Sonar (CSAS) processing coherently combines Synthetic Aperture Sonar (SAS) data acquired along a circular trajectory. This approach has a number of advantages; in particular, it maximises the aperture length of a SAS system, producing very high resolution sonar images. CSAS image reconstruction using back-projection algorithms, however, introduces a dissymmetry in the impulse response as the imaged point moves away from the centre of the acquisition circle. This paper proposes a sampling scheme for CSAS image reconstruction which allows every point within the full field of view of the system to be considered as the centre of a virtual CSAS acquisition. As a direct consequence of the proposed resampling scheme, the point spread function (PSF) is uniform across the full CSAS image. Closed-form solutions for the CSAS PSF are derived analytically, both in the image and the Fourier domain. This thorough knowledge of the PSF leads naturally to the proposed adapted wave atom basis for CSAS image decomposition. The wave atom deconvolution is successfully applied to simulated data, increasing the image resolution by reducing the PSF energy leakage.
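
    The deconvolution step presumes exact knowledge of the PSF. The authors' adapted wave atom basis is specialized; as a generic stand-in, the sketch below deconvolves a known, spatially uniform PSF with a Wiener filter on toy data (the SNR constant and test images are assumptions, not the paper's method).

```python
import numpy as np

def wiener_deconvolve(image, psf, snr=100.0):
    """Deconvolve a known, spatially uniform PSF via a Wiener filter."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR) regularizes the inversion
    # where the PSF spectrum is weak.
    F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * F))

# Blur a point target with a 3x3 box PSF, then deconvolve it.
img = np.zeros((32, 32))
img[16, 16] = 1.0
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)   # peak returns to (16, 16)
```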

  1. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection has provided key observations of previous large earthquakes, standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between seismic observations and earthquake simulations. MUSIC is a high-resolution method that takes advantage of higher-order signal statistics. The method has not been widely used in seismology because of the nonstationary and incoherent nature of seismic signals. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow recent large earthquakes to be imaged in finer detail, giving rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors that relate to material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The high-frequency subevents are located at the edges of geodetic slip regions and correlate with the stopping phases associated with rupture speed reduction as the earthquake arrests.
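
    The core of the MUSIC estimator described above is an eigendecomposition of the array covariance matrix into signal and noise subspaces, followed by a scan over candidate steering vectors. The thesis adds multitaper cross-spectral estimates and a reference-window strategy that are not reproduced here; this is only a minimal narrowband sketch on a toy uniform linear array.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC pseudospectrum from array covariance R.

    steering : (n_candidates, n_sensors) rows holding a(u)^H for each
               candidate direction u.
    """
    _, V = np.linalg.eigh(R)                    # eigenvalues ascending
    En = V[:, : R.shape[0] - n_sources]         # noise subspace
    proj = steering @ En                        # a(u)^H En per candidate
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=1)

# Toy case: 8-sensor uniform linear array, one plane wave at normalized
# spatial frequency u0 = 0.3, plus white noise.
n, u0 = 8, 0.3
pos = np.arange(n)
a0 = np.exp(2j * np.pi * pos * u0)
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(n)
grid = np.linspace(-0.5, 0.5, 201)
A = np.exp(2j * np.pi * np.outer(grid, pos))    # candidate steering vectors
spec = music_spectrum(R, A.conj(), n_sources=1)
u_hat = grid[int(np.argmax(spec))]              # peaks at u0
```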

  2. Regional projections of North Indian climate for adaptation studies.

    PubMed

    Mathison, Camilla; Wiltshire, Andrew; Dimri, A P; Falloon, Pete; Jacob, Daniela; Kumar, Pankaj; Moors, Eddy; Ridley, Jeff; Siderius, Christian; Stoffel, Markus; Yasunari, T

    2013-12-01

    Adaptation is increasingly important for regions around the world where large changes in climate could have an impact on populations and industry. The Brahmaputra-Ganges catchments have a large population, a main industry of agriculture and a growing hydro-power industry, making the region susceptible to changes in the Indian Summer Monsoon, annually the main water source. The HighNoon project has completed four regional climate model simulations for India and the Himalaya at high resolution (25 km) from 1960 to 2100 to provide an ensemble of simulations for the region. In this paper we have assessed the ensemble for these catchments, comparing the simulations with observations, to give credence that the simulations provide a realistic representation of atmospheric processes and therefore future climate. We have illustrated how these simulations could be used to provide information on potential future climate impacts and therefore aid decision-making, using climatology and threshold analysis. The ensemble analysis shows an increase in temperature between the baseline (1970-2000) and the 2050s (2040-2070) of between 2 and 4°C, and an increase in the number of days with maximum temperatures above 28°C and 35°C. There is less certainty for precipitation and runoff, which show considerable variability even in this relatively small ensemble, with the range of projected changes spanning zero. The HighNoon ensemble is the most complete dataset for the region, providing useful information on a wide range of variables for the regional climate of the Brahmaputra-Ganges region; however, there are processes not yet included in the models that could have an impact on the simulations of future climate. We have discussed these processes and show that the range from the HighNoon ensemble is similar in magnitude to potential changes in projections where these processes are included. Therefore strategies for adaptation must be robust and flexible, allowing for advances in the science and natural environmental changes.
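
    Threshold analysis of the kind used above reduces, for each model year, to counting exceedance days in a daily-maximum temperature series. A small sketch on synthetic data (the series and its parameters are invented for illustration):

```python
import numpy as np

def exceedance_days(tmax, threshold):
    """Number of days whose maximum temperature exceeds a threshold."""
    return int(np.sum(np.asarray(tmax) > threshold))

# Hypothetical one-year daily-maximum series (deg C): a seasonal cycle
# plus weather noise. Numbers are illustrative only.
rng = np.random.default_rng(0)
days = np.arange(365)
tmax = 25.0 + 10.0 * np.sin(2 * np.pi * days / 365.0) + rng.normal(0.0, 2.0, 365)
hot_28 = exceedance_days(tmax, 28.0)   # days above 28 deg C
hot_35 = exceedance_days(tmax, 35.0)   # days above 35 deg C
```

    Comparing these counts between a baseline period and a future period gives the kind of change metric reported in the abstract.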

  3. High resolution simulations of a variable HH jet

    NASA Astrophysics Data System (ADS)

    Raga, A. C.; de Colle, F.; Kajdič, P.; Esquivel, A.; Cantó, J.

    2007-04-01

    Context: In many papers, the flows in Herbig-Haro (HH) jets have been modeled as collimated outflows with a time-dependent ejection. In particular, a supersonic variability of the ejection velocity leads to the production of "internal working surfaces" which (for appropriate forms of the time-variability) can produce emitting knots that resemble the chains of knots observed along HH jets. Aims: In this paper, we present axisymmetric simulations of an "internal working surface" in a radiative jet (produced by an ejection velocity variability). We concentrate on a given parameter set (i.e., on a jet with a constant ejection density, and a sinusoidal velocity variability with a 20 yr period and a 40 km s-1 half-amplitude), and carry out a study of the behaviour of the solution for increasing numerical resolutions. Methods: In our simulations, we solve the gasdynamic equations together with a 17-species atomic/ionic network, and we are therefore able to compute emission coefficients for different emission lines. Results: We compute 3 adaptive grid simulations, with 20, 163 and 1310 grid points (at the highest grid resolution) across the initial jet radius. From these simulations we see that successively more complex structures are obtained for increasing numerical resolutions. Such an effect is seen in the stratifications of the flow variables as well as in the predicted emission line intensity maps. Conclusions: We find that while the detailed structure of an internal working surface depends on resolution, the predicted emission line luminosities (integrated over the volume of the working surface) are surprisingly stable. This is definitely good news for the future computation of predictions from radiative jet models for carrying out comparisons with observations of HH objects.

  4. Integration of adaptive optics into high energy laser modeling and simulation

    DTIC Science & Technology

    2017-06-01

    astronomy [1], where AO is often used to improve image resolution. Likewise, AO shows promise in improving HEL performance. To better understand how much...the focus of the beam on target. In astronomy, the target is an imaging sensor and the source is an astronomical object, while in the application of...mirror [21]. While AO in laser weapons is still a developing field, the technology has been used for several decades on telescopes in astronomy to

  5. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet, and full-physics configurations to evaluate variable-mesh simulations against uniform high- and uniform low-resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well with high-resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full-physics model produces a climate comparable to the global low-resolution mesh and similar high-frequency statistics in the high-resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high-resolution results, making them an effective tool for regional climate studies, and they are available in CESM2.

  6. Performance evaluation of spatial compounding in the presence of aberration and adaptive imaging

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Guenther, Drake; Trahey, Gregg E.

    2003-05-01

    Spatial compounding has been used for years to reduce speckle in ultrasonic images and to resolve anatomical features hidden behind the grainy appearance of speckle. Adaptive imaging restores image contrast and resolution by compensating for beamforming errors caused by tissue-induced phase errors. Spatial compounding represents a form of incoherent imaging, whereas adaptive imaging attempts to maintain a coherent, diffraction-limited aperture in the presence of aberration. Using a Siemens Antares scanner, we acquired single-channel RF data on a commercially available 1-D probe. Individual-channel RF data were acquired on a cyst phantom in the presence of a near-field electronic phase screen. Simulated data were also generated for both a 1-D and a custom-built 8x96, 1.75-D probe (Tetrad Corp.). The data were compounded using a receive spatial compounding algorithm, widely used because it takes advantage of parallel beamforming to avoid reductions in frame rate. Phase correction was also performed, using a least-mean-squares algorithm to estimate the arrival-time errors. We present simulation and experimental data comparing the performance of spatial compounding with that of phase correction in contrast and resolution tasks. We evaluate spatial compounding, phase correction, and combinations of the two methods under varying aperture sizes, aperture overlaps, and aberrator strengths to determine the configurations and conditions in which spatial compounding provides a similar or better result than adaptive imaging. We find that, in general, phase correction is hindered at high aberration strengths and spatial frequencies, whereas spatial compounding is helped by these aberrators.
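
    Phase correction hinges on estimating per-channel arrival-time errors. The study used a least-mean-squares estimator; the sketch below illustrates the same idea with a simpler cross-correlation delay estimate on synthetic channel data (pulse shape, shifts, and sizes are all hypothetical).

```python
import numpy as np

def channel_delays(rf, ref_ch=0):
    """Per-channel arrival-time error (in samples) relative to a
    reference channel, from the peak of the cross-correlation."""
    n_ch, n_s = rf.shape
    delays = np.zeros(n_ch, dtype=int)
    for ch in range(n_ch):
        xc = np.correlate(rf[ch], rf[ref_ch], mode="full")
        delays[ch] = int(np.argmax(xc)) - (n_s - 1)
    return delays

# Synthetic channels: one Gaussian pulse with known per-channel shifts.
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 128) / 4.0) ** 2)
true_shifts = [0, 3, -2, 5]
rf = np.stack([np.roll(pulse, s) for s in true_shifts])
est = channel_delays(rf)   # recovers true_shifts
```

    A phase-correcting beamformer would then advance or delay each channel by the negated estimate before summation.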

  7. Influence of wave-front sampling in adaptive optics retinal imaging

    PubMed Central

    Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K.; Pircher, Michael

    2017-01-01

    A wide range of wave-front sampling densities, relative to the number of corrector elements, has been used in retinal adaptive optics (AO) instruments. We developed a model to characterize the link between the number of actuators, the number of wave-front sampling points, and AO correction performance. Based on available data from aberration measurements in the human eye, 1000 wave-fronts were generated for the simulations. The AO correction performance in the presence of these representative aberrations was simulated for different combinations of deformable mirror and Shack-Hartmann wave-front sensor. Predictions of the model were tested experimentally through in vivo measurements in 10 eyes, including retinal imaging with an AO scanning laser ophthalmoscope. According to our study, a ratio of wave-front sampling points to actuator elements of 2 is sufficient to achieve high-resolution in vivo images of photoreceptors. PMID:28271004

  8. Resolution-Adapted All-Atomic and Coarse-Grained Model for Biomolecular Simulations.

    PubMed

    Shen, Lin; Hu, Hao

    2014-06-10

    We develop here an adaptive multiresolution method for the simulation of complex heterogeneous systems such as protein molecules. The target molecular system is described with the atomistic structure while concurrently maintaining a mapping to coarse-grained models. The theoretical model, or force field, used to describe the interactions between two sites is automatically adjusted during the simulation according to the interaction distance/strength. Therefore, all-atomic, coarse-grained, or mixed all-atomic and coarse-grained models can be used together to describe the interactions between a group of atoms and its surroundings. Because the choice of theory is made at the force-field level while the sampling is always carried out in the atomic space, the new adaptive method naturally preserves the atomic structure and thermodynamic properties of the entire system throughout the simulation. The new method will be very useful in many biomolecular simulations where atomistic details are critically needed.

  9. Generalized sidelobe canceller beamforming method for ultrasound imaging.

    PubMed

    Wang, Ping; Li, Na; Luo, Han-Wu; Zhu, Yong-Kun; Cui, Shi-Gang

    2017-03-01

    A modified generalized sidelobe canceller (IGSC) algorithm is proposed to enhance the resolution, and the robustness against noise, of the traditional generalized sidelobe canceller (GSC) and of the coherence factor combined method (GSC-CF). In the GSC algorithm, the weighting vector is divided into an adaptive and a non-adaptive part, but the blocking stage does not remove all of the desired signal from the adaptive path. In the IGSC algorithm, a modified steering vector is generated by projecting the non-adaptive vector onto the signal space constructed from the covariance matrix of the received data. The blocking matrix is then generated from the orthogonal complement of the modified steering vector, and the weighting vector is updated accordingly. The performance of IGSC was investigated by simulations and experiments. In simulations, IGSC outperformed GSC-CF in spatial resolution by 0.1 mm, with or without noise, and also improved the contrast ratio. The proposed IGSC can be further improved by combining it with CF. The experimental results also validated the effectiveness of the proposed algorithm on a dataset provided by the University of Michigan.
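
    The GSC structure described above, a fixed distortionless branch plus an adaptive branch behind a blocking matrix, can be sketched as follows. This is a textbook GSC with a diagonally loaded Wiener solution on a toy array, not the proposed IGSC modification.

```python
import numpy as np

def gsc_weights(R, a, mu=1e-3):
    """Generalized sidelobe canceller weights for steering vector a.

    Non-adaptive branch: quiescent weights wq = a / (a^H a), unit gain
    toward a. Adaptive branch: blocking matrix B spanning the orthogonal
    complement of a, with Wiener weights minimizing the output power of
    the blocked data (mu is diagonal loading).
    """
    n = len(a)
    a = a.astype(complex)
    wq = a / (a.conj() @ a)
    # Orthonormal basis of the complement of span{a} via QR.
    Q, _ = np.linalg.qr(np.column_stack([a, np.eye(n, dtype=complex)]))
    B = Q[:, 1:n]                                  # satisfies B^H a = 0
    Rb = B.conj().T @ R @ B + mu * np.eye(n - 1)
    wa = np.linalg.solve(Rb, B.conj().T @ R @ wq)  # Wiener solution
    return wq - B @ wa

# Desired signal from broadside plus a strong off-axis interferer.
n = 8
a = np.ones(n, dtype=complex)                       # broadside steering
ai = np.exp(2j * np.pi * 0.2 * np.arange(n))        # interferer steering
R = np.outer(a, a.conj()) + 10.0 * np.outer(ai, ai.conj()) + 0.01 * np.eye(n)
w = gsc_weights(R, a)
gain_signal = abs(w.conj() @ a)    # distortionless: exactly 1
gain_interf = abs(w.conj() @ ai)   # deeply suppressed
```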

  10. Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Chacón, Luis; Pernice, Michael

    2008-10-01

    An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
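
    The defining trick of the Jacobian-free Newton-Krylov engine mentioned above is that Jacobian-vector products are approximated by finite differences, so the Jacobian is never formed. A toy sketch on a 2x2 nonlinear system (the inner solve probes J column by column for simplicity; a production solver would use preconditioned GMRES, e.g. with the FAC preconditioner described in the abstract):

```python
import numpy as np

def jv(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product J(u) v via forward
    differences, the core of Jacobian-free Newton-Krylov methods."""
    return (F(u + eps * v) - F(u)) / eps

def jfnk_solve(F, u, iters=20, tol=1e-12):
    """Newton iteration that touches the Jacobian only through jv().
    Toy inner solve: probe J column by column and solve directly."""
    for _ in range(iters):
        J = np.column_stack([jv(F, u, e) for e in np.eye(len(u))])
        u = u + np.linalg.solve(J, -F(u))
        if np.linalg.norm(F(u)) < tol:
            break
    return u

# Toy nonlinear system: x^2 + y^2 = 4, x * y = 1.
F = lambda u: np.array([u[0] ** 2 + u[1] ** 2 - 4.0, u[0] * u[1] - 1.0])
root = jfnk_solve(F, np.array([2.0, 0.5]))
```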

  11. Computational tissue volume reconstruction of a peripheral nerve using high-resolution light-microscopy and reconstruct.

    PubMed

    Gierthmuehlen, Mortimer; Freiman, Thomas M; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T T

    2013-01-01

    The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the positions and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model addresses the first aim of our "Virtual workbench" project: to establish a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation and technical limitations are described, and the steps involved in the transition into a simulation model are presented. The results showed that the positions and trajectories of the myelinated axons were traced and virtualized using our technique, and that small nerves could be reliably modeled from light-microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community.

  12. Computational Tissue Volume Reconstruction of a Peripheral Nerve Using High-Resolution Light-Microscopy and Reconstruct

    PubMed Central

    Gierthmuehlen, Mortimer; Freiman, Thomas M.; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T. T.

    2013-01-01

    The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the positions and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model addresses the first aim of our “Virtual workbench” project: to establish a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation and technical limitations are described, and the steps involved in the transition into a simulation model are presented. The results showed that the positions and trajectories of the myelinated axons were traced and virtualized using our technique, and that small nerves could be reliably modeled from light-microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community. PMID:23785485

  13. A Sparse Dictionary Learning-Based Adaptive Patch Inpainting Method for Thick Clouds Removal from High-Spatial Resolution Remote Sensing Imagery

    PubMed Central

    Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi

    2017-01-01

    Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering, patch by patch, the missing information corrupted by thick clouds. A feature dictionary was learned from exemplars in the cloud-free regions and later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity was introduced to encourage missing patches lying on image structures to be filled in first. The optimization model for patch inpainting was formulated under an adaptive neighborhood-consistency constraint and solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and real clouds. Comparisons and experiments show that our method not only keeps structures and textures consistent with the surrounding ground information, but also yields few smoothing or blocking artifacts, making it well suited to removing clouds from high-spatial-resolution RS imagery with salient structures and abundant texture features. PMID:28914787

  14. A Sparse Dictionary Learning-Based Adaptive Patch Inpainting Method for Thick Clouds Removal from High-Spatial Resolution Remote Sensing Imagery.

    PubMed

    Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi

    2017-09-15

    Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering, patch by patch, the missing information corrupted by thick clouds. A feature dictionary was learned from exemplars in the cloud-free regions and later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity was introduced to encourage missing patches lying on image structures to be filled in first. The optimization model for patch inpainting was formulated under an adaptive neighborhood-consistency constraint and solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and real clouds. Comparisons and experiments show that our method not only keeps structures and textures consistent with the surrounding ground information, but also yields few smoothing or blocking artifacts, making it well suited to removing clouds from high-spatial-resolution RS imagery with salient structures and abundant texture features.
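
    The paper's modified OMP is not detailed in the abstract, but the baseline orthogonal matching pursuit it builds on can be sketched as follows. A toy orthonormal dictionary is used here so that a 2-sparse signal is recovered exactly; real learned dictionaries are overcomplete and only approximately incoherent.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick at most k atoms
    (unit-norm columns of D), re-fitting all coefficients by least
    squares after each pick."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy orthonormal "dictionary" (QR of a random matrix): a 2-sparse
# signal is then recovered exactly in two OMP iterations.
rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.normal(size=(20, 20)))
y = 2.0 * D[:, 5] - 1.5 * D[:, 12]
x = omp(D, y, k=2)
```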

  15. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  16. Structured illumination 3D microscopy using adaptive lenses and multimode fibers

    NASA Astrophysics Data System (ADS)

    Czarske, Jürgen; Philipp, Katrin; Koukourakis, Nektarios

    2017-06-01

    Microscopic techniques with high spatial and temporal resolution are required for studying biological cells and tissues in vivo. Adaptive lenses exhibit strong potential for fast, motion-free axial scanning. However, they also degrade the achievable resolution because of aberrations. This hurdle can be overcome by digital optical technologies. We present a novel high-and-low-frequency (HiLo) 3D microscope using structured illumination and an adaptive lens. Uniform illumination is used to obtain optical sectioning for the high-frequency (Hi) components of the image, and nonuniform illumination is needed to obtain optical sectioning for the low-frequency (Lo) components. The nonuniform illumination is provided by a multimode fiber, which ensures robustness against optical aberrations of the adaptive lens. The depth of field of our microscope can be adjusted a posteriori by computational optics. This enables flexible scans that compensate for irregular axial measurement positions. The adaptive HiLo 3D microscope provides an axial scanning range of 1 mm with an axial resolution of about 4 microns and sub-micron lateral resolution over the full scanning range. As a result, volumetric measurements with high temporal and spatial resolution are provided. Demonstration measurements of zebrafish embryos with reporter-gene-driven fluorescence in the thyroid gland are presented.
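
    HiLo fusion combines the high-spatial-frequency content of the uniform-illumination image with the low-frequency content of the optically sectioned estimate derived from nonuniform illumination. A Fourier-domain sketch with a complementary Gaussian filter pair (the filter shape, cutoff, and sectioned input are illustrative assumptions, not the instrument's processing):

```python
import numpy as np

def hilo_combine(uniform_img, sectioned_img, cutoff=0.1):
    """Fuse a HiLo image: high spatial frequencies from the uniform-
    illumination image, low frequencies from the optically sectioned
    estimate. The Gaussian low/high-pass pair sums to one, so no
    frequency content is double-counted."""
    ny, nx = uniform_img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    lp = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * cutoff ** 2))  # low-pass
    lo = np.real(np.fft.ifft2(np.fft.fft2(sectioned_img) * lp))
    hi = np.real(np.fft.ifft2(np.fft.fft2(uniform_img) * (1.0 - lp)))
    return hi + lo

# Stand-in inputs: a random "uniform" image and a dimmed copy playing
# the role of the sectioned low-frequency estimate.
img_u = np.random.default_rng(2).random((64, 64))
img_s = 0.5 * img_u
out = hilo_combine(img_u, img_s)
```

    Feeding the same image into both branches returns it unchanged, which is a quick sanity check that the filter pair is complementary.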

  17. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

    While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolution in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness under relaxed constraints. Simulations using the proposed formalism achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times, and maintained parallel efficiency.

  18. The Aurora radiation-hydrodynamical simulations of reionization: calibration and first results

    NASA Astrophysics Data System (ADS)

    Pawlik, Andreas H.; Rahmati, Alireza; Schaye, Joop; Jeon, Myoungwon; Dalla Vecchia, Claudio

    2017-04-01

    We introduce a new suite of radiation-hydrodynamical simulations of galaxy formation and reionization called Aurora. The Aurora simulations make use of a spatially adaptive radiative transfer technique that lets us accurately capture the small-scale structure in the gas at the resolution of the hydrodynamics, in cosmological volumes. In addition to ionizing radiation, Aurora includes galactic winds driven by star formation and the enrichment of the universe with metals synthesized in the stars. Our reference simulation uses 2 × 5123 dark matter and gas particles in a box of size 25 h-1 comoving Mpc with a force softening scale of at most 0.28 h-1 kpc. It is accompanied by simulations in larger and smaller boxes and at higher and lower resolution, employing up to 2 × 10243 particles, to investigate numerical convergence. All simulations are calibrated to yield simulated star formation rate functions in close agreement with observational constraints at redshift z = 7 and to achieve reionization at z ≈ 8.3, which is consistent with the observed optical depth to reionization. We focus on the design and calibration of the simulations and present some first results. The median stellar metallicities of low-mass galaxies at z = 6 are consistent with the metallicities of dwarf galaxies in the Local Group, which are believed to have formed most of their stars at high redshifts. After reionization, the mean photoionization rate decreases systematically with increasing resolution. This coincides with a systematic increase in the abundance of neutral hydrogen absorbers in the intergalactic medium.

  19. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    NASA Astrophysics Data System (ADS)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid refinement approaches for high-resolution regional climate modeling: a global variable-resolution model and nesting. The global variable-resolution model, the Model for Prediction Across Scales (MPAS), and the limited-area model, the Weather Research and Forecasting (WRF) model, are compared in an idealized aquaplanet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform-resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and with a variable-resolution domain in which a high-resolution region at 0.25 degree is configured inside a coarse-resolution global domain at 1 degree. Similarly, WRF has been configured to run on coarse (1 degree) and high (0.25 degree) resolution tropical channel domains as well as on a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse-resolution (1 degree) tropical channel. The variable-resolution and nested simulations are compared against the high-resolution simulations, which serve as a "virtual reality" reference. Both MPAS and WRF simulate 20-day Kelvin waves that propagate through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and in the sensitivity of the model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable-resolution and nested regional models.

  20. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales; 2) a multiresolution representation of heterogeneity as well as all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since the state-of-the-art multiresolution approach usually uses the method of lines with only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step: the algorithm uses smaller time steps only along lines where solution changes are intense.
Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines, and local time-stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite-difference operator, we develop here a collocation operator that includes solution values and the differential operator. In this way, the new algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
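The core idea of keeping only the basis functions needed for a prescribed accuracy can be illustrated with a minimal sketch. This is a hypothetical simplification using midpoint interpolation details on a dyadic grid, not the authors' Fup basis functions or code; the function names and the threshold `eps` are illustrative.

```python
# Hedged sketch: adaptive multiresolution grid selection. A point at the next
# finer level is kept only where the "detail" (deviation from linear
# interpolation of the coarser level) exceeds a tolerance eps.
def interpolation_detail(f, x_left, x_right, x_mid):
    """Detail coefficient: deviation of f at the midpoint from linear interpolation."""
    return f(x_mid) - 0.5 * (f(x_left) + f(x_right))

def adaptive_grid(f, a, b, eps, max_level=12):
    """Recursively refine [a, b] only where the detail exceeds eps."""
    points = {a, b}

    def refine(lo, hi, level):
        mid = 0.5 * (lo + hi)
        if level < max_level and abs(interpolation_detail(f, lo, hi, mid)) > eps:
            points.add(mid)
            refine(lo, mid, level + 1)
            refine(mid, hi, level + 1)

    refine(a, b, 0)
    return sorted(points)

# Refinement clusters where the function's curvature is large.
grid = adaptive_grid(lambda x: x * x * x, 0.0, 1.0, eps=1e-3)
```

The same thresholding idea extends to each variable separately, which is what makes the per-variable adaptive grids of the abstract possible.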

  1. Time-resolved High Spectral Resolution Observation of 2MASSW J0746425+200032AB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ji; Mawet, Dimitri; Prato, Lisa, E-mail: ji.wang@caltech.edu

    Many brown dwarfs (BDs) exhibit photometric variability at levels from tenths to tens of percent. The photometric variability is related to magnetic activity or patchy cloud coverage, characteristic of BDs near the L–T transition. Time-resolved spectral monitoring of BDs provides diagnostics of cloud distribution and condensate properties. However, current time-resolved spectral studies of BDs are limited to low spectral resolution (R ∼ 100), with the exception of the study of Luhman 16 AB at a resolution of 100,000 using the VLT+CRIRES. This work yielded the first map of BD surface inhomogeneity, highlighting the importance and unique contribution of high spectral resolution observations. Here, we report on time-resolved high spectral resolution observations of a nearby BD binary, 2MASSW J0746425+200032AB. We find no coherent spectral variability that is modulated with rotation. Based on simulations, we conclude that the coverage of a single spot on 2MASSW J0746425+200032AB is smaller than 1% or 6.25% if the spot contrast is 50% or 80% of its surrounding flux, respectively. Future high spectral resolution observations aided by adaptive optics systems can put tighter constraints on the spectral variability of 2MASSW J0746425+200032AB and other nearby BDs.

  2. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high-fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations of flow past wind farms located over complex terrain. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes, and the fractional step method has been employed. The overall performance and robustness of the second-order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming-grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind-tunnel-scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  3. Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks

    NASA Astrophysics Data System (ADS)

    Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.

    Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is important to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high-resolution location of a vehicle by refining low-resolution location input. We also investigate the results of simulations and evaluate the quality of the intrusion-tolerant location service.
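One common way to detect injected false positions (not necessarily the paper's algorithm, which is not detailed in the abstract) is a robust outlier test on the reported locations before they are fused. The sketch below uses a median/MAD filter; the function name, data layout, and the cutoff `k` are illustrative assumptions.

```python
# Hedged illustration: reject position reports whose distance from the
# median position is an outlier under a median-absolute-deviation test.
def filter_location_reports(reports, k=3.0):
    """reports: list of (x, y) positions claimed by neighboring vehicles.
    Returns the subset of reports that survive the robust filter."""
    def median(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

    mx = median([x for x, _ in reports])
    my = median([y for _, y in reports])
    dists = [((x - mx) ** 2 + (y - my) ** 2) ** 0.5 for x, y in reports]
    med_d = median(dists)
    mad = median([abs(d - med_d) for d in dists]) or 1e-9  # avoid zero MAD
    return [r for r, d in zip(reports, dists) if abs(d - med_d) <= k * mad]

# Four consistent reports plus one injected false location:
reports = [(10.0, 5.0), (10.2, 5.1), (9.9, 4.8), (10.1, 5.0), (42.0, -7.0)]
trusted = filter_location_reports(reports)  # the intruder's report is dropped
```

The surviving reports can then be averaged or fed to a tracking filter to produce the refined high-resolution estimate.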

  4. The "Grey Zone" cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel

    2017-03-01

    A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of both cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.

  5. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR works by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for the placement of computational resolution independent of user interaction and expectations about the dynamics of the flow or particular regions of interest such as harbors. Simulations of many different applications have only been made possible by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed at much lower computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented the relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
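A typical AMR refinement criterion flags cells where the solution varies rapidly, so the fine grid follows the surge front. The 1D sketch below is an illustrative gradient-based flagging routine, not GeoClaw's actual API; the function name, tolerance, and data are assumptions.

```python
# Hedged sketch of an AMR flagging step: mark cells whose surface-elevation
# gradient exceeds a tolerance, so only those regions get refined patches.
def flag_cells_for_refinement(eta, dx, tol):
    """Return indices of interior 1D cells where |d(eta)/dx| exceeds tol."""
    flagged = []
    for i in range(1, len(eta) - 1):
        grad = (eta[i + 1] - eta[i - 1]) / (2.0 * dx)  # centered difference
        if abs(grad) > tol:
            flagged.append(i)
    return flagged

# A steep surge front between flat offshore and flooded coastal states:
eta = [0.0, 0.0, 0.0, 0.5, 2.0, 2.0, 2.0]   # surface elevation (m)
flags = flag_cells_for_refinement(eta, dx=100.0, tol=0.002)  # -> [2, 3, 4]
```

In a full AMR code the flagged cells are clustered into rectangular patches and re-flagged every few time steps as the front moves.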

  6. Eulerian adaptive finite-difference method for high-velocity impact and penetration problems

    NASA Astrophysics Data System (ADS)

    Barton, P. T.; Deiterding, R.; Meiron, D.; Pullin, D.

    2013-05-01

    Owing to the complex processes involved, faithful prediction of high-velocity impact events demands a simulation method delivering efficient calculations based on comprehensively formulated constitutive models. Such an approach is presented herein, employing a weighted essentially non-oscillatory (WENO) method within an adaptive mesh refinement (AMR) framework for the numerical solution of hyperbolic partial differential equations. Applied widely in computational fluid dynamics, these methods are well suited to the involved locally non-smooth finite deformations, circumventing any requirement for artificial viscosity functions for shock capturing. Application of the methods is facilitated through using a model of solid dynamics based upon hyper-elastic theory comprising kinematic evolution equations for the elastic distortion tensor. The model for finite inelastic deformations is phenomenologically equivalent to Maxwell's model of tangential stress relaxation. Closure relations tailored to the expected high-pressure states are proposed and calibrated for the materials of interest. Sharp interface resolution is achieved by employing level-set functions to track boundary motion, along with a ghost material method to capture the necessary internal boundary conditions for material interactions and stress-free surfaces. The approach is demonstrated for the simulation of high velocity impacts of steel projectiles on aluminium target plates in two and three dimensions.

  7. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.

  8. Parallel adaptive wavelet collocation method for PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.
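The load-balancing step described above, with trees as the minimum quanta of migrated data, can be sketched with a simple greedy heuristic: repeatedly hand the largest remaining tree to the least-loaded process. This is an illustrative strategy under stated assumptions, not the paper's actual repartitioning algorithm.

```python
# Hedged sketch of dynamic load balancing over trees: assign each tree
# (weighted by its grid-point count) to the currently least-loaded process.
def repartition_trees(tree_point_counts, n_procs):
    """Greedy assignment; returns (owner map, per-process point loads)."""
    loads = [0] * n_procs
    owner = {}
    # Place the largest trees first to keep the final loads balanced.
    for tree_id, npts in sorted(enumerate(tree_point_counts),
                                key=lambda t: -t[1]):
        p = min(range(n_procs), key=lambda q: loads[q])
        owner[tree_id] = p
        loads[p] += npts
    return owner, loads

# Six trees of varying size distributed over three processes:
owner, loads = repartition_trees([50, 30, 30, 20, 10, 10], n_procs=3)
# -> each process ends up with 50 grid points
```

After grid adaptation changes the point counts, the same routine would be rerun and trees whose owner changed would be migrated.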

  9. Multiplatform Mission Planning and Operations Simulation Environment for Adaptive Remote Sensors

    NASA Astrophysics Data System (ADS)

    Smith, G.; Ball, C.; O'Brien, A.; Johnson, J. T.

    2017-12-01

    We report on the design and development of mission simulator libraries to support the emerging field of adaptive remote sensors. We outline the current state of the art in adaptive sensing, provide an analysis of how the current approach to performing observing system simulation experiments (OSSEs) must be changed to enable adaptive sensors for remote sensing, and present an architecture to enable their inclusion in future OSSEs. The growing potential of sensors capable of real-time adaptation of their operational parameters calls for a new class of mission planning and simulation tools. Existing simulation tools used in OSSEs assume a fixed set of sensor parameters in terms of observation geometry, frequencies used, resolution, or observation time, which allows simplifications to be made in the simulation and allows sensor observation errors to be characterized a priori. Adaptive sensors may vary these parameters depending on the details of the scene observed, so that sensor performance is not simple to model without conducting OSSE simulations that include sensor adaptation in response to the varying observational environment. Adaptive sensors are of significance to resource-constrained small satellite platforms because they enable the management of power and data volumes while providing methods for multiple sensors to collaborate. The new class of OSSEs required to utilize adaptive sensors located on multiple platforms must answer the question: if the physical act of sensing has a cost, how does the system determine whether the science value of a measurement is worth the cost, and how should that cost be shared among the collaborating sensors? Here we propose to answer this question using an architecture structured around three modules: ADAPT, MANAGE and COLLABORATE. The ADAPT module is a set of routines to facilitate modeling of adaptive sensors, the MANAGE module will implement a set of routines to facilitate simulations of sensor resource management when power and data volume are constrained, and the COLLABORATE module will support simulations of coordination among multiple platforms with adaptive sensors. When used together, these modules will form a simulation framework for OSSEs that can enable both the design of adaptive algorithms to support remote sensing and the prediction of sensor performance.
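The value-versus-cost question posed above can be made concrete with a toy decision rule. The numbers, the equal cost-sharing convention, and the function itself are assumptions for illustration only; they are not part of the ADAPT/MANAGE/COLLABORATE design.

```python
# Hedged toy model: take a measurement only if its science value exceeds the
# per-sensor share of its resource cost, expressed as budget fractions.
def worth_measuring(science_value, power_cost, data_cost,
                    power_budget, data_budget, n_collaborators=1):
    """Decide whether a measurement's value justifies its shared cost."""
    shared_power = power_cost / n_collaborators
    shared_data = data_cost / n_collaborators
    if shared_power > power_budget or shared_data > data_budget:
        return False  # infeasible under the platform's resource constraints
    # Normalize cost as the fraction of each budget the measurement consumes.
    cost = shared_power / power_budget + shared_data / data_budget
    return science_value > cost

# Two collaborating sensors split the cost of a high-value observation:
ok = worth_measuring(science_value=0.8, power_cost=3.0, data_cost=2.0,
                     power_budget=10.0, data_budget=10.0, n_collaborators=2)
```

A real MANAGE module would presumably track budgets over an orbit and compare competing candidate measurements rather than evaluating one in isolation.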

  10. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency remain major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system, applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging, with the minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors, including impact angle, impact energy, and material properties, are taken into account to produce the criteria for crack initialization, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by offline physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted for various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation) with diverse potential in military and civil M&S applications such as training and mission planning.

  11. Solar tomography adaptive optics.

    PubMed

    Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Dou, Jiangpei; Zhao, Gang

    2014-03-10

    Conventional solar adaptive optics uses one deformable mirror (DM) and one guide star for wave-front sensing, which seriously limits high-resolution imaging over a large field of view (FOV). Recent progress toward multiconjugate adaptive optics indicates that atmospheric-turbulence-induced wave-front distortion at different altitudes can be reconstructed by using multiple guide stars. To maximize the performance over a large FOV, we propose a solar tomography adaptive optics (TAO) system that uses tomographic wave-front information with a single DM. We show that by fully taking advantage of the knowledge of the three-dimensional wave-front distribution, a classical solar adaptive optics system with one DM can provide an extra performance gain for high-resolution imaging over a large FOV in the near infrared. The TAO will allow existing one-deformable-mirror solar adaptive optics to deliver better performance over a large FOV for high-resolution magnetic field investigation, where solar activities occur in a two-dimensional field up to 60'', and where the near infrared is superior to the visible in terms of magnetic field sensitivity.

  12. Joint numerical study of the 2011 Tohoku-Oki tsunami: comparative propagation simulations and high resolution coastal models

    NASA Astrophysics Data System (ADS)

    Loevenbruck, Anne; Arpaia, Luca; Ata, Riadh; Gailler, Audrey; Hayashi, Yutaka; Hébert, Hélène; Heinrich, Philippe; Le Gal, Marine; Lemoine, Anne; Le Roy, Sylvestre; Marcer, Richard; Pedreros, Rodrigo; Pons, Kevin; Ricchiuto, Mario; Violeau, Damien

    2017-04-01

    This study is part of the joint actions carried out within TANDEM (Tsunamis in northern AtlaNtic: Definition of Effects by Modeling). This French project, mainly dedicated to the appraisal of coastal effects due to tsunami waves on the French coastlines, was initiated after the catastrophic 2011 Tohoku-Oki tsunami. This event, which tragically struck Japan, drew attention to the importance of tsunami risk assessment, in particular when nuclear facilities are involved. As a contribution to this challenging task, the TANDEM partners intend to provide guidance for the French Atlantic area based on numerical simulation. One of the identified objectives consists in designing, adapting and validating simulation codes for tsunami hazard assessment. Besides an integral benchmarking work package, the outstanding database of the 2011 event offers the TANDEM partners the opportunity to test their numerical tools on a real case. As a prerequisite, among the numerous published seismic source models arising from the inversion of the various available records, a couple of coseismic slip distributions have been selected to provide common initial input parameters for the tsunami computations. After possible adaptations or specific developments, the different codes are employed to simulate the Tohoku-Oki tsunami from its source to the northeast Japanese coastline. The results are tested against the numerous tsunami measurements and, when relevant, comparisons of the different codes are carried out. First, the results related to the oceanic propagation phase are compared with the offshore records. Then, the modeled coastal impacts are tested against the onshore data. Flooding at a regional scale is considered, but high-resolution simulations are also performed with some of the codes. They allow examining in detail the runup amplitudes and timing, as well as the complexity of the tsunami interaction with coastal structures.
The work is supported by the Tandem project in the frame of French PIA grant ANR-11-RSNR-00023.

  13. A new framework for the analysis of continental-scale convection-resolving climate simulations

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.

    2017-12-01

    High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection by the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive, and data storage becomes unbearably voluminous. Hence new approaches to performing analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers, such as hybrid CPU-GPU architectures, to which the COSMO model has been adapted [2]. We then alleviate the I/O bottleneck by employing a simulation data virtualizer (SDaVi) that allows trading off storage (space) for computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses, all transparently to the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is, a model that produces identical results on different architectures to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures.
    [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014.
    [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev., 3393-3412, 2016.
    [3] S. Di Girolamo, P. Schmid, T. Schulthess, T. Hoefler. Virtualized Big Data: Reproducing Simulation Output on Demand. Submit. to the 23rd ACM Symposium on PPoPP 18, Vienna, Austria.
    [4] A. Arteaga, O. Fuhrer, T. Hoefler. Designing Bit-Reproducible Portable High-Performance Applications. IEEE 28th IPDPS, 2014.
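The storage-for-compute trade at the heart of SDaVi can be sketched as a cache that re-runs a deterministic simulation on a miss. This is a schematic illustration, not the actual SDaVi implementation; the class name, the `simulate` callable, and the eviction policy are assumptions.

```python
# Hedged sketch of the simulation-data-virtualizer idea: serve analysis
# requests from a bounded cache and transparently re-simulate on a miss.
# Bit-reproducibility of simulate() is what makes re-runs return identical data.
class SimulationDataVirtualizer:
    def __init__(self, simulate, capacity=100):
        self.simulate = simulate      # deterministic, bit-reproducible re-run
        self.cache = {}
        self.capacity = capacity
        self.misses = 0

    def get(self, timestep):
        if timestep not in self.cache:
            self.misses += 1          # pay compute time instead of storage
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))  # evict oldest entry
            self.cache[timestep] = self.simulate(timestep)
        return self.cache[timestep]

# A stand-in "simulation" keyed by timestep:
sdavi = SimulationDataVirtualizer(simulate=lambda t: t * 0.5, capacity=2)
a = sdavi.get(4)   # miss: re-simulated and cached
b = sdavi.get(4)   # hit: served from cache, no recomputation
```

Shrinking `capacity` trades storage for more re-simulation, which is the (space, time) trade-off the abstract describes.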

  14. Studying Spatial Resolution of CZT Detectors Using Sub-Pixel Positioning for SPECT

    NASA Astrophysics Data System (ADS)

    Montémont, Guillaume; Lux, Silvère; Monnet, Olivier; Stanchina, Sylvain; Verger, Loïck

    2014-10-01

    CZT detectors are the basic building block of a variety of new SPECT systems. Their modularity allows adapting the system architecture to specific applications such as cardiac, breast, brain or small-animal imaging. In semiconductors, a high number of electron-hole pairs is produced by a single interaction. This direct conversion process allows better energy and spatial resolutions than conventional scintillation detectors based on NaI(Tl). However, it often remains unclear whether SPECT imaging can really benefit from that performance gain. We investigate, by simulation and experiment, the system performance of a detection module based on 5 mm thick CZT with a segmented anode having a 2.5 mm pitch. This pitch allows easy assembly of the crystal on the readout board and limits the space occupied by electronics without significantly degrading energy and spatial resolution.

  15. Simulating single-phase and two-phase non-Newtonian fluid flow of a digital rock scanned at high resolution

    NASA Astrophysics Data System (ADS)

    Tembely, Moussa; Alsumaiti, Ali M.; Jouini, Mohamed S.; Rahimov, Khurshed; Dolatabadi, Ali

    2017-11-01

    Most digital rock physics (DRP) simulations focus on Newtonian fluids and overlook the detailed description of rock-fluid interaction. A better understanding of multiphase non-Newtonian fluid flow at the pore scale is crucial for optimizing enhanced oil recovery (EOR). The Darcy-scale properties of reservoir rocks, such as the capillary pressure curves and the relative permeability, are controlled by the pore-scale behavior of the multiphase flow. In the present work, a volume of fluid (VOF) method coupled with an adaptive meshing technique is used to perform pore-scale simulations on 3D X-ray micro-tomography (micro-CT) images of rock samples. The numerical model is based on the resolution of the Navier-Stokes equations along with a phase fraction equation incorporating a dynamic contact model. Single-phase flow simulations of the absolute permeability showed good agreement with literature benchmarks. Subsequently, the code is used to simulate a two-phase flow involving a polymer solution displaying a shear-thinning power-law viscosity. The simulations enable assessing the impact of the consistency factor (K) and the behavior index (n), along with the two contact angles (advancing and receding), on the relative permeability.
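The power-law rheology mentioned above is simple to state: apparent viscosity mu = K * (shear rate)^(n-1), shear-thinning when n < 1. The sketch below adds the viscosity clamps commonly used to keep such models well behaved at extreme shear rates; the clamp values and parameter choices are illustrative, not the paper's.

```python
# Hedged sketch of the shear-thinning power-law viscosity used for the
# polymer phase: mu = K * gamma^(n - 1), clamped to [mu_min, mu_max].
def power_law_viscosity(shear_rate, K, n, mu_min=1e-4, mu_max=10.0):
    """Apparent viscosity of a power-law fluid (n < 1: shear-thinning)."""
    mu = K * shear_rate ** (n - 1.0)
    return max(mu_min, min(mu_max, mu))

# Shear-thinning behavior: viscosity drops as the shear rate grows.
mu_slow = power_law_viscosity(shear_rate=0.1, K=0.05, n=0.6)
mu_fast = power_law_viscosity(shear_rate=100.0, K=0.05, n=0.6)
```

In a VOF solver this function would be evaluated per cell from the local strain-rate magnitude and blended between phases by the phase fraction.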

  16. Adaptive resolution simulation of oligonucleotides

    NASA Astrophysics Data System (ADS)

    Netz, Paulo A.; Potestio, Raffaello; Kremer, Kurt

    2016-12-01

    Nucleic acids are characterized by a complex hierarchical structure and a variety of interaction mechanisms with other molecules. These features suggest the need for multiscale simulation methods in order to grasp the relevant physical properties of deoxyribonucleic acid (DNA) and RNA using in silico experiments. Here we report an implementation of dual-resolution modeling of a DNA oligonucleotide in physiological conditions; in the presented setup, only the oligonucleotide and the solvent and ions in its proximity are described at the atomistic level, while the water molecules and ions far from the DNA are represented as computationally less expensive coarse-grained particles. Through the analysis of several structural and dynamical parameters, we show that this setup reliably reproduces the physical properties of the DNA molecule as observed in reference atomistic simulations. These results represent a first step towards a realistic multiscale modeling of nucleic acids and provide a quantitatively solid ground for their simulation using dual-resolution methods.
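Dual-resolution setups of this kind typically interpolate between atomistic and coarse-grained descriptions with a smooth switching function of the distance from the high-resolution region. The cosine form and the radii below are illustrative assumptions, not the paper's exact parameters.

```python
# Hedged sketch of a resolution weight w(r): 1 in the atomistic region
# around the DNA, 0 in the coarse-grained bulk, smooth in between.
import math

def resolution_weight(r, r_atomistic, r_hybrid):
    """Resolution weight at distance r from the DNA: 1 = atomistic, 0 = CG."""
    if r <= r_atomistic:
        return 1.0
    if r >= r_atomistic + r_hybrid:
        return 0.0
    x = (r - r_atomistic) / r_hybrid
    return math.cos(0.5 * math.pi * x) ** 2  # smooth, monotonic switch

w_core = resolution_weight(0.5, r_atomistic=1.2, r_hybrid=0.8)  # atomistic
w_mid = resolution_weight(1.6, r_atomistic=1.2, r_hybrid=0.8)   # hybrid shell
w_far = resolution_weight(3.0, r_atomistic=1.2, r_hybrid=0.8)   # coarse-grained
```

Water molecules crossing the hybrid shell gradually gain or lose atomistic detail as their weight changes, which is what keeps the two regions thermodynamically consistent.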

  17. Development of ALARO-Climate regional climate model for a very high resolution

    NASA Astrophysics Data System (ADS)

    Skalak, Petr; Farda, Ales; Brozkova, Radmila; Masek, Jan

    2014-05-01

    ALARO-Climate is a new regional climate model (RCM) derived from the ALADIN LAM model family. It is based on the numerical weather prediction model ALARO and developed at the Czech Hydrometeorological Institute. The model is expected to be able to work in the so-called "grey zone" of physics (horizontal resolutions of 4-7 km) and at the same time retain its ability to be operated at resolutions between 20 and 50 km, which are typical of the contemporary generation of regional climate models. Here we present the main results of RCM ALARO-Climate simulations at 25 and 6.25 km resolution on a longer time scale (1961-1990). The model was driven by the ERA-40 re-analyses and run on an integration domain of ~2500 x 2500 km covering central Europe. The simulated model climate was compared with gridded observations of air temperature (mean, maximum, minimum) and precipitation from the E-OBS dataset version 8. Other simulated parameters (e.g., cloudiness, radiation or components of the water cycle) were compared to the ERA-40 re-analyses. The validation of the first ERA-40 simulation at both 25 km and 6.25 km resolution revealed significant cold biases in all seasons and an overestimation of precipitation in the selected central Europe target area (0° - 30° eastern longitude; 40° - 60° northern latitude). The differences between these simulations were small, revealing the robustness of the model's physical parameterization to the resolution change. A series of 25 km resolution simulations with several model adaptations was carried out to study their effect on the simulated properties of climate variables and thus possibly identify a source of the major errors in the simulated climate. The current investigation suggests the main reason for the biases is related to the model physics.
Acknowledgements: This study was performed within the frame of projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation) and CzechGlobe Centre (CZ.1.05/1.1.00/02.0073). The partial support was also provided under the projects P209-11-0956 of the Czech Science Foundation and CZ.1.07/2.4.00/31.0056 (Operational Programme of Education for Competitiveness of Ministry of Education, Youth and Sports of the Czech Republic).
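The model-minus-observation validation described above reduces, at its core, to computing seasonal mean biases on a common grid. A minimal sketch with synthetic fields (the array shapes, values, and season definitions are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical monthly-mean 2 m temperature on a common grid: RCM output and
# gridded observations (E-OBS-style). Shapes: (months, lat, lon).
rng = np.random.default_rng(0)
model_t2m = rng.normal(8.0, 5.0, size=(360, 20, 20))   # 30 years x 12 months
obs_t2m = model_t2m + 1.5                              # synthetic obs: model is 1.5 K too cold

months = np.arange(360) % 12
seasons = {"DJF": [11, 0, 1], "MAM": [2, 3, 4], "JJA": [5, 6, 7], "SON": [8, 9, 10]}

def seasonal_bias(model, obs, season_months):
    """Area-mean model-minus-observation bias for the given calendar months."""
    sel = np.isin(months, season_months)
    return float((model[sel] - obs[sel]).mean())

biases = {name: seasonal_bias(model_t2m, obs_t2m, m) for name, m in seasons.items()}
```

With these synthetic fields every season shows the same -1.5 K cold bias by construction; with real data the per-season spread is what the validation examines.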

  18. Towards Forming a Primordial Protostar in a Cosmological AMR Simulation

    NASA Astrophysics Data System (ADS)

    Turk, Matthew J.; Abel, Tom; O'Shea, Brian W.

    2008-03-01

    Modeling the formation of the first stars in the universe is a well-posed problem and ideally suited for computational investigation. We have conducted high-resolution numerical studies of the formation of primordial stars. Beginning with primordial initial conditions appropriate for a ΛCDM model, we used the Eulerian adaptive mesh refinement code Enzo to achieve unprecedented numerical resolution, resolving cosmological scales as well as sub-stellar scales simultaneously. Building on the work of Abel, Bryan and Norman (2002), we followed the evolution of the first collapsing cloud until molecular hydrogen is optically thick to cooling radiation. In addition, the calculations account for the process of collision-induced emission (CIE) and add approximations to the optical depth in both molecular hydrogen roto-vibrational cooling and CIE. Also considered are the effects of chemical heating/cooling from the formation/destruction of molecular hydrogen. We present the results of these simulations, showing the formation of a 10 Jupiter-mass protostellar core bounded by a strongly aspherical accretion shock. Accretion rates are found to be as high as one solar mass per year.

  19. Colonoscopy procedure simulation: virtual reality training based on a real time computational approach.

    PubMed

    Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang

    2018-01-25

    Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic, real-time interactive simulator for training the colonoscopy procedure is presented, which can also include polypectomy simulation. Our approach models the colonoscope as thick, flexible elastic rods whose resolution adapts dynamically to the curvature of the colon. Further material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. In addition, we propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees. We also conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.

  20. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  1. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  2. Novel Multistatic Adaptive Microwave Imaging Methods for Early Breast Cancer Detection

    NASA Astrophysics Data System (ADS)

    Xie, Yao; Guo, Bin; Li, Jian; Stoica, Petre

    2006-12-01

    Multistatic adaptive microwave imaging (MAMI) methods are presented and compared for early breast cancer detection. Due to the significant contrast between the dielectric properties of normal and malignant breast tissues, developing microwave imaging techniques for early breast cancer detection has attracted much interest lately. MAMI is one of the microwave imaging modalities and employs multiple antennas that take turns transmitting ultra-wideband (UWB) pulses while all antennas are used to receive the reflected signals. MAMI can be considered a special case of multi-input multi-output (MIMO) radar with the multiple transmitted waveforms being either UWB pulses or zeros. Since the UWB pulses transmitted by different antennas are displaced in time, the multiple transmitted waveforms are orthogonal to each other. The challenge in microwave imaging is to improve resolution and suppress the strong interference caused by the breast skin, nipple, and so forth. The MAMI methods we investigate herein utilize the data-adaptive robust Capon beamformer (RCB) to achieve high resolution and interference suppression. We demonstrate the effectiveness of our proposed methods for breast cancer detection via numerical examples with data simulated using the finite-difference time-domain method based on a 3D realistic breast model.
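The data-adaptive core the paper builds on is the Capon (MVDR) beamformer, whose spatial power estimate is P(θ) = 1 / (aᴴR⁻¹a). A minimal narrowband sketch for a uniform linear array with synthetic snapshots (the robust Capon beamformer of the paper additionally guards against steering-vector errors; array geometry and numbers here are illustrative):

```python
import numpy as np

# Minimal standard Capon (MVDR) power-spectrum sketch, not the paper's RCB.
rng = np.random.default_rng(1)
n_sensors, n_snapshots = 8, 200
d = 0.5  # element spacing in wavelengths

def steering(theta_deg):
    """Narrowband steering vector for a uniform linear array."""
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

# One strong source at +20 degrees plus sensor noise.
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
x = np.outer(steering(20.0), s) + noise

R = x @ x.conj().T / n_snapshots
Rinv = np.linalg.inv(R + 1e-6 * np.eye(n_sensors))  # diagonal loading for stability

def capon_power(theta_deg):
    a = steering(theta_deg)
    return float(1.0 / np.real(a.conj() @ Rinv @ a))

scan = np.arange(-60, 61)
powers = [capon_power(t) for t in scan]
peak = scan[int(np.argmax(powers))]
```

The adaptive weights suppress interference arriving from directions other than the scanned one, which is the property exploited against skin and nipple reflections.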

  3. An adaptive semi-Lagrangian advection model for transport of volcanic emissions in the atmosphere

    NASA Astrophysics Data System (ADS)

    Gerwing, Elena; Hort, Matthias; Behrens, Jörn; Langmann, Bärbel

    2018-06-01

    The dispersion of volcanic emissions in the Earth's atmosphere is of interest for climate research, air traffic control and human wellbeing. Current volcanic emission dispersion models rely on fixed-grid structures that are often unable to resolve the finely filamented structure of volcanic emissions being transported in the atmosphere. Here we extend an existing adaptive semi-Lagrangian advection model for volcanic emissions to include the sedimentation of volcanic ash. The advection of volcanic emissions is driven by a precalculated wind field. For evaluation of the model, the explosive eruption of Mount Pinatubo in June 1991 is chosen, which was one of the largest eruptions of the 20th century. We compare our simulations of the climactic eruption on 15 June 1991 to satellite data of the Pinatubo ash cloud and evaluate different sets of input parameters. We could reproduce the general advection of the Pinatubo ash cloud and, owing to the adaptive mesh, simulations could be performed at a high local resolution while minimizing computational cost. Differences from the observed ash cloud are attributed to uncertainties in the input parameters and to the course of Typhoon Yunya, which is probably not completely resolved in the wind data used to drive the model. The best results were achieved for simulations with multiple ash particle sizes.
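A semi-Lagrangian advection step traces grid points backward along the wind and interpolates the tracer field at the departure points. A one-dimensional periodic sketch with a constant wind (the paper's model is multi-dimensional, adaptive, and driven by precalculated wind fields; all numbers here are illustrative):

```python
import numpy as np

# One semi-Lagrangian step for a tracer ("ash") on a periodic 1D grid.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = 0.3              # constant wind, domain fractions per unit time
dt = 0.01
q = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial tracer concentration

def semi_lagrangian_step(q, u, dt):
    # Trace departure points backward along the wind, then interpolate
    # the tracer there (periodic linear interpolation).
    x_dep = (x - u * dt) % 1.0
    i = np.floor(x_dep / dx).astype(int) % n
    w = (x_dep - i * dx) / dx
    return (1.0 - w) * q[i] + w * q[(i + 1) % n]

q1 = semi_lagrangian_step(q, u, dt)
```

Because the update only evaluates the field at departure points, the scheme is unconditionally stable with respect to the CFL number, which is one reason semi-Lagrangian transport suits long-range plume simulations.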

  4. Exploring the impacts of physics and resolution on aqua-planet simulations from a nonhydrostatic global variable-resolution modeling framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chun; Leung, L. Ruby; Park, Sang-Hun

    Advances in computing resources are gradually moving regional and global numerical forecasting simulations towards sub-10 km resolution, but global high resolution climate simulations remain a challenge. The non-hydrostatic Model for Prediction Across Scales (MPAS) provides a global framework to achieve very high resolution using regional mesh refinement. Previous studies using the hydrostatic version of MPAS (H-MPAS) with the physics parameterizations of Community Atmosphere Model version 4 (CAM4) found notable resolution dependent behaviors. This study revisits the resolution sensitivity using the non-hydrostatic version of MPAS (NH-MPAS) with both CAM4 and CAM5 physics. A series of aqua-planet simulations at global quasi-uniform resolutions ranging from 240 km to 30 km and global variable resolution simulations with a regional mesh refinement of 30 km resolution over the tropics are analyzed, with a primary focus on the distinct characteristics of NH-MPAS in simulating precipitation, clouds, and large-scale circulation features compared to H-MPAS-CAM4. The resolution sensitivity of total precipitation and column integrated moisture in NH-MPAS is smaller than that in H-MPAS-CAM4. This contributes importantly to the reduced resolution sensitivity of large-scale circulation features such as the inter-tropical convergence zone and Hadley circulation in NH-MPAS compared to H-MPAS. In addition, NH-MPAS shows almost no resolution sensitivity in the simulated westerly jet, in contrast to the obvious poleward shift in H-MPAS with increasing resolution, which is partly explained by differences in the hyperdiffusion coefficients used in the two models that influence wave activity. 
With the reduced resolution sensitivity, simulations in the refined region of the NH-MPAS global variable resolution configuration exhibit zonally symmetric features that are more comparable to the quasi-uniform high-resolution simulations than those from H-MPAS, which displays zonal asymmetry in simulations inside the refined region. Overall, NH-MPAS with CAM5 physics shows less resolution sensitivity compared to CAM4. These results provide a reference for future studies to further explore the use of NH-MPAS for high-resolution climate simulations in idealized and realistic configurations.

  5. The FALCON Concept: Multi-Object Spectroscopy Combined with MCAO in Near-IR

    NASA Astrophysics Data System (ADS)

    Hammer, François; Sayède, Frédéric; Gendron, Eric; Fusco, Thierry; Burgarella, Denis; Cayatte, Véronique; Conan, Jean-Marc; Courbin, Frédéric; Flores, Hector; Guinouard, Isabelle; Jocou, Laurent; Lançon, Ariane; Monnet, Guy; Mouhcine, Mustapha; Rigaud, François; Rouan, Daniel; Rousset, Gérard; Buat, Véronique; Zamkotsian, Frédéric

    A large fraction of the present-day stellar mass was formed between z=0.5 and z˜ 3, and our understanding of the formation mechanisms at work at these epochs requires both high spatial and high spectral resolution: one shall simultaneously obtain images of objects with typical sizes as small as 1-2 kpc (˜ 0".1), while achieving 20-50 km/s (R≥ 5000) spectral resolution. In addition, the redshift range to be considered implies that the most important spectral features are redshifted into the near-infrared. The obvious instrumental solution to adopt in order to tackle the science goal is therefore a combination of a multi-object 3D spectrograph with multi-conjugate adaptive optics in large fields. A very promising way to achieve such a technically challenging goal is to relax the conditions of the traditional full adaptive optics correction. A partial, but still competitive, correction shall be preferred, over a much wider field of view. This can be done by estimating the turbulent volume from sets of natural guide stars, by optimizing the correction for several discrete small areas of a few arcsec² selected in a large field (Nasmyth field of 25 arcmin), and by correcting up to the 6th and, eventually, up to the 60th Zernike mode. Simulations on real extragalactic fields show that for most sources (> 80%), the recovered resolution could reach 0".15-0".25 in the J and H bands. Detection of point-like objects is improved by factors of 3 to ≥10 compared with an instrument without adaptive correction. The proposed instrument concept, FALCON, is equipped with deployable mini-integral field units (IFUs), achieving spectral resolutions between R=5000 and 20000. Its multiplex capability, combined with high spatial and spectral resolution characteristics, is a natural ground-based complement to the next generation of space telescopes. Galaxy formation in the early Universe is certainly a main science driver. 
We describe here how FALCON shall allow us to answer puzzling questions in this area, although the science cases naturally accessible to the instrument concept make it of interest for most areas of astrophysics.

  6. Study of compressible turbulent flows in supersonic environment by large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Genin, Franklin

    The numerical resolution of turbulent flows in high-speed environments is of fundamental importance but remains a very challenging problem. First, the capture of strong discontinuities, typical of high-speed flows, requires the use of shock-capturing schemes, which are not adapted to the resolution of turbulent structures due to their intrinsic dissipation. On the other hand, low-dissipation schemes are unable to resolve shock fronts and other sharp gradients without creating high-amplitude numerical oscillations. Second, the nature of turbulence in high-speed flows differs from its incompressible behavior, and, in the context of Large-Eddy Simulation, the subgrid closure must be adapted to model the effects of compressibility and shock waves on turbulent flows. The developments described in this thesis are two-fold. First, a state-of-the-art closure approach for LES is extended to model subgrid turbulence in compressible flows. The energy transfers due to compressible turbulence and the diffusion of turbulent kinetic energy by pressure fluctuations are assessed and integrated into the Localized Dynamic ksgs model. Second, a hybrid numerical scheme is developed for the resolution of the LES equations and of the model transport equation, which combines a central scheme for turbulence resolution with a shock-capturing method. A smoothness parameter is defined and used to switch from the base smooth solver to the upwind scheme in regions of discontinuities. It is shown that the developed hybrid methodology permits a capture of shock/turbulence interactions in direct simulations that agrees well with other reference simulations, and that the LES methodology effectively reproduces the turbulence evolution and physical phenomena involved in the interaction. This numerical approach is then employed to study a problem of practical importance in high-speed mixing. 
The interaction of two shock waves with a high-speed turbulent shear layer as a mixing augmentation technique is considered. It is shown that the levels of turbulence are increased through the interaction and that mixing is significantly improved in this flow configuration. However, the increased mixing is found to be localized close to the point of shock impact, and the statistical levels of turbulence relax to their undisturbed values a short distance downstream of the interaction. The present developments are finally applied to a practical configuration relevant to scramjet injection. The normal injection of a sonic jet into a supersonic crossflow is considered numerically and compared to the results of an experimental study. A fair agreement in the statistics of the mean and fluctuating velocity fields is obtained. Furthermore, some of the instantaneous flow structures observed in experimental visualizations are identified in the present simulation. The dynamics of the interaction are examined for the reference case, based on the experimental study, as well as for a case of higher freestream Mach number and a case of higher momentum ratio. The classical instantaneous vortical structures are identified, and their generation mechanisms, specific to supersonic flow, are highlighted. Furthermore, two vortical structures, recently revealed in low-speed jets in crossflow but never before documented for high-speed flows, are identified during the flow evolution.
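The hybrid-scheme idea described in this thesis (a smoothness sensor selecting between a low-dissipation central flux and a dissipative shock-capturing flux) can be sketched in its simplest form for 1D linear advection. The sensor, threshold, and fluxes below are a toy illustration, far simpler than the thesis's actual schemes:

```python
import numpy as np

# Toy hybrid flux for u_t + a u_x = 0 on a periodic grid: central flux in
# smooth regions, upwind flux where a jump sensor flags a discontinuity.
a = 1.0
n = 100
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)  # step profile: one interior jump

def hybrid_flux(u, a, threshold=0.5):
    left = u
    right = np.roll(u, -1)
    central = 0.5 * a * (left + right)          # low-dissipation central flux
    upwind = a * np.where(a > 0, left, right)   # dissipative upwind flux
    # Smoothness sensor: normalized jump across each cell interface.
    jump = np.abs(right - left) / (np.abs(left) + np.abs(right) + 1e-12)
    return np.where(jump > threshold, upwind, central)

f = hybrid_flux(u, a)
```

At the two step interfaces the sensor fires and the upwind flux is used (e.g. the interface between u=1 and u=0 gets the upwind value 1.0 rather than the central value 0.5); everywhere else the non-dissipative central flux is kept, which is the design goal of such hybrids.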

  7. StePS: Stereographically Projected Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-05-01

    StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modify the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and a naive O(N²) force calculation; it arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
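The O(N²) direct summation at the heart of this approach is simple enough to vectorize in a few lines. A flat-space, softened sketch of the pairwise force loop (StePS additionally compactifies space via stereographic projection; units, particle counts, and the softening length below are illustrative):

```python
import numpy as np

# Naive O(N^2) direct-summation gravitational accelerations with Plummer-style
# softening. Illustrative flat-space version, not the StePS implementation.
rng = np.random.default_rng(2)
n = 64
pos = rng.uniform(-1.0, 1.0, size=(n, 3))
mass = np.full(n, 1.0 / n)
G, eps = 1.0, 0.05

def accelerations(pos, mass):
    # Pairwise separations r_ij = x_j - x_i, shape (n, n, 3).
    dx = pos[None, :, :] - pos[:, None, :]
    r2 = (dx ** 2).sum(axis=-1) + eps ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)               # no self-force
    return G * (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

acc = accelerations(pos, mass)
```

Because every pair is evaluated independently, this kernel maps directly onto GPU threads, which is why the O(N²) approach becomes competitive on graphics cards despite its naive complexity.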

  8. Structures and Intermittency in a Passive Scalar Model

    NASA Astrophysics Data System (ADS)

    Vergassola, M.; Mazzino, A.

    1997-09-01

    Perturbative expansions for intermittency scaling exponents in the Kraichnan passive scalar model [Phys. Rev. Lett. 72, 1016 (1994)] are investigated. A one-dimensional compressible model is considered for this purpose. High resolution Monte Carlo simulations using an Ito approach adapted to an advecting velocity field with a very short correlation time are performed and lead to clean scaling behavior for passive scalar structure functions. Perturbative predictions for the scaling exponents around the Gaussian limit of the model are derived as in the Kraichnan model. Their comparison with the simulations indicates that the scale-invariant perturbative scheme correctly captures the inertial range intermittency corrections associated with the intense localized structures observed in the dynamics.
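The Ito integration idea invoked above rests on treating the short-correlated velocity as white noise over a time step, so tracer displacements follow an Euler-Maruyama update with sqrt(dt) scaling. A generic particle-tracking sketch, with made-up parameters rather than the paper's setup:

```python
import numpy as np

# Euler-Maruyama (Ito) update for tracer particles advected by a velocity
# field with very short correlation time: per step the velocity acts like
# white noise, giving diffusive displacement statistics.
rng = np.random.default_rng(3)
n_particles, n_steps = 5000, 100
dt, D = 1e-3, 0.5        # D plays the role of an effective eddy diffusivity
x = np.zeros(n_particles)

for _ in range(n_steps):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

# For white-noise advection the particle variance grows diffusively: <x^2> = 2 D t.
var_expected = 2.0 * D * n_steps * dt
```

The diffusive growth of the variance is the check that the sqrt(dt) Ito scaling has been implemented consistently; a naive v*dt update would give the wrong statistics in the short-correlation limit.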

  9. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI. The load balancing strategy has to be defined carefully in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
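The defining property of octree AMR, cells splitting into eight children wherever the local field demands it, can be shown with a toy refinement pass. The density function, threshold, and depth below are hypothetical, chosen only to produce an adaptive leaf structure:

```python
# Toy octree refinement in the spirit of RAMSES-style AMR: split a cell into
# 8 children wherever a (hypothetical) density field exceeds a threshold, so
# resolution follows the local problem complexity.

def density(x, y, z):
    # Hypothetical density peaked at the domain centre.
    return 1.0 / (1e-3 + (x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2)

def refine(cell, max_level, threshold):
    """cell = (x, y, z, size, level), with (x, y, z) the low corner."""
    x, y, z, size, level = cell
    cx, cy, cz = x + size / 2, y + size / 2, z + size / 2
    if level >= max_level or density(cx, cy, cz) < threshold:
        return [cell]                       # leaf cell
    half = size / 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                leaves += refine((x + dx, y + dy, z + dz, half, level + 1),
                                 max_level, threshold)
    return leaves

leaves = refine((0.0, 0.0, 0.0, 1.0, 0), max_level=4, threshold=5.0)
levels = [c[4] for c in leaves]
```

The resulting leaves tile the unit cube exactly while their levels vary with distance from the density peak; visualization tools such as PYMSES must traverse precisely this kind of non-uniform leaf set rather than a Cartesian array.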

  10. Adaptive optics with pupil tracking for high resolution retinal imaging

    PubMed Central

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-01-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

  11. Adaptive optics with pupil tracking for high resolution retinal imaging.

    PubMed

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  12. High-resolution altitude profiles of the atmospheric turbulence with PML at the Sutherland Observatory

    NASA Astrophysics Data System (ADS)

    Catala, L.; Ziad, A.; Fanteï-Caujolle, Y.; Crawford, S. M.; Buckley, D. A. H.; Borgnino, J.; Blary, F.; Nickola, M.; Pickering, T.

    2017-05-01

    With the prospect of the next generation of ground-based telescopes, the extremely large telescopes, increasingly complex and demanding adaptive optics systems are needed. This is to compensate for image distortion caused by atmospheric turbulence and to take full advantage of mirrors with diameters of 30-40 m. This requires a more precise characterization of the turbulence. The Profiler of the Moon Limb (PML) was developed within this context. The PML aims to provide high-resolution altitude profiles of the turbulence using differential measurements of the Moon limb position to calculate the transverse spatio-angular covariance of the angle-of-arrival fluctuations. The covariance of differential image motion for different separation angles is sensitive to the altitude distribution of the seeing. The use of the continuous Moon limb provides a large number of separation angles, allowing for the high altitude-resolution of the profiles. The method is presented and tested with simulated data. Moreover, a PML instrument was deployed at the Sutherland Observatory in South Africa in 2011 August. We present here the results of this measurement campaign.
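The measurable at the core of the PML method is the covariance of differential image motion as a function of separation angle along the limb. A sketch with synthetic angle-of-arrival (AoA) series standing in for real limb measurements (the correlation model and all numbers are invented for illustration; the real analysis inverts these covariances for a turbulence profile):

```python
import numpy as np

# Synthetic AoA fluctuation time series at points along the limb, with a
# common component whose correlation decays with angular separation.
rng = np.random.default_rng(4)
n_points, n_frames = 60, 500
common = rng.standard_normal(n_frames)
aoa = np.array([0.8 ** k * common + 0.2 * rng.standard_normal(n_frames)
                for k in range(n_points)])

def diff_motion_variance(aoa, sep):
    """Mean variance of AoA differences for point pairs separated by `sep`."""
    diffs = aoa[sep:] - aoa[:-sep]          # differential motion, all pairs
    return float(np.mean(np.var(diffs, axis=1)))

cov_by_sep = [diff_motion_variance(aoa, s) for s in range(1, 20)]
```

Differential motion removes common tip/tilt (e.g. telescope shake), and its growth with separation angle encodes how the seeing is distributed with altitude, which is what the continuous limb's many separation angles resolve.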

  13. Compactified cosmological simulations of the infinite universe

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-06-01

    We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range; it is balanced in following a comparable number of high and low k modes, and its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically Projected cosmological Simulations. It uses stereographic projection for space compactification and a naive O(N²) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.

  14. A massively parallel adaptive scheme for melt migration in geodynamics computations

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo; Grove, Ryan

    2016-04-01

    Melt generation and migration are important processes for the evolution of the Earth's interior and impact the global convection of the mantle. While they have been the subject of numerous investigations, the typical time and length-scales of melt transport are vastly different from global mantle convection, which determines where melt is generated. This makes it difficult to study mantle convection and melt migration in a unified framework. In addition, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. We describe our extension of the community mantle convection code ASPECT that adds equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects, and it incorporates the individual compressibilities of the solid and the fluid phase. For this, we derive an accurate and stable Finite Element scheme that can be combined with adaptive mesh refinement. This is particularly advantageous for this type of problem, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high resolution, 3d, compressible, global mantle convection simulations coupled with melt migration. Furthermore, scalable iterative linear solvers are required to solve the large linear systems arising from the discretized system. 
Finally, we present benchmarks and scaling tests of our solver up to tens of thousands of cores, show the effectiveness of adaptive mesh refinement when applied to melt migration and compare the compressible and incompressible formulations. We then apply our software to large-scale 3d simulations of melting and melt transport in mantle plumes interacting with the lithosphere. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. The presented implementation is available online under an Open Source license together with extensive documentation.
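    The refinement criterion described above (refine where melt is present or viscosity gradients are high) can be illustrated with a toy 1-D flagging routine; the function name and thresholds are invented for illustration and are not ASPECT's API.

```python
def refine_flags(melt_fraction, viscosity, melt_tol=1e-3, jump_tol=10.0):
    """Flag cells for mesh refinement where melt is present or the
    viscosity contrast with a neighbouring cell is large.

    Toy 1-D sketch of a melt-aware adaptive refinement criterion;
    thresholds are illustrative assumptions.
    """
    n = len(melt_fraction)
    flags = [False] * n
    for i in range(n):
        if melt_fraction[i] > melt_tol:
            flags[i] = True              # refine wherever melt is present
            continue
        for j in (i - 1, i + 1):         # check viscosity contrast with neighbours
            if 0 <= j < n:
                a, b = viscosity[i], viscosity[j]
                if max(a, b) / min(a, b) > jump_tol:
                    flags[i] = True
    return flags
```

    In a real AMR code the flags would feed a refine/coarsen pass on the mesh hierarchy; here they only mark where extra resolution would be spent.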

  15. Climate change indices for Greenland applied directly for other arctic regions - Enhanced and utilized climate information from one high resolution RCM downscaling for Greenland evaluated through pattern scaling and CMIP5

    NASA Astrophysics Data System (ADS)

    Olesen, M.; Christensen, J. H.; Boberg, F.

    2016-12-01

    Climate change affects Greenlandic society both advantageously and disadvantageously. Changes in temperature and precipitation patterns may result in changes in a number of derived, society-related climate indices, such as the length of the growing season or the number of annual dry days, or a combination of the two - indices of substantial importance to society in a climate adaptation context. Detailed climate indices require high resolution downscaling. We have carried out a very high resolution (5 km) simulation with the regional climate model HIRHAM5, forced by the global model EC-Earth. Evaluation of RCM output is usually done with an ensemble of downscaled output from multiple RCMs and GCMs. Here we have introduced and tested a new technique: a translation of the robustness of an ensemble of GCMs from CMIP5 into the specific index from the HIRHAM5 downscaling, through a correlation between absolute temperatures and the corresponding index values from the HIRHAM5 output. The procedure is conducted in two steps: First, the correlation between temperature and a given index in the HIRHAM5 simulation is identified by a best fit to a second-order polynomial. Second, the standard deviation from the CMIP5 simulations is introduced to obtain the corresponding standard deviation of the index from the HIRHAM5 run. The change of specific climate indices due to global warming can then be evaluated elsewhere from the change in absolute temperature. Results based on selected indices, with focus on the future climate in Greenland calculated for the RCP4.5 and RCP8.5 scenarios, will be presented.
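    The two-step procedure can be sketched as follows with synthetic data: step one fits a second-order polynomial between temperature and a climate index from the high-resolution run; step two maps a CMIP5 temperature mean and spread through that fit. Function names and sample values are invented for illustration.

```python
import numpy as np
from numpy.polynomial import Polynomial

def fit_index_vs_temperature(temp, index):
    """Step 1: best-fit second-order polynomial index(T)."""
    return Polynomial.fit(temp, index, deg=2)

def index_spread(poly, t_mean, t_std):
    """Step 2: translate a CMIP5 temperature mean +/- one standard
    deviation into the corresponding range of the index."""
    lo, hi = poly(t_mean - t_std), poly(t_mean + t_std)
    return min(lo, hi), max(lo, hi)
```

    With the fit in hand, any temperature change projected elsewhere can be converted into an index change, which is the pattern-scaling idea of the abstract.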

  16. Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities

    NASA Astrophysics Data System (ADS)

    Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.

    2015-12-01

    In this rapidly urbanizing world, the unprecedented rate of population growth is not only mirrored by increasing demand for energy, food, water, and other natural resources, but has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. The primary limitations of past attempts have been the availability of input data, useful "energy and behavior focused" models, validation data, and adequate computational capability to allow sufficient understanding of the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap roads network and integrated high resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScanTM population, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. A consistent data and simulation platform allows quick adaptation to various geographic areas, as has been demonstrated for multiple cities across the world. 
We are combining the strengths of geospatial data sciences, high performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation framework to assist decision makers at all levels - local, state, regional, and federal. Using Cleveland, Tennessee as an example, in this presentation, we illustrate how emerging cities could easily assess future land use scenario driven impacts on energy and environment utilizing such a capability.

  17. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this negatively affects the accuracy of model results and restricts the physical realization of the problem. It is therefore imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy is all the more important for the Great Salt Lake river basin, which is relatively large (~89000 sq. km) and complex in terms of hydrologic and geomorphic conditions. The types and time scales of hydrologic processes which are dominant in different parts of the basin also differ: part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradients along the Wasatch front. 
Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework allows multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations, and the related calibration of these models, over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors: the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, the model data structure, and parallel numerical algorithms to obtain high performance.

  18. The WASCAL high-resolution regional climate simulation ensemble for West Africa: concept, dissemination and assessment

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Dieng, Diarra; Smiatek, Gerhard; Olusegun, Christiana; Klein, Cornelia; Hamann, Ilse; Salack, Seyni; Bliefernicht, Jan; Kunstmann, Harald

    2018-04-01

    Climate change and constant population growth pose severe challenges to 21st century rural Africa. Within the framework of the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL), an ensemble of high-resolution regional climate change scenarios for the greater West African region is provided to support the development of effective adaptation and mitigation measures. This contribution presents the overall concept of the WASCAL regional climate simulations and their experimental design, and provides information on the format and dissemination of the available data. All data are made available to the public at the CERA long-term archive of the German Climate Computing Center (DKRZ) with a subset available at the PANGAEA Data Publisher for Earth & Environmental Science portal (https://doi.pangaea.de/10.1594/PANGAEA.880512). A brief assessment of the data is presented to provide guidance for future users. Regional climate projections are generated at high (12 km) and intermediate (60 km) resolution using the Weather Research and Forecasting Model (WRF). The simulations cover the validation period 1980-2010 and the two future periods 2020-2050 and 2070-2100. A brief comparison to observations and two climate change scenarios from the Coordinated Regional Downscaling Experiment (CORDEX) initiative is presented to provide guidance on the data set to future users and to assess their climate change signal. Under the RCP4.5 (Representative Concentration Pathway 4.5) scenario, the results suggest an increase in temperature by 1.5 °C at the coast of Guinea and by up to 3 °C in the northern Sahel by the end of the 21st century, in line with existing climate projections for the region. They also project an increase in precipitation by up to 300 mm per year along the coast of Guinea, by up to 150 mm per year in the Soudano region adjacent in the north and almost no change in precipitation in the Sahel. 
This stands in contrast to existing regional climate projections, which predict increasingly drier conditions. The high spatial and temporal resolution of the data, the extensive list of output variables, the large computational domain and the long time periods covered make this data set a unique resource for follow-up analyses and impact modelling studies over the greater West African region. The comprehensive documentation and standardisation of the data facilitate and encourage their use within and outside of the WASCAL community.

  19. Obscuring and Feeding Supermassive Black Holes with Evolving Nuclear Star Clusters

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Burkert, A.; Krause, M.; Camenzind, M.; Meisenheimer, K.; Davies, R. I.

    2010-05-01

    Recently, high-resolution observations made with the help of the near-infrared adaptive optics integral field spectrograph SINFONI at the VLT proved the existence of massive and young nuclear star clusters in the centers of a sample of Seyfert galaxies. With the help of high-resolution hydrodynamical simulations with the pluto code, we follow the evolution of such clusters, especially focusing on mass and energy feedback from young stars. This leads to a filamentary inflow of gas on large scales (tens of parsecs), whereas a turbulent and very dense disk builds up on the parsec scale. Here we concentrate on the long-term evolution of the nuclear disk in NGC 1068 with the help of an effective viscous disk model, using the mass input from the large-scale simulations and accounting for star formation in the disk. This two-stage modeling enables us to connect the tens-of-parsecs scale region (observable with SINFONI) with the parsec-scale environment (MIDI observations). At the current age of the nuclear star cluster, our simulations predict disk sizes of order 0.8 to 0.9 pc, gas masses of order 10^6 M⊙, and mass transfer rates through the inner boundary of order 0.025 M⊙ yr^-1, in good agreement with values derived from observations.

  20. Adaptive tracking of a time-varying field with a quantum sensor

    NASA Astrophysics Data System (ADS)

    Bonato, Cristian; Berry, Dominic W.

    2017-05-01

    Sensors based on single spins can enable magnetic-field detection with very high sensitivity and spatial resolution. Previous work has concentrated on sensing of a constant magnetic field or a periodic signal. Here, we instead investigate the problem of estimating a field with nonperiodic variation described by a Wiener process. We propose and study, by numerical simulations, an adaptive tracking protocol based on Bayesian estimation. The tracking protocol updates the probability distribution for the magnetic field based on measurement outcomes and adapts the choice of sensing time and phase in real time. By taking the statistical properties of the signal into account, our protocol strongly reduces the required measurement time. This leads to a reduction of the error in the estimation of a time-varying signal by up to a factor of four compared with protocols that do not take this information into account.
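    A minimal Gaussian sketch of tracking a Wiener-process signal is a one-dimensional Kalman-style filter: the predict step inflates the variance by the diffusion accumulated per step, and each measurement shrinks it again. The actual protocol adapts Ramsey sensing times and phases, which this stand-in does not model; all names and parameter values are illustrative.

```python
def kalman_track(measurements, meas_var, diffusion_var, mu0=0.0, var0=1.0):
    """Track a Wiener-process signal from noisy scalar readings.

    Simplified Gaussian (Kalman-filter) caricature of Bayesian
    tracking: predict (diffusion broadens the prior), then update
    (measurement narrows the posterior).
    """
    mu, var = mu0, var0
    estimates = []
    for z in measurements:
        var += diffusion_var               # predict: Wiener diffusion broadens the prior
        gain = var / (var + meas_var)      # update: Gaussian (Kalman) gain
        mu += gain * (z - mu)
        var *= 1.0 - gain
        estimates.append(mu)
    return estimates
```

    Exploiting the known diffusion statistics is what lets the tracker average out measurement noise without lagging behind the drifting signal.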

  1. DSP-based adaptive backstepping using the tracking errors for high-performance sensorless speed control of induction motor drive.

    PubMed

    Zaafouri, Abderrahmen; Regaya, Chiheb Ben; Azza, Hechmi Ben; Châari, Abdelkader

    2016-01-01

    This paper presents a modified structure for the backstepping nonlinear control of the induction motor (IM), fitted with an adaptive backstepping speed observer. The control design is based on the backstepping technique, complemented by the introduction of integral tracking-error action to improve its robustness. Unlike other research performed on backstepping control with integral action, the control law developed in this paper does not increase the number of system states, so as not to increase the complexity of solving the differential equations. The digital simulation and experimental results show the effectiveness of the proposed control compared to the conventional PI control. The results analysis shows the characteristic robustness of the adaptive control to load disturbances, speed variation and low speed. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
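    The integral tracking-error action can be illustrated on a generic first-order plant. This PI-style sketch only conveys the idea of augmenting the tracking error with its integral; it is not the induction-motor backstepping law of the paper, and the gains and plant are invented.

```python
def pi_backstep_step(x, x_ref, ei, dt, k1=2.0, k2=1.0):
    """One step of a tracking law with integral error action.

    Generic first-order stand-in: the control is a combination of the
    tracking error and its running integral, so constant disturbances
    are driven out without adding plant states.
    """
    e = x_ref - x
    ei += e * dt                 # accumulate the integral of the tracking error
    u = k1 * e + k2 * ei         # proportional + integral action
    return u, ei
```

    On an integrator plant x' = u with these gains, the closed loop is critically damped, so the state settles on the reference without overshoot.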

  2. Adaptive Sparse Representation for Source Localization with Gain/Phase Errors

    PubMed Central

    Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin

    2011-01-01

    Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate coherent signal sources, because the spectrum estimation is based on an optimization technique, such as L1 norm minimization, rather than on subspace orthogonality. However, in an actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold, so the estimation performance of SR is degraded. In this paper, an adaptive SR algorithm is proposed to improve robustness with respect to the gain/phase error: the overcomplete basis is dynamically adjusted using multiple snapshots, and the sparse solution is adaptively acquired to match the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875

  3. Multiconjugate adaptive optics applied to an anatomically accurate human eye model.

    PubMed

    Bedggood, P A; Ashman, R; Smith, G; Metha, A B

    2006-09-04

    Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.

  4. Adaptive mesh refinement versus subgrid friction interpolation in simulations of Antarctic ice dynamics

    DOE PAGES

    Cornford, S. L.; Martin, D. F.; Lee, V.; ...

    2016-05-13

    At least in conventional hydrostatic ice-sheet models, the numerical error associated with grounding line dynamics can be reduced by modifications to the discretization scheme. These involve altering the integration formulae for the basal traction and/or driving stress close to the grounding line and exhibit lower – if still first-order – error in the MISMIP3d experiments. MISMIP3d may not represent the variety of real ice streams, in that it lacks strong lateral stresses, and imposes a large basal traction at the grounding line. We study resolution sensitivity in the context of extreme forcing simulations of the entire Antarctic ice sheet, using the BISICLES adaptive mesh ice-sheet model with two schemes: the original treatment, and a scheme which modifies the discretization of the basal traction. The second scheme does indeed improve accuracy – by around a factor of two – for a given mesh spacing, but ≲1 km resolution is still necessary. For example, in coarser resolution simulations Thwaites Glacier retreats so slowly that other ice streams divert its trunk. In contrast, with ≲1 km meshes, the same glacier retreats far more quickly and triggers the final phase of West Antarctic collapse a century before any such diversion can take place.

  5. Climate risk index for Italy.

    PubMed

    Mysiak, Jaroslav; Torresan, Silvia; Bosello, Francesco; Mistry, Malcolm; Amadio, Mattia; Marzi, Sepehr; Furlan, Elisa; Sperotto, Anna

    2018-06-13

    We describe a climate risk index that has been developed to inform national climate adaptation planning in Italy and that is further elaborated in this paper. The index supports national authorities in designing adaptation policies and plans, guides the initial problem formulation phase, and identifies administrative areas with higher propensity to being adversely affected by climate change. The index combines (i) climate change-amplified hazards; (ii) high-resolution indicators of exposure of chosen economic, social, natural and built- or manufactured capital (MC) assets and (iii) vulnerability, which comprises both present sensitivity to climate-induced hazards and adaptive capacity. We use standardized anomalies of selected extreme climate indices derived from high-resolution regional climate model simulations of the EURO-CORDEX initiative as proxies of climate change-altered weather and climate-related hazards. The exposure and sensitivity assessment is based on indicators of manufactured, natural, social and economic capital assets exposed to and adversely affected by climate-related hazards. The MC refers to material goods or fixed assets which support the production process (e.g. industrial machines and buildings); Natural Capital comprises natural resources and processes (renewable and non-renewable) producing goods and services for well-being; Social Capital (SC) addresses factors at the individual (people's health, knowledge, skills) and collective (institutional) level (e.g. families, communities, organizations and schools); and Economic Capital (EC) includes owned and traded goods and services. The results of the climate risk analysis are used to rank the subnational administrative and statistical units according to the climate risk challenges, and possibly for financial resource allocation for climate adaptation. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'. © 2018 The Authors.
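    The building blocks named above — standardized anomalies of hazard indices and an aggregation of the hazard, exposure and vulnerability pillars — might be sketched as follows. The equal-weight linear aggregation is an illustrative assumption, not the published scheme, and all names are hypothetical.

```python
import statistics

def standardized_anomaly(series, value):
    """Z-score of a value against a reference period of the same index."""
    mu = statistics.fmean(series)
    sd = statistics.stdev(series)
    return (value - mu) / sd

def risk_score(hazard, exposure, vulnerability, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine the three pillars into one score (illustrative equal
    weighting; the paper's actual aggregation may differ)."""
    return sum(w * c for w, c in zip(weights, (hazard, exposure, vulnerability)))
```

    Scores computed per administrative unit can then be sorted to produce the ranking the abstract describes.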

  6. Climate risk index for Italy

    NASA Astrophysics Data System (ADS)

    Mysiak, Jaroslav; Torresan, Silvia; Bosello, Francesco; Mistry, Malcolm; Amadio, Mattia; Marzi, Sepehr; Furlan, Elisa; Sperotto, Anna

    2018-06-01

    We describe a climate risk index that has been developed to inform national climate adaptation planning in Italy and that is further elaborated in this paper. The index supports national authorities in designing adaptation policies and plans, guides the initial problem formulation phase, and identifies administrative areas with higher propensity to being adversely affected by climate change. The index combines (i) climate change-amplified hazards; (ii) high-resolution indicators of exposure of chosen economic, social, natural and built- or manufactured capital (MC) assets and (iii) vulnerability, which comprises both present sensitivity to climate-induced hazards and adaptive capacity. We use standardized anomalies of selected extreme climate indices derived from high-resolution regional climate model simulations of the EURO-CORDEX initiative as proxies of climate change-altered weather and climate-related hazards. The exposure and sensitivity assessment is based on indicators of manufactured, natural, social and economic capital assets exposed to and adversely affected by climate-related hazards. The MC refers to material goods or fixed assets which support the production process (e.g. industrial machines and buildings); Natural Capital comprises natural resources and processes (renewable and non-renewable) producing goods and services for well-being; Social Capital (SC) addresses factors at the individual (people's health, knowledge, skills) and collective (institutional) level (e.g. families, communities, organizations and schools); and Economic Capital (EC) includes owned and traded goods and services. The results of the climate risk analysis are used to rank the subnational administrative and statistical units according to the climate risk challenges, and possibly for financial resource allocation for climate adaptation. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'.

  7. Climate risk index for Italy

    PubMed Central

    Torresan, Silvia; Bosello, Francesco; Mistry, Malcolm; Amadio, Mattia; Marzi, Sepehr; Furlan, Elisa; Sperotto, Anna

    2018-01-01

    We describe a climate risk index that has been developed to inform national climate adaptation planning in Italy and that is further elaborated in this paper. The index supports national authorities in designing adaptation policies and plans, guides the initial problem formulation phase, and identifies administrative areas with higher propensity to being adversely affected by climate change. The index combines (i) climate change-amplified hazards; (ii) high-resolution indicators of exposure of chosen economic, social, natural and built- or manufactured capital (MC) assets and (iii) vulnerability, which comprises both present sensitivity to climate-induced hazards and adaptive capacity. We use standardized anomalies of selected extreme climate indices derived from high-resolution regional climate model simulations of the EURO-CORDEX initiative as proxies of climate change-altered weather and climate-related hazards. The exposure and sensitivity assessment is based on indicators of manufactured, natural, social and economic capital assets exposed to and adversely affected by climate-related hazards. The MC refers to material goods or fixed assets which support the production process (e.g. industrial machines and buildings); Natural Capital comprises natural resources and processes (renewable and non-renewable) producing goods and services for well-being; Social Capital (SC) addresses factors at the individual (people's health, knowledge, skills) and collective (institutional) level (e.g. families, communities, organizations and schools); and Economic Capital (EC) includes owned and traded goods and services. The results of the climate risk analysis are used to rank the subnational administrative and statistical units according to the climate risk challenges, and possibly for financial resource allocation for climate adaptation. This article is part of the theme issue ‘Advances in risk assessment for climate change adaptation policy’. PMID:29712797

  8. Impact of high resolution land surface initialization in Indian summer monsoon simulation using a regional climate model

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, C. K.; Rajeevan, M.; Rao, S. Vijaya Bhaskara

    2016-06-01

    The direct impact of high resolution land surface initialization on the forecast bias of a regional climate model over the Indian summer monsoon region in recent years is investigated. Two sets of regional climate model simulations are performed: one with coarse resolution land surface initial conditions, and a second using high resolution land surface data for the initial condition. The results show that all monsoon years respond differently to the high resolution land surface initialization. The drought monsoon year 2009 and extended break periods were more sensitive to the high resolution land surface initialization. These results suggest that drought monsoon year predictions can be improved with high resolution land surface initialization. The results also show that there are differences in the response to the land surface initialization within the monsoon season. Case studies of a heat wave and a monsoon depression simulation show that the model biases were also improved with high resolution land surface initialization. These results show the need for a better land surface initialization strategy in high resolution regional models for monsoon forecasting.

  9. Evaluation of the Operational Multi-scale Environment model with Grid Adaptivity (OMEGA) for use in Wind Energy Applications in the Great Basin of Nevada

    NASA Astrophysics Data System (ADS)

    King, Kristien C.

    In order to further assess the wind energy potential for Nevada, the accuracy of a computational meteorological model, the Operational Multi-scale Environment model with Grid Adaptivity (OMEGA), was evaluated by comparing simulation results with data collected from a wind monitoring tower near Tonopah, NV. The state of Nevada is characterized by high mountains and low-lying valleys; therefore, in order to determine the wind potential for the state, meteorological models that predict the wind must be able to accurately represent and account for terrain features and simulate topographic forcing with accuracy. Topographic forcing has a dominant role in the development and modification of mesoscale flows in regions of complex terrain, like Tonopah, especially at the level of wind turbine blade heights (~80 m). Additionally, model factors such as horizontal resolution, terrain database resolution, model physics, time of model initialization, stability regime, and source of initial conditions may each affect the ability of a mesoscale model to forecast winds correctly. The observational tower used for comparison was located at Stone Cabin, Nevada. The tower had both sonic anemometers and cup anemometers installed at heights of 40 m, 60 m, and 80 m above the surface. During a previous experiment, tower data were collected for the period February 9 through March 10, 2007 and compared to model simulations using the MM5 and WRF models at a number of varying horizontal resolutions. In this previous research, neither the MM5 nor the WRF showed a significant improvement in their ability to forecast wind speed with increasing horizontal grid resolution. The present research evaluated the ability of OMEGA to reproduce point winds as compared to the observational data from the Stone Cabin Tower at heights of 40 m, 60 m, and 80 m. 
Unlike other mesoscale atmospheric models, OMEGA incorporates an unstructured triangular adaptive grid which allows for increased flexibility and accuracy in characterizing areas of complex terrain. Model sensitivity to horizontal grid resolution, initial conditions, and time of initialization was tested. OMEGA was run over three different horizontal grid resolutions with minimum horizontal edge lengths of 18 km, 6 km, and 2 km. For each resolution, the model was initialized using both the Global Forecasting System (GFS) and North American Regional Reanalysis (NARR) to determine model sensitivity to initial conditions. For both the NARR and GFS initializations, the model was started at both 0000 UTC and 1200 UTC to determine the effect of start time and stability regime on the performance of the model. An additional intensive study into the model's performance was also conducted through a detailed evaluation of model results during two separate 24-hour periods, the first a period where the model performed well and the second a period where the model performed poorly, to determine which atmospheric factors most affect the predictive ability of the OMEGA model. The statistical results were then compared with the results from the MM5 and WRF simulations to determine the most appropriate model for wind energy potential studies in complex terrain.
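    Point-forecast comparisons like those described here are typically scored with bias and RMSE against the tower observations; the following is a minimal sketch of such verification metrics, not the study's actual code.

```python
def wind_stats(pred, obs):
    """Bias (mean error) and RMSE of predicted vs observed wind speeds.

    Standard point-verification metrics for comparing model winds with
    anemometer records at a fixed height.
    """
    n = len(obs)
    bias = sum(p - o for p, o in zip(pred, obs)) / n
    rmse = (sum((p - o) ** 2 for p, o in zip(pred, obs)) / n) ** 0.5
    return bias, rmse
```

    Computing these per resolution, initialization source, and start time is what allows the sensitivity comparisons the abstract describes.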

  10. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    NASA Astrophysics Data System (ADS)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

    Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to the combined effects of tides, surges and river discharges. Cork city, on Ireland's southwest coast, is the study case. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells, combined with a tailored adaptive interpolation technique, creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries to the nested domain and therefore flexibility in model application. The nested MSN_Flood model, through dynamic downscaling, facilitates significant improvements in accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.
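    The ghost-cell interpolation with wetting and drying at a nested boundary can be caricatured in one dimension: fine-grid ghost values are interpolated from the coarse parent cells and allowed to fall dry below a threshold. The names and the drying threshold here are hypothetical, not MSN_Flood's.

```python
def ghost_cell_values(coarse_left, coarse_right, ratio,
                      dry_value=0.0, min_depth=0.01):
    """Fill fine-grid ghost cells between two coarse parent water depths.

    Toy 1-D sketch: linear interpolation at the fine-cell centres, with
    a wetting/drying cutoff so the nested boundary can move.
    """
    ghosts = []
    for k in range(ratio):
        w = (k + 0.5) / ratio                         # fine-cell centre position
        h = (1.0 - w) * coarse_left + w * coarse_right
        ghosts.append(h if h >= min_depth else dry_value)
    return ghosts
```

    Letting ghost cells dry out is what turns a fixed nested boundary into the moving boundary the abstract highlights.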

  11. Modeling unstable alcohol flooding of DNAPL-contaminated columns

    NASA Astrophysics Data System (ADS)

    Roeder, Eberhard; Falta, Ronald W.

Alcohol flooding, which consists of injecting a mixture of alcohol and water, is one source-removal technology for dense non-aqueous phase liquids (DNAPLs) currently under investigation. An existing compositional multiphase flow simulator (UTCHEM) was adapted to accurately represent the equilibrium phase behavior of ternary and quaternary alcohol/DNAPL systems. Simulator predictions were compared to laboratory column experiments, and the results are presented here. It was found that several experiments involved unstable displacements of the NAPL bank by the alcohol flood, or of the alcohol flood by the following water flood. Unstable displacement led to additional mixing compared to ideal displacement. This mixing was approximated either by a large dispersion in one-dimensional simulations or by including permeability heterogeneities on a very small scale in three-dimensional simulations. The three-dimensional simulations provided the best match. Simulations of unstable displacements therefore require either high-resolution grids or a different treatment of fluid mixing to capture the resulting effects on NAPL recovery.

  12. Adaptive optics full-field OCT: a resolution almost insensitive to aberrations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xiao, Peng; Fink, Mathias; Boccara, A. Claude

    2016-03-01

A Full-Field OCT (FFOCT) setup coupled to a compact transmissive liquid crystal spatial light modulator (LCSLM) is used to induce or correct aberrations and simulate eye examinations. To reduce system complexity, strict pupil conjugation was abandoned. During our work on quantifying the effect of geometrical aberrations on FFOCT images, we found that the image resolution is almost insensitive to aberrations. Indeed, even if the object-channel PSF is distorted, its interference with the reference channel conserves the main features of an unperturbed PSF, with only a reduction of the signal level. This unique behavior is specific to the use of spatially incoherent illumination. Based on this, the FFOCT image intensity was used as the metric for our wavefront-sensorless correction. Aberration correction was first conducted on a USAF resolution target with the LCSLM acting as both aberration generator and corrector. A random aberration mask was induced, and the low-order Zernike modes were corrected sequentially by optimizing the intensity metric. A Ficus leaf and a fixed mouse brain tissue slice were also imaged to demonstrate the correction of sample-induced wavefront distortions. After optimization, more structured information appears in the leaf images, and the high-signal myelin fiber structures in the mouse brain were resolved much more clearly after the full correction process. Our experiment shows the potential of this compact AO-FFOCT system for aberration-corrected imaging. This preliminary approach, which simulates the correction of eye aberrations, also opens the path to a simple implementation of FFOCT adaptive optics for retinal examinations.
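The sensorless correction loop described above (scan each low-order Zernike mode and keep the coefficient that maximizes the image-intensity metric) can be sketched as follows. The quadratic metric is a stand-in for the real FFOCT image intensity, and all numbers are invented:

```python
# Sketch of wavefront-sensorless correction: sequentially scan low-order
# mode coefficients and keep the value that maximizes an image-intensity
# metric. The metric below is a toy stand-in for FFOCT intensity.

true_ab = [0.4, -0.2, 0.1]        # hidden aberration coefficients (a.u.)

def metric(correction):
    # Signal drops with residual aberration; peaks when correction == true_ab.
    residual = sum((c - t) ** 2 for c, t in zip(correction, true_ab))
    return 1.0 / (1.0 + residual)

correction = [0.0, 0.0, 0.0]
candidates = [i * 0.05 - 1.0 for i in range(41)]   # scan -1.0 .. +1.0
for mode in range(len(correction)):                # one mode at a time
    best = max(candidates,
               key=lambda v: metric(correction[:mode] + [v] + correction[mode + 1:]))
    correction[mode] = best

print(correction)                  # close to the hidden aberration
```

Real systems repeat the scan a few times because cross-talk between modes makes the metric non-separable; the toy metric here is separable, so one pass suffices.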

  13. Using the Atmospheric Radiation Measurement (ARM) Datasets to Evaluate Climate Models in Simulating Diurnal and Seasonal Variations of Tropical Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hailong; Burleyson, Casey D.; Ma, Po-Lun

We use the long-term Atmospheric Radiation Measurement (ARM) datasets collected at the three Tropical Western Pacific (TWP) sites as a tropical testbed to evaluate the ability of the Community Atmosphere Model (CAM5) to simulate the various types of clouds, their seasonal and diurnal variations, and their impact on surface radiation. We conducted a series of CAM5 simulations at various horizontal grid spacings (around 2°, 1°, 0.5°, and 0.25°) with meteorological constraints from reanalysis. Model biases in the seasonal cycle of cloudiness are found to be weakly dependent on model resolution. Positive biases (up to 20%) in the annual mean total cloud fraction appear mostly in stratiform ice clouds. Higher-resolution simulations do reduce the positive bias in the frequency of ice clouds, but they inadvertently increase the negative biases in convective clouds and low-level liquid clouds, leading to a positive bias in annual mean shortwave fluxes at the sites, as high as 65 W m^-2 in the 0.25° simulation. Such resolution-dependent biases in clouds can lead to biases in ambient thermodynamic properties and, in turn, feed back on clouds. Both the CAM5 model and ARM observations show distinct diurnal cycles in total, stratiform, and convective cloud fractions; however, they are out of phase by 12 hours, and the biases vary by site. Our results suggest that biases in deep convection affect the vertical distribution and diurnal cycle of stratiform clouds through the transport of vapor and/or the detrainment of liquid and ice. We also found that the modeled grid-mean surface longwave fluxes are systematically larger than site measurements when the grid cell in which an ARM site resides is partially covered by ocean. The modeled longwave fluxes at such sites also lack a discernible diurnal cycle because the ocean part of the grid cell is warmer and less sensitive to radiative heating/cooling compared to land. Higher spatial resolution is more helpful in this regard. Our testbed approach can be easily adapted for the evaluation of new parameterizations being developed for CAM5 or other global or regional model simulations at high spatial resolutions.

  14. Sources and pathways of the upscale effects on the Southern Hemisphere jet in MPAS-CAM4 variable-resolution simulations

    DOE PAGES

    Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby; ...

    2016-10-22

Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.

  15. Sources and pathways of the upscale effects on the Southern Hemisphere jet in MPAS-CAM4 variable-resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby

Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.

  16. Optimized quantum sensing with a single electron spin using real-time adaptive measurements.

    PubMed

    Bonato, C; Blok, M S; Dinani, H T; Berry, D W; Markham, M L; Twitchen, D J; Hanson, R

    2016-03-01

Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz^(-1/2) over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.
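The adaptive idea (choose the next readout phase from the measurement history) can be illustrated with a toy Bayesian Ramsey estimator. This is a hedged sketch, not the paper's protocol: the grid size, update rule, and phase-choice heuristic are all assumptions:

```python
# Toy adaptive Ramsey estimation: after each measurement, a Bayesian
# posterior over the unknown phase is updated, and the next readout
# phase is chosen from the current estimate. Illustrative only.
import math, random

random.seed(1)
true_phase = 1.1                          # unknown phase to estimate (rad)
grid = [2 * math.pi * k / 400 for k in range(400)]
post = [1.0 / 400] * 400                  # uniform prior on [0, 2*pi)

def p_click(phi, theta):
    """Probability of outcome 0 in a Ramsey experiment with readout phase theta."""
    return 0.5 * (1 + math.cos(phi - theta))

theta = 0.0
for _ in range(200):
    outcome = 0 if random.random() < p_click(true_phase, theta) else 1
    like = [p_click(phi, theta) if outcome == 0 else 1 - p_click(phi, theta)
            for phi in grid]
    post = [p * l for p, l in zip(post, like)]
    norm = sum(post)
    post = [p / norm for p in post]
    # adapt: set the next readout phase for maximum slope at the estimate
    theta = grid[post.index(max(post))] + math.pi / 2

est = grid[post.index(max(post))]
print(est)                                # concentrates near true_phase
```

Varying theta over the run also breaks the mirror degeneracy of the cosine likelihood, which is one reason adaptive phase choices outperform a fixed readout basis.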

  17. Optimized quantum sensing with a single electron spin using real-time adaptive measurements

    NASA Astrophysics Data System (ADS)

    Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.

    2016-03-01

Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz^(-1/2) over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.

  18. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
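A minimal sketch of the multiresolution-emulation idea: learn the trend from many cheap low-resolution runs and correct it with a few high-resolution runs. Ordinary least squares is substituted here for the random forests to keep the example dependency-free, and all data are synthetic:

```python
# Toy multiresolution emulation: fit a trend on plentiful low-resolution
# runs, then estimate a resolution offset from a few high-resolution runs.
# OLS stands in for the paper's random forests; data are invented.

def ols_fit(xs, ys):
    """Fit y = a*x + b by least squares (normal equations, one feature)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Many cheap low-res runs: parameter -> response (synthetic)
low_x = [0.1 * i for i in range(20)]
low_y = [2.0 * x + 1.0 for x in low_x]
# A few expensive high-res runs: same slope, resolution-dependent offset
high_x = [0.3, 0.9, 1.5]
high_y = [2.0 * x + 1.5 for x in high_x]

a_low, b_low = ols_fit(low_x, low_y)              # learn the trend cheaply
offset = sum(h - (a_low * x + b_low) for x, h in zip(high_x, high_y)) / len(high_x)

def predict_high(x):
    """Emulated high-resolution output: low-res trend plus learned offset."""
    return a_low * x + b_low + offset

print(predict_high(1.0))
```

The random forests in the study play the role of `ols_fit` but capture nonlinear, multi-parameter responses; the leverage comes from the same place: the cheap ensemble carries most of the information, so only a handful of high-resolution runs are needed.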

  19. The implementation of sea ice model on a regional high-resolution scale

    NASA Astrophysics Data System (ADS)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data and access to high-performance computing clusters have provided capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with a high resolution, were used for the estimation of sensitivity of model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) agreed with the simulation for year 2010-2011.

  20. Adaptive Markov Random Fields for Example-Based Super-resolution of Faces

    NASA Astrophysics Data System (ADS)

    Stephenson, Todd A.; Chen, Tsuhan

    2006-12-01

    Image enhancement of low-resolution images can be done through methods such as interpolation, super-resolution using multiple video frames, and example-based super-resolution. Example-based super-resolution, in particular, is suited to images that have a strong prior (for those frameworks that work on only a single image, it is more like image restoration than traditional, multiframe super-resolution). For example, hallucination and Markov random field (MRF) methods use examples drawn from the same domain as the image being enhanced to determine what the missing high-frequency information is likely to be. We propose to use even stronger prior information by extending MRF-based super-resolution to use adaptive observation and transition functions, that is, to make these functions region-dependent. We show with face images how we can adapt the modeling for each image patch so as to improve the resolution.

  1. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  2. Feasibility of track-based multiple scattering tomography

    NASA Astrophysics Data System (ADS)

    Jansen, H.; Schütze, P.

    2018-04-01

We present a tomographic technique that uses a gigaelectronvolt electron beam to determine the material budget distribution of centimeter-sized objects, by means of both simulations and measurements. In both cases, the trajectory of electrons traversing a sample under test is reconstructed using a pixel beam telescope. The width of the deflection-angle distribution of electrons undergoing multiple Coulomb scattering at the sample is estimated. Basing the sinogram on position-resolved estimators enables the reconstruction of the original sample using an inverse Radon transform. We demonstrate the feasibility of this tomographic technique via simulations of two structured cubes, made of aluminium and lead, and via an in-beam measurement of a coaxial adapter. The simulations yield images with FWHM edge resolutions of (177 ± 13) μm and contrast-to-noise ratios of 5.6 ± 0.2 (7.8 ± 0.3) for aluminium (lead) compared to air. The tomographic reconstruction of the coaxial adapter serves as experimental evidence for the technique and yields a contrast-to-noise ratio of 15.3 ± 1.0 and a FWHM edge resolution of (117 ± 4) μm.
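The reconstruction geometry can be illustrated with a toy sinogram-and-backprojection example. The paper uses position-resolved scattering-angle widths and a proper inverse Radon transform; the sketch below keeps only the geometric idea, with two projection angles and unfiltered backprojection:

```python
# Toy sinogram and backprojection: build two projections of a single
# dense pixel and recover its position by smearing each projection back
# along its rays. Geometry only; not the paper's estimator or filter.
N = 8
sample = [[0.0] * N for _ in range(N)]
sample[2][5] = 1.0                    # one high-density pixel (row 2, col 5)

row_proj = [sum(sample[r]) for r in range(N)]                       # 0-degree view
col_proj = [sum(sample[r][c] for r in range(N)) for c in range(N)]  # 90-degree view

# Backproject: each pixel accumulates the projections of the rays through it
recon = [[row_proj[r] + col_proj[c] for c in range(N)] for r in range(N)]
peak = max((recon[r][c], r, c) for r in range(N) for c in range(N))
print(peak[1], peak[2])               # location of the dense pixel
```

With only two angles the reconstruction has cross-shaped streaks; many angles plus the ramp filter of filtered backprojection suppress them, which is what the inverse Radon transform in the paper provides.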

  3. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    NASA Astrophysics Data System (ADS)

    Li, Houqiang; Wang, Yi; Chen, Chang Wen

    2007-12-01

With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are eager to watch videos on mobile devices. However, the limited display size of mobile devices imposes significant barriers to browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The framework comprises two major parts: video content generation and a video adaptation system. During video compression, the attention information in video sequences is detected using an attention model and embedded into bitstreams with the proposed supplemental enhancement information (SEI) structure. Furthermore, we develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of the overall encoding and the quality of transcoding in the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier is applied to generate a new bitstream for the attention areas in frames. This new low-resolution bitstream, containing mostly attention information, rather than the high-resolution one, is sent to users for display on their mobile devices. Experimental results show that the proposed spatial adaptation scheme improves both subjective and objective video quality.

  4. Multiphase flow modelling of explosive volcanic eruptions using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Collins, Gareth S.; Piggott, Matthew D.; Kramer, Stephan C.

    2014-05-01

    Explosive volcanic eruptions generate highly energetic plumes of hot gas and ash particles that produce diagnostic deposits and pose an extreme environmental hazard. The formation, dispersion and collapse of these volcanic plumes are complex multiscale processes that are extremely challenging to simulate numerically. Accurate description of particle and droplet aggregation, movement and settling requires a model capable of capturing the dynamics on a range of scales (from cm to km) and a model that can correctly describe the important multiphase interactions that take place. However, even the most advanced models of eruption dynamics to date are restricted by the fixed mesh-based approaches that they employ. The research presented herein describes the development of a compressible multiphase flow model within Fluidity, a combined finite element / control volume computational fluid dynamics (CFD) code, for the study of explosive volcanic eruptions. Fluidity adopts a state-of-the-art adaptive unstructured mesh-based approach to discretise the domain and focus numerical resolution only in areas important to the dynamics, while decreasing resolution where it is not needed as a simulation progresses. This allows the accurate but economical representation of the flow dynamics throughout time, and potentially allows large multi-scale problems to become tractable in complex 3D domains. The multiphase flow model is verified with the method of manufactured solutions, and validated by simulating published gas-solid shock tube experiments and comparing the numerical results against pressure gauge data. The application of the model considers an idealised 7 km by 7 km domain in which the violent eruption of hot gas and volcanic ash high into the atmosphere is simulated. Although the simulations do not correspond to a particular eruption case study, the key flow features observed in a typical explosive eruption event are successfully captured. 
These include a shock wave resulting from the sudden high-velocity inflow of gas and ash; the formation of a particle-laden plume rising several hundred metres into the atmosphere; the eventual collapse of the plume which generates a volcanic ash fountain and a fast ground-hugging pyroclastic density current; and the growth of a dilute convective region that rises above the ash fountain as a result of buoyancy effects. The results from Fluidity are also compared with results from MFIX, a fixed structured mesh-based multiphase flow code, that uses the same set-up. The key flow features are also captured in MFIX, providing at least some confidence in the plausibility of the numerical results in the absence of quantitative field data. Finally, it is shown by a convergence analysis that Fluidity offers the same solution accuracy for reduced computational cost using an adaptive mesh, compared to the same simulation performed with a uniform fixed mesh.
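The verification step mentioned above, the method of manufactured solutions, can be shown on a 1D advection equation: choose an exact solution, derive the source term that makes it satisfy the PDE, and confirm that the discrete error shrinks under mesh refinement. This sketch uses a first-order upwind scheme, not Fluidity's discretisation:

```python
# Method of manufactured solutions for u_t + u_x = S on a periodic domain:
# pick u_m(x,t) = exp(-t)*sin(x), so S = u_t + u_x = exp(-t)*(cos x - sin x),
# then check that the upwind-scheme error decreases with resolution.
import math

def solve(nx, t_end=0.5):
    dx = 2 * math.pi / nx
    dt = 0.5 * dx                      # CFL = 0.5, advection speed a = 1
    x = [i * dx for i in range(nx)]
    u = [math.sin(xi) for xi in x]     # manufactured solution at t = 0
    t = 0.0
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)
        src = [math.exp(-t) * (math.cos(xi) - math.sin(xi)) for xi in x]
        u = [u[i] - step / dx * (u[i] - u[i - 1]) + step * src[i]
             for i in range(nx)]       # periodic upwind + manufactured source
        t += step
    exact = [math.exp(-t_end) * math.sin(xi) for xi in x]
    return max(abs(a - b) for a, b in zip(u, exact))

coarse, fine = solve(64), solve(128)
print(coarse, fine)                    # error drops as the mesh is refined
```

For a first-order scheme the error should roughly halve when the grid is refined by two; an adaptive-mesh code like Fluidity runs the same test with refinement concentrated where the manufactured solution varies fastest.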

  5. Adaptive optics high-resolution IR spectroscopy with silicon grisms and immersion gratings

    NASA Astrophysics Data System (ADS)

    Ge, Jian; McDavitt, Daniel L.; Chakraborty, Abhijit; Bernecker, John L.; Miller, Shane

    2003-02-01

    The breakthrough of silicon immersion grating technology at Penn State has the ability to revolutionize high-resolution infrared spectroscopy when it is coupled with adaptive optics at large ground-based telescopes. Fabrication of high quality silicon grism and immersion gratings up to 2 inches in dimension, less than 1% integrated scattered light, and diffraction-limited performance becomes a routine process thanks to newly developed techniques. Silicon immersion gratings with etched dimensions of ~ 4 inches are being developed at Penn State. These immersion gratings will be able to provide a diffraction-limited spectral resolution of R = 300,000 at 2.2 micron, or 130,000 at 4.6 micron. Prototype silicon grisms have been successfully used in initial scientific observations at the Lick 3m telescope with adaptive optics. Complete K band spectra of a total of 6 T Tauri and Ae/Be stars and their close companions at a spectral resolution of R ~ 3000 were obtained. This resolving power was achieved by using a silicon echelle grism with a 5 mm pupil diameter in an IR camera. These results represent the first scientific observations conducted by the high-resolution silicon grisms, and demonstrate the extremely high dispersing power of silicon-based gratings. New discoveries from this high spatial and spectral resolution IR spectroscopy will be reported. The future of silicon-based grating applications in ground-based AO IR instruments is promising. Silicon immersion gratings will make very high-resolution spectroscopy (R > 100,000) feasible with compact instruments for implementation on large telescopes. Silicon grisms will offer an efficient way to implement low-cost medium to high resolution IR spectroscopy (R ~ 1000-50000) through the conversion of existing cameras into spectrometers by locating a grism in the instrument's pupil location.
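The resolving-power gain from immersion can be estimated with the standard Littrow formula R ≈ 2nW sin(θ_b)/λ, where W is the illuminated grating length, θ_b the blaze angle, and n the refractive index of the immersion medium (n ≈ 3.4 for silicon). The grating dimensions below are illustrative, not the Penn State design:

```python
# Back-of-the-envelope immersion-grating gain: immersing a grating in
# silicon multiplies the Littrow resolving power of a given grating
# length by the refractive index n. Values are illustrative.
import math

def resolving_power(width_mm, blaze_deg, wavelength_um, n=1.0):
    w = width_mm * 1e-3
    lam = wavelength_um * 1e-6
    return 2 * n * w * math.sin(math.radians(blaze_deg)) / lam

front = resolving_power(80, 71.6, 2.2)            # conventional R3-style echelle
immersed = resolving_power(80, 71.6, 2.2, n=3.4)  # same grating in silicon
print(round(front), round(immersed))
```

This factor-of-n gain is why a compact silicon immersion grating can reach R > 100,000 where a conventional echelle of the same size cannot.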

  6. Adaptive optics plug-and-play setup for high-resolution microscopes with multi-actuator adaptive lens

    NASA Astrophysics Data System (ADS)

    Quintavalla, M.; Pozzi, P.; Verhaegen, Michelle; Bijlsma, Hielke; Verstraete, Hans; Bonora, S.

    2018-02-01

Adaptive optics (AO) has emerged as a very promising technique for high-resolution microscopy, where the presence of optical aberrations can easily compromise image quality. Typical AO systems, however, are almost impossible to implement on commercial microscopes. We propose a simple approach using a Multi-actuator Adaptive Lens (MAL) that can be inserted right after the objective and works in conjunction with image-optimization software, allowing wavefront-sensorless correction. We present results obtained on several commercial microscopes, including a confocal microscope, a fluorescence microscope, a light-sheet microscope, and a multiphoton microscope.

  7. [Study on the effect of solar spectra on the retrieval of atmospheric CO2 concentration using high resolution absorption spectra].

    PubMed

    Hu, Zhen-Hua; Huang, Teng; Wang, Ying-Ping; Ding, Lei; Zheng, Hai-Yang; Fang, Li

    2011-06-01

Using the sun as the radiation source, near-infrared high-resolution absorption spectroscopy is widely used in remote sensing of atmospheric parameters. The present paper takes the retrieval of atmospheric CO2 concentration as an example and studies the effect of the resolution of the solar spectrum. CO2 concentrations are retrieved from high-resolution absorption spectra by a method that uses the program provided by AER to calculate the top-of-atmosphere solar spectrum as the radiation source, combined with HRATS (high-resolution atmospheric transmission simulation) to simulate the retrieval. Numerical simulation shows that the accuracy of the solar spectrum is important to the retrieval, especially for hyper-resolution spectral retrieval; the error of the retrieved concentration has only a weak linear relation with the resolution of the observation, but lower observational resolution tends to relax the required resolution of the solar spectrum. To retrieve the atmospheric CO2 concentration, one should take full advantage of the high-resolution solar spectrum at the top of the atmosphere.

  8. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
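The one-parameter kernel family can be sketched directly: W_n(q) ∝ sinc(πq/2)^n on q ∈ [0, 2], where raising the single exponent n sharpens the kernel. A 1D normalization is shown for brevity (the paper works in 3D):

```python
# Sketch of the sinc-kernel family: W_n(q) proportional to sinc(pi*q/2)^n
# with compact support |q| <= 2. Raising n gives a taller, narrower kernel,
# which is the knob the equalization scheme adjusts. 1D normalization only.
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

def kernel(n, samples=2000):
    """Return the normalized 1D sinc kernel W_n on [-2, 2] as a callable."""
    dq = 2.0 / samples
    # midpoint-rule normalization over [0, 2], doubled for symmetry
    norm = 2 * sum(sinc(math.pi * (i + 0.5) * dq / 2) ** n * dq
                   for i in range(samples))
    return lambda q: sinc(math.pi * q / 2) ** n / norm

w3, w5 = kernel(3), kernel(5)
print(w3(0.0), w5(0.0))    # higher exponent -> taller central peak
```

Because only the exponent changes, the equalization scheme can vary the kernel per particle without switching interpolator families.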

  9. Particle Number Dependence of the N-body Simulations of Moon Formation

    NASA Astrophysics Data System (ADS)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited to between 10^4 and 10^5. We develop an N-body simulation code on multiple Pezy-SC processors and deploy the Framework for Developing Particle Simulators to deal with large numbers of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10^7, in which 1 particle corresponds to a 10 km sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10^5) and high-resolution simulations (N ≥ 10^6). Owing to this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  10. Direct Patlak Reconstruction From Dynamic PET Data Using the Kernel Method With MRI Information Based on Structural Similarity.

    PubMed

    Gong, Kuang; Cheng-Liao, Jinxiu; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2018-04-01

    Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution compared with anatomical imaging modalities such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combining the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structural similarity index has the best performance in terms of noise reduction and resolution improvement.
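The adaptive-combination idea in this record — trusting MRI-derived kernel weights where PET and MRI structures agree, and falling back to PET-temporal kernels where they do not — might be sketched as below. The weighting rule and all names are illustrative stand-ins, not the paper's exact SSIM-based formula:

```python
import numpy as np

def combine_kernels(k_pet, k_mri, ssim):
    """Per-voxel convex mix of kernel rows: similarity near 1 leans on the
    MRI spatial kernel; similarity near 0 falls back to the PET temporal
    kernel. A schematic stand-in for the paper's SSIM-based rule."""
    w = np.clip(np.asarray(ssim, dtype=float), 0.0, 1.0)[:, None]
    return w * k_mri + (1.0 - w) * k_pet

k_pet = np.eye(3)                   # identity: rely on PET data alone
k_mri = np.full((3, 3), 1.0 / 3.0)  # smoothing: borrow from MRI neighbors
k = combine_kernels(k_pet, k_mri, ssim=np.array([1.0, 0.0, 0.5]))
print(np.allclose(k[0], k_mri[0]), np.allclose(k[1], k_pet[1]))
```

Because each row is a convex combination of two row-stochastic kernels, the mixed kernel remains a valid (normalized) smoothing operator.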

  11. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2004-01-01

    This paper presents a new physically based 3D facial model grounded in anatomical knowledge, which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge-repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied to the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local resolution where potential inaccuracies are detected, depending on the local deformation. The method, in effect, ensures the required speedup by concentrating computational time only where needed, while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at reduced computational cost.
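The nonlinear visco-elastic spring dynamics in this record can be illustrated with a single mass point. The integrator below is the simple semi-implicit (symplectic) Euler scheme rather than the paper's local semi-implicit ODE solver, and all parameters are hypothetical:

```python
def semi_implicit_step(x, v, dt, k=1.0, k3=5.0, c=0.1, m=1.0):
    """One semi-implicit Euler step for a mass on a nonlinear visco-elastic
    spring, F = -k*x - k3*x**3 - c*v (cubic stiffening plus viscous
    damping; parameters are illustrative)."""
    v = v + dt * (-k * x - k3 * x**3 - c * v) / m  # update velocity first...
    x = x + dt * v                                 # ...then position with new velocity
    return x, v

x, v = 1.0, 0.0
e0 = 0.5 * v**2 + 0.5 * x**2 + (5.0 / 4.0) * x**4  # initial elastic + kinetic energy
for _ in range(4000):                              # integrate to t = 40
    x, v = semi_implicit_step(x, v, dt=0.01)
e = 0.5 * v**2 + 0.5 * x**2 + (5.0 / 4.0) * x**4
print(e < e0)  # viscous damping dissipates energy
```

Updating velocity before position gives the scheme its good long-term stability at low cost, which is why MSD facial models favor semi-implicit solvers over explicit Euler.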

  12. Changes in Moisture Flux over the Tibetan Plateau during 1979-2011: Insights from a High Resolution Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin

    2015-05-15

    Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibetan Plateau (TP) and compared to the Global Land Data Assimilation System (GLDAS) product. The high resolution simulation resolves precipitation changes better than its coarse resolution forcing, which is the dominant contribution to the improved P-E change in the regional simulation compared to the global reanalysis. Hence, the former may provide better insights into the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column-integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. The high-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. It also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes, reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP, and feedback through latent heat release, as an important mechanism leading to the mean P-E changes in the TP.
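The thermodynamic/dynamic/transient-eddy decomposition used in this record follows from a standard mean-eddy split of the moisture flux. A schematic, single-point version (synthetic time series; not the paper's gridded budget):

```python
import numpy as np

def decompose_flux_change(q1, v1, q2, v2):
    """Split the change in mean moisture flux, delta(mean(q*v)), between
    period 1 and period 2 into thermodynamic, dynamic, transient-eddy,
    and nonlinear cross terms. q, v are time series (humidity, wind)."""
    dq, dv = q2.mean() - q1.mean(), v2.mean() - v1.mean()
    thermo = v1.mean() * dq            # humidity change, circulation fixed
    dyn = q1.mean() * dv               # circulation change, humidity fixed
    eddy1 = ((q1 - q1.mean()) * (v1 - v1.mean())).mean()
    eddy2 = ((q2 - q2.mean()) * (v2 - v2.mean())).mean()
    transient = eddy2 - eddy1          # change in transient-eddy covariance
    nonlinear = dq * dv                # cross term (usually small)
    return thermo, dyn, transient, nonlinear

rng = np.random.default_rng(0)
q1, v1 = 1.0 + 0.1 * rng.standard_normal(500), 2.0 + rng.standard_normal(500)
q2, v2 = 1.2 + 0.1 * rng.standard_normal(500), 2.5 + rng.standard_normal(500)
total = (q2 * v2).mean() - (q1 * v1).mean()
parts = decompose_flux_change(q1, v1, q2, v2)
print(np.isclose(total, sum(parts)))  # the four terms close the budget exactly
```

The identity mean(qv) = mean(q)·mean(v) + cov(q, v) guarantees the four terms sum exactly to the total change.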

  13. Generalised optical differentiation wavefront sensor: a sensitive high dynamic range wavefront sensor.

    PubMed

    Haffert, S Y

    2016-08-22

    Current wavefront sensors for high resolution imaging have either a large dynamic range or a high sensitivity. A new kind of wavefront sensor is developed which can have both: the Generalised Optical Differentiation wavefront sensor. This new wavefront sensor is based on the principles of optical differentiation by amplitude filters. We have extended the theory behind linear optical differentiation and generalised it to nonlinear filters. We used numerical simulations and laboratory experiments to investigate the properties of the generalised wavefront sensor. With this we created a new filter that can decouple the dynamic range from the sensitivity. These properties make it suitable for adaptive optics systems where a large range of phase aberrations has to be measured with high precision.

  14. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models, when applied to downscale regional analysis fields over large spatial domains, can provide valuable information for many applications, including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations of the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, the large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. The impacts of time-varying nudging and the generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values, leading to significant inaccuracies in the predicted surface-layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation.
Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
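The spectral nudging described in this record can be sketched in one dimension: low-wavenumber components of the simulated field are relaxed toward the driving field while fine scales evolve freely. The cutoff and relaxation strength below are made-up values for illustration:

```python
import numpy as np

def spectral_nudge(field, driver, cutoff, strength=0.1):
    """Relax only the large scales (wavenumbers below `cutoff`) of a 1-D
    periodic high-resolution field toward the coarse driving field.
    `strength` plays the role of dt/tau; values are illustrative."""
    fh, dh = np.fft.rfft(field), np.fft.rfft(driver)
    mask = np.arange(fh.size) < cutoff     # nudge large scales only
    fh[mask] += strength * (dh[mask] - fh[mask])
    return np.fft.irfft(fh, n=field.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driver = np.sin(x)                              # large-scale driving field
field = 1.5 * np.sin(x) + 0.3 * np.sin(20 * x)  # drifted large scale + fine detail
nudged = spectral_nudge(field, driver, cutoff=5)
amp = lambda f, k: abs(np.fft.rfft(f)[k]) * 2 / f.size
# Large-scale amplitude relaxes toward the driver; the k=20 detail is untouched:
print(amp(nudged, 1) < amp(field, 1), np.isclose(amp(nudged, 20), 0.3))
```

Restricting the relaxation to wavenumbers below the cutoff is what lets the mesoscale model add fine-scale detail without drifting away from the driving analysis at large scales.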

  15. Highly undersampled MR image reconstruction using an improved dual-dictionary learning method with self-adaptive dictionaries.

    PubMed

    Li, Jiansen; Song, Ying; Zhu, Zhen; Zhao, Jun

    2017-05-01

    The dual-dictionary learning (Dual-DL) method utilizes both a low-resolution dictionary and a high-resolution dictionary, which are co-trained for sparse coding and image updating, respectively. It can effectively exploit a priori knowledge regarding the typical structures, specific features, and local details of the training-set images. This prior knowledge helps to improve the reconstruction quality greatly. The method has been successfully applied to magnetic resonance (MR) image reconstruction. However, it relies heavily on the training sets, and the dictionaries are fixed and nonadaptive. In this research, we improve Dual-DL by using self-adaptive dictionaries. The low- and high-resolution dictionaries are updated in step with the image-updating stage to ensure their self-adaptivity. The updated dictionaries incorporate both the prior information of the training sets and the test image directly, and both feature improved adaptability. Experimental results demonstrate that the proposed method can efficiently and significantly improve the quality and robustness of MR image reconstruction.
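The "self-adaptive" step — letting a trained dictionary drift toward the image currently being reconstructed — can be sketched as a simple online atom refresh. This is a schematic of the idea only, not the paper's update rule:

```python
import numpy as np

def update_dictionary(d, patches, lr=0.1):
    """Pull the best-matching atom toward each test-image patch, then
    re-normalize, so the fixed trained dictionary adapts to the image
    being reconstructed. Learning rate and matching rule are illustrative."""
    for p in patches:
        j = int(np.argmax(d @ p))       # best-matching atom (inner product)
        d[j] += lr * (p - d[j])         # drift the atom toward the patch
        d[j] /= np.linalg.norm(d[j])    # keep atoms unit-norm
    return d

d = update_dictionary(np.eye(2), [np.array([0.6, 0.8])])
print(np.allclose(np.linalg.norm(d, axis=1), 1.0))  # atoms stay unit-norm
```

Interleaving such refreshes with the image-updating stage is what makes the dictionaries track the test image rather than the training set alone.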

  16. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, the delay-multiply-and-sum (DMAS) algorithm was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms, each representing a DAS algebra, and it is proposed to use the MV adaptive beamformer in place of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS yields about 31, 18, and 8 dB of sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves the full-width at half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB of sidelobe reduction in comparison with the other beamformers.
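The DAS/DMAS relationship that MVB-DMAS builds on can be illustrated directly. The sketch below shows plain DAS and the pairwise-product DMAS sum (with the customary signed square root); the MV substitution of the paper is not reproduced here:

```python
import numpy as np
from itertools import combinations

def das(signals):
    """Delay-and-sum: plain sum over already-delayed channel samples."""
    return signals.sum(axis=0)

def dmas(signals):
    """Delay-multiply-and-sum: sum of pairwise products of the delayed
    samples, with a signed square root to keep the units of the data."""
    out = np.zeros(signals.shape[1])
    for i, j in combinations(range(signals.shape[0]), 2):
        prod = signals[i] * signals[j]
        out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out

# A perfectly coherent sample across 8 channels:
coherent = np.ones((8, 1))
print(float(das(coherent)[0]))   # 8 channels -> 8.0
print(float(dmas(coherent)[0]))  # C(8,2) = 28 coherent pairs -> 28.0
```

Because DMAS rewards inter-channel coherence through every pair, incoherent sidelobe energy grows more slowly than the mainlobe, which is the source of its lower sidelobes.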

  17. Evaluation and comparison of different RCMs simulations of the Mediterranean climate: a view on the impact of model resolution and Mediterranean sea coupling.

    NASA Astrophysics Data System (ADS)

    Panthou, Gérémy; Vrac, Mathieu; Drobinski, Philippe; Bastin, Sophie; Somot, Samuel; Li, Laurent

    2015-04-01

    As regularly stated by numerous authors, the Mediterranean climate is considered a major climate 'hot spot'. At least three reasons explain this statement. First, this region is known for being regularly affected by extreme hydro-meteorological events (heavy precipitation and flash floods during the autumn season; droughts and heat waves during spring and summer). Second, the vulnerability of populations with regard to these extreme events is expected to increase during the 21st century (at least due to the projected population growth in this region). Finally, General Circulation Models project that this regional climate will be highly sensitive to climate change. Moreover, global warming is expected to intensify the hydrological cycle and thus to increase the frequency of extreme hydro-meteorological events. In order to propose adaptation strategies, a robust estimation of the future evolution of the Mediterranean climate and the associated extreme hydro-meteorological events (in terms of intensity/frequency) is of great relevance. However, these projections are characterized by large uncertainties. Many components of the simulation chain contribute to these large uncertainties: (i) uncertainties concerning the emission scenario; (ii) climate model simulations suffer from parameterization errors and uncertainties concerning the initial state of the climate; and (iii) additional uncertainties introduced by the (dynamical or statistical) downscaling techniques and the impact model. Narrowing these uncertainties as much as possible is a major challenge of current climate research. One way to do so is to reduce the uncertainties associated with each component. In this study, we are interested in evaluating the potential improvement of: (i) coupled RCM simulations (with the Mediterranean Sea) in comparison with atmosphere-only (stand-alone) RCM simulations and (ii) RCM simulations at a finer resolution in comparison with a coarser resolution.
    For that, three different RCMs (WRF, ALADIN, LMDZ4) were run, forced by ERA-Interim reanalyses, within the MED-CORDEX experiment. For each RCM, different versions (coupled/stand-alone, high/low resolution) were run. A large set of scores was developed and applied in order to evaluate the performance of these RCM simulations. These scores were applied to three variables (daily precipitation amount, mean daily air temperature, and dry spell lengths). Particular attention was paid to the RCMs' capability to reproduce the seasonal and spatial patterns of extreme statistics. Results show that the differences between coupled and stand-alone RCMs are localized very near the Mediterranean Sea and that the model resolution has only a slight impact on the scores obtained. Overall, the main differences between the RCM simulations stem from the RCM used. Keywords: Mediterranean climate, extreme hydro-meteorological events, RCM simulations, evaluation of climate simulations

  18. Numerical Simulation and Mechanical Design for TPS Electron Beam Position Monitors

    NASA Astrophysics Data System (ADS)

    Hsueh, H. P.; Kuan, C. K.; Ueng, T. S.; Hsiung, G. Y.; Chen, J. R.

    2007-01-01

    Comprehensive studies of the mechanical design and numerical simulation of the high resolution electron beam position monitors are key steps in building the newly proposed third-generation synchrotron radiation research facility, the Taiwan Photon Source (TPS). With an advanced electromagnetic simulation tool such as MAFIA, tailored specifically for particle accelerators, the design of the high resolution electron beam position monitors can be tested in simulation before being tested experimentally. The design goal of our high resolution electron beam position monitors is to achieve the best resolution through sensitivity and signal optimization. The definitions of, and differences between, the resolution and sensitivity of electron beam position monitors are explained, as are the design considerations. A prototype design has been carried out, and the related simulations were performed with MAFIA. The results are presented here. A sensitivity as high as 200 has been achieved in the x direction at 500 MHz.
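The sensitivity figure quoted in this record relates electrode signals to beam position. For a single electrode pair, the standard difference-over-sum estimate looks like the sketch below; the calibration constant `sensitivity` is a made-up value, not the TPS figure:

```python
def bpm_position(v_a, v_b, sensitivity=0.2):
    """Difference-over-sum position estimate from one BPM electrode pair:
    x ~ (Va - Vb) / (Va + Vb) / S, where S is the sensitivity (per mm).
    The value of S here is illustrative only."""
    return (v_a - v_b) / (v_a + v_b) / sensitivity

print(bpm_position(1.0, 1.0))  # centered beam -> 0.0
print(bpm_position(1.1, 0.9) > 0)  # displaced toward electrode A
```

A higher sensitivity S means a larger signal difference per unit displacement, which is what the record's optimization targets; resolution then depends on how well that difference can be read above the noise.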

  19. High-Resolution Numerical Simulation and Analysis of Mach Reflection Structures in Detonation Waves in Low-Pressure H 2 –O 2 –Ar Mixtures: A Summary of Results Obtained with the Adaptive Mesh Refinement Framework AMROC

    DOE PAGES

    Deiterding, Ralf

    2011-01-01

    Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding, as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular, dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded, geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends, are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.
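The dynamically adaptive meshing in this record rests on a refinement criterion: flag cells where the solution varies sharply, refine there, and leave smooth regions coarse. A minimal 1-D sketch of such a gradient criterion (illustrative; not AMROC's actual interface):

```python
import numpy as np

def flag_cells(u, frac=0.5):
    """Flag cells for refinement where the local solution jump exceeds a
    fraction of the largest jump -- the generic gradient criterion behind
    block-structured AMR frameworks."""
    jump = np.abs(np.diff(u))
    return jump > frac * jump.max()

x = np.linspace(-1.0, 1.0, 41)
u = np.tanh(20.0 * x)        # a smoothed front, standing in for a detonation front
flags = flag_cells(u)
print(int(flags.sum()), "of", flags.size, "cells flagged")  # only cells near the front
```

Concentrating refinement on the few flagged cells is what makes resolving the wide range of detonation scales computationally tractable.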

  20. Specification and Analysis of Parallel Machine Architecture

    DTIC Science & Technology

    1990-03-17

    Parallel Machine Architecture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: The overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: Rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some of the examples are as follows: (1) When the simulation clock of

  1. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    NASA Astrophysics Data System (ADS)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km down to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic region as resolution increases. More specifically, the well-known negative atmospheric blocking bias over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - compared with the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).
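The stochastic-physics idea in this record is often realized as an SPPT-style scheme: parameterized tendencies are multiplied by a random factor to represent unresolved small-scale variability. A toy sketch only; real schemes use spatially and temporally correlated patterns, and the numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sppt(tendency, sigma=0.3):
    """SPPT-like stochastic physics: multiply parameterized tendencies by
    (1 + r) with r ~ N(0, sigma^2), clipped so the sign is preserved.
    Schematic of the idea only."""
    r = np.clip(sigma * rng.standard_normal(np.shape(tendency)), -1.0, 1.0)
    return np.asarray(tendency) * (1.0 + r)

perturbed = sppt(np.ones(10000))
print(abs(perturbed.mean() - 1.0) < 0.05)  # unbiased on average, but noisy
```

Perturbing the tendencies rather than the state keeps the scheme consistent with the model's conservation structure while injecting variability the coarse grid cannot resolve.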

  2. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods aimed at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, function, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting unifying concepts such as: the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  3. An adaptive front tracking technique for three-dimensional transient flows

    NASA Astrophysics Data System (ADS)

    Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.

    2000-01-01

    An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although roughly the same structures are found with both techniques, the resolution of the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation of the advected blob. Adaptive front tracking is suitable for simulating the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm benefits significantly from parallelization of the code.
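The stretch-based half of the adaptivity criterion in this record amounts to inserting points wherever deformation has over-lengthened a front segment; the curvature test works analogously. An illustrative sketch in 2-D:

```python
import numpy as np

def refine_front(points, max_len=0.1):
    """Insert a midpoint wherever stretching has made a front segment
    longer than max_len, keeping a deforming interface resolved.
    Threshold and geometry are illustrative."""
    out = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        if np.linalg.norm(b - a) > max_len:
            out.append((a + b) / 2.0)   # midpoint keeps the front resolved
        out.append(b)
    return np.array(out)

front = np.array([[0.0, 0.0], [0.05, 0.0], [0.3, 0.0]])  # last segment over-stretched
refined = refine_front(front)
print(len(refined))  # one midpoint inserted -> 4 points
```

Because the interfacial area can grow exponentially during mixing, such on-the-fly refinement is what keeps the tracked surface resolved without starting from an excessively fine front.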

  4. Simulation of Mean Flow and Turbulence over a 2D Building Array Using High-Resolution CFD and a Distributed Drag Force Approach

    DTIC Science & Technology

    2016-06-16

    procedure. The predictive capabilities of the high-resolution computational fluid dynamics (CFD) simulations of urban flow are validated against a very...turbulence over a 2D building array using high-resolution CFD and a distributed drag force approach a Department of Mechanical Engineering, University

  5. Modeling of the Plume Development Phase of the Shoemaker-Levy 9 Comet Impact

    NASA Astrophysics Data System (ADS)

    Palotai, Csaba J.; Korycansky, D.; Deming, D.; Harrington, J.

    2008-09-01

    We present a progress report on our numerical simulations of the plume blowout and flight/splash phases of the Shoemaker-Levy 9 (SL9) comet impact into Jupiter's atmosphere. For this project we have modified the ZEUS-MP/2 three-dimensional hydrodynamic model (Hayes et al., ApJS 165, 174-183, 2006) to be suitable for Jovian atmospheric simulations. To initialize our model we map the final state of high-resolution SL9 impact simulations of Korycansky et al. (ApJ 646, 642-652, 2006) onto our larger, stationary grid. In the current phase of the research we investigate how the dynamical chaos in the impact model affects simulations of the subsequent phases. We adapt the atmospheric radiation model from the 2D splash calculation of Deming and Harrington (ApJ 561, 455-467, 2001) to calculate realistic wavelength-dependent lightcurves and low-resolution spectra. Our goal is to compare synthetic images created from model output to the data taken by the Hubble Space Telescope of plumes on the limb of Jupiter during the impacts of various SL9 fragments (Hammel et al., Science 267, 1288-1296, 1995). Details of the model, validation of the code, and results of our latest simulations will be presented. This material is based on work supported by National Science Foundation Grant No. 0307638 and National Aeronautics and Space Administration Grant No. NNG04GQ35G.

  6. Adaptive temperature-accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2011-02-01

    We present three adaptive methods for optimizing the high temperature T_high on the fly in temperature-accelerated dynamics (TAD) simulations. In all three methods, the high temperature is adjusted periodically in order to maximize the performance. While in the first two methods the adjustment depends on the number of observed events, the third method depends on the minimum activation barrier observed so far and requires a priori knowledge of the optimal high temperature T_high^opt(E_a) as a function of the activation barrier E_a for each accepted event. In order to determine the functional form of T_high^opt(E_a), we have carried out extensive simulations of submonolayer annealing on the (100) surface for a variety of metals (Ag, Cu, Ni, Pd, and Au). While the results for all five metals are different, when they are scaled with the melting temperature T_m, we find that they all lie on a single scaling curve. Similar results have also been obtained for (111) surfaces, although in this case the scaling function is slightly different. In order to test the performance of all three methods, we have also carried out adaptive TAD simulations of Ag/Ag(100) annealing and growth at T = 80 K and compared with fixed high-temperature TAD simulations for different values of T_high. We find that the performance of all three adaptive methods is typically as good as or better than that obtained in fixed high-temperature TAD simulations carried out using the effective optimal fixed high temperature. In addition, we find that the final high temperatures obtained in our adaptive TAD simulations are very close to our results for T_high^opt(E_a). The applicability of the adaptive methods to a variety of TAD simulations is also briefly discussed.
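An event-count-based adjustment of the TAD high temperature, in the spirit of this record's first two methods, can be sketched as a simple feedback rule. All numbers below are illustrative, not the paper's parameters:

```python
def adjust_t_high(t_high, n_events, target=5, step=50.0,
                  t_min=300.0, t_max=1500.0):
    """On-the-fly adjustment of the TAD high temperature: raise T_high
    when too few events are observed in a block (acceleration too weak),
    lower it when there are too many. Values are illustrative."""
    if n_events < target:
        t_high += step
    elif n_events > target:
        t_high -= step
    return min(max(t_high, t_min), t_max)  # keep within physical bounds

t = 600.0
for observed in [1, 2, 9, 7, 5]:   # events seen in successive blocks
    t = adjust_t_high(t, observed)
print(t)  # 600 + 50 + 50 - 50 - 50 = 600.0
```

The feedback drives T_high toward a value that yields the target event rate, which is what a fixed-temperature TAD run can only approximate by trial and error.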

  7. Adaptive unstructured triangular mesh generation and flow solvers for the Navier-Stokes equations at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Ashford, Gregory A.; Powell, Kenneth G.

    1995-01-01

    A method for generating high quality unstructured triangular grids for high Reynolds number Navier-Stokes calculations about complex geometries is described. Careful attention is paid in the mesh generation process to resolving efficiently the disparate length scales which arise in these flows. First the surface mesh is constructed in a way which ensures that the geometry is faithfully represented. The volume mesh generation then proceeds in two phases thus allowing the viscous and inviscid regions of the flow to be meshed optimally. A solution-adaptive remeshing procedure which allows the mesh to adapt itself to flow features is also described. The procedure for tracking wakes and refinement criteria appropriate for shock detection are described. Although at present it has only been implemented in two dimensions, the grid generation process has been designed with the extension to three dimensions in mind. An implicit, higher-order, upwind method is also presented for computing compressible turbulent flows on these meshes. Two recently developed one-equation turbulence models have been implemented to simulate the effects of the fluid turbulence. Results for flow about a RAE 2822 airfoil and a Douglas three-element airfoil are presented which clearly show the improved resolution obtainable.

  8. High-speed adaptive optics line scan confocal retinal imaging for human eye

    PubMed Central

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Purpose Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye’s optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in the living human eyes with frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  9. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in the living human eyes with frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  10. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component used 0.25° grid spacing, and the ocean component 0.1°. One hundred years of "present-day" simulation were completed. The major results were that annual-mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, cost 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."

  11. Impacts of high resolution data on traveler compliance levels in emergency evacuation simulations

    DOE PAGES

    Lu, Wei; Han, Lee D.; Liu, Cheng; ...

    2016-05-05

    In this article, we conducted a comparison study of evacuation assignment based on Traffic Analysis Zones (TAZ) and high-resolution LandScan USA Population Cells (LPC) with a detailed real-world road network. A platform for evacuation modeling built on high-resolution population distribution data and activity-based microscopic traffic simulation was proposed. This platform can be extended to any city in the world. The results indicated that evacuee compliance behavior affects evacuation efficiency with the traditional TAZ assignment, but it did not significantly compromise performance with the high-resolution LPC assignment. The TAZ assignment also underestimated the real travel time during evacuation. This suggests that high data resolution can improve the accuracy of traffic modeling and simulation. Evacuation managers should consider more diverse assignments during emergency evacuation to avoid congestion.

  12. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. Classic methods regularize the image by total variation (TV), wavelets, or some other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key to our model is that the adaptive filter is used first to remove the spatial correlation among pixels, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered in the regularization term. Concretely, by transforming the original model, the SR problem can be solved by two alternating iterative sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image are obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  13. Coincidences between O VI and O VII Lines: Insights from High-resolution Simulations of the Warm-hot Intergalactic Medium

    NASA Astrophysics Data System (ADS)

    Cen, Renyue

    2012-07-01

    With high-resolution (0.46 h^-1 kpc), large-scale, adaptive mesh-refinement Eulerian cosmological hydrodynamic simulations we compute properties of O VI and O VII absorbers from the warm-hot intergalactic medium (WHIM) at z = 0. Our new simulations are in broad agreement with previous simulations, with ~40% of the intergalactic medium being in the WHIM. Our simulations agree with the observed properties of O VI absorbers with respect to the line incidence rate and the Doppler-width-column-density relation. It is found that the amount of gas in the WHIM below and above 10^6 K is roughly equal. Strong O VI absorbers are found to be predominantly collisionally ionized. It is found that (61%, 57%, 39%) of O VI absorbers with log[N(O VI) cm^2] = (12.5-13, 13-14, >14) have T < 10^5 K. Cross-correlations between galaxies and strong [N(O VI) > 10^14 cm^-2] O VI absorbers on ~100-300 kpc scales are suggested as a potential differentiator between collisional ionization and photoionization models. A quantitative prediction is made for the presence of broad and shallow O VI lines that are largely missed by current observations but will be detectable by Cosmic Origins Spectrograph observations. The reported 3σ upper limit on the mean column density of coincidental O VII lines at the location of detected O VI lines by Yao et al. is above our predicted value by a factor of 2.5-4. The claimed observational detection of O VII lines by Nicastro et al., if true, is 2σ above what our simulations predict.

  14. Simulation of geothermal water extraction in heterogeneous reservoirs using dynamic unstructured mesh optimisation

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.

    2017-12-01

    It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near-wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretized using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and the fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretize space, and the spatial location of the well is specified via a line vector, preserving its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled.
    Second, computational efficiency is increased by the use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields (preserving the geometry of the geologic domains), such as pressure, velocity or temperature. This also increases the quality of the solutions by placing higher resolution where it is required to reduce an error metric based on the Hessian of the field. This allows the local pressure drawdown to be captured without user-driven modification of the mesh. We demonstrate that the method has wide application in reservoir-scale models of geothermal fields and regional models of groundwater resources.
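    The Hessian-based error metric mentioned above admits a compact illustration: for piecewise-linear interpolation, the local error scales as |H(u)| h^2, so the target edge length follows from bounding the largest-magnitude Hessian eigenvalue. The sketch below is a generic structured-grid version of this idea (the function name, thresholds, and finite-difference details are illustrative assumptions, not the paper's unstructured implementation):

```python
import numpy as np

def target_edge_length(u, dx, eps=1e-3, h_min=0.01, h_max=1.0):
    """Target mesh edge length from a Hessian-based interpolation-error
    estimate: choose h so that |lambda_max(H)| * h^2 ~ eps, clamped to
    [h_min, h_max]. u is a scalar field on a uniform 2D grid."""
    # second derivatives by repeated central differences
    uxx = np.gradient(np.gradient(u, dx, axis=0), dx, axis=0)
    uyy = np.gradient(np.gradient(u, dx, axis=1), dx, axis=1)
    uxy = np.gradient(np.gradient(u, dx, axis=0), dx, axis=1)
    # largest-magnitude eigenvalue of the symmetric 2x2 Hessian
    tr = 0.5 * (uxx + uyy)
    det = uxx * uyy - uxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 - det, 0.0))
    lam = np.maximum(np.abs(tr + disc), np.abs(tr - disc))
    h = np.sqrt(eps / np.maximum(lam, 1e-12))
    return np.clip(h, h_min, h_max)
```

    In an adaptive solver this field would drive coarsening where the solution is smooth (small |H|) and refinement where it curves sharply, which is the behaviour the abstract describes around wells.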

  15. Laser beam projection with adaptive array of fiber collimators. II. Analysis of atmospheric compensation efficiency.

    PubMed

    Lachinova, Svetlana L; Vorontsov, Mikhail A

    2008-08-01

    We analyze the potential efficiency of laser beam projection onto a remote object in atmosphere with incoherent and coherent phase-locked conformal-beam director systems composed of an adaptive array of fiber collimators. Adaptive optics compensation of turbulence-induced phase aberrations in these systems is performed at each fiber collimator. Our analysis is based on a derived expression for the atmospheric-averaged value of the mean square residual phase error as well as direct numerical simulations. Operation of both conformal-beam projection systems is compared for various adaptive system configurations characterized by the number of fiber collimators, the adaptive compensation resolution, and atmospheric turbulence conditions.

  16. Towards a Fine-Resolution Global Coupled Climate System for Prediction on Decadal/Centennial Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Julie L.

    The over-arching goal of this project was to contribute to the realization of a fully coupled fine-resolution Earth System Model simulation in which a weather-scale atmosphere is coupled to an ocean in which mesoscale eddies are largely resolved. Both a prototype fine-resolution fully coupled ESM simulation and a first-ever multi-decadal forced fine-resolution global coupled ocean/ice simulation were configured, tested, run, and analyzed as part of this grant. Science questions focused on the gains from the use of high horizontal resolution, particularly in the ocean and sea ice, with respect to climatically important processes. Both these fine-resolution coupled ocean/sea ice and fully coupled simulations, and precedent stand-alone eddy-resolving ocean and eddy-permitting coupled ocean/ice simulations, were used to explore the high-resolution regime. Overall, these studies showed that the presence of mesoscale eddies significantly impacted mixing processes and the global meridional overturning circulation in the ocean simulations. Fourteen refereed publications and a Ph.D. dissertation resulted from this grant.

  17. Quantifying the effect of Tmax extreme events on local adaptation to climate change of maize crop in Andalusia for the 21st century

    NASA Astrophysics Data System (ADS)

    Gabaldon, Clara; Lorite, Ignacio J.; Ines Minguez, M.; Lizaso, Jon; Dosio, Alessandro; Sanchez, Enrique; Ruiz-Ramos, Margarita

    2015-04-01

    Extreme events of Tmax can threaten maize production in Andalusia (Ruiz-Ramos et al., 2011). The objective of this work is to quantify the effects of Tmax extreme events on the previously identified (Gabaldón et al., 2013) local adaptation strategies to climate change of the irrigated maize crop in Andalusia for the first half of the 21st century. This study focuses on five Andalusian locations. The local adaptation strategies identified consisted of combinations of changes in sowing dates and choice of cultivar (Gabaldón et al., 2013). The modified cultivar features were the duration of the phenological phases and the grain filling rate. The phenological and yield simulations with the adaptive changes were obtained from a modelling chain: current simulated climate and future climate scenarios (2013-2050) were taken from a group of regional climate models at high resolution (25 km) from the European Project ENSEMBLES (http://www.ensembles-eu.org/). After bias-correcting these data for temperature and precipitation (Dosio and Paruolo, 2011; Dosio et al., 2012), crop simulations were generated by the CERES-Maize model (Jones and Kiniry, 1986) under the DSSAT platform, previously calibrated and validated. Quantification of the effects of extreme Tmax on maize yield was computed for different phenological stages following Teixeira et al. (2013). A heat stress index was computed; this index assumes that yield-damage intensity due to heat stress increases linearly from 0.0 at a critical temperature to a maximum of 1.0 at a limit temperature. The decrease in crop yield is then computed by a normalized production damage index which combines attainable yield and the heat stress index for each location. The selection of the most suitable adaptation strategy will be reviewed and discussed in light of the quantified effect on crop yield of the projected change in Tmax extreme events.
    This study will contribute to the MACSUR knowledge hub within the Joint Programming Initiative on Agriculture, Food Security and Climate Change (FACCE-JPI) of the EU and is financed by the MULCLIVAR project (CGL2012-38923-C02-02) and IFAPA project AGR6126 from the Junta de Andalucía, Spain. References: Dosio A. and Paruolo P., 2011. Bias correction of the ENSEMBLES high-resolution climate change projections for use by impact models: Evaluation on the present climate. Journal of Geophysical Research, Vol. 116, D16106, doi:10.1029/2011JD015934. Dosio A., Paruolo P. and Rojas R., 2012. Bias correction of the ENSEMBLES high resolution climate change projections for use by impact models: Analysis of the climate change signal. Journal of Geophysical Research, Vol. 117, D17, doi:10.1029/2012JD017968. Gabaldón C., Lorite I.J., Mínguez M.I., Dosio A., Sánchez-Sánchez E. and Ruiz-Ramos M., 2013. Evaluation of local adaptation strategies to climate change of maize crop in Andalusia for the first half of 21st century. Geophysical Research Abstracts, Vol. 15, EGU2013-13625. EGU General Assembly 2013, April 2013, Vienna, Austria. Jones C.A. and Kiniry J.R., 1986. CERES-Maize: A simulation model of maize growth and development. Texas A&M Univ. Press, College Station. Ruiz-Ramos M., Sánchez E., Gallardo C. and Mínguez M.I., 2011. Impacts of projected maximum temperature extremes for C21 by an ensemble of regional climate models on cereal cropping systems in the Iberian Peninsula. Natural Hazards and Earth System Science 11: 3275-3291. Teixeira E.I., Fischer G., van Velthuizen H., Walter C. and Ewert F., 2013. Global hotspots of heat stress on agricultural crops due to climate change. Agricultural and Forest Meteorology, 170:206-215.
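    The heat stress index described above has a simple closed form: damage rises linearly from 0 at a critical temperature to 1 at a limit temperature, and the damage index scales the attainable yield. A minimal sketch of this scheme (the threshold values and function names are illustrative placeholders, not the calibrated values of the study):

```python
import numpy as np

def heat_stress_index(tmax, t_crit=34.0, t_limit=45.0):
    """Heat-stress index in the linear form described in the abstract
    (after Teixeira et al., 2013): 0 below t_crit, 1 above t_limit,
    linear in between. Thresholds here are illustrative, not the
    crop- and stage-specific values used in the study."""
    tmax = np.asarray(tmax, dtype=float)
    return np.clip((tmax - t_crit) / (t_limit - t_crit), 0.0, 1.0)

def damaged_yield(attainable_yield, daily_tmax):
    """Normalized production damage: attainable yield scaled down by
    the mean heat-stress index over the sensitive phenological window."""
    f_stress = heat_stress_index(daily_tmax).mean()
    return attainable_yield * (1.0 - f_stress)
```

    Applied per phenological stage and per location, this yields the normalized production damage index used to compare adaptation strategies.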

  18. Complete protein-protein association kinetics in atomic detail revealed by molecular dynamics simulations and Markov modelling

    NASA Astrophysics Data System (ADS)

    Plattner, Nuria; Doerr, Stefan; de Fabritiis, Gianni; Noé, Frank

    2017-10-01

    Protein-protein association is fundamental to many life processes. However, a microscopic model describing the structures and kinetics during association and dissociation is lacking on account of the long lifetimes of associated states, which have prevented efficient sampling by direct molecular dynamics (MD) simulations. Here we demonstrate protein-protein association and dissociation in atomistic resolution for the ribonuclease barnase and its inhibitor barstar by combining adaptive high-throughput MD simulations and hidden Markov modelling. The model reveals experimentally consistent intermediate structures, energetics and kinetics on timescales from microseconds to hours. A variety of flexibly attached intermediates and misbound states funnel down to a transition state and a native basin consisting of the loosely bound near-native state and the tightly bound crystallographic state. These results offer a deeper level of insight into macromolecular recognition and our approach opens the door for understanding and manipulating a wide range of macromolecular association processes.

  19. Very high-resolution spectroscopy for extremely large telescopes using pupil slicing and adaptive optics.

    PubMed

    Beckers, Jacques M; Andersen, Torben E; Owner-Petersen, Mette

    2007-03-05

    Under seeing-limited conditions, very high-resolution spectroscopy becomes very difficult for extremely large telescopes (ELTs). Using adaptive optics (AO), the stellar image size decreases in proportion to the telescope diameter. This makes the spectrograph optics, and hence its resolution, independent of the telescope diameter. However, AO for use with ELTs at visible wavelengths requires deformable mirrors with many elements, which are not likely to be available for quite some time. We propose to use the pupil slicing technique to create a number of sub-pupils, each of which has its own deformable mirror. The images from all sub-pupils are combined incoherently, with a diameter corresponding to the diffraction limit of the sub-pupil. The technique is referred to as "Pupil Slicing Adaptive Optics," or PSAO.

  20. Super-resolution Doppler beam sharpening method using fast iterative adaptive approach-based spectral estimation

    NASA Astrophysics Data System (ADS)

    Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu

    2018-01-01

    Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side-lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), has been applied to DBS imaging, forming narrower Doppler filter groups than the FFT with lower side-lobe levels. Regrettably, the IAA is iterative and requires matrix multiplication and inversion when forming the covariance matrix and its inverse and when traversing the WLS estimate for each sampling point, resulting in notably high, cubic-order computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication, based on the Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve a fast solution and the Toeplitz structure of R to realize its fast inversion. The proposed method enjoys lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and the operational efficiency.

  1. Single-snapshot DOA estimation by using Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that, unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
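    The ℓ1 (LASSO) formulation of single-snapshot DOA estimation can be sketched as follows: the snapshot is modelled as y = A s, where the columns of A are steering vectors on a dense angle grid and s is sparse, and the LASSO objective is minimized here with plain ISTA as a generic ℓ1 solver. All scenario parameters (array size, grid, source angles) are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def steering_matrix(n_sensors, grid_deg, spacing=0.5):
    """ULA steering vectors; element spacing in wavelengths."""
    theta = np.deg2rad(np.asarray(grid_deg))
    n = np.arange(n_sensors)[:, None]
    return np.exp(2j * np.pi * spacing * n * np.sin(theta)[None, :])

def ista_lasso(A, y, lam=0.1, n_iter=500):
    """ISTA for min 0.5*||y - A s||^2 + lam*||s||_1 (complex-valued)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = s + step * (A.conj().T @ (y - A @ s))  # gradient step
        mag = np.abs(g)
        # complex soft threshold: shrink magnitudes, keep phases
        s = g * np.maximum(1.0 - lam * step / np.maximum(mag, 1e-12), 0.0)
    return s

grid = np.arange(-90.0, 91.0, 1.0)                 # angle grid (degrees)
A = steering_matrix(16, grid)
true_idx = [np.argmin(np.abs(grid + 20.0)), np.argmin(np.abs(grid - 15.0))]
y = A[:, true_idx] @ np.array([1.0, 0.8])          # noiseless single snapshot
s_hat = ista_lasso(A, y)
```

    The magnitude spectrum |s_hat| peaks near the true angles (-20° and 15°) from a single snapshot, which is the behaviour the abstract contrasts with the classical Fourier beamformer.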

  2. Probing spatial locality in ionic liquids with the grand canonical adaptive resolution molecular dynamics technique

    NASA Astrophysics Data System (ADS)

    Shadrack Jabes, B.; Krekeler, C.; Klein, R.; Delle Site, L.

    2018-05-01

    We employ the Grand Canonical Adaptive Resolution Simulation (GC-AdResS) molecular dynamics technique to test the spatial locality of the 1-ethyl-3-methylimidazolium chloride liquid. In GC-AdResS, atomistic details are kept only in an open sub-region of the system while the environment is treated at a coarse-grained level; thus, if spatial quantities calculated in such a sub-region agree with the equivalent quantities calculated in a full atomistic simulation, then the atomistic degrees of freedom outside the sub-region play a negligible role. The size of the sub-region fixes the degree of spatial locality of a certain quantity. We show that even for sub-regions whose radius corresponds to the size of a few molecules, spatial properties are reasonably reproduced, suggesting a high degree of spatial locality, a hypothesis put forward also by other researchers and one that seems to play an important role in characterizing fundamental properties of a large class of ionic liquids.

  3. Fully 3D modeling of tokamak vertical displacement events with realistic parameters

    NASA Astrophysics Data System (ADS)

    Pfefferle, David; Ferraro, Nathaniel; Jardin, Stephen; Bhattacharjee, Amitava

    2016-10-01

    In this work, we model the complex multi-domain and highly non-linear physics of Vertical Displacement Events (VDEs), one of the most damaging off-normal events in tokamaks, with the implicit 3D extended MHD code M3D-C1. The code has recently acquired the capability to include finite-thickness conducting structures within the computational domain. By exploiting the possibility of running a linear 3D calculation on top of a non-linear 2D simulation, we monitor the non-axisymmetric stability and assess the eigen-structure of kink modes as the simulation proceeds. Once a stability boundary is crossed, a fully 3D non-linear calculation is launched for the remainder of the simulation, starting from an earlier time of the 2D run. This procedure, along with adaptive zoning, greatly increases the efficiency of the calculation and makes it possible to perform VDE simulations with realistic parameters and high resolution. Simulations are being validated with NSTX data, where both axisymmetric (toroidally averaged) and non-axisymmetric induced and conductive (halo) currents have been measured. This work is supported by US DOE Grant DE-AC02-09CH11466.

  4. Effects of operator splitting and low Mach-number correction in turbulent mixing transition simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grinstein, F. F.; Saenz, J. A.; Dolence, J. C.

    In this paper, transition and turbulence decay with the Taylor-Green vortex have been effectively used to demonstrate the emulation of high Reynolds-number (Re) physical dissipation through the numerical convective effects of various non-oscillatory finite-volume algorithms for implicit large eddy simulation (ILES), e.g. using the Godunov-based Eulerian adaptive mesh refinement code xRAGE. Simulations of the inverse-chevron shock tube experiment have also been used to assess xRAGE-based ILES for shock-driven turbulent mixing, compared with available simulation and laboratory data. These previous assessments are extended to evaluate new directionally-unsplit high-order algorithms in xRAGE, including a correction to address the well-known issue of excessive numerical diffusion of shock-capturing (e.g., Godunov-type) schemes at low Mach numbers. The unsplit options for hydrodynamics in xRAGE are discussed in detail, followed by fundamental tests with representative shock problems. Basic issues of transition to turbulence and turbulent mixing are discussed, and results of simulations of high-Re turbulent flow and mixing in canonical test cases are reported. Finally, compared to the directional-split cases, and for each grid resolution considered, unsplit results exhibit transition to turbulence with much higher effective Re, and significantly more so with the low Mach number correction.

  5. Effects of operator splitting and low Mach-number correction in turbulent mixing transition simulations

    DOE PAGES

    Grinstein, F. F.; Saenz, J. A.; Dolence, J. C.; ...

    2018-06-07

    In this paper, transition and turbulence decay with the Taylor-Green vortex have been effectively used to demonstrate the emulation of high Reynolds-number (Re) physical dissipation through the numerical convective effects of various non-oscillatory finite-volume algorithms for implicit large eddy simulation (ILES), e.g. using the Godunov-based Eulerian adaptive mesh refinement code xRAGE. Simulations of the inverse-chevron shock tube experiment have also been used to assess xRAGE-based ILES for shock-driven turbulent mixing, compared with available simulation and laboratory data. These previous assessments are extended to evaluate new directionally-unsplit high-order algorithms in xRAGE, including a correction to address the well-known issue of excessive numerical diffusion of shock-capturing (e.g., Godunov-type) schemes at low Mach numbers. The unsplit options for hydrodynamics in xRAGE are discussed in detail, followed by fundamental tests with representative shock problems. Basic issues of transition to turbulence and turbulent mixing are discussed, and results of simulations of high-Re turbulent flow and mixing in canonical test cases are reported. Finally, compared to the directional-split cases, and for each grid resolution considered, unsplit results exhibit transition to turbulence with much higher effective Re, and significantly more so with the low Mach number correction.

  6. Durham extremely large telescope adaptive optics simulation platform.

    PubMed

    Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard

    2007-03-01

    Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. Simulation of adaptive optics systems is therefore necessary to characterize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object-oriented, and benefits from hardware acceleration, which can be used to improve simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited to adaptive optics simulation, while still offering the user complete control as the simulation runs. Results from the simulation of a ground-layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.

  7. Construction of anthropomorphic hybrid, dual-lattice voxel models for optimizing image quality and dose in radiography

    NASA Astrophysics Data System (ADS)

    Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph

    2014-03-01

    In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e., phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the fine organ structures necessary for assessing imaging quality. The aim of this work is to develop hybrid/dual-lattice voxel models (also called phantoms), as well as simulation methods, by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis to investigate, by means of simulations, the relationships between patient dose and image quality for various imaging parameters and to develop methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and polygon mesh (PM) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e., organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high-resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means in this context the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified so as to perform dual-lattice transport. Results are presented for a thorax examination.

  8. Millimeter spatial resolution in vivo sodium MRI of the human eye at 7 T using a dedicated radiofrequency transceiver array.

    PubMed

    Wenz, Daniel; Kuehne, Andre; Huelnhagen, Till; Nagel, Armin M; Waiczies, Helmar; Weinberger, Oliver; Oezerdem, Celal; Stachs, Oliver; Langner, Soenke; Seeliger, Erdmann; Flemming, Bert; Hodge, Russell; Niendorf, Thoralf

    2018-08-01

    The aim of this study was to achieve millimeter spatial resolution sodium in vivo MRI of the human eye at 7 T using a dedicated six-channel transceiver array. We present a detailed description of the radiofrequency coil design, along with electromagnetic field and specific absorption ratio simulations, data validation, and in vivo application. Electromagnetic field and specific absorption ratio simulations were performed. Transmit field uniformity was optimized by using a multi-objective genetic algorithm. Transmit field mapping was conducted using a phase-sensitive method. An in vivo feasibility study was carried out with a 3-dimensional density-adapted projection reconstruction imaging technique. The measured transmit field distribution agrees well with the one obtained from simulations. The specific absorption ratio simulations confirm that the radiofrequency coil is safe for clinical use. Our radiofrequency coil is light and conforms to an average human head. High spatial resolution (nominal 1.4 and 1.0 mm isotropic) sodium in vivo images of the human eye were acquired within scan times suitable for clinical applications (∼10 min). The three most important eye compartments in the context of sodium physiology were clearly delineated in all of the images: the vitreous humor, the aqueous humor, and the lens. Our results provide encouragement for further clinical studies. The implications for research into eye diseases including ocular melanoma, cataract, and glaucoma are discussed. Magn Reson Med 80:672-684, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  9. Use of dynamical downscaling to improve the simulation of Central U.S. warm season precipitation in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Harding, Keith J.; Snyder, Peter K.; Liess, Stefan

    2013-11-01

While supporting exceptionally productive agricultural lands, the Central U.S. is susceptible to severe droughts and floods. Such precipitation extremes are expected to worsen with climate change. However, future projections are highly uncertain as global climate models (GCMs) generally fail to resolve precipitation extremes. In this study, we assess how well models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) simulate summer means, variability, extremes, and the diurnal cycle of Central U.S. summer rainfall. Output from a subset of historical CMIP5 simulations is used to drive the Weather Research and Forecasting model to determine whether dynamical downscaling improves the representation of Central U.S. rainfall. We investigate which boundary conditions influence dynamically downscaled precipitation estimates and identify GCMs that can reasonably simulate precipitation when downscaled. The CMIP5 models simulate the seasonal mean and variability of summer rainfall reasonably well but fail to resolve extremes, the diurnal cycle, and the dynamic forcing of precipitation. Downscaling to 30 km improves these characteristics of precipitation, with the greatest improvement in the representation of extremes. Additionally, sizeable diurnal cycle improvements occur at higher (10 km) resolution with convective parameterization disabled, as the daily rainfall peak shifts 4 h closer to observations than in the 30 km resolution simulations. This lends greater confidence that the mechanisms responsible for producing rainfall are better simulated. Because dynamical downscaling can more accurately simulate these aspects of Central U.S. summer rainfall, policymakers can have added confidence in dynamically downscaled rainfall projections, allowing for more targeted adaptation and mitigation.

  10. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are embedded in strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS)-based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection, leading to more accurate estimation of wind speed and thus a better assessment of the windshear hazard.
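The LMS weight update at the heart of such an adaptive clutter filter is simple to sketch. The following is a minimal, generic LMS implementation in Python/NumPy, not the dissertation's code; the tap count and step size are illustrative assumptions.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """Least Mean Square adaptive filter (generic sketch).

    x : reference input (e.g., clutter-correlated samples)
    d : primary input (radar return containing clutter)
    Returns the filter output y and the error e = d - y; after
    convergence, e is the clutter-suppressed residual.
    """
    n = len(x)
    w = np.zeros(n_taps)
    y = np.zeros(n)
    e = np.zeros(n)
    for k in range(n_taps, n):
        u = x[k - n_taps:k][::-1]   # most recent reference samples first
        y[k] = w @ u                # current filter output
        e[k] = d[k] - y[k]          # estimation error
        w += 2.0 * mu * e[k] * u    # steepest-descent weight update
    return y, e
```

In a clutter-rejection setting the reference channel carries a clutter-correlated signal, so the error output retains the weather return while the adapted filter cancels the clutter component.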

  11. Asymmetric Eyewall Vertical Motion in a High-Resolution Simulation of Hurricane Bonnie (1998)

    NASA Technical Reports Server (NTRS)

    Braun, Scott A.; Montgomery, Michael T.; Pu, Zhao-Xia

    2003-01-01

    This study examines a high-resolution simulation of Hurricane Bonnie. Results from the simulation will be compared to the conceptual model of Heymsfield et al. (2001) to determine the extent to which this conceptual model explains vertical motions and precipitation growth in the eyewall.

  12. What have we learned from the German consortium project STORM aiming at high-resolution climate simulations?

    NASA Astrophysics Data System (ADS)

    von Storch, Jin-Song

    2014-05-01

The German consortium STORM was built to explore high-resolution climate simulations using the high-performance computer installed at the German Climate Computing Center (DKRZ). One of the primary goals is to quantify the effect of unresolved (and parametrized) processes on climate sensitivity. We use ECHAM6/MPIOM, the coupled atmosphere-ocean model developed at the Max Planck Institute for Meteorology. The resolution is T255L95 for the atmosphere and 1/10 degree with 80 vertical levels for the ocean. We discuss results of stand-alone runs, i.e. an ocean-only simulation driven by the NCEP/NCAR reanalysis and an atmosphere-only AMIP-type simulation. Increasing resolution leads to a redistribution of biases, even though some improvements, both in the atmosphere and in the ocean, can clearly be attributed to the increase in resolution. We also present new insights into ocean meso-scale eddies, in particular their effects on the ocean's energetics. Finally, we discuss the status and problems of the coupled high-resolution runs.

  13. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1999-01-01

Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1". The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than what can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have successfully used adaptive optics on a 4-m class telescope to obtain 0.1" resolution images of solar system objects in the far red and near infrared (0.7-2.5 microns), at wavelengths which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential.

  14. Supersampling multiframe blind deconvolution resolution enhancement of adaptive-optics-compensated imagery of LEO satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2000-10-01

A post-processing methodology is described for reconstructing undersampled image sequences with randomly varying blur, which can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-compensated imagery taken by the Starfire Optical Range 3.5-meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques which includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptively compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss which occurs when imaging in wide-FOV modes.

  15. Evaluating hourly rainfall characteristics over the U.S. Great Plains in dynamically downscaled climate model simulations using NASA-Unified WRF

    NASA Astrophysics Data System (ADS)

    Lee, Huikyo; Waliser, Duane E.; Ferraro, Robert; Iguchi, Takamichi; Peters-Lidard, Christa D.; Tian, Baijun; Loikith, Paul C.; Wright, Daniel B.

    2017-07-01

    Accurate simulation of extreme precipitation events remains a challenge in climate models. This study utilizes hourly precipitation data from ground stations and satellite instruments to evaluate rainfall characteristics simulated by the NASA-Unified Weather Research and Forecasting (NU-WRF) regional climate model at horizontal resolutions of 4, 12, and 24 km over the Great Plains of the United States. We also examined the sensitivity of the simulated precipitation to different spectral nudging approaches and the cumulus parameterizations. The rainfall characteristics in the observations and simulations were defined as an hourly diurnal cycle of precipitation and a joint probability distribution function (JPDF) between duration and peak intensity of precipitation events over the Great Plains in summer. We calculated a JPDF for each data set and the overlapping area between observed and simulated JPDFs to measure the similarity between the two JPDFs. Comparison of the diurnal precipitation cycles between observations and simulations does not reveal the added value of high-resolution simulations. However, the performance of NU-WRF simulations measured by the JPDF metric strongly depends on horizontal resolution. The simulation with the highest resolution of 4 km shows the best agreement with the observations in simulating duration and intensity of wet spells. Spectral nudging does not affect the JPDF significantly. The effect of cumulus parameterizations on the JPDFs is considerable but smaller than that of horizontal resolution. The simulations with lower resolutions of 12 and 24 km show reasonable agreement but only with the high-resolution observational data that are aggregated into coarse resolution and spatially averaged.
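The JPDF similarity measure described above, the overlapping area between observed and simulated joint distributions of event duration and peak intensity, can be sketched as the sum of bin-wise minima of two normalized 2-D histograms. The function below is an illustration of that idea, not the authors' implementation; the bin counts and shared ranges are assumptions.

```python
import numpy as np

def jpdf_overlap(dur_obs, int_obs, dur_sim, int_sim, bins=(10, 10), ranges=None):
    """Overlap area between two joint PDFs of event duration and peak intensity.

    Each JPDF is a normalized 2-D histogram on a common grid; the overlap is
    the sum of the bin-wise minima, from 0 (disjoint) to 1 (identical).
    """
    if ranges is None:
        # Shared bin edges so the two histograms are directly comparable
        ranges = [[min(dur_obs.min(), dur_sim.min()), max(dur_obs.max(), dur_sim.max())],
                  [min(int_obs.min(), int_sim.min()), max(int_obs.max(), int_sim.max())]]
    h_obs, _, _ = np.histogram2d(dur_obs, int_obs, bins=bins, range=ranges)
    h_sim, _, _ = np.histogram2d(dur_sim, int_sim, bins=bins, range=ranges)
    h_obs /= h_obs.sum()
    h_sim /= h_sim.sum()
    return np.minimum(h_obs, h_sim).sum()
```

A score near 1 means the model reproduces the observed duration-intensity structure of wet spells; a score near 0 means the distributions barely intersect.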

  16. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    NASA Astrophysics Data System (ADS)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

Over the past few decades, many process-based crop models have been developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data-intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and to analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM-driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and maximum temperature), beyond which the yields were negatively affected. These results are now being used for further regional-scale yield analysis, as the aforementioned framework is adaptable to multiple geographic regions and crop types.

  17. Real-time turbulence profiling with a pair of laser guide star Shack-Hartmann wavefront sensors for wide-field adaptive optics systems on large to extremely large telescopes.

    PubMed

    Gilles, L; Ellerbroek, B L

    2010-11-01

    Real-time turbulence profiling is necessary to tune tomographic wavefront reconstruction algorithms for wide-field adaptive optics (AO) systems on large to extremely large telescopes, and to perform a variety of image post-processing tasks involving point-spread function reconstruction. This paper describes a computationally efficient and accurate numerical technique inspired by the slope detection and ranging (SLODAR) method to perform this task in real time from properly selected Shack-Hartmann wavefront sensor measurements accumulated over a few hundred frames from a pair of laser guide stars, thus eliminating the need for an additional instrument. The algorithm is introduced, followed by a theoretical influence function analysis illustrating its impulse response to high-resolution turbulence profiles. Finally, its performance is assessed in the context of the Thirty Meter Telescope multi-conjugate adaptive optics system via end-to-end wave optics Monte Carlo simulations.
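The SLODAR-style estimate rests on cross-correlating slope measurements from the two laser guide star wavefront sensors: the subaperture lag of each covariance peak maps to the altitude of a turbulent layer through the guide-star separation geometry. A minimal sketch of that cross-covariance step (not the paper's real-time algorithm; the array shapes and covariance convention are assumptions):

```python
import numpy as np

def slodar_covariance(slopes_a, slopes_b):
    """Time-averaged slope cross-covariance versus subaperture lag.

    slopes_a, slopes_b : arrays of shape (n_frames, n_sub) holding the
    x-slopes per subaperture along the baseline joining the two guide
    stars. Returns (lags, cov); each covariance peak's lag corresponds
    to a turbulent layer altitude (SLODAR principle).
    """
    a = slopes_a - slopes_a.mean(axis=0)   # remove per-subaperture mean
    b = slopes_b - slopes_b.mean(axis=0)
    n_sub = a.shape[1]
    lags = np.arange(-(n_sub - 1), n_sub)
    cov = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            cov[i] = np.mean(a[:, lag:] * b[:, :n_sub - lag])
        else:
            cov[i] = np.mean(a[:, :n_sub + lag] * b[:, -lag:])
    return lags, cov
```

In practice the covariance is accumulated over a few hundred frames, as the abstract notes, and the peak heights (after deconvolving the impulse response) give the relative layer strengths.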

  18. Effects of Drake Passage on a strongly eddying global ocean

    NASA Astrophysics Data System (ADS)

    Viebahn, Jan; von der Heydt, Anna S.; Dijkstra, Henk A.

    2015-04-01

During the past 65 million years (Ma), Earth's climate has undergone a major change from warm 'greenhouse' to colder 'icehouse' conditions with extensive ice sheets in the polar regions of both hemispheres. The Eocene-Oligocene (~34 Ma) and Oligocene-Miocene (~23 Ma) boundaries reflect major transitions in Cenozoic global climate change. Proposed mechanisms of these transitions include reorganization of ocean circulation due to critical gateway opening/deepening, changes in atmospheric CO2 concentration, and feedback mechanisms related to land-ice formation. Drake Passage (DP) is an intensively studied gateway because it plays a central role in closing the transport pathways of heat and chemicals in the ocean. The climate response to a closed DP has been explored with a variety of general circulation models; however, all of these models employ low model-grid resolutions such that the effects of subgrid-scale fluctuations ('eddies') are parameterized. We present results of the first high-resolution (0.1° horizontally) realistic global ocean model simulation with a closed DP in which the eddy field is largely resolved. The simulation extends over more than 200 years, such that the strong transient adjustment process is passed and a near-equilibrium ocean state is reached. The effects of DP are diagnosed by comparing with both an open-DP high-resolution control simulation (of the same length) and corresponding low-resolution simulations. By focussing on the heat/tracer transports we demonstrate that the results are twofold: considering spatially integrated transports, the overall response to a closed DP is well captured by low-resolution simulations. However, looking at the actual spatial distributions, drastic differences appear between far-scattered high-resolution and laminar-uniform low-resolution fields. We conclude that sparse and highly localized tracer proxy observations have to be interpreted carefully with the help of high-resolution model simulations.

  19. Retrieved Products from Simulated Hyperspectral Observations of a Hurricane

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

    2015-01-01

We demonstrate via Observing System Simulation Experiments (OSSEs) the potential utility of flying high spatial resolution AIRS-class IR sounders on future LEO and GEO missions. The study simulates and analyzes radiances for 3 sounders with AIRS spectral and radiometric properties on different orbits with different spatial resolutions: 1) a control run with 13 km AIRS spatial resolution at nadir in LEO, in the Aqua orbit; 2) a 2 km spatial resolution LEO sounder at nadir (ARIES); 3) a 5 km spatial resolution sounder in GEO orbit, with radiances simulated every 72 minutes.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  1. Initial phantom study comparing image quality in computed tomography using adaptive statistical iterative reconstruction and new adaptive statistical iterative reconstruction v.

    PubMed

    Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju

    2015-01-01

The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing its image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed in 7 different scan settings: FBP, and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). The image noise was measured in the first study using a body phantom. The CNR was measured in the second study using a contrast phantom, and the spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstructed image scan settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis of the first and second studies showed that the images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis of the third study also showed that the images reconstructed using ASIR-V had significantly improved spatial resolution compared with those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to reduce the radiation dose further without compromising image quality.
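The CNR figure of merit used in such phantom comparisons is computed from two regions of interest. A minimal sketch assuming the common |mean difference| / background-noise convention; the paper may use a slightly different definition:

```python
import numpy as np

def contrast_to_noise_ratio(roi_object, roi_background):
    """CNR between an object ROI and a background ROI of a phantom image.

    Defined here (one common convention, assumed) as
    |mean_object - mean_background| / std_background,
    where the background standard deviation serves as the noise estimate.
    """
    return abs(np.mean(roi_object) - np.mean(roi_background)) / np.std(roi_background)
```

Higher CNR at a fixed tube current, as reported for ASIR-V versus FBP and ASIR, indicates that low-contrast objects remain detectable at lower dose.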

  2. A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations

    NASA Technical Reports Server (NTRS)

Dyson, Roger W.; Goodrich, John W.

    2000-01-01

Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving the varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined, and remedies are provided that effectively increase the accuracy of the MESA schemes while using current computer technology.

  3. Sensitivity studies of high-resolution RegCM3 simulations of precipitation over the European Alps: the effect of lateral boundary conditions and domain size

    NASA Astrophysics Data System (ADS)

    Nadeem, Imran; Formayer, Herbert

    2016-11-01

A suite of high-resolution (10 km) simulations were performed with the International Centre for Theoretical Physics (ICTP) Regional Climate Model (RegCM3) to study the effect of various lateral boundary conditions (LBCs), domain size, and intermediate domains on simulated precipitation over the Great Alpine Region. The boundary conditions used were the ECMWF ERA-Interim Reanalysis with grid spacing 0.75∘, the ECMWF ERA-40 Reanalysis with grid spacings 1.125∘ and 2.5∘, and finally the 2.5∘ NCEP/DOE AMIP-II Reanalysis. The model was run in one-way nesting mode with direct nesting of the high-resolution RCM (horizontal grid spacing Δx = 10 km) within the driving reanalysis, with one intermediate-resolution nest (Δx = 30 km) between the high-resolution RCM and the reanalysis forcings, and also with two intermediate-resolution nests (Δx = 90 km and Δx = 30 km) for simulations forced with LBCs of 2.5∘ resolution. Additionally, the impact of domain size was investigated. The results of multiple simulations were evaluated using different analysis techniques, e.g., the Taylor diagram and a newly defined statistical parameter, called the Skill-Score, for evaluation of daily precipitation simulated by the model. It has been found that domain size has the major impact on the results, while different resolutions and versions of the LBCs, e.g., 1.125∘ ERA-40 and 0.75∘ ERA-Interim, do not produce significantly different results. It is also noticed that direct nesting with a reasonable domain size seems to be the most adequate method for reproducing precipitation over complex terrain, while introducing intermediate-resolution nests seems to deteriorate the results.

  4. Unraveling the martian water cycle with high-resolution global climate simulations

    NASA Astrophysics Data System (ADS)

    Pottier, Alizée; Forget, François; Montmessin, Franck; Navarro, Thomas; Spiga, Aymeric; Millour, Ehouarn; Szantai, André; Madeleine, Jean-Baptiste

    2017-07-01

Global climate modeling of the Mars water cycle is usually performed at relatively coarse resolution (200 - 300 km), which may not be sufficient to properly represent the impact of waves, fronts, and topographic effects on the detailed structure of clouds and surface ice deposits. Here, we present new numerical simulations of the annual water cycle performed at a resolution of 1° × 1° (∼ 60 km in latitude). The model includes the radiative effects of clouds, whose influence on the thermal structure and atmospheric dynamics is significant; thus we also examine simulations with inactive clouds to distinguish the direct impact of resolution on circulation and winds from the indirect impact of resolution via water ice clouds. To first order, we find that the high resolution does not dramatically change the behavior of the system, and that simulations performed at ∼ 200 km resolution capture well the behavior of the simulated water cycle and Mars climate. Nevertheless, a detailed comparison between high- and low-resolution simulations, with reference to observations, reveals several significant changes that impact our understanding of the water cycle active today on Mars. The key northern cap edge dynamics are affected by an increase in baroclinic wave strength, with a complication of northern summer dynamics. South polar frost deposition is modified, with a westward longitudinal shift, since southern dynamics are also influenced. Baroclinic wave mode transitions are observed. New transient phenomena appear, like spiral and streak clouds, already documented in the observations. Atmospheric circulation cells in the polar region exhibit a large variability and are finely structured, with slope winds. Most modeled phenomena affected by high resolution give a picture of a more turbulent planet, inducing further variability. This is challenging for long-period climate studies.

  5. A method for generating high resolution satellite image time series

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-10-01

There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications. It remains a challenge, however, to improve spatial resolution and temporal frequency simultaneously, due to the technical limits of current satellite observation systems. Years of R&D effort have led to successes in roughly two areas. On one hand, super-resolution, pan-sharpening, and similar methods can effectively enhance spatial resolution and generate good visual effects, but they hardly preserve spectral signatures and therefore have limited analytical value. On the other hand, temporal interpolation is a straightforward way to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high-resolution time series data by combining low-resolution time series data with only a very small number of high-resolution images. Our method starts with a pair of high- and low-resolution data sets, and then performs a spatial registration by introducing an LDA model to map high- and low-resolution pixels to each other. Afterwards, temporal change information is captured through a comparison of the low-resolution time series data, projected onto the high-resolution data plane, and assigned to each high-resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally the simulated high-resolution data is generated. A preliminary experiment shows that our method can simulate high-resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low-resolution images only, so that the usage of costly high-resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forestry, land use investigation, environmental, and other applications.
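The core projection step, applying the temporal change of each coarse pixel to the fine pixels it covers, can be sketched as follows. This is a simplified illustration of the idea, not the paper's LDA-based algorithm; applying a uniform ratio change per coarse pixel is an assumption standing in for the per-class change patterns the paper describes.

```python
import numpy as np

def simulate_high_res(hr_t0, lr_t0, lr_t1, scale):
    """Project low-resolution temporal change onto a high-resolution image.

    hr_t0 : high-resolution image at time t0
    lr_t0, lr_t1 : co-registered low-resolution images at t0 and t1
    scale : integer ratio of fine to coarse pixel size
    Each coarse pixel's relative change between t0 and t1 is applied
    uniformly to the scale x scale fine pixels it covers.
    """
    change = lr_t1 / np.maximum(lr_t0, 1e-12)             # per-coarse-pixel ratio
    change_hr = np.kron(change, np.ones((scale, scale)))  # upsample to fine grid
    return hr_t0 * change_hr
```

The full method would replace the uniform ratio with change patterns learned per land-cover type from the registration step.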

  6. Near infra-red astronomy with adaptive optics and laser guide stars at the Keck Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, C.E.; Gavel, D.T.; Olivier, S.S.

    1995-08-03

A laser guide star adaptive optics system is being built for the W. M. Keck Observatory's 10-meter Keck II telescope. Two new near infra-red instruments will be used with this system: a high-resolution camera (NIRC 2) and an echelle spectrometer (NIRSPEC). The authors describe the expected capabilities of these instruments for high-resolution astronomy, using adaptive optics with either a natural star or a sodium-layer laser guide star as a reference. They compare the expected performance of these planned Keck adaptive optics instruments with that predicted for the NICMOS near infra-red camera, which is scheduled to be installed on the Hubble Space Telescope in 1997.

  7. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    Due to the development of ground-based, large-aperture solar telescopes with adaptive optics (AO) resulting in increasing resolving ability, more accurate sunspot identifications and characterizations are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to the solar high-resolution TiO 705.7-nm images taken by the 151-element AO system and Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method could also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).

  8. High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6

    NASA Astrophysics Data System (ADS)

    Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song

    2016-11-01

    Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. 
The set of HighResMIP experiments is divided into three tiers, consisting of atmosphere-only and coupled runs spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, and the connection with the other CMIP6-endorsed MIPs, as well as with the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the broad CMIP6 questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Programme (WCRP) Grand Challenges.

  9. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.

    2017-12-01

    Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.
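
    The idea of learning parameters by matching low-order statistics can be illustrated with a toy example (entirely hypothetical, not the authors' framework): calibrating one parameter of a simple linear map so that its long-run mean matches an "observed" mean, in the same spirit as tuning a parameterization against time-averaged observations.

```python
# Toy sketch (not the paper's algorithm): learn the parameter c of the map
# x_{n+1} = a*x_n + c by matching its simulated long-run mean to an
# "observed" mean, mimicking calibration against low-order statistics.

def long_run_mean(a, c, x0=0.0, n=10_000):
    """Time-average of the map x_{n+1} = a*x_n + c (requires |a| < 1)."""
    x, total = x0, 0.0
    for _ in range(n):
        x = a * x + c
        total += x
    return total / n

def calibrate_c(a, observed_mean, lo=-10.0, hi=10.0, iters=60):
    """Bisect on c so the simulated mean matches observed_mean.
    Uses the fact that the stationary mean c/(1-a) increases with c."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if long_run_mean(a, mid) < observed_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_true, c_true = 0.8, 1.0              # "truth" generating the observations
obs_mean = long_run_mean(a_true, c_true)
c_est = calibrate_c(a_true, obs_mean)  # recovers c_true from the statistic
```

    In a real ESM the "statistic" would be a field of time-mean observables and the search would be a Bayesian or ensemble method rather than bisection, but the structure (simulate, compare low-order statistics, update parameters) is the same.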

  10. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device that is also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation of our prototype is view crosstalk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, together with analysis programs that investigate the view mapping and the amount of parallax crosstalk on the sensor on a per-pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  11. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis of programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.

  12. Hybrid particle-continuum simulations coupling Brownian dynamics and local dynamic density functional theory.

    PubMed

    Qi, Shuanhu; Schmid, Friederike

    2017-11-08

    We present a multiscale hybrid particle-field scheme for the simulation of relaxation and diffusion behavior of soft condensed matter systems. It combines particle-based Brownian dynamics and field-based local dynamics in an adaptive sense such that particles can switch their level of resolution on the fly. The switching of resolution is controlled by a tuning function which can be chosen at will according to the geometry of the system. As an application, the hybrid scheme is used to study the kinetics of interfacial broadening of a polymer blend, and is validated by comparing the results to the predictions from pure Brownian dynamics and pure local dynamics calculations.
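
    The switching of resolution described above can be sketched with a simple position-dependent weight (an illustrative choice; the paper's actual tuning function may differ): 1 in the particle region, 0 in the field region, with a smooth ramp across the transition zone.

```python
import math

# Illustrative sketch (assumed form, not necessarily the paper's): a smooth
# tuning function w(x) in [0, 1] that controls a particle's level of
# resolution: w = 1 means full particle resolution, w = 0 means field
# resolution, with a cosine ramp across the transition zone so that
# particles can switch resolution continuously as they move.

def tuning_function(x, x_lo, x_hi):
    """1 for x <= x_lo (particle region), 0 for x >= x_hi (field region),
    smooth cosine interpolation in between."""
    if x <= x_lo:
        return 1.0
    if x >= x_hi:
        return 0.0
    t = (x - x_lo) / (x_hi - x_lo)
    return 0.5 * (1.0 + math.cos(math.pi * t))
```

    Because the weight is continuous and differentiable at interior points, forces blended with it vary smoothly as a particle crosses the interface, which is what allows on-the-fly switching without discontinuities.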

  13. First on-sky demonstration of the piezoelectric adaptive secondary mirror.

    PubMed

    Guo, Youming; Zhang, Ang; Fan, Xinlong; Rao, Changhui; Wei, Ling; Xian, Hao; Wei, Kai; Zhang, Xiaojun; Guan, Chunlin; Li, Min; Zhou, Luchun; Jin, Kai; Zhang, Junbo; Deng, Jijiang; Zhou, Longfeng; Chen, Hao; Zhang, Xuejun; Zhang, Yudong

    2016-12-15

    We propose using a piezoelectric adaptive secondary mirror (PASM) in medium-sized adaptive telescopes with a 2-4 m aperture to simplify structure and control, by utilizing piezoelectric actuators in place of the voice-coil actuators of conventional adaptive secondary mirrors. A closed-loop experimental setup was built for the on-sky demonstration of the 73-element PASM developed by our laboratory. In this Letter, the PASM and the closed-loop adaptive optics system are introduced. High-resolution stellar images were obtained by using the PASM to correct high-order wavefront errors in May 2016. To the best of our knowledge, this is the first successful on-sky demonstration of a PASM. The results show that with the PASM as the deformable mirror, the angular resolution of the 1.8 m telescope can be effectively improved.

  14. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, guided by a combination of short (<10 days) and longer (~1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify the sensitivity of model features to parameter changes. The CAPT tests have been found effective in numerous previous studies at identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "home in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, using just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. 
An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context, along with more detailed assessment once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.

  15. Tracking the time-varying cortical connectivity patterns by adaptive multivariate estimators.

    PubMed

    Astolfi, L; Cincotti, F; Mattia, D; De Vico Fallani, F; Tocci, A; Colosimo, A; Salinari, S; Marciani, M G; Hesse, W; Witte, H; Ursino, M; Zavaglia, M; Babiloni, F

    2008-03-01

    The directed transfer function (DTF) and the partial directed coherence (PDC) are frequency-domain estimators able to describe interactions between cortical areas in terms of Granger causality. However, the classical estimation of these methods is based on multivariate autoregressive (MVAR) modelling of time series, which requires stationarity of the signals; transient pathways of information transfer thus remain hidden. The objective of this study is to test a time-varying multivariate method for the estimation of rapidly changing connectivity relationships between cortical areas of the human brain, based on DTF/PDC and on the use of adaptive MVAR modelling (AMVAR), and to apply it to a set of real high-resolution EEG data. This approach allows the observation of rapidly changing influences between cortical areas during the execution of a task. The simulation results indicate that time-varying DTF and PDC are able to estimate correctly the imposed connectivity patterns under reasonable operative conditions of signal-to-noise ratio (SNR) and number of trials; an SNR of 5 and at least 20 trials provide good accuracy in the estimation. After testing the method in the simulation study, we provide an application to the cortical estimates obtained from high-resolution EEG data recorded from a group of healthy subjects during a combined foot-lips movement, and present the time-varying connectivity patterns resulting from the application of both DTF and PDC. Two different cortical networks were detected with the proposed methods, one constant across the task and the other evolving during the preparation of the joint movement.
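
    For a fixed MVAR model the DTF has a closed form: with A(f) = I − A₁e^{−i2πf}, H(f) = A(f)^{−1}, the normalized DTF is |H_ij(f)| divided by the row norm of H. A minimal bivariate sketch (the time-varying version in the paper replaces the fixed coefficients with adaptively estimated ones at each time point):

```python
import cmath

# Minimal sketch of the (non-adaptive) DTF for a bivariate MVAR(1) model
# x_t = A1 @ x_{t-1} + noise. DTF_ij measures the influence of channel j
# on channel i at normalized frequency f.

def dtf_2ch(a1, f):
    """Normalized DTF matrix at normalized frequency f in [0, 0.5].
    a1: 2x2 nested list of AR(1) coefficients."""
    z = cmath.exp(-2j * cmath.pi * f)
    # A(f) = I - A1 * z
    A = [[1 - a1[0][0] * z, -a1[0][1] * z],
         [-a1[1][0] * z, 1 - a1[1][1] * z]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    # H(f) = A(f)^-1 (explicit 2x2 inverse)
    H = [[A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det, A[0][0] / det]]
    # Row-normalize: DTF_ij = |H_ij| / sqrt(sum_k |H_ik|^2)
    out = []
    for i in range(2):
        norm = (abs(H[i][0]) ** 2 + abs(H[i][1]) ** 2) ** 0.5
        out.append([abs(H[i][j]) / norm for j in range(2)])
    return out

# Channel 1 drives channel 2 but not vice versa:
# expect DTF[1][0] > 0 and DTF[0][1] = 0.
d = dtf_2ch([[0.5, 0.0], [0.4, 0.3]], 0.1)
```

    With the upper-right AR coefficient set to zero, H(f) is lower triangular, so the estimator correctly reports no influence of channel 2 on channel 1.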

  16. KINETIC ENERGY FROM SUPERNOVA FEEDBACK IN HIGH-RESOLUTION GALAXY SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Christine M.; Bryan, Greg L.; Ostriker, Jeremiah P.

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells' mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (∼10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10^9 M_⊙ dwarf halo. We find that in high-density media (≳50 cm^-3) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.
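
    The bookkeeping behind such an injection can be sketched schematically (this is an assumed simplification, not the paper's exact prescription): assign outward radial velocities to a symmetric shell of cells so that the summed kinetic energy equals f_kin · E_SN while net momentum cancels.

```python
import math

# Schematic sketch (hypothetical cells and units, not the paper's exact
# method): deposit a kinetic-energy fraction f_kin of the SN energy e_sn
# into a symmetric shell of grid cells by assigning outward radial
# velocities, so total kinetic energy matches f_kin * e_sn and net
# momentum cancels by symmetry.

def inject_kinetic(cells, e_sn, f_kin):
    """cells: list of dicts with 'mass' and a unit direction 'dir' (dx, dy).
    Sets each cell's velocity; returns the common speed and the cells."""
    m_shell = sum(c["mass"] for c in cells)
    # sum over cells of (1/2) m v^2 = f_kin * e_sn  =>  v as below
    v = math.sqrt(2.0 * f_kin * e_sn / m_shell)
    for c in cells:
        c["vel"] = (v * c["dir"][0], v * c["dir"][1])
    return v, cells

# Four equal-mass cells in a symmetric cross around the explosion site
cells = [{"mass": 2.0, "dir": d} for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
v, cells = inject_kinetic(cells, e_sn=1.0e51, f_kin=0.01)
ke = sum(0.5 * c["mass"] * (c["vel"][0] ** 2 + c["vel"][1] ** 2)
         for c in cells)
```

    The paper's scheme additionally handles unequal cell masses, thermal/kinetic partitioning, and the resolution-dependent scaling of f_kin; this sketch only shows the energy-conserving velocity assignment.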

  17. Sensitivity study of the UHI in the city of Szeged (Hungary) to different offline simulation set-ups using SURFEX/TEB

    NASA Astrophysics Data System (ADS)

    Zsebeházi, Gabriella; Hamdi, Rafiq; Szépszó, Gabriella

    2015-04-01

    Urbanised areas modify the local climate due to the physical properties of urban surfaces and their morphology. The urban effect on local climate and regional climate change interact, resulting in more serious climate change impacts (e.g., more heatwave events) over cities. The majority of people now live in cities and are thus affected by these enhanced changes; targeted adaptation and mitigation strategies in cities are therefore of high importance. Regional climate models (RCMs) are suitable tools for estimating the future climate change of an area in detail, although most of them cannot represent urban climate characteristics, because their spatial resolution is too coarse (in general 10-50 km) and they do not use a specific urban parametrization over urbanised areas. To describe the interactions between the urban surface and the atmosphere at a spatial scale of a few km, we use the externalised SURFEX land surface scheme, including the TEB urban canopy model, in offline mode (i.e. the interaction is only one-way). The driving atmospheric conditions highly influence the impact results, so the quality of these data is essential. The overall aim of our research is to understand the behaviour of the impact model and its interaction with the forcing coming from the atmospheric model in order to reduce the biases, which can lead to qualified impact studies of climate change over urban areas. As a preliminary test, several short (few-day) 1 km resolution simulations are carried out over a domain covering the Hungarian town of Szeged, located in the flat southern part of Hungary. The atmospheric forcing is provided by ALARO (a new version of the limited-area model of the ARPEGE-IFS system running at the Royal Meteorological Institute of Belgium) applied over Hungary. The focal point of our investigations is the ability of SURFEX to simulate the diurnal evolution and spatial pattern of the urban heat island (UHI). 
Different offline simulation set-ups have been tested: 1. atmospheric forcing at 4 km and 10 km resolution; 2. atmospheric forcing prepared with and without TEB; 3. coupling of forcings at 3 h and 1 h temporal frequency; 4. different forcing levels at 50 m, 40 m, 30 m, 20 m and 10 m; 5. different computation methods for the 2 m temperature using the CANOPY, Paulson, and Geleyn schemes. Finally, some outcomes are also compared to the results obtained using the ALADIN-Climate RCM (adapted and used at the Hungarian Meteorological Service at 10 km resolution) as the driving atmospheric model. The presentation shows the results and main conclusions of our studies.

  18. Above-real-time training (ARTT) improves transfer to a simulated flight control task.

    PubMed

    Donderi, D C; Niall, Keith K; Fish, Karyn; Goldstein, Benjamin

    2012-06-01

    The aim of this study was to measure the effects of above-real-time training (ARTT) speed and screen resolution on a simulated flight control task. ARTT has been shown to improve transfer to the criterion task in some military simulation experiments. We tested training speed and screen resolution in a project, sponsored by Defence Research and Development Canada, to develop components for prototype air mission simulators. For this study, 54 participants used a single-screen PC-based flight simulation program to learn to chase and catch an F-18A fighter jet with another F-18A while controlling the chase aircraft with a throttle and side-stick controller. Screen resolution was varied between participants, and training speed was varied factorially across two sessions within participants. Pretest and posttest trials were at high resolution and criterion (900 knots) speed. Posttest performance was best with high-screen-resolution training and when one ARTT session was followed by a session of criterion-speed training. ARTT followed by criterion training improves performance on a visual-motor coordination task. We think that ARTT influences known facilitators of transfer, including similarity to the criterion task and contextual interference. Use high screen resolution, start with ARTT, and finish with criterion-speed training when preparing a mission simulation.

  19. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas

    2018-04-01

    In the framework of developing a global modeling system that can facilitate modeling studies on the Arctic Ocean and high- to midlatitude linkages, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. The most pronounced improvement is in the Arctic intermediate layer, in terms of both Atlantic Water (AW) mean state and variability. The deepening and thickening bias of the AW layer, a common issue found in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of CAA throughflow influences not only the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we note that other efforts, including improved parameterizations, are still required.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fuyu; Collins, William D.; Wehner, Michael F.

    High-resolution climate models have been shown to improve the statistics of tropical storms and hurricanes compared to low-resolution models. The impact of increasing horizontal resolution on tropical storm simulation is investigated exclusively using a series of Atmospheric Global Climate Model (AGCM) runs with idealized aquaplanet steady-state boundary conditions and a fixed operational storm-tracking algorithm. The results show that increasing horizontal resolution helps to detect more hurricanes, simulate stronger extreme rainfall, and better capture storm structures in the models. However, increasing model resolution does not necessarily produce stronger hurricanes in terms of maximum wind speed, minimum sea level pressure, and mean precipitation, as the increased number of storms simulated by high-resolution models is mainly associated with weaker storms. The spatial scale at which the analyses are conducted appears to exert more important control on these meteorological statistics than the horizontal resolution of the model grid. When the simulations are analyzed on common low-resolution grids, the statistics of the hurricanes, particularly the hurricane counts, show reduced sensitivity to the horizontal grid resolution and signs of scale invariance.

  1. Changes in Moisture Flux Over the Tibetan Plateau During 1979-2011: Insights from a High Resolution Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin

    2015-05-01

    Net precipitation (precipitation minus evapotranspiration, P-E) changes from a high-resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibetan Plateau (TP) and compared to the Global Land Data Assimilation System (GLDAS) product. The mechanism behind the P-E changes is explored by decomposing the column-integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. The high-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis; improvement in simulating precipitation changes at high elevations contributes most to the improved P-E changes. The high-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis reveals contrasting convergence/divergence changes between the northwestern and southeastern TP, with feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.
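
    The decomposition named above can be sketched in its simplest scalar form (with made-up numbers; the paper works with column-integrated vector fluxes): the change in the time-mean moisture flux q·u between two periods splits into a thermodynamic term (mean wind times humidity change), a dynamic term (mean humidity times wind change), a nonlinear cross term, and a transient-eddy term (change in the q′u′ covariance).

```python
# Schematic scalar decomposition of a change in mean moisture flux,
# following the general approach described above (hypothetical data).

def mean(xs):
    return sum(xs) / len(xs)

def decompose(q1, u1, q2, u2):
    """q1, u1: humidity/wind samples in period 1; q2, u2: period 2."""
    qb1, ub1, qb2, ub2 = mean(q1), mean(u1), mean(q2), mean(u2)
    cov1 = mean([(q - qb1) * (u - ub1) for q, u in zip(q1, u1)])
    cov2 = mean([(q - qb2) * (u - ub2) for q, u in zip(q2, u2)])
    total = (qb2 * ub2 + cov2) - (qb1 * ub1 + cov1)  # change in mean(q*u)
    thermo = ub1 * (qb2 - qb1)           # circulation fixed, humidity changes
    dyn = qb1 * (ub2 - ub1)              # humidity fixed, circulation changes
    nonlin = (qb2 - qb1) * (ub2 - ub1)   # joint change in the means
    eddy = cov2 - cov1                   # transient-eddy contribution
    return total, thermo, dyn, nonlin, eddy

# Hypothetical humidity (g/kg) and wind (m/s) samples for two periods
q1, u1 = [8.0, 9.0, 10.0], [2.0, 3.0, 4.0]
q2, u2 = [9.0, 10.5, 12.0], [2.5, 3.0, 3.5]
total, thermo, dyn, nonlin, eddy = decompose(q1, u1, q2, u2)
```

    The four components sum to the total change exactly, because mean(q·u) = q̄·ū + cov(q, u) by construction; attributing the change then reduces to comparing the magnitudes of the terms.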

  2. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms, each representing a DAS algebra, and it is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. At a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
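
    The DAS/DMAS contrast can be shown on one set of already-aligned channel samples (a minimal sketch; MVB-DMAS further replaces the DAS terms inside the expanded DMAS sum with minimum-variance weights, which is not reproduced here):

```python
import math

# Minimal sketch of DAS vs. DMAS on delayed (already aligned) samples.
# DMAS sums signed square roots of pairwise products, which suppresses
# uncorrelated (off-focus) contributions relative to coherent ones.

def das(samples):
    """Delay-and-sum: plain sum of the aligned samples."""
    return sum(samples)

def dmas(samples):
    """Delay-multiply-and-sum over all channel pairs i < j."""
    out = 0.0
    n = len(samples)
    for i in range(n - 1):
        for j in range(i + 1, n):
            p = samples[i] * samples[j]
            out += math.copysign(math.sqrt(abs(p)), p)
    return out

coherent = [1.0, 1.0, 1.0, 1.0]      # on-focus: all channels agree
incoherent = [1.0, -1.0, 1.0, -1.0]  # off-focus: alternating signs
```

    For M channels the coherent DMAS output grows like the M(M−1)/2 pair count rather than M, which is the source of the contrast (and sidelobe) improvement over DAS.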

  3. Turbulence Resolving Flow Simulations of a Francis Turbine in Part Load using Highly Parallel CFD Simulations

    NASA Astrophysics Data System (ADS)

    Krappel, Timo; Riedelbauch, Stefan; Jester-Zuerker, Roland; Jung, Alexander; Flurl, Benedikt; Unger, Friedeman; Galpin, Paul

    2016-11-01

    The operation of Francis turbines at part load causes high fluctuations and dynamic loads in the turbine and especially in the draft tube. At the hub of the runner outlet, a rotating vortex rope within a low-pressure zone arises and propagates into the draft tube cone. The investigated part-load operating point is at about 72% of the discharge at best efficiency. To reduce the possible influence of boundary conditions on the solution, a flow simulation of a complete Francis turbine is conducted, consisting of spiral case, stay and guide vanes, runner, and draft tube. As the flow has a strong swirling component at the chosen operating point, it is very challenging to accurately predict the flow, and in particular the flow losses, in the diffuser. The goal of this study is to reach a significantly better numerical prediction of this flow type, achieved through an improved resolution of small turbulent structures. Therefore, the Scale Adaptive Simulation SAS-SST turbulence model, a scale-resolving turbulence model, is applied and compared to the widely used RANS-SST turbulence model. The largest mesh contains 300 million elements, which achieves LES-like resolution throughout much of the computational domain. The simulations are evaluated in terms of the hydraulic losses in the machine, the velocity field, pressure oscillations in the draft tube, and visual comparisons of turbulent flow structures. A pre-release version of ANSYS CFX 17.0 is used in this paper, as this CFD solver scales in parallel up to several thousand cores for this application, which includes a transient rotor-stator interface to support the relative motion between the runner and the stationary portions of the water turbine.

  4. 3D ADAPTIVE MESH REFINEMENT SIMULATIONS OF THE GAS CLOUD G2 BORN WITHIN THE DISKS OF YOUNG STARS IN THE GALACTIC CENTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schartmann, M.; Ballone, A.; Burkert, A.

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility that the cloud originated within the disks of young stars, making use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and perform a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high-resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, and (3) a detailed comparison to the observed high-quality position-velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection, and the too-steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apocenter and well within the disks of young stars.

  5. Adaptive optics and interferometry

    NASA Technical Reports Server (NTRS)

    Beichman, Charles A.; Ridgway, Stephen

    1991-01-01

    Adaptive optics and interferometry, two techniques that will improve the limiting resolution of optical and infrared observations by factors of tens or even thousands, are discussed. The real-time adjustment of optical surfaces to compensate for wavefront distortions will improve image quality and increase sensitivity. The phased operation of multiple telescopes separated by large distances will make it possible to achieve very high angular resolution and precise positional measurements. Infrared and optical interferometers that will manipulate light beams and measure interference directly are considered. Angular resolutions of single telescopes will be limited to around 10 milliarcseconds even using the adaptive optics techniques. Interferometry would surpass this limit by a factor of 100 or more. Future telescope arrays with 100-m baselines (resolution of 2.5 milliarcseconds at a 1-micron wavelength) are also discussed.

  6. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high-order spectral methods.
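
    A common form of spectral error estimator (one illustrative choice among several; not necessarily the paper's exact form) fits an exponential decay to the highest modal coefficients and flags an element for refinement when the extrapolated truncation tail exceeds a tolerance:

```python
import math

# Illustrative spectral error estimator: fit |a_k| ~ C * exp(-sigma * k)
# to the last modal coefficients and estimate the truncation error as the
# extrapolated tail sum; refine when it exceeds a tolerance.

def refine_flag(coeffs, tol):
    """coeffs: modal coefficient magnitudes a_0..a_N (assumed decaying).
    Returns (needs_refinement, estimated_tail_error)."""
    n = len(coeffs) - 1
    k1, k2 = n - 2, n
    # decay rate from the last two sampled modes
    sigma = (math.log(coeffs[k1]) - math.log(coeffs[k2])) / (k2 - k1)
    c = coeffs[k2] * math.exp(sigma * k2)
    # tail = sum_{k > N} C e^{-sigma k} = C e^{-sigma (N+1)} / (1 - e^{-sigma})
    tail = c * math.exp(-sigma * (n + 1)) / (1.0 - math.exp(-sigma))
    return tail > tol, tail

smooth = [2.0 ** -k for k in range(8)]   # fast decay: well resolved
rough = [1.05 ** -k for k in range(8)]   # slow decay: needs refinement
```

    Well-resolved smooth solutions show rapidly decaying coefficients and a negligible tail, while under-resolved regions (sharp gradients, near-singularities) decay slowly and get flagged, driving the refine/coarsen decisions described above.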

  7. Nonlinear responses of southern African rainfall to forcing from Atlantic SST in a high-resolution regional climate model

    NASA Astrophysics Data System (ADS)

    Williams, C.; Kniveton, D.; Layberry, R.

    2009-04-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. In this research, high resolution satellite derived rainfall data from the Microwave Infra-Red Algorithm (MIRA) are used as a basis for undertaking model experiments using a state-of-the-art regional climate model. The MIRA dataset covers the period from 1993-2002 and the whole of southern Africa at a spatial resolution of 0.1 degree longitude/latitude. Once the model's ability to reproduce extremes has been assessed, idealised regions of sea surface temperature (SST) anomalies are used to force the model, with the overall aim of investigating the ways in which SST anomalies influence rainfall extremes over southern Africa. In this paper, results from sensitivity testing of the regional climate model's domain size are briefly presented, before a comparison of simulated daily rainfall from the model with the satellite-derived dataset. Secondly, simulations of current climate and rainfall extremes from the model are compared to the MIRA dataset at daily timescales. 
Finally, the results from the idealised SST experiments are presented, suggesting highly nonlinear associations between rainfall extremes and remote SST anomalies.

  8. A New High Resolution Climate Dataset for Climate Change Impacts Assessments in New England

    NASA Astrophysics Data System (ADS)

    Komurcu, M.; Huber, M.

    2016-12-01

    Assessing regional impacts of climate change (such as changes in extreme events, land surface hydrology, water resources, energy, ecosystems and economy) requires much higher resolution climate variables than those available from global model projections. While it is possible to run global models at higher resolution, the high computational cost associated with these simulations prevents their use in such a manner. To alleviate this problem, dynamical downscaling offers a method to deliver higher resolution climate variables. As part of an NSF EPSCoR funded interdisciplinary effort to assess climate change impacts on New Hampshire ecosystems, hydrology and economy (the New Hampshire Ecosystems and Society project), we create a unique high-resolution climate dataset for New England. We dynamically downscale global model projections under a high impact emissions scenario using the Weather Research and Forecasting model (WRF) with three nested grids of 27, 9 and 3 km horizontal resolution, with the highest resolution innermost grid focusing on New England. We prefer dynamical downscaling over other methods such as statistical downscaling because it employs physical equations to progressively simulate climate variables as atmospheric processes interact with surface processes, emissions, radiation, clouds, precipitation and other model components, thereby avoiding fixed statistical relationships between variables. In addition to simulating mean changes in regional climate, dynamical downscaling also allows for the simulation of climate extremes that significantly alter climate change impacts. We simulate three time slices: 2006-2015, 2040-2060 and 2080-2100.
This new high-resolution climate dataset (more than 200 variables saved at hourly intervals for the highest resolution domain and six-hourly intervals for the outer two domains), along with the model input and restart files used in our WRF simulations, will be made publicly available to the broader scientific community to support in-depth climate change impacts assessments for New England. We present results focusing on future changes in New England extreme events.

  9. Retina imaging system with adaptive optics for the eye with or without myopia

    NASA Astrophysics Data System (ADS)

    Li, Chao; Xia, Mingliang; Jiang, Baoguang; Mu, Quanquan; Chen, Shaoyuan; Xuan, Li

    2009-04-01

    An adaptive optics system for retina imaging is introduced in this paper. It can be applied to eyes with myopia from 0 to 6 diopters without any adjustment of the system. A high-resolution liquid crystal on silicon (LCOS) device is used as the wave-front corrector. The aberration is detected by a Shack-Hartmann wave-front sensor (HASO) that has a Root Mean Square (RMS) measurement accuracy of λ/100 (λ = 0.633 μm). An equivalent-scale model eye is constructed with a short focal length lens (˜18 mm) and a diffuse reflection object (paper screen) as the retina. By changing the distance between the paper screen and the lens, we simulate eyes with myopia larger than 5 diopters and the depth of field. The RMS value both before and after correction is obtained by the wave-front sensor. After correction, the system reaches the diffraction-limited resolution of approximately 230 cycles/mm in object space. It is shown that if the myopia is smaller than 6 diopters and the depth of field is between -40 and +50 mm, the system can correct the aberration very well.

  10. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, from which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, a QR decomposition algorithm, which can be executed at low cost, is used; the computational complexity is thereby reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer, and thus shows equivalent performance. The simulation and experimental results support the validity of our approach.
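    As an illustration of the baseline the abstract builds on, here is a minimal numpy sketch of the conventional MV beamformer (not the authors' QR-based variant): the apodization weights are w = R⁻¹a / (aᴴR⁻¹a), where R is the subarray spatial covariance matrix and a the steering vector. The array sizes, the diagonal-loading term and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mv_weights(R, a, loading=1e-3):
    """Minimum-variance apodization weights w = R^-1 a / (a^H R^-1 a).
    A small diagonal loading keeps the sample covariance matrix invertible."""
    L = R.shape[0]
    Rl = R + loading * (np.trace(R).real / L) * np.eye(L)
    Ria = np.linalg.solve(Rl, a)          # R^-1 a without forming the inverse explicitly
    return Ria / (a.conj() @ Ria)

# Toy example: an L-element subarray with K pre-delayed snapshots.
rng = np.random.default_rng(0)
L, K = 8, 32
X = rng.standard_normal((L, K)) + 1.0     # signal of interest is constant across elements
R = (X @ X.conj().T) / K                  # sample spatial covariance matrix
a = np.ones(L)                            # steering vector after delay alignment
w = mv_weights(R, a)
y = w.conj() @ X                          # beamformed output samples
```

The distortionless constraint wᴴa = 1 holds by construction; the QR-based method described in the abstract reaches the same weights while avoiding the O(L³) cost of working with the covariance inverse.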

  11. Mapping Crop Patterns in Central US Agricultural Systems from 2000 to 2014 Based on Landsat Data: To What Degree Does Fusing MODIS Data Improve Classification Accuracies?

    NASA Astrophysics Data System (ADS)

    Zhu, L.; Radeloff, V.; Ives, A. R.; Barton, B.

    2015-12-01

    Deriving crop patterns with high accuracy is of great importance for characterizing landscape diversity, which affects the resilience of food webs in agricultural systems in the face of climatic and land cover changes. Landsat sensors were originally designed to monitor agricultural areas, and both their radiometric and spatial resolution are optimized for monitoring large agricultural fields. Unfortunately, few clear Landsat images are available per year, which has limited the use of Landsat for crop classification, particularly in cloudy regions of the Earth. Meanwhile, MODerate Resolution Imaging Spectroradiometer (MODIS) data have better temporal resolution but cannot capture the fine spatial heterogeneity of agricultural systems. Our question was to what extent fusing imagery from both sensors could improve crop classifications. We utilized the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to simulate Landsat-like images at MODIS temporal resolution. Using a Random Forests (RF) classifier, we tested whether and to what degree crop maps from 2000 to 2014 of the Arlington Agricultural Research Station (Wisconsin, USA) were improved by integrating the available clear Landsat images of each year with synthetic images. We predicted that the degree to which classification accuracy can be improved by incorporating synthetic imagery depends on the number and acquisition times of clear Landsat images. Moreover, multi-season data are essential for mapping crop types by capturing their phenological dynamics, and STARFM-simulated images can be used to compensate for missing Landsat observations. Our study helps overcome the limitations of Landsat data for mapping crop patterns, and can provide a benchmark of accuracy when choosing STARFM-simulated images for crop classification at broader scales.
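    The core fusion idea can be caricatured in a few lines: in the simplest degenerate case (a single Landsat/MODIS pair and homogeneous pixels), STARFM's weighted prediction reduces to adding the MODIS-observed temporal change to the Landsat base image. This sketch deliberately omits the spectral, temporal and spatial neighbour weighting that makes the real algorithm work; all values and names are illustrative.

```python
def fuse_pixel(landsat_t0, modis_t0, modis_t1):
    """Predict a Landsat-like reflectance at t1 from a Landsat base value at t0
    plus the temporal change observed by MODIS (degenerate STARFM case)."""
    return landsat_t0 + (modis_t1 - modis_t0)

# Toy reflectance grids (rows of pixels) at the two dates.
landsat_t0 = [[0.10, 0.12], [0.30, 0.28]]
modis_t0   = [[0.11, 0.11], [0.29, 0.29]]
modis_t1   = [[0.16, 0.16], [0.24, 0.24]]   # surface change between the dates

landsat_t1 = [[fuse_pixel(l, m0, m1)
               for l, m0, m1 in zip(lr, m0r, m1r)]
              for lr, m0r, m1r in zip(landsat_t0, modis_t0, modis_t1)]
```

The full STARFM additionally weights spectrally similar neighbouring pixels by their spectral, temporal and spatial distance; the degenerate case above only conveys where the temporal information in a synthetic image comes from.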

  12. Simulating Microbial Community Patterning Using Biocellion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak

    2014-04-17

    Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt the original model of Momeni et al. to the Biocellion framework as a case study.

  13. Use of Hilbert Curves in Parallelized CUDA code: Interaction of Interstellar Atoms with the Heliosphere

    NASA Astrophysics Data System (ADS)

    Destefano, Anthony; Heerikhuisen, Jacob

    2015-04-01

    Fully 3D particle simulations can be a computationally and memory expensive task, especially when high-resolution grid cells are required. The problem becomes further complicated when parallelization is needed. In this work we focus on computational methods to resolve these difficulties. Hilbert curves are used to map the 3D particle space to the 1D contiguous memory space. This method of organization minimizes cache misses on the GPU and yields a sorted structure equivalent to an octree. Such a sorted structure is attractive for use in adaptive mesh implementations due to the logarithmic search time. Implementations using the Message Passing Interface (MPI) library and NVIDIA's parallel computing platform CUDA will be compared, as MPI is commonly used on server nodes with many CPUs. We will also compare static grid structures with adaptive mesh structures. The physical test bed will be simulating heavy interstellar atoms interacting with a background plasma, the heliosphere, simulated with a fully consistent coupled MHD/kinetic particle code. It is known that charge exchange is an important factor in space plasmas; specifically, it modifies the structure of the heliosphere itself. We would like to thank the Alabama Supercomputer Authority for the use of their computational resources.
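    As a concrete illustration of the indexing idea, here is the classic iterative 2D Hilbert-curve mapping (the work above uses the 3D generalization): nearby grid cells receive nearby 1D indices, which is what keeps particle data contiguous in memory. The grid side length `n` must be a power of two.

```python
def hilbert_index(n, x, y):
    """Map grid coordinates (x, y) on an n-by-n grid (n a power of two)
    to a 1D Hilbert-curve index, preserving spatial locality."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so the base pattern repeats at the next level.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# The first-order curve visits (0,0), (0,1), (1,1), (1,0) in order.
order = sorted(((x, y) for x in range(2) for y in range(2)),
               key=lambda p: hilbert_index(2, *p))
```

Sorting particles by this index before transfer is what gives the cache-friendly, tree-like layout described in the abstract.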

  14. Global high-resolution simulations of tropospheric nitrogen dioxide using CHASER V4.0

    NASA Astrophysics Data System (ADS)

    Sekiya, Takashi; Miyazaki, Kazuyuki; Ogochi, Koji; Sudo, Kengo; Takigawa, Masayuki

    2018-03-01

    We evaluate global tropospheric nitrogen dioxide (NO2) simulations using the CHASER V4.0 global chemical transport model (CTM) at horizontal resolutions of 0.56, 1.1, and 2.8°. Model evaluation was conducted using satellite tropospheric NO2 retrievals from the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment-2 (GOME-2) and aircraft observations from the 2014 Front Range Air Pollution and Photochemistry Experiment (FRAPPÉ). Agreement with satellite retrievals improved greatly at 1.1 and 0.56° resolutions (compared to 2.8° resolution) over polluted and biomass burning regions. The 1.1° simulation generally captured the regional distribution of the tropospheric NO2 column well, whereas 0.56° resolution was necessary to improve the model performance over areas with strong local sources, with mean bias reductions of 67 % over Beijing and 73 % over San Francisco in summer. Validation using aircraft observations indicated that high-resolution simulations reduced negative NO2 biases below 700 hPa over the Denver metropolitan area. These improvements in high-resolution simulations were attributable to (1) closer spatial representativeness between simulations and observations and (2) better representation of large-scale concentration fields (i.e., at 2.8°) through the consideration of small-scale processes. Model evaluations conducted at 0.5 and 2.8° bin grids indicated that the contributions of both these processes were comparable over most polluted regions, whereas the latter effect (2) made a larger contribution over eastern China and biomass burning areas. The evaluations presented in this paper demonstrate the potential of using a high-resolution global CTM for studying megacity-scale air pollutants across the entire globe, potentially also contributing to global satellite retrievals and chemical data assimilation.

  15. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on a few two-phase flow problems with phase appearance/disappearance phenomena, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and the second-order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
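    The central trick of the JFNK method mentioned above fits in a few lines: the Jacobian-vector products needed by the Krylov solver are approximated with a finite difference of the residual function, so the Jacobian matrix is never formed or stored. The residual below is an arbitrary toy nonlinear system for illustration, not the two-phase flow equations of the paper.

```python
import numpy as np

def residual(u):
    """Toy nonlinear system F(u) = 0 (stand-in for a discretized PDE residual)."""
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

def jac_vec(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u) v ~ (F(u + eps*v) - F(u)) / eps."""
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

# Compare against the analytic Jacobian of the toy system at u.
J = np.array([[2 * u[0], 1.0],
              [1.0, 2 * u[1]]])
approx = jac_vec(residual, u, v)
exact = J @ v
```

A Krylov method such as GMRES only ever needs these products, which is what lets JFNK solve Newton's linear systems for large discretizations without an analytical Jacobian.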

  16. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    NASA Astrophysics Data System (ADS)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten

    2017-11-01

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources throughout each simulation, from its outset. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated with the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  17. An Efficient Adaptive Window Size Selection Method for Improving Spectrogram Visualization.

    PubMed

    Nisar, Shibli; Khan, Omar Usman; Tariq, Muhammad

    2016-01-01

    The Short Time Fourier Transform (STFT) is an important technique for the time-frequency analysis of a time-varying signal. The basic approach involves applying a Fast Fourier Transform (FFT) to the signal multiplied by an appropriate window function of fixed resolution. Selecting an appropriate window size is difficult when no background information about the input signal is known. In this paper, a novel empirical model is proposed that adaptively adjusts the window size for a narrow-band signal using a spectrum-sensing technique. For wide-band signals, where a fixed time-frequency resolution is undesirable, the approach adopts the constant-Q transform (CQT). Unlike the STFT, the CQT provides a varying time-frequency resolution, yielding high spectral resolution at low frequencies and high temporal resolution at high frequencies. In this paper, a simple but effective switching framework is provided between the STFT and the CQT. The proposed method also allows for the dynamic construction of a filter bank according to user-defined parameters, which helps reduce redundant entries in the filter bank. Results obtained from the proposed method not only improve spectrogram visualization but also reduce the computational cost, and the method achieves 87.71% accuracy in appropriate window-length selection.
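    The fixed-resolution trade-off the abstract starts from is easy to see in code: a plain STFT slides a single window of fixed length over the signal, so time and frequency resolution are set once for the whole signal. A minimal numpy sketch follows; the window length, hop size and test tone are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def stft(x, win_len=256, hop=128):
    """Fixed-resolution STFT: magnitude of the FFT of Hann-windowed,
    overlapping frames. Returns shape (num_frames, win_len // 2 + 1)."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# A 440 Hz tone sampled at 8 kHz: energy concentrates near one frequency bin.
fs = 8000
t = np.arange(fs) / fs
spec = stft(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=0).argmax()   # bin width is fs / win_len = 31.25 Hz
```

Every frame here shares the same 31.25 Hz frequency resolution and 32 ms time span; the adaptive scheme in the abstract chooses the window length per signal, and the CQT abandons the fixed grid altogether.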

  18. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  19. Multiscale landscape genomic models to detect signatures of selection in the alpine plant Biscutella laevigata.

    PubMed

    Leempoel, Kevin; Parisod, Christian; Geiser, Céline; Joost, Stéphane

    2018-02-01

    Plant species are known to adapt locally to their environment, particularly in mountainous areas where conditions can vary drastically over short distances. Because the climate of such landscapes is largely influenced by topography, using fine-scale models to evaluate environmental heterogeneity may help detect adaptation to micro-habitats. Here, we applied a multiscale landscape genomic approach to detect evidence of local adaptation in the alpine plant Biscutella laevigata. The two gene pools identified, experiencing limited gene flow along a 1-km ridge, differed with regard to several habitat features derived from a very high resolution (VHR) digital elevation model (DEM). A correlative approach detected signatures of selection along environmental gradients such as altitude, wind exposure, and solar radiation, indicating adaptive pressures likely driven by fine-scale topography. Using a large panel of DEM-derived variables as ecologically relevant proxies, our results highlighted the critical role of spatial resolution. These high-resolution multiscale variables indeed indicate that the robustness of associations between genetic loci and environmental features depends on spatial parameters that are poorly documented. We argue that the scale issue is critical in landscape genomics and that multiscale ecological variables are key to improving our understanding of local adaptation in highly heterogeneous landscapes.

  20. Bistatic synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Yates, Gillian

    Synthetic aperture radar (SAR) allows all-weather, day and night, surface surveillance and has the ability to detect, classify and geolocate objects at long stand-off ranges. Bistatic SAR, where the transmitter and the receiver are on separate platforms, is seen as a potential means of countering the vulnerability of conventional monostatic SAR to electronic countermeasures, particularly directional jamming, and avoiding physical attack of the imaging platform. As the receiving platform can be totally passive, it does not advertise its position by RF emissions. The transmitter is not susceptible to jamming and can, for example, operate at long stand-off ranges to reduce its vulnerability to physical attack. This thesis examines some of the complications involved in producing high-resolution bistatic SAR imagery. The effect of bistatic operation on resolution is examined from a theoretical viewpoint and analytical expressions for resolution are developed. These expressions are verified by simulation work using a simple 'point by point' processor. This work is extended to look at using modern practical processing engines for bistatic geometries. Adaptations of the polar format algorithm and range migration algorithm are considered. The principal achievement of this work is a fully airborne demonstration of bistatic SAR. The route taken in reaching this is given, along with some results. The bistatic SAR imagery is analysed and compared to the monostatic imagery collected at the same time. Demonstrating high-resolution bistatic SAR imagery using two airborne platforms represents what I believe to be a European first and is likely to be the first time that this has been achieved outside the US (the UK has very little insight into US work on this topic). Bistatic target characteristics are examined through the use of simulations. This also compares bistatic imagery with monostatic and gives further insight into the utility of bistatic SAR.

  1. Complex behaviour in complex terrain - Modelling bird migration in a high resolution wind field across mountainous terrain to simulate observed patterns.

    PubMed

    Aurbach, Annika; Schmid, Baptiste; Liechti, Felix; Chokani, Ndaona; Abhari, Reza

    2018-06-03

    Crossing large ecological barriers, such as mountains, is considered an energetically demanding and critical step during bird migration. Besides forming a geographical barrier, mountains have a profound impact on the resulting wind flow. We use a novel framework of mathematical models to investigate the influences of wind and topography on nocturnal passerine bird behaviour, and to assess the energy costs of different flight strategies for crossing the Jura Mountains. The mathematical models include three biological models of bird behaviour: i) wind drift compensation; ii) adaptation of flight height for favourable winds; and iii) avoidance of obstacles (crossing over and/or circumventing an obstacle following a minimum energy expenditure strategy), which are assessed separately and in combination. Further, we use a mesoscale weather model for high-resolution predictions of the wind fields. We simulate the broad-front nocturnal passerine migration for autumn nights with peak migration intensities. The bird densities retrieved from a weather radar are used as the initial intensities and to specify the vertical distributions of the simulated birds. It is shown that migration over complex terrain represents the most expensive flight option in terms of energy expenditure, and wind is seen to be the main factor influencing the energy expenditure in the bird's preferred flight direction. Further, the combined effects of wind and orography lead to a high concentration of migratory birds within the favourable wind conditions of the Swiss lowlands and north of the Jura Mountains. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Using Voronoi Tessellations to identify groups in N-body Simulation

    NASA Astrophysics Data System (ADS)

    Gonzalez, R. E.; Theuns, T.

    Dark matter N-body simulations often use a friends-of-friends (FOF) group finder to link together particles above a specified density threshold. An overdensity of 200 picks out objects that can be identified with virialised dark matter haloes, based on the spherical collapse model for the formation of structure. When a halo contains significant substructure, as is the case in very high resolution simulations, FOF will simply link all substructure to the parent halo. Many cosmological simulations now also include gas and stars, and these are often distributed differently from the dark matter. It is then not clear how physical the structures identified by FOF are. Here we use Voronoi tessellations to identify structures in hydrodynamical cosmological simulations that contain dark matter, gas and stars. This adaptive technique allows accurate estimates of densities, and density gradients, for a non-structured distribution of points. We discuss how these estimates allow us to identify structures in the dark matter that can be identified with haloes, and in the stars, to identify galaxies.
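    The FOF linking step the abstract contrasts against is simple to sketch: particles closer than a linking length b are joined, and the connected components become groups. A pure-Python union-find version follows (the O(N²) pair search and the linking length are for illustration only; production group finders use spatial trees).

```python
def fof_groups(points, b):
    """Friends-of-friends: link every particle pair closer than b and
    return the connected components (groups) via union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - c) ** 2 for a, c in zip(points[i], points[j]))
            if d2 < b * b:
                parent[find(i)] = find(j)  # union the two chains

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated clumps of particles yield two FOF groups.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),
       (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
groups = fof_groups(pts, b=0.2)
```

Because linking is purely geometric, any substructure inside a dense halo ends up in one big component, which is exactly the limitation that motivates the Voronoi-based density estimates above.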

  3. MAD Adaptive Optics Imaging of High-luminosity Quasars: A Pilot Project

    NASA Astrophysics Data System (ADS)

    Liuzzo, E.; Falomo, R.; Paiano, S.; Treves, A.; Uslenghi, M.; Arcidiacono, C.; Baruffolo, A.; Diolaiti, E.; Farinato, J.; Lombini, M.; Moretti, A.; Ragazzoni, R.; Brast, R.; Donaldson, R.; Kolb, J.; Marchetti, E.; Tordo, S.

    2016-08-01

    We present near-IR images of five luminous quasars at z ˜ 2 and one at z ˜ 4 obtained with an experimental adaptive optics (AO) instrument at the European Southern Observatory Very Large Telescope. The observations are part of a program aimed at demonstrating the capabilities of multi-conjugated adaptive optics imaging combined with the use of natural guide stars for high spatial resolution studies on large telescopes. The observations were mostly obtained under poor seeing conditions, except in two cases. In spite of these nonoptimal conditions, the resulting images of point sources have cores of FWHM ˜ 0.2 arcsec. We are able to characterize the host galaxy properties for two sources and set stringent upper limits on the galaxy luminosity for the others. We also report on the expected capabilities for investigating the host galaxies of distant quasars with AO systems coupled with future Extremely Large Telescopes. Detailed simulations show that it will be possible to characterize compact (2-3 kpc) quasar host galaxies for quasi-stellar objects at z = 2 with nucleus K-magnitude spanning from 15 to 20 (corresponding to absolute magnitude -31 to -26) and host galaxies that are 4 mag fainter than their nuclei.

  4. High Resolution Regional Climate Simulations over Alaska

    NASA Astrophysics Data System (ADS)

    Monaghan, A. J.; Clark, M. P.; Arnold, J.; Newman, A. J.; Musselman, K. N.; Barlage, M. J.; Xue, L.; Liu, C.; Gutmann, E. D.; Rasmussen, R.

    2016-12-01

    In order to appropriately plan future projects to build and maintain infrastructure (e.g., dams, dikes, highways, airports), a number of U.S. federal agencies seek to better understand how hydrologic regimes may shift across the country due to climate change. Building on the successful completion of a series of high-resolution WRF simulations over the Colorado River Headwaters and the contiguous USA, our team is now extending these simulations over the challenging U.S. states of Alaska and Hawaii. In this presentation we summarize results from a newly completed WRF simulation over Alaska spanning 2002-2016 at 4-km spatial resolution. Our aim is to gain insight into the thermodynamics that drive key precipitation processes, particularly the extremes that are most damaging to infrastructure.

  5. Uncertainty of global summer precipitation in the CMIP5 models: a comparison between high-resolution and low-resolution models

    NASA Astrophysics Data System (ADS)

    Huang, Danqing; Yan, Peiwen; Zhu, Jian; Zhang, Yaocun; Kuang, Xueyuan; Cheng, Jing

    2018-04-01

    The uncertainty of global summer precipitation simulated by 23 CMIP5 CGCMs and the possible impacts of model resolution are investigated in this study. Large uncertainties exist over the tropical and subtropical regions, which can be mainly attributed to the simulation of convective precipitation. High-resolution models (HRMs) and low-resolution models (LRMs) are further investigated to demonstrate their different contributions to the uncertainties of the ensemble mean. The high-resolution model ensemble mean (HMME) and the low-resolution model ensemble mean (LMME) mitigate the biases between the MME and observations over most continents and oceans, respectively. The HMME simulates more precipitation than the LMME over most oceans, but less precipitation over some continents. The dominant precipitation category in the HRMs (LRMs) is heavy precipitation (moderate precipitation) over the tropical regions. The combinations of convective and stratiform precipitation are also quite different: the HMME has a much higher ratio of stratiform precipitation, while the LMME has more convective precipitation. Finally, differences in precipitation between the HMME and LMME can be traced to their differences in SST simulations via local and remote air-sea interactions.

  6. ESiWACE: A Center of Excellence for HPC applications to support cloud resolving earth system modelling

    NASA Astrophysics Data System (ADS)

    Biercamp, Joachim; Adamidis, Panagiotis; Neumann, Philipp

    2017-04-01

    With the exascale era approaching, the length and time scales used for climate research on the one hand and numerical weather prediction on the other blend into each other. The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) represents a European consortium comprising partners from climate, weather and HPC in their effort to address key scientific challenges that both communities have in common. A particular challenge is to reach global models with spatial resolutions that allow simulating convective clouds and small-scale ocean eddies. These simulations would produce better predictions of trends and provide much more fidelity in the representation of high-impact regional events. However, running such models in operational mode, i.e., with sufficient throughput in ensemble mode, will clearly require exascale computing and data handling capability. We will discuss the ESiWACE initiative and relate it to work in progress on high-resolution simulations in Europe. We present recent strong scalability measurements from ESiWACE to demonstrate current computability in weather and climate simulation. A special focus in this talk is on the Icosahedral Nonhydrostatic (ICON) model, used for a comparison of high resolution regional and global simulations with high quality observation data. We demonstrate that close-to-optimal parallel efficiency can be achieved in strong scaling global resolution experiments, e.g. 94% for 5 km resolution simulations using 36k cores on Mistral/DKRZ. Based on our scalability and high-resolution experiments, we deduce and extrapolate future capabilities for ICON that are expected for weather and climate research at exascale.

  7. Study of key factors influencing dust emission: An assessment of GEOS-Chem and DEAD simulations with observations

    NASA Astrophysics Data System (ADS)

    Bartlett, Kevin S.

    Mineral dust aerosols can impact air quality, climate change, biological cycles, tropical cyclone development and flight operations through reduced visibility. Dust emissions are largely confined to the extensive arid regions of the world, yet can negatively impact local to global scales, and are extremely complex to model accurately. Within this dissertation, the Dust Entrainment And Deposition (DEAD) model was adapted to run, for the first known time, using high temporal (hourly) and spatial (0.3°x0.3°) resolution data to methodically interrogate the key parameters and factors influencing global dust emissions. The dependence of dust emissions on key parameters under various conditions has been quantified, and it has been shown that dust emissions within DEAD are largely determined by wind speeds, vegetation extent, soil moisture and topographic depressions. An important finding was that degrading the grid of key meteorological, soil, and surface input parameters from 0.3°x0.3° to 1°x1°, 2°x2.5°, and 4°x5° reduced emissions by approximately 13%, 29% and 64%, respectively, as a result of the loss of sub-grid detail within these key parameters at coarser grids. After running high-resolution DEAD emissions globally for two years, two severe dust emission cases were chosen for an in-depth investigation of the root causes of the events and evaluation of the capabilities of the 2°x2.5° Goddard Earth Observing System (GEOS)-Chem and 0.3°x0.3° DEAD models to simulate them: one over South West Asia (SWA) in June 2008 and the other over the Middle East in July 2009. The two-year lack of rain over SWA preceding June 2008, with a 43% decrease in mean rainfall, yielded less than normal plant growth, a 28% increase in Aerosol Optical Depth (AOD), and a 24% decrease in Meteorological Aerodrome Report (METAR) observed visibility (VSBY) compared to average years. GEOS-Chem captured the observed higher AOD over SWA in June 2008. More detailed comparisons of GEOS-Chem predicted AOD and visibility over SWA with those observed at surface stations and from satellites revealed overall success of the model, although substantial regional differences exist. Within the extended drought, the study area was narrowed to the Middle East (ME) for July 2009, where multi-grid DEAD dust emissions using hourly CFSR meteorological input were compared with observations. The high-resolution input yielded the best spatial and temporal dust patterns compared with Defense Meteorological Satellite Program (DMSP), Moderate Resolution Imaging Spectroradiometer (MODIS) and METAR VSBY observations, and definitively revealed Syria as a major dust source for the region. The coarse-resolution dust emissions were degraded or missed daily dust emissions entirely. This readily showed that spatial degradation of the input data can significantly impair DEAD dust emissions, and it offers a strong argument for adopting higher-resolution dust emission schemes in future global models to improve dust simulations.
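    The strong sensitivity to grid degradation reported above follows from the nonlinearity of emission laws. The sketch below uses a toy threshold-cubic flux (a common functional form, assumed here rather than DEAD's exact scheme) to show why block-averaging winds before applying the law suppresses emissions.

```python
import numpy as np

# Why coarsening the input grid suppresses emissions: dust flux responds
# nonlinearly to wind speed (here a toy threshold-cubic law, an assumption,
# not DEAD's exact scheme), so averaging winds over a coarse cell before
# applying the law loses the strong sub-grid gusts that drive most emission.
rng = np.random.default_rng(2)

def flux(u, u_t=6.0):
    """Toy emission law: zero below a threshold wind u_t, cubic-like above."""
    return np.where(u > u_t, (u - u_t) * u**2, 0.0)

u_fine = 5.0 + 3.0 * rng.standard_normal((64, 64))          # fine-grid winds (m/s)
fine_total = flux(u_fine).mean()                            # emit, then average
u_coarse = u_fine.reshape(16, 4, 16, 4).mean(axis=(1, 3))   # 4x4 grid degradation
coarse_total = flux(u_coarse).mean()                        # average, then emit
print(coarse_total / fine_total)   # < 1: the coarse grid underestimates emission
```

Because the flux law is convex above the threshold, Jensen's inequality guarantees the averaged-then-emitted total never exceeds the emitted-then-averaged total, mirroring the 13-64% losses quoted in the abstract.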

  8. Video flow active control by means of adaptive shifted foveal geometries

    NASA Astrophysics Data System (ADS)

    Urdiales, Cristina; Rodriguez, Juan A.; Bandera, Antonio J.; Sandoval, Francisco

    2000-10-01

    This paper presents a control mechanism for video transmission that relies on transmitting non-uniform-resolution images depending on the delay of the communication channel. These images are built in an active way to keep the areas of interest at the highest available resolution. In order to shift the area of high resolution over the image and to obtain a data structure that conventional algorithms can process easily, a shifted-fovea multiresolution geometry of adaptive size is used. In addition, if delays are nevertheless too high, the different resolution areas of the image can be transmitted at different rates. A functional system has been developed for corridor surveillance with static cameras. Tests with real video images have shown that the method sustains an almost constant frame rate as long as the channel is not collapsed.
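    A minimal sketch of the foveation idea, assuming a block-averaged periphery and a rectangular fovea window; the function name and the geometry are illustrative, not the paper's shifted-fovea structure.

```python
import numpy as np

def foveate(img, cx, cy, fovea_r, factor=4):
    """Keep full resolution inside a fovea window centred at (cx, cy);
    elsewhere, block-average by `factor` and expand back, mimicking a coarse
    peripheral ring. A toy stand-in for a shifted multiresolution geometry.
    Assumes img dimensions are multiples of `factor` for simplicity."""
    h, w = img.shape
    # Coarse periphery: block-average, then repeat back to full size.
    coarse = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    out = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1).astype(float)
    # Fovea: overwrite with the original high-resolution pixels.
    y0, y1 = max(cy - fovea_r, 0), min(cy + fovea_r, h)
    x0, x1 = max(cx - fovea_r, 0), min(cx + fovea_r, w)
    out[y0:y1, x0:x1] = img[y0:y1, x0:x1]
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = foveate(img, 32, 32, 8)   # sharp 16x16 centre, coarse 4x4 blocks outside
```

Shifting `(cx, cy)` frame-to-frame is what makes the geometry "active": the peripheral blocks carry far fewer distinct values, so they compress well when the channel delay grows.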

  9. The dynamics of plate tectonics and mantle flow: from local to global scales.

    PubMed

    Stadler, Georg; Gurnis, Michael; Burstedde, Carsten; Wilcox, Lucas C; Alisic, Laura; Ghattas, Omar

    2010-08-27

    Plate tectonics is regulated by driving and resisting forces concentrated at plate boundaries, but observationally constrained high-resolution models of global mantle flow remain a computational challenge. We capitalized on advances in adaptive mesh refinement algorithms on parallel computers to simulate global mantle flow by incorporating plate motions, with individual plate margins resolved down to a scale of 1 kilometer. Back-arc extension and slab rollback are emergent consequences of slab descent in the upper mantle. Cold thermal anomalies within the lower mantle couple into oceanic plates through narrow high-viscosity slabs, altering the velocity of oceanic plates. Viscous dissipation within the bending lithosphere at trenches amounts to approximately 5 to 20% of the total dissipation through the entire lithosphere and mantle.

  11. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods, discussed in detail in a previous paper in this series, were developed and tested primarily using model problems that lack many of the complexities common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, here we revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we reconsider time stepping and mesh refinement algorithms, evaluate approaches to incorporating compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper allow for high-resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature-dependent viscosity and realistic material properties based on mineral physics data.

  12. Climate change projections for Tamil Nadu, India: deriving high-resolution climate data by a downscaling approach using PRECIS

    NASA Astrophysics Data System (ADS)

    Bal, Prasanta Kumar; Ramachandran, A.; Geetha, R.; Bhaskaran, B.; Thirumurugan, P.; Indumathi, J.; Jayanthi, N.

    2016-02-01

    In this paper, we present regional climate change projections for the Tamil Nadu state of India, simulated by the Met Office Hadley Centre regional climate model. The model is run at 25 km horizontal resolution, driven by lateral boundary conditions generated by a perturbed-physics ensemble of 17 simulations produced by a version of the Hadley Centre coupled climate model known as HadCM3Q under the A1B scenario. The large-scale features of these 17 simulations were evaluated for the target region in order to choose lateral boundary conditions from six members that represent a range of climate variations over the study region. The regional climate model, known as PRECIS, was then run for 130 years from 1970. The analyses primarily focus on maximum and minimum temperatures and rainfall over the region. For Tamil Nadu as a whole, the projections of maximum temperature show an increase of 1.0, 2.2 and 3.1 °C for the periods 2020s (2005-2035), 2050s (2035-2065) and 2080s (2065-2095), respectively, with respect to the baseline period (1970-2000). Similarly, the projections of minimum temperature show an increase of 1.1, 2.4 and 3.5 °C, respectively. This increasing trend is statistically significant (Mann-Kendall trend test). The annual rainfall projections for the same periods indicate a general decrease in rainfall of about 2-7%, 1-4% and 4-9%, respectively. However, significant exceptions are noticed over some pockets of western hilly areas and high-rainfall areas, where increases in rainfall are seen. There are also indications of increasing heavy rainfall events during the northeast monsoon season and a slight decrease during the southwest monsoon season. Such an approach of using regional climate models may maximize the utility of high-resolution climate change information for impact-adaptation-vulnerability assessments.
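    The projection arithmetic above (period means relative to a 1970-2000 baseline) can be sketched as follows; the synthetic linear trend is illustrative only, not PRECIS output.

```python
import numpy as np

# Toy sketch of the projection arithmetic in the abstract: period means of a
# simulated annual series relative to a 1970-2000 baseline. The synthetic
# warming trend is illustrative, not model output.
years = np.arange(1970, 2096)
tmax = 30.0 + 0.025 * (years - 1970)     # hypothetical max-temperature trend (deg C)

def period_mean(values, yrs, start, end):
    """Mean over the half-open year window [start, end)."""
    mask = (yrs >= start) & (yrs < end)
    return values[mask].mean()

baseline = period_mean(tmax, years, 1970, 2000)
for label, (s, e) in {"2020s": (2005, 2035),
                      "2050s": (2035, 2065),
                      "2080s": (2065, 2095)}.items():
    print(label, round(period_mean(tmax, years, s, e) - baseline, 3))
```

For a linear trend the deltas reduce to the trend slope times the gap between period-midpoints, which is a quick sanity check on any reported set of period-mean changes.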

  13. Figuring Out Gas in Galaxies In Enzo (FOGGIE): Resolving the Inner Circumgalactic Medium

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren; Peeples, Molly; Tumlinson, Jason; O'Shea, Brian; Smith, Britton

    2018-01-01

    Cosmological hydrodynamical simulations using every common numerical method have struggled to reproduce the multiphase nature of the circumgalactic medium (CGM) revealed by recent observations. However, to date, resolution in these simulations has been aimed at dense regions — the galactic disk and in-falling satellites — while the diffuse CGM never reaches comparable levels of refinement. Taking advantage of the flexible grid structure of the adaptive mesh refinement code Enzo, we force refinement in a region of the CGM of a Milky Way-like galaxy to the same spatial resolution as that of the disk. In this talk, I will present how the physical and structural distributions of the circumgalactic gas change dramatically as a function of the resolution alone. I will also show the implications these changes have for the observational properties of the gas in the context of the observations.

  14. Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control

    NASA Astrophysics Data System (ADS)

    Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.

    2005-01-01

    This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are twofold: first, to demonstrate the applicability of SMC to MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS devices with a wide range of characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of error in discrete-time implementations of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.
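    A minimal sliding-mode control sketch for a double-integrator plant, standing in for the torsional mirror dynamics; the gains, sliding surface, and boundary-layer smoothing below are illustrative assumptions, not the paper's controller.

```python
import numpy as np

# Minimal sliding-mode control sketch for a double-integrator plant
# (theta'' = u + d), a stand-in for the mirror dynamics. Gains, the
# boundary-layer width, and the saturation smoothing are illustrative.
def simulate(k=5.0, lam=10.0, phi=0.05, dt=1e-3, steps=4000):
    theta, omega = 1.0, 0.0              # start away from the setpoint (0)
    for i in range(steps):
        e, edot = theta, omega
        s = lam * e + edot               # sliding surface s = lam*e + e_dot
        # Boundary layer: replace sign(s) with sat(s/phi) to reduce chatter,
        # a common source of error in discrete-time SMC implementations.
        u = -(lam * edot) - k * np.clip(s / phi, -1.0, 1.0)
        d = 0.2 * np.sin(2 * np.pi * i * dt)     # bounded matched disturbance
        omega += (u + d) * dt            # forward-Euler plant update
        theta += omega * dt
    return theta

final_error = abs(simulate())            # settles near zero despite the disturbance
```

Robustness comes from choosing the switching gain `k` larger than the disturbance bound: on the surface, `s' = -k*sat(s/phi) + d`, so `s` is driven into the boundary layer regardless of `d`, after which the error decays at rate `lam`.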

  15. High-Resolution Autoradiography

    NASA Technical Reports Server (NTRS)

    Towe, George C; Gomberg, Henry J; Freemen, J W

    1955-01-01

    This investigation was made to adapt wet-process autoradiography to metallurgical samples to obtain high resolution of segregated radioactive elements in microstructures. Results are confined to development of the technique, which was perfected to a resolution of less than 10 microns. The radioactive samples included carbon-14 carburized iron and steel, nickel-63 electroplated samples, a powder product containing nickel-63, and tungsten-185 in N-155 alloy.

  16. High Resolution PET with 250 micrometer LSO Detectors and Adaptive Zoom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherry, Simon R.; Qi, Jinyi

    2012-01-08

    There have been impressive improvements in the performance of small-animal positron emission tomography (PET) systems since their first development in the mid 1990s, both in terms of spatial resolution and sensitivity, which have directly contributed to the increasing adoption of this technology for a wide range of biomedical applications. Nonetheless, the spatial resolution of current systems is still largely dominated by the size of the scintillator elements used in the detector. Our research predicts that developing scintillator arrays with an element size of 250 µm or smaller will lead to an image resolution of 500 µm when using 18F- or 64Cu-labeled radiotracers, giving a factor of 4-8 improvement in volumetric resolution over the highest-resolution research systems currently in existence. This proposal had two main objectives: (i) to develop and evaluate much higher resolution and efficiency scintillator arrays that can be used in the future as the basis for detectors in a small-animal PET scanner where the spatial resolution is dominated by decay and interaction physics rather than detector size; and (ii) to optimize one such high-resolution, high-sensitivity detector and adaptively integrate it into the existing microPET II small-animal PET scanner as a 'zoom-in' detector that provides higher spatial resolution and sensitivity in a limited region close to the detector face. The knowledge gained from this project will provide valuable information for building future PET systems with a complete ring of very high-resolution detector arrays and also lay the foundations for utilizing high-resolution detectors in combination with existing PET systems for localized high-resolution imaging.

  17. Development of ALARO-Climate regional climate model for a very high resolution

    NASA Astrophysics Data System (ADS)

    Skalak, Petr; Farda, Ales; Brozkova, Radmila; Masek, Jan

    2013-04-01

    ALARO-Climate is a new regional climate model (RCM) derived from the ALADIN LAM model family. It is based on the numerical weather prediction model ALARO and developed at the Czech Hydrometeorological Institute. The model is expected to be able to work in the so-called "grey zone" of physics (horizontal resolutions of 4-7 km) while retaining its ability to operate at resolutions between 20 and 50 km, which are typical of the contemporary generation of regional climate models. Here we present the main features of the RCM ALARO-Climate and results of the first model simulations on longer time scales (1961-1990). The model was driven by the ERA-40/Interim re-analyses and run on the large pan-European integration domain ("ENSEMBLES / Euro-Cordex domain") with a spatial resolution of 25 km. The simulated model climate was compared with gridded observations of air temperature (mean, maximum, minimum) and precipitation from the E-OBS version 7 dataset. The validation of the first ERA-40 simulation revealed significant cold biases in all seasons (between -4 and -2 °C) and an overestimation of precipitation by 20% to 60% in the selected Central Europe target area (0°-30° eastern longitude; 40°-60° northern latitude). The consequent adaptations in the model and their effect on the simulated properties of climate variables are illustrated. Acknowledgements: This study was performed within the frame of the projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation) and CzechGlobe Centre (CZ.1.05/1.1.00/02.0073). Partial support was also provided under project P209-11-0956 of the Czech Science Foundation and CZ.1.07/2.4.00/31.0056 (Operational Programme of Education for Competitiveness of the Ministry of Education, Youth and Sports of the Czech Republic).

  18. Pulse-Inversion Subharmonic Ultrafast Active Cavitation Imaging in Tissue Using Fast Eigenspace-Based Adaptive Beamforming and Cavitation Deconvolution.

    PubMed

    Bai, Chen; Xu, Shanshan; Duan, Junbo; Jing, Bowen; Yang, Miao; Wan, Mingxi

    2017-08-01

    Pulse-inversion subharmonic (PISH) imaging can display information relating to pure cavitation bubbles while excluding that of tissue. Although plane-wave-based ultrafast active cavitation imaging (UACI) can monitor the transient activities of cavitation bubbles, its resolution and cavitation-to-tissue ratio (CTR) are barely satisfactory, but can be significantly improved by introducing eigenspace-based (ESB) adaptive beamforming. PISH and UACI are a natural combination for imaging pure cavitation activity in tissue; however, their combination raises two problems: 1) ESB beamforming is hard to implement in real time due to the enormous amount of computation associated with the covariance matrix inversion and eigendecomposition, and 2) the narrowband characteristic of the subharmonic filter incurs a drastic degradation in resolution. In order to jointly address these two problems, we propose a new PISH-UACI method using novel fast ESB (F-ESB) beamforming and cavitation deconvolution for nonlinear signals. This method greatly reduces the computational complexity by using F-ESB beamforming through dimensionality reduction based on principal component analysis, while maintaining the high quality of ESB beamforming. The degraded resolution is recovered using cavitation deconvolution through a modified convolution model and compressive deconvolution. Both simulations and in vitro experiments were performed to verify the effectiveness of the proposed method. Compared with ESB-based PISH-UACI, the entire computation of our proposed approach was reduced by 99%, while the axial resolution gain and CTR were increased by 3 times and 2 dB, respectively, confirming that satisfactory performance can be obtained for monitoring pure cavitation bubbles in tissue erosion.
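    The core of ESB beamforming, projecting minimum-variance weights onto the signal subspace of the sample covariance, can be sketched on synthetic data as follows; the array model, sizes, and subspace threshold are illustrative (the paper's F-ESB additionally reduces dimensionality with PCA before the eigendecomposition).

```python
import numpy as np

# Toy sketch of eigenspace-based (ESB) weighting: project minimum-variance
# (Capon) weights onto the signal subspace of the sample covariance. Array
# size, snapshot count, loading, and the subspace threshold are illustrative.
rng = np.random.default_rng(0)
n_el, n_snap = 16, 200
steer = np.ones(n_el) / np.sqrt(n_el)               # broadside steering vector
sig = steer[:, None] * rng.standard_normal(n_snap)  # desired-signal snapshots
x = sig + 0.3 * rng.standard_normal((n_el, n_snap)) # add sensor noise

R = x @ x.conj().T / n_snap + 1e-3 * np.eye(n_el)   # loaded sample covariance
w_mv = np.linalg.solve(R, steer)                    # minimum-variance weights
w_mv /= steer.conj() @ w_mv                         # distortionless constraint

vals, vecs = np.linalg.eigh(R)                      # ascending eigenvalues
Es = vecs[:, vals > 2 * np.median(vals)]            # crude signal-subspace pick
w_esb = Es @ (Es.conj().T @ w_mv)                   # ESB: project onto subspace
```

The projection discards the noise-subspace components of the weights, which is what suppresses the noise-driven sidelobe fluctuations of plain minimum-variance beamforming; the eigendecomposition is also the expensive step that F-ESB accelerates.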

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson III, David J

    The climate of the last glacial maximum (LGM) is simulated with a high-resolution atmospheric general circulation model, the NCAR CCM3 at a spectral truncation of T170, corresponding to a grid cell size of roughly 75 km. The purpose of the study is to assess whether there are significant benefits, associated with the role of topography, from the higher-resolution simulation compared to the lower-resolution simulation. The LGM simulations were forced with a modified CLIMAP sea ice distribution and sea surface temperatures (SST) reduced by 1 °C, ice sheet topography, reduced CO2, and 21,000 BP orbital parameters. The high-resolution model captures modern climate reasonably well, in particular the distribution of heavy precipitation in the tropical Pacific. For the ice age case, surface temperatures simulated by the high-resolution model agree better with proxy estimates than do those of the low-resolution model. Despite the fact that tropical SSTs were only 2.1 °C less than in the control run, many lowland tropical land areas are 4-6 °C colder than present. Comparison of T170 model results with the best constrained proxy temperature estimates (noble gas concentrations in groundwater) now yields no significant differences between model and observations. There are also significant upland temperature changes in the best resolved tropical mountain belt (the Andes). We provisionally attribute this result in part to decreased lateral mixing between ocean and land in a model with more grid cells. A longstanding model-data discrepancy therefore appears to be resolved without invoking any unusual model physics. The response of the Asian summer monsoon can also be more clearly linked to local geography in the high-resolution model than in the low-resolution model; this distinction should enable more confident validation of climate proxy data with the high-resolution model. Elsewhere, an inferred salinity increase in the subtropical North Atlantic may have significant implications for ocean circulation changes during the LGM. Large parts of the Amazon and Congo Basins are simulated to be substantially drier in the ice age, consistent with many (but not all) paleo data. These results suggest that there are considerable benefits to high-resolution modelling of regional climate responses, and that observationalists can now compare their results with models that resolve geography at a resolution comparable to that which the proxy data represent.

  20. High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6

    DOE PAGES

    Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; ...

    2016-11-22

    Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers, consisting of atmosphere-only and coupled runs spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, and the connection with the other CMIP6-endorsed MIPs, as well as with the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Programme (WCRP) grand challenges.

  1. A High-resolution Multi-wavelength Simultaneous Imaging System with Solar Adaptive Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Changhui; Zhu, Lei; Gu, Naiting

    A high-resolution multi-wavelength simultaneous imaging system from visible to near-infrared bands with a solar adaptive optics system is developed to image the solar atmosphere from the photosphere to the chromosphere; seven imaging channels are chosen, including the G band (430.5 nm), the Na i line (589 nm), the H α line (656.3 nm), the TiO band (705.7 nm), the Ca ii IR line (854.2 nm), the He i line (1083 nm), and the Fe i line (1565.3 nm). To our knowledge, this is the solar high-resolution imaging system with the widest spectral coverage. The system was demonstrated at the 1 m New Vacuum Solar Telescope, and on-sky high-resolution observational results were acquired. In this paper, we illustrate the design and performance of the imaging system. The calibration and data reduction of the system are also presented.

  2. Development of high resolution simulations of the atmospheric environment using the MASS model

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Zack, John W.; Karyampudi, V. Mohan

    1989-01-01

    Numerical simulations were performed with a very high resolution (7.25 km) version of the MASS model (Version 4.0) in an effort to diagnose the vertical wind shear and static stability structure during the Shuttle Challenger disaster of 28 January 1986. These meso-beta-scale simulations reveal that the strongest vertical wind shears were concentrated in the 200-150 mb layer at 1630 GMT, i.e., at about the time of the disaster. These simulated vertical shears were the result of two primary dynamical processes. The juxtaposition of these two processes produced a shallow (30 mb deep) region of strong vertical wind shear, and hence low Richardson number values, during the launch time period. Comparisons with the Cape Canaveral (XMR) rawinsonde indicate that the high-resolution MASS 4.0 simulation more closely emulated nature than did previous simulations of the same event with the GMASS model.

  3. Resolution dependence of precipitation statistical fidelity in hindcast simulations

    DOE PAGES

    O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik; ...

    2016-06-19

    This article is a U.S. Government work and is in the public domain in the USA. Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are "realistic"; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental design has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5 day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations; (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al. (), which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.

  5. Interleaved diffusion-weighted EPI improved by adaptive partial-Fourier and multi-band multiplexed sensitivity-encoding reconstruction

    PubMed Central

    Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei

    2014-01-01

    Purpose: We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion-weighted imaging (DWI). Methods: First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either (a) motion-induced k-space energy-peak displacement or (b) fast phase changes induced by susceptibility field gradients. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results: The new adaptive homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion: The integration of the multi-band and adaptive homodyne-MUSE algorithms significantly improves the spatial resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000
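    The classical homodyne partial-Fourier step that underlies the adaptive variant described above can be sketched in 1-D: a low-resolution phase map estimated from the symmetrically sampled centre of k-space corrects the data before the unsampled half is recovered through conjugate symmetry. The sizes, test object, and phase model below are illustrative.

```python
import numpy as np

# 1-D homodyne partial-Fourier sketch: estimate a low-resolution phase from
# the symmetric centre of k-space, weight the asymmetric lines by 2, and take
# the real part after phase correction. Sizes and the object are illustrative.
n, n_acq = 256, 160                          # acquire 160 of 256 k-space lines
x = np.zeros(n); x[96:160] = 1.0             # real-valued test object
phase = np.exp(1j * 0.3 * np.sin(2 * np.pi * np.arange(n) / n))
obj = x * phase                              # slowly varying phase, as in DWI

k = np.fft.fftshift(np.fft.fft(obj))         # centred k-space: index i <-> freq i - n/2
acq = np.zeros(n, bool); acq[:n_acq] = True  # asymmetric sampling: freqs -128..31
sym = n_acq - n // 2                         # 32: half-width of symmetric centre

# Low-resolution phase estimate from the symmetrically sampled centre.
centre = np.where(np.abs(np.arange(n) - n // 2) < sym, k, 0)
phi = np.exp(1j * np.angle(np.fft.ifft(np.fft.ifftshift(centre))))

# Homodyne weighting: double the lines whose conjugate partner is missing.
w = np.zeros(n); w[: n // 2 - sym + 1] = 2.0; w[n // 2 - sym + 1 : n_acq] = 1.0
recon = np.real(np.conj(phi) * np.fft.ifft(np.fft.ifftshift(w * np.where(acq, k, 0))))

# Baseline for comparison: zero-filled reconstruction, same phase correction.
zerofill = np.real(np.conj(phi) * np.fft.ifft(np.fft.ifftshift(np.where(acq, k, 0))))
```

The method works only while the phase varies slowly enough to be captured by the symmetric centre; DWI shots violate that when motion shifts the k-space energy peak, which is what motivates choosing the symmetric region adaptively.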

  6. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount of dissipation and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high-speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood.
Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.
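
    The wavelet-sensor idea can be illustrated with a minimal multiresolution detector (a simplified sketch, not the authors' scheme): the detail coefficient at each grid point is its deviation from the average of its two neighbours, which is O(h^2) for smooth data but O(1) across shocks, so dissipation is switched on only where that detail is large.

```python
import numpy as np

def dissipation_sensor(u, eps=1e-3):
    """Flag cells where numerical dissipation should be activated.

    The detail (wavelet) coefficient at each interior point is its
    deviation from the average of its two neighbours; it is small for
    smooth data but large across shocks and steep gradients.
    """
    detail = np.abs(u[1:-1] - 0.5 * (u[:-2] + u[2:]))
    flags = np.zeros(u.size, dtype=bool)
    flags[1:-1] = detail > eps * max(np.abs(u).max(), 1e-30)
    return flags
```

    A high-order non-dissipative base scheme would then blend in shock-capturing dissipation only at the flagged cells, leaving smooth acoustic regions untouched.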

  7. Grand Canonical adaptive resolution simulation for molecules with electrons: A theoretical framework based on physical consistency

    NASA Astrophysics Data System (ADS)

    Delle Site, Luigi

    2018-01-01

    A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with the corresponding definition of numerical criteria to control the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.

  8. Numerical simulations of significant orographic precipitation in Madeira island

    NASA Astrophysics Data System (ADS)

    Couto, Flavio Tiago; Ducrocq, Véronique; Salgado, Rui; Costa, Maria João

    2016-03-01

    High-resolution simulations of high precipitation events with the MESO-NH model are presented, and also used to verify that increasing horizontal resolution in zones of complex orography, such as Madeira island, improves the simulation of the spatial distribution and total precipitation. The simulations succeeded in reproducing the general structure of the cloudy systems over the ocean in the four periods of significant accumulated precipitation considered. The accumulated precipitation over Madeira was better represented with the 0.5 km horizontal resolution and occurred under four distinct synoptic situations. Different spatial patterns of the rainfall distribution over Madeira have been identified.

  9. Microdome-grooved Gd(2)O(2)S:Tb scintillator for flexible and high resolution digital radiography.

    PubMed

    Jung, Phill Gu; Lee, Chi Hoon; Bae, Kong Myeong; Lee, Jae Min; Lee, Sang Min; Lim, Chang Hwy; Yun, Seungman; Kim, Ho Kyung; Ko, Jong Soo

    2010-07-05

    A flexible microdome-grooved Gd(2)O(2)S:Tb scintillator is simulated, fabricated, and characterized for digital radiography applications. According to Monte Carlo simulation results, the dome-grooved structure has a high spatial resolution, which is verified by X-ray image performance of the scintillator. The proposed scintillator has lower X-ray sensitivity than a nonstructured scintillator but almost two times higher spatial resolution at high spatial frequency. Through evaluation of the X-ray performance of the fabricated scintillators, we confirm that the microdome-grooved scintillator can be applied to next-generation flexible digital radiography systems requiring high spatial resolution.

  10. Integration of High-resolution Data for Temporal Bone Surgical Simulations

    PubMed Central

    Wiet, Gregory J.; Stredney, Don; Powell, Kimerly; Hittle, Brad; Kerwin, Thomas

    2016-01-01

    Purpose: To report on the state of the art in obtaining high-resolution 3D data of the microanatomy of the temporal bone and to process that data for integration into a surgical simulator. Specifically, we report on our experience in this area and discuss the issues involved to further the field. Data Sources: Current temporal bone image acquisition and image processing established in the literature as well as in-house methodological development. Review Methods: We reviewed the current English literature for the techniques used in computer-based temporal bone simulation systems to obtain and process anatomical data for use within the simulation. Search terms included “temporal bone simulation, surgical simulation, temporal bone.” Articles were chosen and reviewed that directly addressed data acquisition and processing/segmentation and enhancement, with emphasis given to computer-based systems. We present the results from this review in relationship to our approach. Conclusions: High-resolution CT imaging (≤100 μm voxel resolution), along with unique image processing and rendering algorithms and structure-specific enhancement, is needed for high-level training and assessment using temporal bone surgical simulators. Higher resolution clinical scanning and automated processes that run in efficient time frames are needed before these systems can routinely support pre-surgical planning. Additionally, protocols such as that provided in this manuscript need to be disseminated to increase the number and variety of virtual temporal bones available for training and performance assessment. PMID:26762105

  11. The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1

    DOE PAGES

    Wehner, Michael F.; Reed, Kevin A.; Li, Fuyu; ...

    2014-10-13

    We present an analysis of version 5.1 of the Community Atmospheric Model (CAM5.1) at a high horizontal resolution. Intercomparison of this global model at approximately 0.25°, 1°, and 2° is presented for extreme daily precipitation as well as for a suite of seasonal mean fields. In general, extreme precipitation amounts are larger in high resolution than in lower-resolution configurations. In many but not all locations and/or seasons, extreme daily precipitation rates in the high-resolution configuration are higher and more realistic. The high-resolution configuration produces tropical cyclones up to category 5 on the Saffir-Simpson scale and a comparison to observations reveals both realistic and unrealistic model behavior. In the absence of extensive model tuning at high resolution, simulation of many of the mean fields analyzed in this study is degraded compared to the tuned lower-resolution public released version of the model.

  12. Spatial resolution limitation of liquid crystal spatial light modulator

    NASA Astrophysics Data System (ADS)

    Wang, Xinghua; Wang, Bin; McManamon, Paul F., III; Pouch, John J.; Miranda, Felix A.; Anderson, James E.; Bos, Philip J.

    2004-10-01

    The effect of fringing electric fields in a liquid crystal (LC) Optical Phased Array (OPA), also referred to as a spatial light modulator (SLM), is a governing factor that determines the diffraction efficiency (DE) of the LC OPA for high resolution spatial phase modulation. In this article, the fringing field effect in a high resolution LC OPA is studied by accurately modeling the DE of LC blazed gratings using LC director simulation and Finite-Difference Time-Domain (FDTD) simulation. Factors that contribute significantly to the DE are discussed. Such results provide a fundamental understanding for high resolution LC devices.
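
    As a minimal illustration of why imperfect phase profiles reduce DE (ignoring fringing fields, which degrade it further by smoothing the profile), the efficiency into a given order is the squared magnitude of the corresponding Fourier coefficient of the complex transmission: an ideal blaze gives 1, while an N-level staircase gives sinc^2(1/N). The sketch below is generic textbook grating arithmetic, not the paper's FDTD model:

```python
import numpy as np

def diffraction_efficiency(phase, order=1):
    """DE into a given order: squared magnitude of the corresponding
    Fourier coefficient of the complex transmission exp(i*phase)."""
    c = np.fft.fft(np.exp(1j * phase)) / phase.size
    return np.abs(c[order]) ** 2

n, levels = 512, 8
x = np.arange(n) / n
ideal = 2 * np.pi * x                                  # perfect blaze, one period
staircase = 2 * np.pi * np.floor(x * levels) / levels  # N-level quantization
```

    For 8 phase levels the staircase already loses about 5% of the light from the first order; a fringing-field-rounded profile can be evaluated with the same function by substituting the simulated phase.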

  13. Can the black box be cracked? The augmentation of microbial ecology by high-resolution, automated sensing technologies.

    PubMed

    Shade, Ashley; Carey, Cayelan C; Kara, Emily; Bertilsson, Stefan; McMahon, Katherine D; Smith, Matthew C

    2009-08-01

    Automated sensing technologies, 'ASTs,' are tools that can monitor environmental or microbial-related variables at increasingly high temporal resolution. Microbial ecologists are poised to use AST data to couple microbial structure, function and associated environmental observations on temporal scales pertinent to microbial processes. In the context of aquatic microbiology, we discuss three applications of ASTs: windows on the microbial world, adaptive sampling and adaptive management. We challenge microbial ecologists to push AST potential in helping to reveal relationships between microbial structure and function.

  14. Three-dimensional anisotropic adaptive filtering of projection data for noise reduction in cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.

    2011-11-15

    Purpose: The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and thereby the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to these. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an 8.9-fold speed-up of the processing (from 1336 to 150 s). Conclusions: Adaptive anisotropic filtering has the potential to substantially improve image quality and/or reduce the radiation dose required for obtaining 3D image data using cone beam CT.
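
    The orientation-steered filtering can be illustrated with a strongly simplified 2D sketch (illustrative only, not the authors' 3D tensor-controlled algorithm): estimate per-pixel gradient energies, then low-pass only along the axis of smaller variation, so edges are preserved while noise is averaged along them.

```python
import numpy as np

def smooth1d(a, axis, k=5):
    """Boxcar low-pass along one axis with edge padding."""
    pad = k // 2
    if axis == 0:
        ap = np.pad(a, ((pad, pad), (0, 0)), mode='edge')
        return np.mean([ap[i:i + a.shape[0], :] for i in range(k)], axis=0)
    ap = np.pad(a, ((0, 0), (pad, pad)), mode='edge')
    return np.mean([ap[:, i:i + a.shape[1]] for i in range(k)], axis=0)

def anisotropic_filter(img, k=5):
    """Smooth each pixel only along the axis of least variation."""
    gy, gx = np.gradient(img)
    # Smoothed squared gradients: diagonal of a crude structure tensor.
    ey = smooth1d(smooth1d(gy ** 2, 0, k), 1, k)
    ex = smooth1d(smooth1d(gx ** 2, 0, k), 1, k)
    along_x = smooth1d(img, 1, k)   # horizontal low-pass
    along_y = smooth1d(img, 0, k)   # vertical low-pass
    # A vertical edge has large ex, so vertical smoothing is chosen there,
    # preserving the edge while still averaging noise.
    return np.where(ex < ey, along_x, along_y)
```

    The published method generalizes this idea: a full structure tensor yields arbitrary orientations (not just the two axes), and the filtering runs in the 3D projection + angle space before reconstruction.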

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
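
    In AdResS, the on-the-fly resolution change is usually written in terms of a smooth weighting function lambda(x) that is 1 in the atomistic region, 0 in the coarse-grained reservoir, and a cos^2 ramp across the hybrid layer; pairwise forces are then interpolated with the product of the two particles' weights. A minimal sketch of these standard ingredients (parameter names illustrative, geometry reduced to one distance coordinate):

```python
import numpy as np

def adress_weight(x, x_at, d_hy):
    """Resolution function lambda(x): 1 in the atomistic zone (|x| < x_at),
    0 in the coarse-grained reservoir, cos^2 ramp across the hybrid layer
    of width d_hy."""
    r = np.abs(np.asarray(x, dtype=float))
    lam = np.cos(np.pi * (r - x_at) / (2.0 * d_hy)) ** 2
    lam = np.where(r <= x_at, 1.0, lam)
    return np.where(r >= x_at + d_hy, 0.0, lam)

def coupled_force(f_at, f_cg, lam_i, lam_j):
    """Pairwise force-interpolation rule used in force-based AdResS."""
    w = lam_i * lam_j
    return w * f_at + (1.0 - w) * f_cg
```

    A production implementation adds a thermodynamic force in the hybrid layer to keep the density flat; that correction is omitted here.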

  16. Aberration control in 4Pi nanoscopy: definitions, properties, and applications (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hao, Xiang; Allgeyer, Edward S.; Velasco, Mary Grace M.; Booth, Martin J.; Bewersdorf, Joerg

    2016-03-01

    The development of fluorescence microscopy, which allows live-cell imaging with high labeling specificity, has made the visualization of cellular architecture routine. However, for centuries the spatial resolution of optical microscopy was fundamentally limited by diffraction. The past two decades have seen a revolution in far-field optical nanoscopy (or "super-resolution" microscopy). The best 3D resolution is achieved by optical nanoscopes like the isoSTED or the iPALM/4Pi-SMS, which utilize two opposing objective lenses in a coherent manner. These systems are, however, also more complex, and the required interference conditions demand precise aberration control. Our research involves developing novel adaptive optics techniques that enable high spatial and temporal resolution imaging for biological applications. In this talk, we will discuss how adaptive optics can enhance dual-objective-lens nanoscopes. We will demonstrate how adaptive optics devices provide unprecedented freedom to manipulate the light field in isoSTED nanoscopy, enable automatic beam alignment, suppress the inherent side-lobes of the point-spread function, and dynamically compensate for sample-induced aberrations. We will present both the theoretical groundwork and the experimental confirmations.

  17. A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers

    NASA Astrophysics Data System (ADS)

    Bassett, Gene Marcel

    1993-01-01

    Instabilities affecting the propagation of supersonic gaseous jets have been studied using high resolution computer simulations with the Piecewise-Parabolic Method (PPM). These results are discussed in relation to jets from galactic nuclei. These studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations with different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers, and densities, perturbed by incident sound waves, imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is tau_g = (546 +/- 24) (M/4)^1.7 (A_pert/0.02)^-0.4 delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_pert is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, v_mode = c_ext v_relative / (c_jet + c_ext). For high resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even perturbation is initially higher in amplitude than the odd perturbation. For low resolution simulations, the odd mode dominates even for a stronger even-mode perturbation. In high resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet. In low resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
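
    The reported scaling law is straightforward to evaluate directly; the helper below simply encodes the fitted relation from the abstract (central value only, the +/-24 uncertainty omitted):

```python
def growth_time(mach, a_pert, delta, c_s):
    """Fitted kink-mode growth/disruption time from the simulations:
    tau_g = (546 +/- 24) (M/4)^1.7 (A_pert/0.02)^-0.4 * delta/c.
    mach   : jet Mach number M
    a_pert : perturbation amplitude A_pert
    delta  : shear-layer half-width
    c_s    : sound speed (same units as delta per time)
    """
    return 546.0 * (mach / 4.0) ** 1.7 * (a_pert / 0.02) ** -0.4 * delta / c_s
```

    The exponents make the trends explicit: faster jets survive longer (positive Mach exponent), while stronger perturbations disrupt the layer sooner (negative amplitude exponent).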

  18. Signature of present and projected climate change at an urban scale: The case of Addis Ababa

    NASA Astrophysics Data System (ADS)

    Arsiso, Bisrat Kifle; Mengistu Tsidu, Gizaw; Stoffberg, Gerrit Hendrik

    2018-06-01

    Understanding climate change and variability at an urban scale is essential for water resource management, land use planning, development of adaptation plans, and mitigation of air and water pollution. However, there are serious challenges to meeting these goals due to the unavailability of observed and/or simulated high-resolution spatial and temporal climate data. The statistical downscaling of general circulation model output, for instance, is usually driven by sparse observational data, hindering the use of downscaled data to investigate urban-scale climate variability and change in the past. Recently, these challenges have been partly resolved by a concerted international effort to produce global, high-spatial-resolution climate data. In this study, the 1 km2 high-resolution NIMR-HadGEM2-AO simulations for future projections under Representative Concentration Pathway (RCP4.5 and RCP8.5) scenarios and gridded observations provided by the Worldclim data center are used to assess changes in rainfall and in minimum and maximum temperature expected under the two scenarios over Addis Ababa city. The gridded 1 km2 observational data set for the base period (1950-2000) is compared to observations from a meteorological station in the city in order to assess its quality for use as a reference (baseline) data set. The comparison revealed that the data set is of very good quality. The rainfall anomalies under the RCP scenarios are wet in the 2030s (2020-2039), 2050s (2040-2069), and 2080s (2070-2099). Both minimum and maximum temperature anomalies under the RCPs become successively warmer during these periods. Thus, the projected changes under the RCP scenarios show a general increase in rainfall and temperature, with strong variability in rainfall during the rainy season, implying difficulties for water resource use and management as well as land use planning and management.

  19. Super-resolution fusion of complementary panoramic images based on cross-selection kernel regression interpolation.

    PubMed

    Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu

    2014-03-20

    A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To enhance this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and outer images, the horizontal gradients in the expected panoramic image are estimated from the scattered neighboring pixels mapped from the outer image, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels used in the next interpolation step of kernel regression are also selected based on a comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.

  20. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    PubMed

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt their spatial resolution.

  1. Sensitivity to grid resolution in the ability of a chemical transport model to simulate observed oxidant chemistry under high-isoprene conditions

    NASA Astrophysics Data System (ADS)

    Yu, Karen; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Miller, Christopher C.; Travis, Katherine R.; Zhu, Lei; Yantosca, Robert M.; Sulprizio, Melissa P.; Cohen, Ron C.; Dibb, Jack E.; Fried, Alan; Mikoviny, Tomas; Ryerson, Thomas B.; Wennberg, Paul O.; Wisthaler, Armin

    2016-04-01

    Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO) or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2). We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25° × 0.3125°, 2° × 2.5°, 4° × 5°) to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25° × 0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25° × 0.3125° resolution (54 %) than at coarser resolution (59 %). The cumulative probability distribution functions (CDFs) of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes) are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy) changing little across model resolutions. Model concentrations in the lower free troposphere are also insensitive to grid resolution. 
The overall low sensitivity of modeled concentrations to grid resolution implies that coarse resolution is adequate when modeling continental boundary layer chemistry for global applications.

  2. High resolution frequency analysis techniques with application to the redshift experiment

    NASA Technical Reports Server (NTRS)

    Decher, R.; Teuber, D.

    1975-01-01

    High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of 0.00001 Hz is required to measure a slowly varying, low-frequency signal of approximately 1 Hz. Major building blocks include the fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter technique. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
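
    Two of the building blocks named above, the FFT for a coarse peak and a golden-section search for sub-bin refinement, combine naturally into a single-tone frequency estimator. The sketch below is a modern NumPy toy illustrating that combination, not the original flight-era software:

```python
import numpy as np

def refine_frequency(x, fs):
    """Estimate a single tone's frequency far below one FFT bin:
    coarse FFT peak, then a golden-section search on the magnitude of
    the DTFT evaluated at continuous frequencies inside that bin."""
    n = x.size
    k = int(np.argmax(np.abs(np.fft.rfft(x))[1:]) + 1)  # coarse peak, skip DC
    df = fs / n
    t = np.arange(n) / fs

    def neg_power(f):
        # negative DTFT magnitude, so the search is a minimization
        return -np.abs(np.sum(x * np.exp(-2j * np.pi * f * t)))

    gr = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = (k - 0.5) * df, (k + 0.5) * df    # half-bin bracket around the peak
    for _ in range(50):
        c = b - gr * (b - a)
        d = a + gr * (b - a)
        if neg_power(c) < neg_power(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)
```

    The half-bin bracket keeps the search inside the main spectral lobe, where the magnitude is unimodal, so the golden-section iterations converge to the true peak rather than a sidelobe.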

  3. Evolution of flexibility and rigidity in retaliatory punishment

    PubMed Central

    MacGlashan, James; Littman, Michael L.

    2017-01-01

    Natural selection designs some social behaviors to depend on flexible learning processes, whereas others are relatively rigid or reflexive. What determines the balance between these two approaches? We offer a detailed case study in the context of a two-player game with antisocial behavior and retaliatory punishment. We show that each player in this game—a “thief” and a “victim”—must balance two competing strategic interests. Flexibility is valuable because it allows adaptive differentiation in the face of diverse opponents. However, it is also risky because, in competitive games, it can produce systematically suboptimal behaviors. Using a combination of evolutionary analysis, reinforcement learning simulations, and behavioral experimentation, we show that the resolution to this tension—and the adaptation of social behavior in this game—hinges on the game’s learning dynamics. Our findings clarify punishment’s adaptive basis, offer a case study of the evolution of social preferences, and highlight an important connection between natural selection and learning in the resolution of social conflicts. PMID:28893996

  4. Evolution of flexibility and rigidity in retaliatory punishment.

    PubMed

    Morris, Adam; MacGlashan, James; Littman, Michael L; Cushman, Fiery

    2017-09-26

    Natural selection designs some social behaviors to depend on flexible learning processes, whereas others are relatively rigid or reflexive. What determines the balance between these two approaches? We offer a detailed case study in the context of a two-player game with antisocial behavior and retaliatory punishment. We show that each player in this game-a "thief" and a "victim"-must balance two competing strategic interests. Flexibility is valuable because it allows adaptive differentiation in the face of diverse opponents. However, it is also risky because, in competitive games, it can produce systematically suboptimal behaviors. Using a combination of evolutionary analysis, reinforcement learning simulations, and behavioral experimentation, we show that the resolution to this tension-and the adaptation of social behavior in this game-hinges on the game's learning dynamics. Our findings clarify punishment's adaptive basis, offer a case study of the evolution of social preferences, and highlight an important connection between natural selection and learning in the resolution of social conflicts.

  5. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns, however, are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
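
    The basic data-adjustment step underlying this framework, before any subspace acceleration, is a generalized-least-squares update of the input data toward the measured observables through a linearized model. A toy sketch of that standard step (matrix names are generic, not the paper's notation, and the million-variable Jacobian problem that ESMs address is obviously absent at this scale):

```python
import numpy as np

def adjust_data(a0, jac, cov_a, cov_d, d_meas, d_calc):
    """One generalized-least-squares adjustment of simulator input data a0:
    weight the observable residual by the input-data covariance cov_a and
    the measurement covariance cov_d, then map it back through the
    (linearized) Jacobian jac of observables with respect to inputs."""
    gain = cov_a @ jac.T @ np.linalg.inv(jac @ cov_a @ jac.T + cov_d)
    return a0 + gain @ (d_meas - d_calc)
```

    The covariances set the balance: trusted measurements (small cov_d) pull the inputs strongly, while uncertain measurements leave them near their prior values, which speaks directly to the reliability concern raised in the abstract.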

  6. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

    PubMed

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

    2015-08-24

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide-field microscope, using a Shack-Hartmann wavefront sensor for closed-loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.
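
    Image-based (wavefront-sensorless) control of a modal corrector can be illustrated with a toy modal search: probe each Zernike coefficient at three bias values of an image-quality metric and jump to the vertex of the fitted parabola. Everything below is a hedged sketch, not the authors' controller; in particular, the metric is a Marechal-approximation stand-in for a real image-sharpness measurement:

```python
import numpy as np

def sharpness(residual):
    """Stand-in image-quality metric: Marechal approximation of the
    Strehl ratio, exp(-sigma^2), with sigma^2 the residual wavefront
    variance (sum of squared Zernike coefficients, in rad^2)."""
    return np.exp(-np.sum(np.asarray(residual) ** 2))

def sensorless_correct(aberration, n_modes, step=0.5):
    """Modal wavefront-sensorless correction: probe each Zernike mode at
    biases -step, 0, +step and move to the vertex of the parabola through
    the three metric samples (classic 3-point quadratic maximization)."""
    corr = np.zeros(n_modes)
    for m in range(n_modes):
        y = []
        for b in (-step, 0.0, step):
            trial = corr.copy()
            trial[m] += b
            y.append(sharpness(aberration - trial))
        denom = y[0] - 2.0 * y[1] + y[2]
        if abs(denom) > 1e-12:                 # vertex of the fitted parabola
            corr[m] += 0.5 * step * (y[0] - y[2]) / denom
    return corr
```

    In a real system the biases would be applied by the adaptive lens actuators and the metric computed from the camera image; the mode-by-mode probe-and-fit loop is the same.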

  7. High-Resolution Mesoscale Simulations of the 6-7 May 2000 Missouri Flash Flood: Impact of Model Initialization and Land Surface Treatment

    NASA Technical Reports Server (NTRS)

    Baker, R. David; Wang, Yansen; Tao, Wei-Kuo; Wetzel, Peter; Belcher, Larry R.

    2004-01-01

    High-resolution mesoscale model simulations of the 6-7 May 2000 Missouri flash flood event were performed to test the impact of model initialization and land surface treatment on the timing, intensity, and location of extreme precipitation. In this flash flood event, a mesoscale convective system (MCS) produced over 340 mm of rain in roughly 9 hours in some locations. Two different types of model initialization were employed: 1) NCEP global reanalysis with 2.5-degree grid spacing and 12-hour temporal resolution, and 2) Eta reanalysis with 40-km grid spacing and 3-hour temporal resolution. In addition, two different land surface treatments were considered. A simple land scheme (SLAB) keeps soil moisture fixed at initial values throughout the simulation, while a more sophisticated land model (PLACE) allows for interactive feedback. Simulations with high-resolution Eta model initialization show considerable improvement in the intensity of precipitation due to the presence in the initialization of a residual mesoscale convective vortex (MCV) from a previous MCS. Simulations with the PLACE land model show improved location of heavy precipitation. Since soil moisture can vary over time in the PLACE model, surface energy fluxes exhibit strong spatial gradients. These surface energy flux gradients help produce a strong low-level jet (LLJ) in the correct location. The LLJ then interacts with the cold outflow boundary of the MCS to produce new convective cells. The simulation with both high-resolution model initialization and time-varying soil moisture best reproduces the intensity and location of observed rainfall.

  8. Robust video super-resolution with registration efficiency adaptation

    NASA Astrophysics Data System (ADS)

    Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen

    2010-07-01

    Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied on video sequences, strongly relies on the efficiency of motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially-varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences, and experimental results clearly demonstrate the effectiveness of the proposed method.
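    The weighting idea can be sketched as follows (a minimal illustration; the Gaussian kernel, the gradient step, and the omission of the motion warp are assumptions, not the paper's exact design):

```python
import numpy as np

def registration_weights(errors, sigma=1.0):
    """Map per-pixel registration errors to reliability weights:
    larger error -> smaller weight (Gaussian kernel, illustrative)."""
    return np.exp(-(np.asarray(errors) ** 2) / (2 * sigma ** 2))

def weighted_sr_step(x, lr_frames, errors, downsample, upsample, lam=0.1):
    """One gradient-descent step on the weighted data-fidelity term
    sum_k w_k * ||D(x) - y_k||^2 (motion warp omitted for brevity)."""
    grad = np.zeros_like(x)
    for y_k, e_k in zip(lr_frames, errors):
        w_k = registration_weights(e_k)          # low weight = outlier
        residual = downsample(x) - y_k
        grad += upsample(w_k * residual)         # back-project weighted residual
    return x - lam * grad
```

    Pixels with large registration error thus contribute little to the update, which is what makes the reconstruction robust to misalignment.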

  9. Techniques and resources for storm-scale numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert

    1993-01-01

    The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.

  10. Multiple Scales in Fluid Dynamics and Meteorology: The DFG Priority Programme 1276 MetStröm

    NASA Astrophysics Data System (ADS)

    von Larcher, Th; Klein, R.

    2012-04-01

    Geophysical fluid motions are characterized by a very wide range of length and time scales, and by a rich collection of varying physical phenomena. The mathematical description of these motions reflects this multitude of scales and mechanisms in that it involves strong non-linearities and various scale-dependent singular limit regimes. Considerable progress has been made in recent years in the mathematical modelling and numerical simulation of such flows in detailed process studies, numerical weather forecasting, and climate research. One task of outstanding importance in this context has been and will remain for the foreseeable future the subgrid scale parameterization of the net effects of non-resolved processes that take place on spatio-temporal scales not resolvable even by the largest, most recent supercomputers. Since the advent of numerical weather forecasting some 60 years ago, one simple but efficient means to achieve improved forecasting skills has been increased spatio-temporal resolution. This seems quite consistent with the concept of convergence of numerical methods in Applied Mathematics and Computational Fluid Dynamics (CFD) at first glance. Yet, the very notion of increased resolution in atmosphere-ocean science is very different from the one used in Applied Mathematics: For the mathematician, increased resolution provides the benefit of getting closer to the ideal of a converged solution of some given partial differential equations. On the other hand, the atmosphere-ocean scientist would naturally refine the computational grid and adjust the mathematical model, such that it better represents the relevant physical processes that occur at smaller scales. This conceptual contradiction remains largely irrelevant as long as geophysical flow models operate with fixed computational grids and time steps and with subgrid scale parameterizations being optimized accordingly. 
The picture changes fundamentally when modern techniques from CFD involving spatio-temporal grid adaptivity get invoked in order to further improve the net efficiency in exploiting the given computational resources. In the setting of geophysical flow simulation one must then employ subgrid scale parameterizations that dynamically adapt to the changing grid sizes and time steps, implement ways to judiciously control and steer the newly available flexibility of resolution, and invent novel ways of quantifying the remaining errors. The DFG priority programme MetStröm combines the expertise of Meteorology, Fluid Dynamics, and Applied Mathematics to develop model- as well as grid-adaptive numerical simulation concepts in multidisciplinary projects. The goal of this priority programme is to provide simulation models which combine scale-dependent (mathematical) descriptions of key physical processes with adaptive flow discretization schemes. Deterministic continuous approaches and discrete and/or stochastic closures and their possible interplay are taken into consideration. Research focuses on the theory and methodology of multiscale meteorological-fluid mechanics modelling. Accompanying reference experiments support model validation.

  11. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    NASA Astrophysics Data System (ADS)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), though only as individual case studies. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of MJO events covered by numerical simulations, in addition to extending integration times and increasing horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  12. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to satisfy both requirements. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate to uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the Probability Density Function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design which is robust to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
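    A minimal sketch of such a trajectory design, assuming a polynomial variable-density PDF and a circular fully-sampled low-frequency core (both illustrative choices, not the paper's exact scheme):

```python
import numpy as np

def vd_mask(shape, center_frac=0.2, high_frac=0.2, decay=2.0, seed=0):
    """Build a CS under-sampling mask for 2D k-space: keep the central
    low-frequency region fully sampled and apply a variable-density
    random pattern only at high frequencies."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                         indexing="ij")
    r = np.sqrt(yy ** 2 + xx ** 2)          # normalized distance from k-space center
    core = r <= center_frac                 # fully sampled low-frequency core
    pdf = (1 - np.clip(r, 0, 1)) ** decay   # sampling density falls with |k|
    high = (r > center_frac) & (rng.random(shape) < high_frac * pdf / pdf.mean())
    return core | high
```

    Because the low-frequency core is deterministic, reconstruction quality is far less sensitive to the PDF parameters and to the random draw than a pure VD mask.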

  13. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error-estimation in database quality.
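    The adjoint-weighted residual idea can be sketched generically (an array-based toy, not the Cartesian solver's implementation; the equidistribution rule for marking cells is an assumption):

```python
import numpy as np

def adjoint_error_estimate(residual, adjoint):
    """Adjoint-weighted residual estimate of the error in an output
    functional J: delta_J ~ psi . R(u_h), accumulated cell by cell so
    the largest local contributions flag cells for refinement."""
    cellwise = adjoint * residual            # local error contributions
    return cellwise.sum(), np.abs(cellwise)  # (output error, indicators)

def cells_to_refine(indicators, tol):
    """Mark cells whose indicator exceeds an equidistributed share
    of the error tolerance, tol / n_cells."""
    return np.flatnonzero(indicators > tol / indicators.size)
```

    Refining only the marked cells is what lets the adapted databases in the abstract hit a prescribed output accuracy at far lower cost than uniform refinement.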

  14. Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution

    NASA Astrophysics Data System (ADS)

    Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.

    2016-12-01

    Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process. However, current geochemical modeling of fractures cannot capture this multi-scale nature of geochemical and mechanical impacts on fracture evolution, and is limited to either a continuum or pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often, a cubic law) of the fracture aperture. This approach has the limitation that it oversimplifies flow within the fracture in its omission of pore scale effects while also assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in their ability to treat tractable domain sizes (Steefel et al., 2013). Thus, there is a critical need to develop an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high performance simulation capability, Chombo-Crunch, leveraged by high resolution characterization and experiments. 
The modeling framework is based on the adaptive capability in Chombo, which enables not only mesh refinement but also refinement of the model (pore scale or continuum Darcy scale) in a dynamic way, such that the appropriate model is used only when and where it is needed. Explicit flux matching provides coupling between the scales.

  15. Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2014-01-01

    Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69deg delta wing model and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. Data provided includes both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.

  16. High-resolution downscaling for hydrological management

    NASA Astrophysics Data System (ADS)

    Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos

    2017-04-01

    Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation to onGOing water management) aims to bridge the gap between the needs of hydrological modellers and planners, and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expense is minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations, and then downscaling to the convection-permitting kilometre scale only those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.

  17. Developing parallel GeoFEST(P) using the PYRAMID AMR library

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Tisdale, Robert E.

    2004-01-01

    The PYRAMID parallel unstructured adaptive mesh refinement (AMR) library has been coupled with the GeoFEST geophysical finite element simulation tool to support parallel active tectonics simulations. Specifically, we have demonstrated modeling of coseismic and postseismic surface displacement due to a simulated earthquake for the Landers system of interacting faults in Southern California. The new software demonstrated a 25-times resolution improvement and a 4-times reduction in time to solution over the sequential baseline milestone case. Simulations on workstations using a few tens of thousands of stress displacement finite elements can now be expanded to multiple millions of elements with greater than 98% scaled efficiency on various parallel platforms over many hundreds of processors. Our most recent work has demonstrated that we can dynamically adapt the computational grid as stress grows on a fault. In this paper, we will describe the major issues and challenges associated with coupling these two programs to create GeoFEST(P). Performance and visualization results will also be described.

  18. Capabilities of Fully Parallelized MHD Stability Code MARS

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2016-10-01

    Results of full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. Parallel version of MARS, named PMARS, has been recently developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse vector iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.

  19. High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models

    NASA Astrophysics Data System (ADS)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David

    2014-12-01

    High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference-gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with near-surface meteorological fields derived from short-range forecasts, operational analyses, and observed temperatures and humidity. Results show that the outputs of the surface model obtained in the present study have the potential to improve the near-surface atmospheric fields in extended-range LAM integrations.

  20. Satellite image time series simulation for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-11-01

    The performance of environmental monitoring heavily depends on the availability of consecutive observation data, and there is an increasing demand in the remote sensing community for satellite image data of sufficient resolution in both space and time, requirements that conflict and force hard trade-offs. Multiple constellations could be a solution if cost were no concern, so it is interesting but very challenging to develop a method which can simultaneously improve both spatial and temporal detail. There have been research efforts to deal with the problem from various angles. One class of approaches enhances spatial resolution using techniques such as super-resolution and pan-sharpening, which can produce good visual effects but mostly cannot preserve spectral signatures, and so lose analytical value. Another class fills temporal gaps by time interpolation, which adds no new information. In this paper we present a novel method to generate satellite images in higher spatial and temporal detail, which further enables satellite image time series simulation. Our method starts with a pair of high- and low-resolution data sets; a spatial registration is then performed by introducing an LDA model to map high- and low-resolution pixels to one another. Afterwards, temporal change information is captured through a comparison of the low-resolution time series, and the change is projected onto the high-resolution data plane and assigned to each high-resolution pixel with reference to predefined temporal change patterns for each type of ground object, generating a simulated high-resolution image. A preliminary experiment shows that our method can simulate high-resolution data with good accuracy. 
The contribution of our method is to enable timely monitoring of temporal changes through analysis of low-resolution image time series alone, so that the use of costly high-resolution data can be reduced as much as possible; this offers a cost-effective route to an economically operational monitoring service for environment, agriculture, forest, land use investigation, and other applications.
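Stripped of the LDA registration and the per-class change patterns, the core projection step amounts to something like the following (an illustrative sketch only; the authors' method is considerably richer):

```python
import numpy as np

def simulate_hr(hr_t0, lr_t0, lr_t1, scale):
    """Project the temporal change observed in a low-resolution image
    pair onto a high-resolution base image:
    HR(t1) ~ HR(t0) + upsampled(LR(t1) - LR(t0))."""
    delta = lr_t1 - lr_t0                          # change seen at low resolution
    delta_hr = np.kron(delta, np.ones((scale, scale)))  # nearest-neighbor upsample
    return hr_t0 + delta_hr
```

    The per-class change patterns in the paper would replace the uniform upsampling here, distributing the low-resolution change unevenly among the high-resolution pixels of each land-cover type.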

  1. Revising Hydrology of a Land Surface Model

    NASA Astrophysics Data System (ADS)

    Le Vine, Nataliya; Butler, Adrian; McIntyre, Neil; Jackson, Christopher

    2015-04-01

    Land Surface Models (LSMs) are key elements in guiding adaptation to the changing water cycle and the starting points to develop a global hyper-resolution model of the terrestrial water, energy and biogeochemical cycles. However, before this potential is realised, there are some fundamental limitations of LSMs related to how meaningfully hydrological fluxes and stores are represented. An important limitation is the simplistic or non-existent representation of the deep subsurface in LSMs; and another is the lack of connection of LSM parameterisations to relevant hydrological information. In this context, the paper uses a case study of the JULES (Joint UK Land Environmental Simulator) LSM applied to the Kennet region in Southern England. The paper explores the assumptions behind JULES hydrology, adapts the model structure and optimises the coupling with the ZOOMQ3D regional groundwater model. The analysis illustrates how three types of information can be used to improve the model's hydrology: a) observations, b) regionalized information, and c) information from an independent physics-based model. It is found that: 1) coupling to the groundwater model allows realistic simulation of streamflows; 2) a simple dynamic lower boundary improves upon JULES' stationary unit gradient condition; 3) a 1D vertical flow in the unsaturated zone is sufficient; however there is benefit in introducing a simple dual soil moisture retention curve; 4) regionalized information can be used to describe soil spatial heterogeneity. It is concluded that relatively simple refinements to the hydrology of JULES and its parameterisation method can provide a substantial step forward in realising its potential as a high-resolution multi-purpose model.

  2. Nonlinear filtering for character recognition in low quality document images

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2014-09-01

    Optical character recognition in scanned printed documents is a well-studied task, where the captured conditions like sheet position, illumination, contrast and resolution are controlled. Nowadays, it is more practical to use mobile devices for document capture than a scanner. So as a consequence, the quality of document images is often poor owing to presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.

  3. Formation of Cool Cores in Galaxy Clusters via Hierarchical Mergers

    NASA Astrophysics Data System (ADS)

    Motl, Patrick M.; Burns, Jack O.; Loken, Chris; Norman, Michael L.; Bryan, Greg

    2004-05-01

    We present a new scenario for the formation of cool cores in rich galaxy clusters, based on results from recent high spatial dynamic range, adaptive mesh Eulerian hydrodynamic simulations of large-scale structure formation. We find that cores of cool gas, material that would be identified as a classical cooling flow on the basis of its X-ray luminosity excess and temperature profile, are built from the accretion of discrete stable subclusters. Any ``cooling flow'' present is overwhelmed by the velocity field within the cluster; the bulk flow of gas through the cluster typically has speeds up to about 2000 km s-1, and significant rotation is frequently present in the cluster core. The inclusion of consistent initial cosmological conditions for the cluster within its surrounding supercluster environment is crucial when the evolution of cool cores in rich galaxy clusters is simulated. This new model for the hierarchical assembly of cool gas naturally explains the high frequency of cool cores in rich galaxy clusters, despite the fact that a majority of these clusters show evidence of substructure that is believed to arise from recent merger activity. Furthermore, our simulations generate complex cluster cores in concordance with recent X-ray observations of cool fronts, cool ``bullets,'' and filaments in a number of galaxy clusters. Our simulations were computed with a coupled N-body, Eulerian, adaptive mesh refinement, hydrodynamics cosmology code that properly treats the effects of shocks and radiative cooling by the gas. We employ up to seven levels of refinement to attain a peak resolution of 15.6 kpc within a volume 256 Mpc on a side and assume a standard ΛCDM cosmology.

  4. Numerical Study of Richtmyer-Meshkov Instability with Re-Shock

    NASA Astrophysics Data System (ADS)

    Wong, Man Long; Livescu, Daniel; Lele, Sanjiva

    2017-11-01

    The interaction of a Mach 1.45 shock wave with a perturbed planar interface between two gases with an Atwood number 0.68 is studied through 2D and 3D shock-capturing adaptive mesh refinement (AMR) simulations with physical diffusive and viscous terms. The simulations have initial conditions similar to those in the actual experiment conducted by Poggi et al. [1998]. The development of the flow and evolution of mixing due to the interactions with the first shock and the re-shock are studied together with the sensitivity of various global parameters to the properties of the initial perturbation. Grid resolutions needed for fully resolved and 2D and 3D simulations are also evaluated. Simulations are conducted with an in-house AMR solver HAMeRS built on the SAMRAI library. The code utilizes the high-order localized dissipation weighted compact nonlinear scheme [Wong and Lele, 2017] for shock-capturing and different sensors including the wavelet sensor [Wong and Lele, 2016] to identify regions for grid refinement. First and third authors acknowledge the project sponsor LANL.

  5. Simulation of climatology and Interannual Variability of Spring Persistent Rains by Meteorological Research Institute Model: Impacts of different horizontal resolutions

    NASA Astrophysics Data System (ADS)

    Li, Puxi; Zhou, Tianjun; Zou, Liwei

    2016-04-01

    The authors evaluated the performance of Meteorological Research Institute (MRI) AGCM3.2 models in simulating the climatology and interannual variability of the Spring Persistent Rains (SPR) over southeastern China. The possible impacts of different horizontal resolutions were also investigated, based on experiments with three different horizontal resolutions (120, 60, and 20 km). The model could reasonably reproduce the main rainfall center over southeastern China in boreal spring at all three resolutions. In comparison with the 120-km simulation, the 60-km and 20-km simulations are superior in simulating the rainfall centers anchored by the Nanling-Wuyi Mountains, but overestimate rainfall intensity. Water vapor budget diagnosis showed that the 60-km and 20-km simulations tend to overestimate the water vapor convergence over southeastern China, which leads to wet biases. Regarding the interannual variability of SPR, the model could reasonably reproduce the anomalous lower-tropospheric anticyclone in the western North Pacific (WNPAC) and the positive precipitation anomalies over southeastern China in El Niño decaying spring. Compared with the 120-km resolution, the large positive biases are substantially reduced in the mid- and high-resolution models, which evidently improve the simulation of horizontal moisture advection in El Niño decaying spring. We highlight the importance of developing high-resolution climate models, as they could potentially improve the simulation of both the climatology and the interannual variability of SPR.

  6. Transitional hemodynamics in intracranial aneurysms - Comparative velocity investigations with high resolution lattice Boltzmann simulations, normal resolution ANSYS simulations, and MR imaging.

    PubMed

    Jain, Kartik; Jiang, Jingfeng; Strother, Charles; Mardal, Kent-André

    2016-11-01

    Blood flow in intracranial aneurysms has, until recently, been considered disturbed but still laminar. Recent high-resolution computational studies have demonstrated, however, that in some situations the flow may exhibit high-frequency fluctuations that resemble weakly turbulent or transitional flow. Due to the numerous simplifying assumptions required in computational fluid dynamics (CFD) studies, the occurrence of these events in vivo remains unsettled. The detection of these fluctuations in aneurysmal blood flow, i.e., hemodynamics, by CFD poses additional challenges, as such phenomena cannot be captured in clinical data acquisition with magnetic resonance (MR) due to inadequate temporal and spatial resolutions. The authors' purpose was to address this issue by comparing results from highly resolved simulations, conventional-resolution laminar simulations, and MR measurements, identifying the differences and their causes. Two aneurysms in the basilar artery, one with disturbed yet laminar flow and the other with transitional flow, were chosen. One set of highly resolved direct numerical simulations was conducted using the lattice Boltzmann method (LBM), and another, with conventional resolution under the laminar flow assumption, using the commercially available ANSYS Fluent solver. The velocity fields obtained from the simulations were compared qualitatively and statistically against each other and against the MR acquisition. Results from LBM, ANSYS Fluent, and MR agree well qualitatively and quantitatively for the aneurysm with laminar flow, in which fluctuations were <80 Hz. The comparisons for the second aneurysm, with fluctuations above ∼600 Hz, showed marked differences between LBM, ANSYS Fluent, and magnetic resonance imaging. After ensemble averaging and down-sampling to coarser space and time scales, these differences became minimal. A combination of MR-derived data and CFD can be helpful in estimating the hemodynamic environment of intracranial aneurysms. Adequately resolved CFD would suffice for gross assessment of hemodynamics, potentially in a clinical setting, while highly resolved CFD could be helpful for a detailed and retrospective understanding of the physiological mechanisms.
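
    The effect of ensemble averaging and down-sampling can be illustrated with a toy signal (a minimal sketch with made-up numbers, not the study's data): a "highly resolved" velocity trace carrying ~600 Hz fluctuations and a "conventional" trace without them differ strongly at full resolution, but become nearly indistinguishable after block-averaging to an MR-like temporal resolution.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 5000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
mean_flow = 0.4 * (1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t))  # pulsatile mean

# "Highly resolved" signal carries ~600 Hz fluctuations plus noise;
# the "conventional" laminar one does not.
hi_res = (mean_flow + 0.05 * np.sin(2 * np.pi * 600.0 * t)
          + 0.01 * rng.standard_normal(t.size))
lo_res = mean_flow.copy()

def downsample(sig, factor):
    """Block-average to a coarser time resolution."""
    n = sig.size // factor * factor
    return sig[:n].reshape(-1, factor).mean(axis=1)

factor = 250                     # -> 20 Hz effective sampling
diff_fine = np.sqrt(np.mean((hi_res - lo_res) ** 2))
diff_coarse = np.sqrt(np.mean((downsample(hi_res, factor)
                               - downsample(lo_res, factor)) ** 2))
print(f"RMS difference fine: {diff_fine:.4f}, coarse: {diff_coarse:.4f}")
```

    The high-frequency content averages out over each coarse block, so the two traces agree at the coarse scale even though they differ markedly at full resolution.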

  7. Coherent optical adaptive technique improves the spatial resolution of STED microscopy in thick samples

    PubMed Central

    Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong

    2018-01-01

    Stimulated emission depletion (STED) microscopy is a far-field optical microscopy technique that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the depletion beam profile may be distorted by aberrations of the optical system and inhomogeneity of the specimen's optical properties, resulting in a compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect, no matter how much depletion power is applied to the specimen. Previously, several adaptive optics approaches have been explored to compensate for aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356

  8. Engineering aspects of the Large Binocular Telescope Observatory adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Brusa, Guido; Ashby, Dave; Christou, Julian C.; Kern, Jonathan; Lefebvre, Michael; McMahon, Tom J.; Miller, Douglas; Rahmer, Gustavo; Sosa, Richard; Taylor, Gregory; Vogel, Conrad; Zhang, Xianyu

    2016-07-01

    Vertical profiles of the atmospheric optical turbulence strength and velocity are of critical importance for simulating, designing, and operating the next generation of instruments for the European Extremely Large Telescope. Many of these instruments are already well into the design phase, meaning these profiles are required immediately to ensure the instruments are optimised for the unique conditions likely to be observed. Stereo-SCIDAR is a generalised SCIDAR instrument which is used to characterise the profile of the atmospheric optical turbulence strength and wind velocity using triangulation between two optical binary stars. Stereo-SCIDAR has demonstrated the capability to resolve turbulent layers with the vertical resolution required to support wide-field ELT instrument designs. These high-resolution atmospheric parameters are critical for design studies and statistical evaluation of on-sky performance under real conditions. Here we report on the new Stereo-SCIDAR instrument installed on one of the Auxiliary Telescope ports of the Very Large Telescope array at Cerro Paranal. Paranal is located approximately 20 km from Cerro Armazones, the site of the E-ELT. Although the surface layer of the turbulence will differ between the two sites due to local geography, the high altitude-resolution profiles of the free atmosphere from this instrument will be the most accurate available for the E-ELT site. In addition, these unbiased and independent profiles are also used to further characterise the site of the VLT. This enables instrument performance calibration, optimisation, and data analysis of, for example, the ESO Adaptive Optics Facility and the Next Generation Transit Survey. It will also be used to validate atmospheric models for turbulence forecasting. We show early results from the commissioning and address future implications of the results.

  9. Adapted wavelet transform improves time-frequency representations: a study of auditory elicited P300-like event-related potentials in rats.

    PubMed

    Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M; Graversen, Carina; Sørensen, Helge B D; Bastlund, Jesper F

    2017-04-01

    Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high-frequency and late low-frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by employing multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were described with high accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony were observed, particularly within the theta and gamma frequency bands, during deviant tones. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of the time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of the time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for characterising several types of evoked potentials, particularly in rodents.
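
    The core idea of adapting the wavelet per band can be sketched with a simplified complex-Morlet transform (an illustrative stand-in, not the authors' aCWT implementation): the cycle count, and hence the time-frequency trade-off, is chosen separately for each frequency band, so an early high-frequency burst and a late low-frequency wave are both localized well.

```python
import numpy as np

def morlet_power(sig, fs, freqs, n_cycles):
    """Time-frequency power via convolution with complex Morlet
    wavelets; n_cycles varies per frequency, trading temporal for
    spectral resolution band by band."""
    powers = []
    for f0, nc in zip(freqs, n_cycles):
        sigma_t = nc / (2 * np.pi * f0)            # Gaussian width in seconds
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
        powers.append(np.abs(np.convolve(sig, wavelet, mode="same")) ** 2)
    return np.array(powers)

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
# Early 40 Hz (gamma-like) burst followed by a late 6 Hz (theta-like) wave
sig = (np.sin(2 * np.pi * 40 * t) * (t < 0.5)
       + np.sin(2 * np.pi * 6 * t) * (t > 1.0))
power = morlet_power(sig, fs, freqs=[6.0, 40.0], n_cycles=[3, 7])
print("40 Hz power peaks at t =", t[np.argmax(power[1])])
print(" 6 Hz power peaks at t =", t[np.argmax(power[0])])
```

    Each band correctly localizes its component in time: the 40 Hz power peaks in the early window and the 6 Hz power in the late window.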

  10. A high-resolution oxygen A-band spectrometer (HABS) and its radiation closure

    NASA Astrophysics Data System (ADS)

    Min, Q.; Yin, B.; Li, S.; Berndt, J.; Harrison, L.; Joseph, E.; Duan, M.; Kiedron, P.

    2014-06-01

    Various studies indicate that high-resolution oxygen A-band spectra have the capability to retrieve the vertical profiles of aerosol and cloud properties. To improve the understanding of oxygen A-band inversions and their utility, we developed a high-resolution oxygen A-band spectrometer (HABS) and deployed it at the Howard University Beltsville site during the NASA Discover Air-Quality Field Campaign in July 2011. Using a single telescope, the HABS instrument measures the direct solar and the zenith diffuse radiation sequentially. HABS exhibits excellent performance: a stable spectral response ratio, high signal-to-noise ratio (SNR), high spectral resolution (0.016 nm), and high out-of-band rejection (10^-5). For the spectral retrievals of HABS measurements, a simulator was developed by combining a discrete ordinates radiative transfer code (DISORT) with the High Resolution Transmission (HITRAN) database (HITRAN2008). The simulator uses a double-k approach to reduce the computational cost. The HABS-measured spectra are consistent with the corresponding simulated spectra. For direct-beam spectra, the discrepancies between measurements and simulations, indicated by 95% confidence intervals of the relative difference, are (-0.06, 0.05) and (-0.08, 0.09) for solar zenith angles of 27° and 72°, respectively. For zenith diffuse spectra, the corresponding discrepancies are (-0.06, 0.05) and (-0.08, 0.07) for solar zenith angles of 27° and 72°, respectively. The main discrepancies between measurements and simulations occur at or near the strong oxygen absorption line centers. They are mainly due to two causes: (1) measurement errors associated with the noise/spikes of the HABS-measured spectra, resulting from the combined effects of weak signal, low SNR, and errors in wavelength registration; and (2) modeling errors in the simulation, including errors in model parameter settings (e.g., oxygen absorption line parameters, vertical profiles of temperature and pressure) and the lack of treatment of rotational Raman scattering. The high-resolution oxygen A-band measurements from HABS can constrain active radar retrievals toward more accurate cloud optical properties (e.g., cloud optical depth, effective radius), particularly for multi-layer and mixed-phase clouds.
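
    The kind of interval reported above can be reproduced in spirit with a simple empirical calculation (a generic sketch with synthetic numbers, not the HABS data): take the per-wavelength relative difference between measured and simulated spectra and report its central 95% interval.

```python
import numpy as np

def relative_difference_ci(measured, simulated, level=0.95):
    """Empirical central confidence interval of the relative
    difference (measured - simulated) / simulated across a spectrum."""
    rel = (measured - simulated) / simulated
    lo, hi = np.percentile(rel, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

rng = np.random.default_rng(1)
simulated = 1.0 + 0.5 * rng.random(2000)                     # stand-in spectrum
measured = simulated * (1.0 + 0.03 * rng.standard_normal(2000))  # 3% scatter
lo, hi = relative_difference_ci(measured, simulated)
print(f"95% interval of relative difference: ({lo:.3f}, {hi:.3f})")
```

    With 3% multiplicative scatter the interval comes out near (-0.06, 0.06), the same order as the direct-beam discrepancies quoted in the abstract.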

  11. Combining population genomics and fitness QTLs to identify the genetics of local adaptation in Arabidopsis thaliana.

    PubMed

    Price, Nicholas; Moyers, Brook T; Lopez, Lua; Lasky, Jesse R; Monroe, J Grey; Mullen, Jack L; Oakley, Christopher G; Lin, Junjiang; Ågren, Jon; Schrider, Daniel R; Kern, Andrew D; McKay, John K

    2018-05-08

    Evidence for adaptation to different climates in the model species Arabidopsis thaliana is seen in reciprocal transplant experiments, but the genetic basis of this adaptation remains poorly understood. Field-based quantitative trait locus (QTL) studies provide direct but low-resolution evidence for the genetic basis of local adaptation. Using high-resolution population genomic approaches, we examine local adaptation along previously identified genetic trade-off (GT) and conditionally neutral (CN) QTLs for fitness between locally adapted Italian and Swedish A. thaliana populations [Ågren J, et al. (2013) Proc Natl Acad Sci USA 110:21077-21082]. We find that genomic regions enriched in high-F_ST SNPs colocalize with GT QTL peaks. Many of these high-F_ST regions also colocalize with regions enriched for SNPs significantly correlated with climate in Eurasia and with evidence of recent selective sweeps in Sweden. Examining unfolded site frequency spectra across genes containing high-F_ST SNPs suggests GTs may be due to more recent adaptation in Sweden than in Italy. Finally, we collapse a list of thousands of genes spanning GT QTLs to 42 genes that likely underlie the observed GTs and explore potential biological processes driving these trade-offs, from protein phosphorylation to seed dormancy and longevity. Our analyses link population genomic analyses and field-based QTL studies of local adaptation, and emphasize that GTs play an important role in the process of local adaptation. Copyright © 2018 the Author(s). Published by PNAS.
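
    As an illustration of the kind of statistic involved, here is Hudson's F_ST estimator (ratio of averages across SNPs) applied to synthetic allele frequencies; the specific estimator and all the numbers are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Hudson's F_ST estimator (ratio of averages over SNPs) from
    allele frequencies p1, p2 in two populations with sample sizes
    n1, n2 (the per-SNP numerators correct for sampling noise)."""
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num.sum() / den.sum()

rng = np.random.default_rng(2)
# Mostly undifferentiated SNPs plus a few strongly divergent outliers,
# mimicking a genome with localized high-F_ST regions
p_it = np.concatenate([rng.uniform(0.4, 0.6, 95), rng.uniform(0.05, 0.1, 5)])
p_sw = np.concatenate([p_it[:95] + rng.normal(0, 0.02, 95),
                       rng.uniform(0.9, 0.95, 5)])
p_sw = np.clip(p_sw, 0.01, 0.99)
fst = hudson_fst(p_it, p_sw, n1=100, n2=100)
print(f"genome-wide Hudson F_ST: {fst:.3f}")
```

    The handful of divergent SNPs dominates the sum, which is why window-based scans can localize high-F_ST regions against a low genome-wide background.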

  12. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency, Active and Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Anagnostou, Emmanouil N.; Olson, William S.; Starr, David OC. (Technical Monitor)

    2002-01-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated using both simulated and airborne data. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) satellite multifrequency active and passive observations. These observations are characterized by various spatial and sampling resolutions. This makes the retrieval problem mathematically more difficult and ill-determined, because the quality of information decreases with decreasing resolution. A model that, given reflectivity profiles and a small set of parameters (including the cloud water content, the drop size distribution intercept, and a variable describing the frozen hydrometeor properties), simulates high-resolution brightness temperatures is used. The high-resolution simulated brightness temperatures are convolved at the real sensor resolution. An optimal estimation procedure is used to minimize the differences between simulated and observed brightness temperatures. The retrieval technique is investigated using cloud-model synthetic data and airborne data from the Fourth Convection And Moisture Experiment. Simulated high-resolution brightness temperatures and reflectivities, as well as the airborne observations, are convolved at the resolution of the TRMM instruments, and retrievals are performed and analyzed relative to the reference data used in the observation synthesis. An illustration of the possible use of the technique in satellite rainfall estimation is presented through an application to TRMM data. The study suggests improvements in combined active and passive retrievals even when the instrument resolutions are significantly different. Future work needs to better quantify the retrieval performance, especially in connection with satellite applications, and the uncertainty of the models used in the retrieval.
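
    The optimal estimation step can be sketched for a linearized forward model (a generic Rodgers-style formulation with made-up dimensions, not the study's actual forward model): the retrieved state balances the misfit to the observed brightness temperatures against departure from an a priori state, weighted by their respective covariances.

```python
import numpy as np

def optimal_estimation(y, K, x_a, S_a, S_y):
    """Closed-form optimal estimate for a linear forward model
    y = K x + noise, with a priori state x_a, a priori covariance
    S_a, and measurement-noise covariance S_y."""
    S_y_inv = np.linalg.inv(S_y)
    S_a_inv = np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ S_y_inv @ K + S_a_inv)   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ S_y_inv @ (y - K @ x_a)
    return x_hat, S_hat

rng = np.random.default_rng(3)
x_true = np.array([2.0, -1.0, 0.5])        # stand-in profile parameters
K = rng.standard_normal((8, 3))            # linearized forward model (assumed)
y = K @ x_true + 0.05 * rng.standard_normal(8)   # noisy "observations"
x_a = np.zeros(3)                          # a priori state
S_a = np.eye(3) * 4.0                      # loose prior
S_y = np.eye(8) * 0.05 ** 2                # measurement noise
x_hat, S_hat = optimal_estimation(y, K, x_a, S_a, S_y)
print("retrieved state:", np.round(x_hat, 2))
```

    With eight observations and small noise the retrieval recovers the true parameters closely; in the nonlinear case this closed-form step becomes one iteration of a Gauss-Newton loop.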

  13. Unravel the submesoscale dynamics of the phytoplanktonic community in the NW Mediterranean Sea by in situ observations: the 2015 OSCAHR cruise

    NASA Astrophysics Data System (ADS)

    Marrec, Pierre; Doglioli, Andrea M.; Grégori, Gérald; Della Penna, Alice; Wagener, Thibaut; Rougier, Gille; Bhairy, Nagib; Dugenne, Mathilde; Lahbib, Soumaya; Thyssen, Melilotus

    2017-04-01

    Submesoscale phenomena have recently been recognized as a key factor in physical-biological-biogeochemical interactions, even if it remains unclear how these processes affect the global state of the ocean. Significant large-scale impacts of submesoscale structures on primary production, as well as influence on phytoplankton community structure and diversity, have also been reported. In the past decade, submesoscale dynamics have been studied predominantly through the analysis of numerical simulations. Observing the coupled physical and biogeochemical variability at this scale remains challenging due to the ephemeral nature of submesoscale structures. The in situ study of such structures necessitates multidisciplinary approaches involving in situ observations, remote sensing, and modeling. Recent progress in biogeochemical sensor development and advanced methodology, including Lagrangian real-time adaptive strategies, represents an outstanding opportunity. The OSCAHR (Observing Submesoscale Coupling At High Resolution) campaign was conducted with such a multidisciplinary approach in order to improve the understanding of submesoscale processes. An ephemeral submesoscale structure was first identified in the Ligurian Sea in fall 2015 using both satellite and numerical modeling data before the campaign. Afterwards, advanced observing systems for the physical, biological, and biogeochemical characterization of the sea surface layer at high spatial and temporal frequency were deployed during a 10-day cruise. A Moving Vessel Profiler (MVP) was used to obtain high-resolution CTD profiles, combined with a new pumping system with 1-m vertical resolution. Moreover, along the ship track, in addition to the standard measurements of seawater surface samples (Chl-a, nutrients, O2, SST, SSS …), we deployed an automated flow cytometer for near real-time characterization of phytoplankton functional groups (from micro-phytoplankton down to cyanobacteria). The observed submesoscale feature presented a cyclonic structure with a relatively cold core surrounded by warmer waters. Six phytoplankton groups were identified across the structure with unprecedented spatial and temporal resolution. According to our observations, we could quantify the influence of the rapidly established physical structure on the spatial distribution of the phytoplankton functional groups, giving coherence to the observed community structuring. Moreover, the high resolution of our observations allows us to estimate the growth rate of the main phytoplankton groups. Our innovative adaptive strategy, with a multidisciplinary and transversal approach, provides a deeper understanding of marine biogeochemical dynamics through the first trophic levels.

  14. Toward 10-km mesh global climate simulations

    NASA Astrophysics Data System (ADS)

    Ohfuchi, W.; Enomoto, T.; Takaya, K.; Yoshioka, M. K.

    2002-12-01

    An atmospheric general circulation model (AGCM) that runs very efficiently on the Earth Simulator (ES) was developed. The ES is a gigantic vector-parallel computer with a peak performance of 40 Tflops. The AGCM, named AFES (AGCM for ES), was based on version 5.4.02 of an AGCM developed jointly by the Center for Climate System Research, the University of Tokyo, and the Japanese National Institute for Environmental Sciences. AFES was, however, totally rewritten in FORTRAN90 and MPI, while the original AGCM was written in FORTRAN77 and not capable of parallel computing. AFES achieved 26 Tflops (about 65% of the peak performance of the ES) at a resolution of T1279L96 (10-km horizontal resolution and 500-m vertical resolution from the middle troposphere to the lower stratosphere). Some results of 10- to 20-day global simulations will be presented. At this moment, only short-term simulations are possible due to data storage limitations. Now that tens-of-teraflops computing has been achieved, petabyte-scale data storage is necessary to conduct climate-type simulations at this super-high global resolution. Some possibilities for future research topics in global super-high-resolution climate simulations will be discussed. Target topics include mesoscale structures and self-organization of the Baiu-Meiyu front over Japan, cyclogenesis over the North Pacific, and typhoons around Japan. Improvement in local precipitation with increasing horizontal resolution will also be demonstrated.

  15. Tropical Cyclone Activity in the High-Resolution Community Earth System Model and the Impact of Ocean Coupling

    NASA Astrophysics Data System (ADS)

    Li, Hui; Sriver, Ryan L.

    2018-01-01

    High-resolution atmospheric general circulation models (AGCMs) are capable of directly simulating realistic tropical cyclone (TC) statistics, providing a promising approach for TC-climate studies. Active air-sea coupling in a coupled model framework is essential to capturing TC-ocean interactions, which can influence TC-climate connections on interannual to decadal time scales. Here we investigate how the choice of ocean coupling affects the directly simulated TCs using high-resolution configurations of the Community Earth System Model (CESM). We performed a suite of high-resolution, multidecadal, global-scale CESM simulations in which the atmosphere (˜0.25° grid spacing) is configured with three different levels of ocean coupling: prescribed climatological sea surface temperature (SST) (ATM), mixed layer ocean (SLAB), and dynamic ocean (CPL). We find that the level of ocean coupling can influence simulated TC frequency, geographical distribution, and storm intensity. ATM simulates more storms and higher overall storm intensity than the coupled simulations. It also simulates higher TC track density over the eastern Pacific and the North Atlantic, while TC tracks are relatively sparse in CPL and SLAB for these regions. Storm intensification and the maximum wind speed are sensitive to the representations of local surface flux feedbacks in the different coupling configurations. Key differences in storm number and distribution can be attributed to variations in the modeled large-scale climate mean state and variability that arise from the combined effect of intrinsic model biases and air-sea interactions. These results help to improve our understanding of the representation of TCs in high-resolution coupled Earth system models, with important implications for TC-climate applications.

  16. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  17. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE PAGES

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...

    2017-11-27

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
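
    The a priori wall-rearrangement idea can be sketched in 1D (a hypothetical illustration with made-up per-particle costs, not the paper's actual algorithm): given an estimated force-computation cost per particle, walls are placed along one axis so that each subdomain carries roughly the same total cost, crowding the walls into the expensive high-resolution region.

```python
import numpy as np

def equal_cost_walls(positions, cost_per_particle, n_domains, box_length):
    """Place subdomain walls along one axis so each subdomain carries
    roughly the same total (a priori estimated) computation cost."""
    order = np.argsort(positions)
    pos = positions[order]
    cum = np.cumsum(cost_per_particle[order])      # monotone running cost
    targets = cum[-1] * np.arange(1, n_domains) / n_domains
    walls = np.interp(targets, cum, pos)           # invert the cost profile
    return np.concatenate([[0.0], walls, [box_length]])

rng = np.random.default_rng(4)
# Dense, expensive (atomistic) region in the left third of the box;
# sparse, cheap (coarse-grained) particles elsewhere.
dense = rng.uniform(0.0, 10.0, 8000)
sparse = rng.uniform(10.0, 30.0, 2000)
positions = np.concatenate([dense, sparse])
cost = np.concatenate([np.full(8000, 5.0), np.full(2000, 1.0)])
walls = equal_cost_walls(positions, cost, n_domains=4, box_length=30.0)
print("subdomain walls:", np.round(walls, 2))
```

    Three of the four subdomains end up inside the expensive left third of the box, while the sparse remainder forms a single wide subdomain, which is the load-balancing behavior a static uniform decomposition cannot provide.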

  18. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of these simulations are multi-resolution spatiotemporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs at different resolutions constitute a multi-resolution, high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present the Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition, and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatiotemporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, through real-world use cases from our collaborators in computational and predictive science.

  19. Compensating Atmospheric Turbulence Effects at High Zenith Angles with Adaptive Optics Using Advanced Phase Reconstructors

    NASA Astrophysics Data System (ADS)

    Roggemann, M.; Soehnel, G.; Archer, G.

    Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used to compensate for these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wavefront reconstructors and post-detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least-squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed-loop adaptive optics system using a conventional continuous-facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated over a range of source visual magnitudes and zenith angles up to 70 degrees. We have also simulated satellite images and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post-detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
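
    The stochastic parallel gradient descent (SPGD) update itself is simple enough to sketch on a toy quadratic metric (a hypothetical stand-in for a wavefront-error metric; the gains and dimensions are made up): all control channels are perturbed simultaneously with random signs, the resulting metric change is measured, and each channel is stepped in proportion to it.

```python
import numpy as np

def spgd_optimize(metric, u0, gain=1.0, delta=0.1, n_iter=300, seed=5):
    """Stochastic parallel gradient descent: perturb all channels at
    once with random +/-delta, measure the two-sided metric change,
    and update every channel in proportion to that single scalar."""
    rng = np.random.default_rng(seed)
    u = u0.copy()
    for _ in range(n_iter):
        du = delta * rng.choice([-1.0, 1.0], size=u.size)
        dJ = metric(u + du) - metric(u - du)   # one scalar measurement
        u -= gain * dJ * du                    # descend the metric
    return u

# Toy "wavefront error" metric: quadratic in the actuator commands,
# minimized at a known (hypothetical) target shape.
target = np.array([0.3, -0.7, 1.2, 0.1, -0.4])
metric = lambda u: np.sum((u - target) ** 2)
u = spgd_optimize(metric, u0=np.zeros(5))
print("residual metric value:", float(metric(u)))
```

    The appeal of SPGD in adaptive optics is that it needs only a single scalar performance metric per measurement, not a full wavefront sensor readout; for metrics that should be maximized, the sign of the update is flipped.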

  20. From classical to quantum and back: Hamiltonian adaptive resolution path integral, ring polymer, and centroid molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.

    2017-12-01

    Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
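
    The multiple-time-stepping ingredient (sketched here as a standard r-RESPA splitting, not the authors' specific path-integral integrator; all parameters are illustrative) integrates a stiff, cheap force with a small inner step inside each outer step of the slowly varying force.

```python
import numpy as np

def respa_step(x, v, m, slow_force, fast_force, dt, n_inner):
    """One r-RESPA step: the slow force is applied as half-kicks at
    the outer step dt, the fast force with inner velocity Verlet at
    step dt / n_inner."""
    v += 0.5 * dt * slow_force(x) / m
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * fast_force(x) / m
        x += h * v
        v += 0.5 * h * fast_force(x) / m
    v += 0.5 * dt * slow_force(x) / m
    return x, v

# Stiff spring (fast) plus a weak constant pull (slow); the total
# energy of a symplectic integrator should stay near its start value.
m, k, g = 1.0, 400.0, 0.5
fast = lambda x: -k * x
slow = lambda x: -g * np.ones_like(x)
energy = lambda x, v: 0.5 * m * v[0] ** 2 + 0.5 * k * x[0] ** 2 + g * x[0]
x, v = np.array([0.1]), np.array([0.0])
e0 = energy(x, v)
for _ in range(2000):
    x, v = respa_step(x, v, m, slow, fast, dt=0.05, n_inner=10)
print("relative energy drift:", abs(energy(x, v) - e0) / abs(e0))
```

    The slow force is evaluated only once per outer step, which is where the savings come from when that force (here a stand-in for the expensive part of the potential) dominates the cost.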

  1. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection (nearest neighbor, inverse-distance weighted averages, radial basis functions (RBF), etc.) yield comparable results. Best-accuracy reconstruction of the SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shift of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
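
    The projection step can be sketched as follows (a simplified illustration of the shift-and-project idea with nearest-node inverse-distance weighting; the paper's interpolation details may differ). With four ideal half-pixel shifts the LR samples tile the HR grid exactly, so the projection recovers the original image.

```python
import numpy as np

def project_to_hr(lr_images, shifts, factor=2):
    """Project registered LR images, each offset by a known subpixel
    shift (dy, dx in LR pixels), onto an HR grid by inverse-distance
    weighted accumulation of the scattered LR samples."""
    h, w = lr_images[0].shape
    hr = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hr)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    for img, (dy, dx) in zip(lr_images, shifts):
        hy = (yy + dy) * factor            # LR sample locations on HR grid
        hx = (xx + dx) * factor
        iy = np.clip(np.round(hy).astype(int), 0, h * factor - 1)
        ix = np.clip(np.round(hx).astype(int), 0, w * factor - 1)
        d = np.hypot(hy - iy, hx - ix) + 1e-6   # distance to nearest HR node
        np.add.at(hr, (iy, ix), img / d)        # handles duplicate indices
        np.add.at(weight, (iy, ix), 1.0 / d)
    return hr / np.maximum(weight, 1e-12)

# Simulate 4 LR images from a known HR image by subpixel shifts + subsampling
hr_true = np.add.outer(np.sin(np.linspace(0, 3, 32)), np.cos(np.linspace(0, 3, 32)))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
lr_images = [hr_true[int(2 * dy)::2, int(2 * dx)::2] for dy, dx in shifts]
sr = project_to_hr(lr_images, shifts, factor=2)
rmse = np.sqrt(np.mean((sr - hr_true) ** 2))
print("reconstruction RMSE:", float(rmse))
```

    In this idealized case the four shifted LR images together sample every HR pixel once, so the RMSE is essentially zero; with noisy or imperfectly registered inputs the inverse-distance weights blend nearby samples instead.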

  2. The Impact II, a Very High-Resolution Quadrupole Time-of-Flight Instrument (QTOF) for Deep Shotgun Proteomics*

    PubMed Central

    Beck, Scarlet; Michalski, Annette; Raether, Oliver; Lubeck, Markus; Kaspar, Stephanie; Goedecke, Niels; Baessmann, Carsten; Hornburg, Daniel; Meier, Florian; Paron, Igor; Kulak, Nils A.; Cox, Juergen; Mann, Matthias

    2015-01-01

    Hybrid quadrupole time-of-flight (QTOF) mass spectrometry is one of the two major principles used in proteomics. Although based on simple fundamentals, it has over the last decades greatly evolved in terms of achievable resolution, mass accuracy, and dynamic range. The Bruker impact platform of QTOF instruments takes advantage of these developments and here we develop and evaluate the impact II for shotgun proteomics applications. Adaptation of our heated liquid chromatography system achieved very narrow peptide elution peaks. The impact II is equipped with a new collision cell with both axial and radial ion ejection, more than doubling ion extraction at high tandem MS frequencies. The new reflectron and detector improve resolving power compared with the previous model up to 80%, i.e. to 40,000 at m/z 1222. We analyzed the ion current from the inlet capillary and found very high transmission (>80%) up to the collision cell. Simulation and measurement indicated 60% transfer into the flight tube. We adapted MaxQuant for QTOF data, improving absolute average mass deviations to better than 1.45 ppm. More than 4800 proteins can be identified in a single run of HeLa digest in a 90 min gradient. The workflow achieved high technical reproducibility (R2 > 0.99) and accurate fold change determination in spike-in experiments in complex mixtures. Using label-free quantification we rapidly quantified haploid against diploid yeast and characterized overall proteome differences in mouse cell lines originating from different tissues. Finally, after high pH reversed-phase fractionation we identified 9515 proteins in a triplicate measurement of HeLa peptide mixture and 11,257 proteins in single measurements of cerebellum—the highest proteome coverage reported with a QTOF instrument so far. PMID:25991688
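
The quoted mass accuracy (1.45 ppm) is simply a relative mass deviation scaled by 10^6; a quick sketch, using an illustrative m/z value near the one cited above:

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Relative mass deviation in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Illustrative ion: a 1.45 ppm error at m/z ~1222 is under 2 mDa.
theoretical = 1221.9906
measured = theoretical * (1 + 1.45e-6)
err = mass_error_ppm(measured, theoretical)    # ≈ 1.45 ppm
delta_mda = (measured - theoretical) * 1000    # absolute error in mDa
```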

  3. Air-Sea Interaction Processes in Low and High-Resolution Coupled Climate Model Simulations for the Southeast Pacific

    NASA Astrophysics Data System (ADS)

    Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.

    2017-12-01

    The rugged topography of the Andes Cordillera, along with strong coastal upwelling, strong sea surface temperature (SST) gradients and extensive but geometrically thin stratocumulus decks, turns the Southeast Pacific (SEP) into a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1° oceanic and 0.5° atmospheric) and (1.125° oceanic and 0.9° atmospheric). The SST error growth in the first six days of integration (fast errors) and the errors resulting from model drift (saturated errors) are assessed and compared to evaluate the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3°C) near the continental borders and negative offshore (-0.1°C). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and more spatially concentrated. This suggests that the processes driving the errors become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than in the high-resolution one, due to higher cloud cover. Throughout the integration, saturated SST errors become positive and can reach values up to +4°C. These are accompanied by a damping of the upwelling and a decrease in cloud cover.
    The high- and low-resolution models present notable differences in how SST error variability drives atmospheric changes, especially because the high-resolution model is sensitive to upwelling regions. This allows the model to resolve cloud heights and establish different radiative feedbacks.

  4. A new variable parallel holes collimator for scintigraphic device with validation method based on Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

    2010-09-01

    The aim of this work is to present a new scintigraphic device able to change the length of its collimator automatically, in order to adapt the spatial resolution to the gamma source distance. This patented technique replaces the collimator exchanges that standard gamma cameras still require. Monte Carlo simulations represent the best tool for exploring new technological solutions for such an innovative collimation structure. They also provide a valid analysis of gamma camera performance as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be brought closer together and farther apart, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify that they can be clearly distinguished.
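
The idea of matching collimator length to source distance follows from the textbook geometric resolution of a parallel-hole collimator, R_g ≈ d(L + z)/L. The sketch below, which ignores septal penetration and intrinsic detector resolution, solves this relation for the hole length L needed to hold a target resolution; the dimensions are hypothetical, not those of the patented device.

```python
def geometric_resolution(d, L, z):
    """Geometric resolution (FWHM) of a parallel-hole collimator:
    hole diameter d, hole length L, source-to-collimator distance z."""
    return d * (L + z) / L

def required_length(d, z, r_target):
    """Hole length that keeps the geometric resolution at r_target for a
    source at distance z (requires r_target > d)."""
    return d * z / (r_target - d)

d = 1.5   # mm hole diameter (illustrative)
# The variable collimator must lengthen as the source moves away
# to hold a constant 5 mm geometric resolution.
lengths = [required_length(d, z, r_target=5.0) for z in (50.0, 100.0, 150.0)]
```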

  5. MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models

    NASA Astrophysics Data System (ADS)

    Ullrich, P.; Jablonowski, C.

    2011-12-01

    The desire for increasingly accurate predictions of the atmosphere has driven numerical models to smaller and smaller resolutions, while driving up their computational cost exponentially. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while requiring only a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume approach in developing an atmospheric dynamical core on the cubed-sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse of future work with our model.

  6. Understanding climate variability and global climate change using high-resolution GCM simulations

    NASA Astrophysics Data System (ADS)

    Feng, Xuelei

    In this study, three climate processes are examined using long-term simulations from multiple climate models with increasing horizontal resolutions. These simulations include the European Center for Medium-range Weather Forecasts (ECMWF) atmospheric general circulation model (AGCM) runs forced with observed sea surface temperatures (SST) (the Athena runs) and a set of coupled ocean-atmosphere seasonal hindcasts (the Minerva runs). Both sets of runs use different AGCM resolutions, the highest at 16 km. A pair of Community Climate System Model (CCSM) simulations with ocean general circulation model (OGCM) resolutions of 100 and 10 km are also examined. The higher resolution CCSM run fully resolves oceanic mesoscale eddies. The influence of resolution on the precipitation climatology over the Gulf Stream (GS) region is investigated first. In the Athena simulations, the resolution increase moderately enhances mean GS precipitation in both its large-scale and sub-grid-scale rainfall components in the North Atlantic, with the latter more tightly confined near the oceanic front. However, the non-eddy-resolving OGCM in the Minerva runs simulates a weaker oceanic front and weakens the mean GS precipitation response. On the other hand, an increase in CCSM oceanic resolution from the non-eddy-resolving to the eddy-resolving regime greatly improves the model's GS precipitation climatology, resulting in both stronger intensity and a more realistic structure. Further analyses show that the improvement of the GS precipitation climatology due to resolution increases is caused by the enhanced atmospheric response to an increased SST gradient near the oceanic front, which leads to stronger surface convergence and upper level divergence. Another focus of this study is on the global warming impacts on precipitation characteristic changes, using the high-resolution Athena simulations under SST forcing from observations and from a global warming scenario.
    As a comparison, results from the coarse-resolution simulation are also analyzed to examine the dependence on resolution. The increase rates of globally averaged precipitation amount for the high- and low-resolution simulations are 1.7%/K and 1.8%/K, respectively. The sensitivities for heavy, moderate, light and drizzle rain are 6.8, -1.2, 0.0, and 0.2%/K for the low-resolution and 6.3, -1.5, 0.4, and -0.2%/K for the high-resolution simulations. The number of rainy days decreases in a warming scenario, by 3.4 and 4.2 days per year, respectively. The most sensitive response, 6.3-6.8%/K for heavy rain, approaches the 7%/K Clausius-Clapeyron scaling limit. During the twenty-first century simulation, the increases in precipitation are larger over high latitudes and over wet regions in low and mid-latitudes. Over dry regions, such as the subtropics, the precipitation amount and frequency decrease. There is a higher occurrence of light and heavy rain from the tropics to mid-latitudes at the expense of decreases in the frequency of moderate rain. In the third part, the inter-annual variability of the Northern Hemisphere storm tracks is examined. In the Athena simulations, the leading modes of the observed storm track variability are reproduced realistically by all runs. In general, the fluctuations of the model storm tracks in the North Pacific and Atlantic basins are largely independent of each other. Within each basin, the variations are characterized by the intensity change near the climatological center and the meridional shift of the storm track location. These two modes are associated with major teleconnection patterns of the low-frequency atmospheric variations. These model results are not sensitive to resolution.
    Using the Minerva hindcasts initialized in November, it is shown that a portion of the winter (December-January) storm track variability is predictable, mainly due to the influence of atmospheric wave trains induced by the El Niño-Southern Oscillation.
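
The 7%/K Clausius-Clapeyron scaling limit cited above follows from d ln e_s/dT = L_v/(R_v T^2); a quick check with standard constants:

```python
L_V = 2.5e6     # J/kg, latent heat of vaporization (approx., near 0 °C)
R_V = 461.5     # J/(kg K), specific gas constant for water vapor

def cc_rate(T):
    """Fractional increase of saturation vapor pressure per kelvin,
    d ln(e_s)/dT = L_v / (R_v T^2)  (Clausius-Clapeyron)."""
    return L_V / (R_V * T ** 2)

rate_273 = cc_rate(273.15) * 100   # ≈ 7.3 %/K near the freezing point
rate_288 = cc_rate(288.15) * 100   # ≈ 6.5 %/K at a typical surface temperature
```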

  7. Adaptive finite volume methods with well-balanced Riemann solvers for modeling floods in rugged terrain: Application to the Malpasset dam-break flood (France, 1959)

    USGS Publications Warehouse

    George, D.L.

    2011-01-01

    The simulation of advancing flood waves over rugged topography, by solving the shallow-water equations with well-balanced high-resolution finite volume methods and block-structured dynamic adaptive mesh refinement (AMR), is described and validated in this paper. The efficiency of block-structured AMR makes large-scale problems tractable, and allows the use of accurate and stable methods developed for solving general hyperbolic problems on quadrilateral grids. Features indicative of flooding in rugged terrain, such as advancing wet-dry fronts and non-stationary steady states due to balanced source terms from variable topography, present unique challenges and require modifications such as special Riemann solvers. A well-balanced Riemann solver for inundation and general (non-stationary) flow over topography is tested in this context. The difficulties of modeling floods in rugged terrain, and the rationale for and efficacy of using AMR and well-balanced methods, are presented. The algorithms are validated by simulating the Malpasset dam-break flood (France, 1959), which has served as a benchmark problem previously. Historical field data, laboratory model data and other numerical simulation results (computed on static fitted meshes) are shown for comparison. The methods are implemented in GEOCLAW, a subset of the open-source CLAWPACK software. All the software is freely available at. Published in 2010 by John Wiley & Sons, Ltd.
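
GEOCLAW's augmented Riemann solvers are considerably more sophisticated, but the well-balancing idea can be sketched with a first-order hydrostatic-reconstruction scheme (Audusse-type) and a simple Rusanov flux. The sketch below, with an illustrative bump topography, verifies the lake-at-rest steady state that motivates well-balanced methods: the free surface over variable bathymetry stays flat to machine precision.

```python
import numpy as np

g = 9.81

def phys_flux(h, hu):
    """Physical shallow-water flux (mass, momentum)."""
    u = np.where(h > 1e-12, hu / np.maximum(h, 1e-12), 0.0)
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

def step(h, hu, b, dx, dt):
    """One first-order well-balanced step: hydrostatic reconstruction
    of the depths at interfaces plus a Rusanov flux."""
    # reflective-wall ghost cells
    hp = np.concatenate(([h[0]], h, [h[-1]]))
    bp = np.concatenate(([b[0]], b, [b[-1]]))
    hup = np.concatenate(([-hu[0]], hu, [-hu[-1]]))
    hL, hR = hp[:-1], hp[1:]
    uL = np.where(hL > 1e-12, hup[:-1] / np.maximum(hL, 1e-12), 0.0)
    uR = np.where(hR > 1e-12, hup[1:] / np.maximum(hR, 1e-12), 0.0)
    bstar = np.maximum(bp[:-1], bp[1:])            # interface bottom level
    hLs = np.maximum(0.0, hL + bp[:-1] - bstar)    # reconstructed depths
    hRs = np.maximum(0.0, hR + bp[1:] - bstar)
    UL = np.array([hLs, hLs * uL])
    UR = np.array([hRs, hRs * uR])
    a = np.maximum(np.abs(uL) + np.sqrt(g * hLs), np.abs(uR) + np.sqrt(g * hRs))
    F = 0.5 * (phys_flux(*UL) + phys_flux(*UR)) - 0.5 * a * (UR - UL)
    # pressure corrections that make the lake-at-rest state exact
    cL = 0.5 * g * (hL ** 2 - hLs ** 2)
    cR = 0.5 * g * (hR ** 2 - hRs ** 2)
    h_new = h - dt / dx * (F[0][1:] - F[0][:-1])
    hu_new = hu - dt / dx * ((F[1][1:] + cL[1:]) - (F[1][:-1] + cR[:-1]))
    return h_new, hu_new

# Lake at rest over a bump: the free surface must stay exactly flat.
n, dx = 100, 0.01
xc = (np.arange(n) + 0.5) * dx
b = 0.25 * np.exp(-100 * (xc - 0.5) ** 2)   # bottom topography
eta0 = 1.0
h = eta0 - b
hu = np.zeros(n)
for _ in range(200):
    h, hu = step(h, hu, b, dx, dt=0.001)
surface_error = np.abs(h + b - eta0).max()
momentum_error = np.abs(hu).max()
```

Without the `cL`/`cR` corrections the pressure flux and the bed source term would not cancel at rest, and spurious currents would appear over the bump.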

  8. Modelling the pelagic nitrogen cycle and vertical particle flux in the Norwegian sea

    NASA Astrophysics Data System (ADS)

    Haupt, Olaf J.; Wolf, Uli; v. Bodungen, Bodo

    1999-02-01

    A 1D Eulerian ecosystem model (BIological Ocean Model, BIOM) for the Norwegian Sea was developed to investigate the dynamics of pelagic ecosystems. The BIOM combines six biochemical compartments and simulates the annual nitrogen cycle with a specific focus on the production, modification and sedimentation of particles in the water column. The external forcing and physical framework are based on a simulated annual cycle of global radiation and an annual mixed-layer cycle derived from field data. The vertical resolution of the model is given by an exponential grid with 200 depth layers, allowing specific parameterization of the various sinking velocities, the breakdown of particles and the remineralization processes. The aim of the numerical experiments is the simulation of ecosystem dynamics considering the specific biogeochemical properties of the Norwegian Sea, for example the life cycle of the dominant copepod Calanus finmarchicus. The results of the simulations were validated with field data. Model results are in good agreement with field data for the lower trophic levels of the food web. With increasing complexity of the organisms, the differences between simulated processes and field data increase. Results of the numerical simulations suggest that the BIOM is well adapted to investigating a physically controlled ecosystem. The simulation of grazing-controlled pelagic ecosystems, like the Norwegian Sea, requires adaptation of the parameterization to the specific ecosystem features. By seasonally adapting the most sensitive processes, such as the utilization of light by phytoplankton and grazing by zooplankton, the results were greatly improved.
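
An exponential 200-layer grid of the kind described above can be sketched as follows; the total depth and stretching factor are assumptions for illustration, not the model's actual values.

```python
import numpy as np

def exponential_interfaces(n_layers=200, z_max=4000.0, stretch=4.0):
    """Layer interface depths for an exponential grid: fine resolution
    near the surface, coarsening toward the bottom."""
    k = np.arange(n_layers + 1)
    return z_max * (np.exp(stretch * k / n_layers) - 1) / (np.exp(stretch) - 1)

z = exponential_interfaces()
dz = np.diff(z)    # layer thicknesses, strictly increasing with depth
```

With these illustrative parameters the surface layer is about 1.5 m thick while the deepest layer is tens of meters thick, which is why such grids allow detailed parameterization of near-surface particle processes.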

  9. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
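
The multiplicative structure of SPPT can be sketched as follows: the summed parametrised tendency is rescaled by (1 + r), with r drawn from a correlated random process. The AR(1) time correlation, standard deviation and clipping bounds below are illustrative, not the operational ECMWF settings.

```python
import numpy as np

def sppt_tendency(tendencies, r):
    """SPPT: perturb the summed parametrised tendency multiplicatively."""
    total = sum(tendencies)
    return (1.0 + r) * total

def ar1_pattern(n_steps, dt, tau, sigma, rng):
    """AR(1) noise with decorrelation time tau and stationary std sigma."""
    phi = np.exp(-dt / tau)
    r = np.empty(n_steps)
    r[0] = rng.normal(0.0, sigma)
    for n in range(1, n_steps):
        r[n] = phi * r[n - 1] + np.sqrt(1 - phi ** 2) * rng.normal(0.0, sigma)
    return np.clip(r, -0.9, 0.9)   # keep the (1 + r) factor positive

rng = np.random.default_rng(1)
r = ar1_pattern(n_steps=5000, dt=900.0, tau=6 * 3600.0, sigma=0.5, rng=rng)

# A zero physics tendency stays zero; nonzero tendencies are rescaled,
# which is the multiplicative property the study evaluates.
perturbed = sppt_tendency([0.0, 0.0], r[0])
```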

  10. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    NASA Astrophysics Data System (ADS)

    Al-Marouf, M.; Samtaney, R.

    2017-05-01

    We present an embedded ghost fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE-based multidimensional extrapolation approach is used to reconstruct the solution in the ghost fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multidimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by a second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high solution gradients. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well.
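
The ghost-cell idea can be shown in a deliberately simplified one-dimensional form: scalar fields are extrapolated from the interior into the ghost region, while the wall-normal velocity is mirrored to impose no-penetration. The paper's actual method uses multidimensional PDE-based extrapolation; this sketch only conveys the principle, with made-up field values.

```python
import numpy as np

def fill_ghost_wall(rho, vel, n_ghost=2):
    """Fill n_ghost cells behind a solid wall to the left of rho[0]:
    scalars by linear extrapolation from the two first interior cells,
    the wall-normal velocity by mirror reflection (no-penetration)."""
    g_rho = np.array([rho[0] - (k + 1) * (rho[1] - rho[0])
                      for k in range(n_ghost)])
    g_vel = np.array([-vel[k] for k in range(n_ghost)])
    # ghost cells were built wall-outward, so reverse before prepending
    return (np.concatenate([g_rho[::-1], rho]),
            np.concatenate([g_vel[::-1], vel]))

rho = np.array([1.0, 2.0, 3.0])    # illustrative interior density
vel = np.array([0.5, -0.2, 0.1])   # illustrative interior velocity
rho_ext, vel_ext = fill_ghost_wall(rho, vel)
```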

  11. The Multi-SAG project: filling the MultiDark simulations with semi-analytic galaxies

    NASA Astrophysics Data System (ADS)

    Vega-Martínez, C. A.; Cora, S. A.; Padilla, N. D.; Muñoz Arancibia, A. M.; Orsi, A. A.; Ruiz, A. N.

    2016-08-01

    The semi-analytical model sag is a code of galaxy formation and evolution which is applied to halo catalogs and merger trees extracted from cosmological N-body simulations of dark matter. This contribution describes the project of constructing a catalog of simulated galaxies by adapting and applying the sag model to two publicly available dark matter simulations of the Spanish MultiDark Project. Those simulations have particles, each, in boxes with sizes of 1000 Mpc and 400 Mpc respectively, with Planck cosmological parameters. They cover a large range of masses and have halo mass resolutions of , therefore each simulation is able to produce more than 150 million simulated galaxies. A detailed description of the method is given, and the first statistical results are shown.

  12. Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1

    NASA Astrophysics Data System (ADS)

    Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia

    2014-08-01

    We first briefly present the latest version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, also testing the robustness of the method with respect to the Strehl ratio estimation.

  13. Integrating TITAN2D Geophysical Mass Flow Model with GIS

    NASA Astrophysics Data System (ADS)

    Namikawa, L. M.; Renschler, C.

    2005-12-01

    TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve its differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, using a GIS to implement simulations of complex, physically-based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. A GIS developed for general use therefore cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work describes the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats, so the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every computed cell. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensing image. The data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data.
    Of the several existing methods to estimate slope and curvature from elevation, the selected one is based on a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (using only 4 neighbor cells), showing that it performs better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions, applying bilinear interpolation when elevation, slope and curvature are required at a resolution higher (smaller cell size) than the original, and using a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information the nearest neighbor method is used, since interpolation on categorical data has no meaning. The low-fidelity character of the visualization allows use of the nearest neighbor method for texture as well. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers, and its performance is only slightly inferior to more computationally expensive methods such as bicubic interpolation and kriging.
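
The 8-neighbor weighted third-order finite difference described above can be sketched as follows (the standard 1-2-1 weighting over a 3x3 window); it recovers the gradient of a planar surface exactly, which makes for a simple check.

```python
import numpy as np

def slope_components(z, dx):
    """dz/dx and dz/dy at interior cells from the 8 neighbours,
    using a third-order finite difference with 1-2-1 weights."""
    dzdx = ((z[:-2, 2:] + 2 * z[1:-1, 2:] + z[2:, 2:])
            - (z[:-2, :-2] + 2 * z[1:-1, :-2] + z[2:, :-2])) / (8 * dx)
    dzdy = ((z[2:, :-2] + 2 * z[2:, 1:-1] + z[2:, 2:])
            - (z[:-2, :-2] + 2 * z[:-2, 1:-1] + z[:-2, 2:])) / (8 * dx)
    return dzdx, dzdy

# Check on a plane z = 2x + 3y: every interior cell recovers (2, 3).
dx = 1.0
y, x = np.mgrid[0:10, 0:10] * dx
z = 2 * x + 3 * y
gx, gy = slope_components(z, dx)
```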

  14. Mapping high-resolution soil moisture and properties using distributed temperature sensing data and an adaptive particle batch smoother

    USDA-ARS?s Scientific Manuscript database

    This study demonstrated a new method for mapping high-resolution (spatial: 1 m, and temporal: 1 h) soil moisture by assimilating distributed temperature sensing (DTS) observed soil temperatures at intermediate scales. In order to provide robust soil moisture and property estimates, we first proposed...

  15. Challenges and opportunities for improved understanding of regional climate dynamics

    NASA Astrophysics Data System (ADS)

    Collins, Matthew; Minobe, Shoshiro; Barreiro, Marcelo; Bordoni, Simona; Kaspi, Yohai; Kuwano-Yoshida, Akira; Keenlyside, Noel; Manzini, Elisa; O'Reilly, Christopher H.; Sutton, Rowan; Xie, Shang-Ping; Zolina, Olga

    2018-01-01

    Dynamical processes in the atmosphere and ocean are central to determining the large-scale drivers of regional climate change, yet their predictive understanding is poor. Here, we identify three frontline challenges in climate dynamics where significant progress can be made to inform adaptation: response of storms, blocks and jet streams to external forcing; basin-to-basin and tropical-extratropical teleconnections; and the development of non-linear predictive theory. We highlight opportunities and techniques for making immediate progress in these areas, which critically involve the development of high-resolution coupled model simulations, partial coupling or pacemaker experiments, as well as the development and use of dynamical metrics and exploitation of hierarchies of models.

  16. Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Larichev, A.; Zamora, G.; Murillo, S.; Barriga, E. S.

    2010-02-01

    Researchers have sought to gain greater insight into the mechanisms of the retina and the optic disc at high spatial resolutions that would enable the visualization of small structures such as photoreceptors and nerve fiber bundles. The sources of retinal image quality degradation are aberrations within the human eye, which limit the achievable resolution and the contrast of small image details. To overcome these fundamental limitations, researchers have been applying adaptive optics (AO) techniques to correct for the aberrations. Today, deformable mirror based adaptive optics devices have been developed to overcome the limitations of standard fundus cameras, but at prices that are typically unaffordable for most clinics. In this paper we demonstrate a clinically viable fundus camera with auto-focus and astigmatism correction that is easy to use and has improved resolution. We have shown that removal of low-order aberrations results in significantly better resolution and quality images. Additionally, through the application of image restoration and super-resolution techniques, the images present considerably improved quality. The improvements lead to enhanced visualization of retinal structures associated with pathology.

  17. High-Resolution Climate Data Visualization through GIS- and Web-based Data Portals

    NASA Astrophysics Data System (ADS)

    WANG, X.; Huang, G.

    2017-12-01

    Sound decisions on climate change adaptation rely on an in-depth assessment of potential climate change impacts at regional and local scales, which usually requires finer-resolution climate projections at both spatial and temporal scales. However, effective downscaling of global climate projections is practically difficult due to the lack of computational resources and/or long-term reference data. Although a large volume of downscaled climate data has been made available to the public, how to understand and interpret these large-volume climate data and how to use them to drive impact assessment and adaptation studies remain challenging for both impact researchers and decision makers. Such difficulties have become major barriers preventing informed climate change adaptation planning at regional scales. Therefore, this research will explore new GIS- and web-based technologies to help visualize large-volume regional climate data with high spatiotemporal resolutions. A user-friendly public data portal, named the Climate Change Data Portal (CCDP, http://ccdp.network), will be established to allow intuitive and open access to high-resolution regional climate projections at local scales. The CCDP offers visual representation through geospatial maps and data downloading for a variety of climate variables (e.g., temperature, precipitation, relative humidity, solar radiation, and wind) at multiple spatial resolutions (i.e., 25-50 km) and temporal resolutions (i.e., annual, seasonal, monthly, daily, and hourly). The vast amount of information the CCDP encompasses can provide a crucial basis for assessing the impacts of climate change on local communities and ecosystems and for supporting better decision making under a changing climate.

  18. Realistic mass ratio magnetic reconnection simulations with the Multi Level Multi Domain method

    NASA Astrophysics Data System (ADS)

    Innocenti, Maria Elena; Beck, Arnaud; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    Space physics simulations with the ambition of realistically representing both ion and electron dynamics have to be able to cope with the huge scale separation between the electron and ion parameters while respecting the stability constraints of the numerical method of choice. Explicit Particle In Cell (PIC) simulations with realistic mass ratio are limited in the size of the problems they can tackle by the restrictive stability constraints of the explicit method (Birdsall and Langdon, 2004). Many alternatives are available to reduce such computational costs. Reduced mass ratios can be used, with the caveats highlighted in Bret and Dieckmann (2010). Fully implicit (Chen et al., 2011a; Markidis and Lapenta, 2011) or semi-implicit (Vu and Brackbill, 1992; Lapenta et al., 2006; Cohen et al., 1989) methods can bypass the strict stability constraints of explicit PIC codes. Adaptive Mesh Refinement (AMR) techniques (Vay et al., 2004; Fujimoto and Sydora, 2008) can be employed to change the simulation resolution locally. We focus here on the Multi Level Multi Domain (MLMD) method introduced in Innocenti et al. (2013) and Beck et al. (2013). The method combines the advantages of implicit algorithms and adaptivity. Two levels are fully simulated with fields and particles. The so-called "refined level" simulates a fraction of the "coarse level" with a resolution RF times higher than the coarse level resolution, where RF is the Refinement Factor between the levels. This method is particularly suitable for magnetic reconnection simulations (Biskamp, 2005), where the characteristic Ion and Electron Diffusion Regions (IDR and EDR) develop at the ion and electron scales respectively (Daughton et al., 2006). In Innocenti et al. (2013) we showed that basic wave and instability processes are correctly reproduced by MLMD simulations. In Beck et al. (2013) we applied the technique to plasma expansion and magnetic reconnection problems.
We showed that notable computational time savings can be achieved. More importantly, we were able to correctly reproduce EDR features, such as the inversion layer of the electric field observed in Chen et al. (2011b), with an MLMD simulation at a significantly lower cost. Here, we present recent results on EDR dynamics achieved with the MLMD method and a realistic mass ratio.
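The stability constraints that make explicit PIC so expensive have a compact quantitative form: the time step must resolve the electron plasma frequency and the grid must resolve the Debye length. A minimal sketch of checking these limits, with hypothetical plasma parameters and SI constants (illustrative only, not the MLMD code):

```python
import math

def explicit_pic_constraints(n_e, T_e_eV, dx, dt):
    """Check the classic stability constraints of an explicit PIC scheme:
    the time step must resolve the electron plasma frequency
    (omega_pe * dt <= 2) and the grid spacing must stay close to the
    Debye length to avoid numerical grid heating."""
    eps0, e, m_e, kB = 8.854e-12, 1.602e-19, 9.109e-31, 1.381e-23
    T_e = T_e_eV * e / kB                                  # temperature in kelvin
    omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))        # plasma frequency [rad/s]
    lambda_D = math.sqrt(eps0 * kB * T_e / (n_e * e**2))   # Debye length [m]
    return {
        "omega_pe_dt": omega_pe * dt,       # stable if <= 2
        "dx_over_lambda_D": dx / lambda_D,  # keep O(1) to limit grid heating
    }
```

In an MLMD setup, the refined level tightens dx (and hence the resolved Debye-length ratio) by the refinement factor RF only where the electron physics demands it.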

  19. Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data

    NASA Astrophysics Data System (ADS)

    Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis

    2015-04-01

Flooding is a significant threat causing substantial disruption in many societies worldwide. Ongoing climate change further increases flooding risk, which is now a substantial menace to many societies and their economies. Improvements in the spatial resolution and accuracy of topography and land-use data obtained through remote sensing techniques can support integrated flood inundation simulations. In this work, hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) is carried out. High-resolution satellite images are processed. A very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5-m-resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction. Sophisticated classification algorithms are implemented to improve the accuracy of Land Use/Land Cover (LULC) maps. In addition, soil maps are updated by means of radar satellite images. These high-resolution data are innovatively used to simulate and validate several historical flood events in Mediterranean watersheds that have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resources Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment. The use of high-resolution data yields correspondingly detailed and precise validation results. Furthermore, meteorological forecasting data, combined with the simulation model results, enable the development of an integrated flood forecasting and early warning system capable of confronting or even preventing this imminent risk.
The research reported in this paper was fully supported by the "ARISTEIA II" Action ("REINFORCE" program) of the "Operational Education and Life Long Learning programme" and is co-funded by the European Social Fund (ESF) and National Resources.
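The NRCS curve-number step used in the hydrologic modeling above can be written down in a few lines. A sketch of the standard metric-form CN equations; the 0.2 initial-abstraction ratio is the conventional default, not a value from this study:

```python
def scs_runoff_depth(P_mm, CN, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from rainfall P via the NRCS curve-number
    method. S is the potential maximum retention; Ia = ia_ratio * S is the
    initial abstraction (0.2 is the conventional default)."""
    S = 25400.0 / CN - 254.0   # retention in mm (metric form of S = 1000/CN - 10 inches)
    Ia = ia_ratio * S
    if P_mm <= Ia:
        return 0.0             # all rainfall abstracted; no direct runoff
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)
```

A fully impervious response (CN = 100) returns all rainfall as runoff, while lower CN values, derived from the LULC and soil maps, abstract and retain more of the storm.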

  20. The Phosphorylation State of the Drosophila TRP Channel Modulates the Frequency Response to Oscillating Light In Vivo

    PubMed Central

    Rhodes-Mordov, Elisheva; Katz, Ben; Oberegelsbacher, Claudia; Yasin, Bushra; Tzadok, Hanan; Huber, Armin

    2017-01-01

    Drosophila photoreceptors respond to oscillating light of high frequency (∼100 Hz), while the detected maximal frequency is modulated by the light rearing conditions, thus enabling high sensitivity to light and high temporal resolution. However, the molecular basis for this adaptive process is unclear. Here, we report that dephosphorylation of the light-activated transient receptor potential (TRP) ion channel at S936 is a fast, graded, light-dependent, and Ca2+-dependent process that is partially modulated by the rhodopsin phosphatase retinal degeneration C (RDGC). Electroretinogram measurements of the frequency response to oscillating lights in vivo revealed that dark-reared flies expressing wild-type TRP exhibited a detection limit of oscillating light at relatively low frequencies, which was shifted to higher frequencies upon light adaptation. Strikingly, preventing phosphorylation of the S936-TRP site by alanine substitution in transgenic Drosophila (trpS936A) abolished the difference in frequency response between dark-adapted and light-adapted flies, resulting in high-frequency response also in dark-adapted flies. In contrast, inserting a phosphomimetic mutation by substituting the S936-TRP site to aspartic acid (trpS936D) set the frequency response of light-adapted flies to low frequencies typical of dark-adapted flies. Light-adapted rdgC mutant flies showed relatively high S936-TRP phosphorylation levels and light–dark phosphorylation dynamics. These findings suggest that RDGC is one but not the only phosphatase involved in pS936-TRP dephosphorylation. Together, this study indicates that TRP channel dephosphorylation is a regulatory process that affects the detection limit of oscillating light according to the light rearing condition, thus adjusting dynamic processing of visual information under varying light conditions. 
SIGNIFICANCE STATEMENT Drosophila photoreceptors exhibit high temporal resolution as manifested in frequency response to oscillating light of high frequency (≤∼100 Hz). Light rearing conditions modulate the maximal frequency detected by photoreceptors, thus enabling them to maintain high sensitivity to light and high temporal resolution. However, the precise mechanisms for this process are not fully understood. Here, we show by combination of biochemistry and in vivo electrophysiology that transient receptor potential (TRP) channel dephosphorylation at a specific site is a fast, light-activated and Ca2+-dependent regulatory process. TRP dephosphorylation affects the detection limit of oscillating light according to the adaptation state of the photoreceptor cells by shifting the detection limit to higher frequencies upon light adaptation. This novel mechanism thus adjusts dynamic processing of visual information under varying light conditions. PMID:28314815

  1. Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian

    2015-12-01

With growing international attention to sea transportation and trade safety, the requirements on the efficiency and accuracy of moving ship tracking are rising. We therefore propose a systematic design for on-board moving ship tracking based on an FPGA, which uses the Adaptive Inter-Frame Difference (AIFD) method to track ships moving at different speeds. Because the Frame Difference (FD) method is simple but computationally heavy, it is well suited to parallel implementation on an FPGA. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship appears very small (depicted by only dozens of pixels) and moves slowly. With invariant FIs, the accuracy of FD for moving ship tracking is unsatisfactory and the calculation is highly redundant. We therefore adapt FD by adaptively extracting key frames for moving ship tracking. An FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that, compared with the traditional FD method, the proposed one achieves higher accuracy of moving ship tracking and meets the requirement of real-time tracking at high image resolution.
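The core FD operation and the idea of adapting the frame interval can be sketched as follows; the thresholds and the doubling/halving policy are illustrative assumptions, not the paper's design:

```python
import numpy as np

def frame_difference_mask(f_prev, f_curr, thresh=25):
    """Binary change mask from two grayscale frames: the core of the
    frame-difference (FD) detector, which maps naturally onto an FPGA
    because every pixel is processed independently."""
    diff = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def adaptive_frame_interval(motion_pixels, fi, lo=50, hi=500, fi_min=1, fi_max=30):
    """Toy key-frame policy: lengthen the frame interval (FI) when a slow
    ship produces too little change, shorten it when change is large.
    Thresholds and the doubling rule are illustrative, not from the paper."""
    if motion_pixels < lo:
        return min(fi * 2, fi_max)   # ship barely moved: skip more frames
    if motion_pixels > hi:
        return max(fi // 2, fi_min)  # fast motion: difference more often
    return fi
```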

  2. MAD ADAPTIVE OPTICS IMAGING OF HIGH-LUMINOSITY QUASARS: A PILOT PROJECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liuzzo, E.; Falomo, R.; Paiano, S.

    2016-08-01

We present near-IR images of five luminous quasars at z ∼ 2 and one at z ∼ 4 obtained with an experimental adaptive optics (AO) instrument at the European Southern Observatory Very Large Telescope. The observations are part of a program aimed at demonstrating the capabilities of multi-conjugated adaptive optics imaging combined with the use of natural guide stars for high spatial resolution studies on large telescopes. The observations were obtained under poor seeing conditions in all but two cases. In spite of these nonoptimal conditions, the resulting images of point sources have cores of FWHM ∼ 0.2 arcsec. We are able to characterize the host galaxy properties for two sources and set stringent upper limits on the galaxy luminosity for the others. We also report on the expected capabilities for investigating the host galaxies of distant quasars with AO systems coupled with future Extremely Large Telescopes. Detailed simulations show that it will be possible to characterize compact (2–3 kpc) quasar host galaxies for quasi-stellar objects at z = 2 with nucleus K-magnitude spanning from 15 to 20 (corresponding to absolute magnitude −31 to −26) and host galaxies that are 4 mag fainter than their nuclei.

  3. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
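The Weighted Ensemble machinery referred to above reduces, at each resampling time, to a split/merge step that conserves total probability weight while keeping many low-weight copies of rare trajectories alive. A generic sketch of that step (bin bookkeeping omitted; not the authors' implementation):

```python
import random

def resample_walkers(walkers, n_target, rng=random.Random(0)):
    """One splitting/merging step in the spirit of the Weighted Ensemble
    method. Walkers are (state, weight) pairs; splitting halves a weight,
    merging sums two weights and keeps one state with probability
    proportional to its weight, so total probability is conserved."""
    total = sum(w for _, w in walkers)
    out = list(walkers)
    while len(out) < n_target:                       # split the heaviest walker
        i = max(range(len(out)), key=lambda k: out[k][1])
        x, w = out[i]
        out[i] = (x, w / 2)
        out.append((x, w / 2))
    while len(out) > n_target:                       # merge the two lightest
        out.sort(key=lambda p: p[1])
        (x1, w1), (x2, w2) = out[0], out[1]
        keep = x1 if rng.random() < w1 / (w1 + w2) else x2
        out = [(keep, w1 + w2)] + out[2:]
    assert abs(sum(w for _, w in out) - total) < 1e-12
    return out
```

Because weights rather than trajectory counts carry the probability, intense-TC tail statistics can be estimated from far fewer model integrations than direct sampling would need.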

  4. An assessment of a film enhancement system for use in a radiation therapy department.

    PubMed

    Solowsky, E L; Reinstein, L E; Meek, A G

    1990-01-01

    The clinical uses of a radiotherapy film enhancement system are explored. The primary functions of the system are to improve the quality of poorly exposed simulator and portal films, and to perform comparisons between the two films to determine whether patient or block positioning errors are present. Other features include: the production of inexpensive, high quality hardcopy images of simulation films and initial portal films for chart documentation, the capacity to overlay lateral simulation films with sagittal MRI films to aid in field design, and a mode to zoom in on individual CT or MRI images and enlarge them for video display during chart rounds or instructional sessions. This commercially available system is comprised of a microcomputer, frame grabber, CCD camera with zoom lens, and a high-resolution thermal printer. The user-friendly software is menu driven and utilizes both keyboard and track ball to perform its functions. At the heart of the software is a very fast Adaptive Histogram Equalization (AHE) routine, which enhances and improves the readability of most portal films. The system has been evaluated for several disease sites, and its advantages and limitations will be presented.
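The adaptive histogram equalization at the heart of the enhancement software can be illustrated with a naive tile-based version; the commercial routine is certainly more refined (e.g. interpolating between tiles and clipping the histogram, as in CLAHE, to limit noise amplification):

```python
import numpy as np

def local_histogram_equalize(img, tile=64):
    """Naive tile-based adaptive histogram equalization (AHE) for an 8-bit
    image: each tile is remapped through the CDF of its own histogram,
    boosting local contrast in under- or over-exposed regions such as a
    poorly exposed portal film."""
    out = np.empty_like(img)
    for r in range(0, img.shape[0], tile):
        for c in range(0, img.shape[1], tile):
            block = img[r:r+tile, c:c+tile]
            hist = np.bincount(block.ravel(), minlength=256)
            cdf = hist.cumsum()
            # stretch the tile's CDF to the full 0..255 range
            cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
            out[r:r+tile, c:c+tile] = cdf[block].astype(img.dtype)
    return out
```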

  5. A New Approach to Modeling Jupiter's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Katoh, Y.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2017-12-01

The scales in planetary magnetospheres range from tens of planetary radii to kilometers. For a number of years we have studied the magnetospheres of Jupiter and Saturn by using 3-dimensional magnetohydrodynamic (MHD) simulations. However, we have not been able to reach even the limits of the MHD approximation because of the large amount of computer resources required. Recently, thanks to progress in supercomputer systems, we have obtained the capability to simulate Jupiter's magnetosphere with 1000 times the number of grid points used in our previous simulations. This has allowed us to combine the high-resolution global simulation with a micro-scale simulation of the Jovian magnetosphere. In particular, we can combine a hybrid (kinetic ions and fluid electrons) simulation with the MHD simulation. In addition, the new capability enables us to run multi-parameter survey simulations of the Jupiter-solar wind system. In this study we performed a high-resolution simulation of the Jovian magnetosphere to connect with the hybrid simulation, and lower-resolution simulations under various solar wind conditions to compare with Hisaki and Juno observations. In the high-resolution simulation we used a regular Cartesian grid with 0.15 RJ grid spacing and placed the inner boundary at 7 RJ. With these simulation settings, we provide the magnetic field out to around 20 RJ from Jupiter as a background field for the hybrid simulation. For the first time we have been able to resolve Kelvin-Helmholtz waves on the magnetopause. We have investigated solar wind dynamic pressures between 0.01 and 0.09 nPa for a number of IMF values. The raw data from these simulations are available for download by registered users. We have compared the results of these simulations with Hisaki auroral observations.

  6. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens

    PubMed Central

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.

    2015-01-01

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an imaged-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169
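Image-based wavefront-sensorless control of the kind described here typically optimizes a handful of modal (e.g. Zernike) coefficients against an image-quality metric, without ever measuring the wavefront directly. A toy coordinate-search sketch; the step size, mode count, and metric are illustrative assumptions, not the published algorithm:

```python
def sensorless_ao_search(metric, n_modes=5, step=0.1, iters=3):
    """Coordinate-wise hill climb for image-based wavefront-sensorless AO:
    perturb one modal coefficient at a time and keep the sign of the
    perturbation that improves an image-quality metric (e.g. sharpness)."""
    coeffs = [0.0] * n_modes
    for _ in range(iters):
        for m in range(n_modes):
            base = metric(coeffs)
            for s in (+step, -step):
                trial = list(coeffs)
                trial[m] += s          # apply trial correction on mode m
                if metric(trial) > base:
                    coeffs, base = trial, metric(trial)
    return coeffs
```

In a real system, `metric` would be computed from the live image (e.g. total intensity variance) after the adaptive lens applies the trial coefficients.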

  7. Ultra-high resolution coded wavefront sensor.

    PubMed

    Wang, Congli; Dun, Xiong; Fu, Qiang; Heidrich, Wolfgang

    2017-06-12

    Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.

  8. High-resolution Hydrodynamic Simulation of Tidal Detonation of a Helium White Dwarf by an Intermediate Mass Black Hole

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru

    2018-05-01

    We demonstrate tidal detonation during a tidal disruption event (TDE) of a helium (He) white dwarf (WD) with 0.45 M ⊙ by an intermediate mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown tidal detonation in results of previous studies from unphysical heating due to low-resolution simulations, and such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations up to 300 million SPH particles, and 1D mesh simulations using flow structure in the 3D SPH simulations for 1D initial conditions. The 1D mesh simulations have higher resolutions than the 3D SPH simulations. We show that tidal detonation occurs and confirm that this result is perfectly converged with different space resolution in both 3D SPH and 1D mesh simulations. We find that detonation waves independently arise in leading parts of the WD, and yield large amounts of 56Ni. Although detonation waves are not generated in trailing parts of the WD, the trailing parts would receive detonation waves generated in the leading parts and would leave large amounts of Si group elements. Eventually, this He WD TDE would synthesize 56Ni of 0.30 M ⊙ and Si group elements of 0.08 M ⊙, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.

  9. Declining vulnerability to river floods and the global benefits of adaptation

    NASA Astrophysics Data System (ADS)

    Jongman, Brenden; Winsemius, Hessel; Aerts, Jeroen; Coughlan de Perez, Erin; Van Aalst, Maarten; Kron, Wolfgang; Ward, Philip

    2016-04-01

    The global impacts of river floods are substantial and rising. Effective adaptation to the increasing risks requires an in-depth understanding of the physical and socioeconomic drivers of risk. Whilst the modeling of flood hazard and exposure has improved greatly, compelling evidence on spatiotemporal patterns in vulnerability of societies around the world is lacking. Hence, the effects of vulnerability on global flood risk are not fully understood, and future projections of fatalities and losses available today are based on simplistic assumptions or do not include vulnerability. In this study, we show that trends and fluctuations in vulnerability to river floods around the world can be estimated by dynamic high-resolution modeling of flood hazard and exposure. We show that fatalities and losses as a share of exposed population and gross domestic product are decreasing with rising income. We also show that there is a tendency of convergence in vulnerability levels between low- and high-income countries. Based on these findings, we simulate future flood impacts per country using traditional assumptions of static vulnerability through time, but also using future assumptions on reduced vulnerability in the future. We show that future risk increases can be largely contained using effective disaster risk reduction strategies, including a reduction of vulnerability. The study was carried out using the global flood risk model, GLOFRIS, combined with high-resolution time-series maps of hazard and exposure at the global scale. Based on: Jongman et al., 2015. Proceedings of the National Academy of Sciences of the United States of America, doi:10.1073/pnas.1414439112.

  10. Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.

    2017-12-01

The terra incognita (TI), or gray zone, is a range of grid spacings comparable to the diameter of the most energetic eddies. Mesoscale simulations use grid spacing much larger than these eddies and parameterize turbulence with one-dimensional vertical mixing. Large eddy simulations (LES) use grid spacing much smaller than the energetic eddies and three-dimensional turbulence models. Studies of convective weather use convection-permitting resolutions, which fall in the TI. Neither mesoscale turbulence models nor LES models are designed for the TI, so turbulence parameterization in the TI requires attention. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures: the Smagorinsky model, a Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, the choice among these turbulence models did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When the model resolution is coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce a large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities: at coarse resolution the eddy diffusivity of the DRM is on the order of 1, whereas that of the Smagorinsky and TKE models is on the order of 100.
Splitting the eddy viscosity/diffusivity scalars into vertical and horizontal components by using different length scales and strain rate components helps to reduce the errors, but does not completely remedy the problem. In contrast, the coarse resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
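The grid-scale sensitivity at the root of this gray-zone problem is visible in the Smagorinsky closure itself, where the eddy viscosity grows with the square of the filter width Δ. A one-line sketch; Cs = 0.17 is a common textbook value, not necessarily the study's setting:

```python
def smagorinsky_viscosity(strain_rate_mag, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S|.
    Purely dissipative and proportional to Delta^2, so coarsening the grid
    into the terra incognita inflates the mixing even at fixed strain rate.
    Units: |S| in 1/s, Delta in m, nu_t in m^2/s."""
    return (cs * delta) ** 2 * strain_rate_mag
```

Coarsening Δ by a factor of 10 at the same resolved strain rate multiplies nu_t by 100, consistent with the two-order-of-magnitude diffusivity gap reported above.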

  11. Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging.

    PubMed

    Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul

    2011-12-19

    Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.
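The Wiener criterion underlying the AWF can be illustrated with the classical frequency-domain form; the paper's contribution is a fast spatial-domain version with precomputed per-window weights, which this sketch does not reproduce:

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Classical frequency-domain Wiener restoration, W = H* / (|H|^2 + NSR):
    the minimum mean-squared-error deconvolution filter for a known point
    spread function H under an assumed noise-to-signal ratio NSR."""
    H = np.fft.fft2(psf, s=blurred.shape)      # PSF transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter transfer function
    return np.real(np.fft.ifft2(W * G))
```

Larger NSR values trade resolution for noise suppression; with a delta-function PSF and NSR = 0 the filter reduces to the identity.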

  12. Influence of Molecular Resolution on Sequence-Based Discovery of Ecological Diversity among Synechococcus Populations in an Alkaline Siliceous Hot Spring Microbial Mat

    PubMed Central

    Melendrez, Melanie C.; Lange, Rachel K.; Cohan, Frederick M.; Ward, David M.

    2011-01-01

    Previous research has shown that sequences of 16S rRNA genes and 16S-23S rRNA internal transcribed spacer regions may not have enough genetic resolution to define all ecologically distinct Synechococcus populations (ecotypes) inhabiting alkaline, siliceous hot spring microbial mats. To achieve higher molecular resolution, we studied sequence variation in three protein-encoding loci sampled by PCR from 60°C and 65°C sites in the Mushroom Spring mat (Yellowstone National Park, WY). Sequences were analyzed using the ecotype simulation (ES) and AdaptML algorithms to identify putative ecotypes. Between 4 and 14 times more putative ecotypes were predicted from variation in protein-encoding locus sequences than from variation in 16S rRNA and 16S-23S rRNA internal transcribed spacer sequences. The number of putative ecotypes predicted depended on the number of sequences sampled and the molecular resolution of the locus. Chao estimates of diversity indicated that few rare ecotypes were missed. Many ecotypes hypothesized by sequence analyses were different in their habitat specificities, suggesting different adaptations to temperature or other parameters that vary along the flow channel. PMID:21169433

  13. Lens-based wavefront sensorless adaptive optics swept source OCT

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J.; Bonora, Stefano; Sarunic, Marinko V.

    2016-06-01

Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth-resolved images of the retinal layers in a system that is suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features at a micrometer scale, the lateral resolution depends on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and the use of dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high-resolution retinal imaging. We utilized a commercially available variable focal length lens to correct for the wide range of defocus commonly found in patients' eyes, and a novel multi-actuator adaptive lens for aberration correction to achieve near diffraction-limited imaging performance at the retina. With a parallel processing computational platform, high-resolution cross-sectional and en face retinal image acquisition and display were performed in real time. In order to demonstrate the system functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects.

  14. Tidal dwarf galaxies in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Ploeckinger, Sylvia; Sharma, Kuldeep; Schaye, Joop; Crain, Robert A.; Schaller, Matthieu; Barber, Christopher

    2018-02-01

The formation and evolution of gravitationally bound, star-forming substructures in tidal tails of interacting galaxies, called tidal dwarf galaxies (TDGs), have until now been studied only in idealized simulations of individual pairs of interacting galaxies with pre-determined orbits, mass ratios, and gas fractions. Here, we present the first identification of TDG candidates in fully cosmological simulations, specifically the high-resolution simulations of the EAGLE suite. The finite resolution of the simulations limits their ability to predict the exact formation rate and survival time-scale of TDGs, but we show that gravitationally bound baryonic structures in tidal arms already form in current state-of-the-art cosmological simulations. In this case, the orbital parameters, disc orientations, stellar and gas masses, and specific angular momentum of the TDG-forming galaxies are a direct consequence of cosmic structure formation. We identify TDG candidates in a wide range of environments, such as multiple galaxy mergers, clumpy high-redshift (up to z = 2) galaxies, high-speed encounters, and tidal interactions with gas-poor galaxies. We present selection methods, the properties of the identified TDG candidates, and a road map for more quantitative analyses using future high-resolution simulations.

  15. Fusing Unmanned Aerial Vehicle Imagery with High Resolution Hydrologic Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Pierini, N.; Schreiner-McGraw, A.; Anderson, C.; Saripalli, S.; Rango, A.

    2013-12-01

    After decades of development and applications, high resolution hydrologic models are now common tools in research and increasingly used in practice. More recently, high resolution imagery from unmanned aerial vehicles (UAVs) that provide information on land surface properties have become available for civilian applications. Fusing the two approaches promises to significantly advance the state-of-the-art in terms of hydrologic modeling capabilities. This combination will also challenge assumptions on model processes, parameterizations and scale as land surface characteristics (~0.1 to 1 m) may now surpass traditional model resolutions (~10 to 100 m). Ultimately, predictions from high resolution hydrologic models need to be consistent with the observational data that can be collected from UAVs. This talk will describe our efforts to develop, utilize and test the impact of UAV-derived topographic and vegetation fields on the simulation of two small watersheds in the Sonoran and Chihuahuan Deserts at the Santa Rita Experimental Range (Green Valley, AZ) and the Jornada Experimental Range (Las Cruces, NM). High resolution digital terrain models, image orthomosaics and vegetation species classification were obtained from a fixed wing airplane and a rotary wing helicopter, and compared to coarser analyses and products, including Light Detection and Ranging (LiDAR). We focus the discussion on the relative improvements achieved with UAV-derived fields in terms of terrain-hydrologic-vegetation analyses and summer season simulations using the TIN-based Real-time Integrated Basin Simulator (tRIBS) model. Model simulations are evaluated at each site with respect to a high-resolution sensor network consisting of six rain gauges, forty soil moisture and temperature profiles, four channel runoff flumes, a cosmic-ray soil moisture sensor and an eddy covariance tower over multiple summer periods. 
We also discuss prospects for the fusion of high resolution models with novel observations from UAVs, including synthetic aperture radar and multispectral imagery.

  16. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  17. Does Explosive Nuclear Burning Occur in Tidal Disruption Events of White Dwarfs by Intermediate-mass Black Holes?

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Sato, Yushi; Nomoto, Ken'ichi; Maeda, Keiichi; Nakasato, Naohito; Hachisu, Izumi

    2017-04-01

We investigate nucleosynthesis in tidal disruption events (TDEs) of white dwarfs (WDs) by intermediate-mass black holes. We consider various types of WDs with different masses and compositions by means of three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations. We model these WDs with different numbers of SPH particles, N, from a few 10^4 to a few 10^7 in order to check mass resolution convergence, where SPH simulations with N > 10^7 (or a space resolution of several 10^6 cm) have unprecedentedly high resolution in this kind of simulation. We find that nuclear reactions become less active with increasing N and that these nuclear reactions are excited by spurious heating due to low resolution. Moreover, we find no shock wave generation. In order to investigate the reason for the absence of a shock wave, we additionally perform one-dimensional (1D) SPH and mesh-based simulations with a space resolution ranging from 10^4 to 10^7 cm, using a characteristic flow structure extracted from the 3D SPH simulations. We find shock waves in these 1D high-resolution simulations, one of which triggers a detonation wave. However, we must be careful of the fact that, if the shock wave emerged in an outer region, it could not trigger the detonation wave due to low density. Note that the 1D initial conditions lack accuracy to precisely determine where a shock wave emerges. We need to perform 3D simulations with ≲10^6 cm space resolution in order to conclude that WD TDEs become optical transients powered by radioactive nuclei.

  18. A High-Resolution Capability for Large-Eddy Simulation of Jet Flows

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2011-01-01

    A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes: both standard schemes, second- to twelfth-order (3- to 13-point stencils), and Dispersion Relation Preserving (DRP) schemes with 7- to 13-point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics allow the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics, and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
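The selectable-order central schemes described above can be illustrated with a minimal sketch (not the LES code itself): the classic 3-point and 5-point central-difference stencils for a first derivative, showing the accuracy gained from a wider stencil at the same grid spacing.

```python
import math

def d1_central2(f, x, h):
    """Second-order central difference for f'(x): 3-point stencil."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_central4(f, x, h):
    """Fourth-order central difference for f'(x): 5-point stencil."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

# Differentiate sin(x) at x = 1; the exact derivative is cos(1).
x0, h = 1.0, 0.1
exact = math.cos(x0)
err2 = abs(d1_central2(math.sin, x0, h) - exact)
err4 = abs(d1_central4(math.sin, x0, h) - exact)
# The wider stencil is markedly more accurate at the same h, which is why
# high-order schemes permit relatively sparse grids.
```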

  19. A hybrid video codec based on extended block sizes, recursive integer transforms, improved interpolation, and flexible motion representation

    NASA Astrophysics Data System (ADS)

    Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.

    2011-01-01

    This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single-pass switched interpolation filters with offsets (single-pass SIFO), mode-dependent directional transform (MDDT) for intra-coding, luma and chroma high-precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree-based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions, and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57% compared to the H.264/AVC beta and gamma anchors, respectively.

  20. Identifying species from the air: UAVs and the very high resolution challenge for plant conservation.

    PubMed

    Baena, Susana; Moat, Justin; Whaley, Oliver; Boyd, Doreen S

    2017-01-01

    The Pacific Equatorial dry forest of northern Peru is recognised for its unique endemic biodiversity. Although highly threatened, the forest provides livelihoods and ecosystem services to local communities. As agro-industrial expansion and climatic variation transform the region, close ecosystem monitoring is essential for viable adaptation strategies. UAVs offer an affordable alternative to satellites for obtaining both colour and near-infrared imagery that meets the specific spatial and temporal resolution requirements of a monitoring system. Combining this with their capacity to produce three-dimensional models of the environment provides an invaluable tool for species-level monitoring. Here we demonstrate that object-based image analysis of very high resolution UAV images can identify and quantify keystone tree species and their health across wide heterogeneous landscapes. The analysis exposes the state of the vegetation and serves as a baseline for monitoring and adaptive implementation of community-based conservation and restoration in the area.

  1. High-resolution numerical models for smoke transport in plumes from wildland fires

    Treesearch

    Philip Cunningham; Scott Goodrick

    2013-01-01

    A high-resolution large-eddy simulation (LES) model is employed to examine the fundamental structure and dynamics of buoyant plumes arising from heat sources representative of wildland fires. Herein we describe several aspects of the mean properties of the simulated plumes. Mean plume trajectories are apparently well described by the traditional two-thirds law for...
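For context, the "two-thirds law" referenced above is commonly written in the Briggs bent-over plume-rise form, h(x) = 1.6 F^(1/3) x^(2/3) / u. A small sketch with illustrative values chosen here for demonstration, not taken from the paper:

```python
def plume_rise(F, u, x):
    """Briggs 'two-thirds law' for a bent-over buoyant plume: rise h grows
    as downwind distance to the 2/3 power.
    F: buoyancy flux (m^4/s^3), u: mean wind speed (m/s), x: distance (m)."""
    return 1.6 * F ** (1.0 / 3.0) * x ** (2.0 / 3.0) / u

# Illustrative values (hypothetical): a strong surface heat source in a
# moderate wind, evaluated 1 km downwind.
h = plume_rise(F=100.0, u=5.0, x=1000.0)  # roughly 150 m of rise
```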

  2. Advanced adaptive optics technology development

    NASA Astrophysics Data System (ADS)

    Olivier, Scot S.

    2002-02-01

    The NSF Center for Adaptive Optics (CfAO) is supporting research on advanced adaptive optics technologies. CfAO research activities include development and characterization of micro-electro-mechanical systems (MEMS) deformable mirror (DM) technology, as well as development and characterization of high-resolution adaptive optics systems using liquid crystal (LC) spatial light modulator (SLM) technology. This paper presents an overview of the CfAO advanced adaptive optics technology development activities including current status and future plans.

  3. Adaptive multitaper time-frequency spectrum estimation

    NASA Astrophysics Data System (ADS)

    Pitton, James W.

    1999-11-01

    In earlier work, Thomson's adaptive multitaper spectrum estimation method was extended to the nonstationary case. This paper reviews the time-frequency multitaper method and the adaptive procedure, and explores some properties of the eigenvalues and eigenvectors. The variance of the adaptive estimator is used to construct an adaptive smoother, which is used to form a high resolution estimate. An F-test for detecting and removing sinusoidal components in the time-frequency spectrum is also given.
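A minimal sketch of the multitaper idea, averaging periodograms of several orthogonal tapered copies of the signal. Here simple sine tapers with uniform weights stand in for the DPSS tapers and adaptive weighting of Thomson's method, so this is an illustration of the principle rather than the paper's estimator.

```python
import math

def sine_tapers(N, K):
    """K orthonormal sine tapers, a simple stand-in for DPSS tapers."""
    return [[math.sqrt(2.0 / (N + 1)) * math.sin(math.pi * (k + 1) * (n + 1) / (N + 1))
             for n in range(N)] for k in range(K)]

def multitaper_psd(x, K=4):
    """Average the periodograms of K tapered copies of x (uniform weights;
    Thomson's adaptive weighting is omitted for brevity)."""
    N = len(x)
    psd = [0.0] * (N // 2 + 1)
    for taper in sine_tapers(N, K):
        xt = [t * v for t, v in zip(taper, x)]
        for f in range(N // 2 + 1):
            re = sum(xt[n] * math.cos(2 * math.pi * f * n / N) for n in range(N))
            im = -sum(xt[n] * math.sin(2 * math.pi * f * n / N) for n in range(N))
            psd[f] += (re * re + im * im) / K
    return psd

# A pure tone at bin 16 of a 64-sample record: the estimate concentrates
# power within the multitaper bandwidth around bin 16.
N = 64
x = [math.sin(2 * math.pi * 16 * n / N) for n in range(N)]
psd = multitaper_psd(x)
peak = psd.index(max(psd))
```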

  4. Patch-based iterative conditional geostatistical simulation using graph cuts

    NASA Astrophysics Data System (ADS)

    Li, Xue; Mariethoz, Gregoire; Lu, DeTang; Linde, Niklas

    2016-08-01

    Training image-based geostatistical methods are increasingly popular in groundwater hydrology, even though existing algorithms present limitations that often make real-world applications difficult. These limitations include a computational cost that can be prohibitive for high-resolution 3-D applications, the presence of visual artifacts in the model realizations, and a low variability between model realizations due to the limited pool of patterns available in a finite-size training image. In this paper, we address these issues by proposing an iterative patch-based algorithm which adapts a graph cuts methodology that is widely used in computer graphics. Our adapted graph cuts method optimally cuts patches of pixel values borrowed from the training image and assembles them successively, each time accounting for the information of previously stitched patches. The initial simulation result might display artifacts, which are identified as regions of high cost. These artifacts are reduced by iteratively placing new patches in high-cost regions. In contrast to most patch-based algorithms, the proposed scheme can also efficiently address point conditioning. An advantage of the method is that the cut process results in the creation of new patterns that are not present in the training image, thereby increasing pattern variability. To quantify this effect, a new measure of variability, the merging index, is developed; it quantifies the pattern variability of the realizations with respect to the training image. A series of sensitivity analyses demonstrates the stability of the proposed graph cuts approach, which produces satisfying simulations for a wide range of parameter values. Applications to 2-D and 3-D cases are compared to state-of-the-art multiple-point methods. The results show that the proposed approach obtains significant speedups and increases variability between realizations. Connectivity functions applied to 2-D models and transport simulations in 3-D models are used to demonstrate that pattern continuity is preserved.
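The core stitching step, cutting an optimal seam through the overlap between a new patch and previously placed pixels, can be sketched with a simpler dynamic-programming seam (as used in image quilting) standing in for the paper's graph cut:

```python
def best_vertical_seam(err):
    """Minimal-cost top-to-bottom seam through an overlap-error matrix
    (rows x cols). A dynamic-programming stand-in for the min-cut used by
    graph-cuts patch stitching: pixels on one side of the seam keep the old
    patch, pixels on the other side take the new one."""
    rows, cols = len(err), len(err[0])
    cost = [err[0][:]]
    for r in range(1, rows):
        prev = cost[-1]
        row = []
        for c in range(cols):
            # A seam may move at most one column per row.
            row.append(err[r][c] + min(prev[max(0, c - 1):min(cols, c + 2)]))
        cost.append(row)
    # Backtrack from the cheapest bottom cell, staying within +/-1 column.
    seam = [min(range(cols), key=cost[-1].__getitem__)]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(cols, c + 2)
        seam.append(min(range(lo, hi), key=cost[r].__getitem__))
    return seam[::-1]

# Overlap disagreement is lowest along the middle column here,
# so the seam should run straight down column 1.
err = [[5, 0, 5],
       [5, 0, 5],
       [5, 0, 5]]
```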

  5. Estimation of Coastal Freshwater Discharge into Prince William Sound using a High-Resolution Hydrological Model

    NASA Astrophysics Data System (ADS)

    Beamer, J. P.; Hill, D. F.; Liston, G. E.; Arendt, A. A.; Hood, E. W.

    2013-12-01

    In Prince William Sound (PWS), Alaska, there is a pressing need for accurate estimates of the spatial and temporal variations in coastal freshwater discharge (FWD). FWD into PWS originates from streamflow due to rainfall, annual snowmelt, and changes in stored glacier mass and is important because it helps establish spatial and temporal patterns in ocean salinity and temperature, and is a time-varying boundary condition for oceanographic circulation models. Previous efforts to model FWD into PWS have been heavily empirical, with many physical processes absorbed into calibration coefficients that, in many cases, were calibrated to streams and rivers not hydrologically similar to those discharging into PWS. In this work we adapted and validated a suite of high-resolution (in space and time), physically-based, distributed weather, snowmelt, and runoff-routing models designed specifically for snow melt- and glacier melt-dominated watersheds like PWS in order to: 1) provide high-resolution, real-time simulations of snowpack and FWD, and 2) provide a record of historical variations of FWD. SnowModel, driven with gridded topography, land cover, and 32 years (1979-2011) of 3-hourly North American Regional Reanalysis (NARR) atmospheric forcing data, was used to simulate snowpack accumulation and melt across a PWS model domain. SnowModel outputs of daily snow water equivalent (SWE) depth and grid-cell runoff volumes were then coupled with HydroFlow, a runoff-routing model which routed snowmelt, glacier-melt, and rainfall to each watershed outlet (PWS coastline) in the simulation domain. The end product was a continuous 32-year simulation of daily FWD into PWS. In order to validate the models, SWE and snow depths from SnowModel were compared with observed SWE and snow depths from SnoTel and snow survey data, and discharge from HydroFlow was compared with observed streamflow measurements. 
    As a second phase of this research effort, the coupled models will be set up to run in real time, with daily measurements from weather stations in the PWS region used to drive simulations of snow cover and streamflow. In addition, we will deploy a strategic array of instrumentation aimed at validating the simulated weather estimates and the calculations of freshwater discharge. Upon successful implementation and validation of the modeling system, it will join established and ongoing computational and observational efforts that share the common goal of establishing a comprehensive understanding of the physical behavior of PWS.

  6. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
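The decomposition described above amounts to folding a source emission spectrum through precomputed response matrices for each scenario component. A toy sketch of that folding step, with hypothetical 3-bin numbers that are not from the paper:

```python
def detector_response(G, source):
    """Fold a source spectrum through a precomputed Green's function matrix:
    G[i][j] is the probability that a photon emitted in energy bin j is
    detected in bin i, so the detector signal is the matrix-vector product."""
    return [sum(G[i][j] * source[j] for j in range(len(source)))
            for i in range(len(G))]

# Hypothetical 3-bin example: an upper-triangular G moves downscattered
# counts from high-energy bins into lower bins.
G = [[0.5, 0.25, 0.125],   # bin 0 (lowest energy) collects downscatter
     [0.0, 0.50, 0.250],
     [0.0, 0.00, 0.500]]   # bin 2 (highest energy): full-energy peak only
source = [100.0, 40.0, 16.0]   # emitted photons per bin
signal = detector_response(G, source)
```

Chaining such matrices for cargo, vehicle, and detector components reproduces the full transport result at a fraction of the cost, which is the essence of the method.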

  7. Solid rocket booster internal flow analysis by highly accurate adaptive computational methods

    NASA Technical Reports Server (NTRS)

    Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.

    1991-01-01

    The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.

  8. Precision cosmology with time delay lenses: High resolution imaging requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Xiao -Lei; Treu, Tommaso; Agnello, Adriano

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ``Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST), and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. 
    Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.

  9. Precision cosmology with time delay lenses: high resolution imaging requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Xiao-Lei; Liao, Kai; Treu, Tommaso

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ''Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST), and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. 
    However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.

  10. Introducing CGOLS: The Cholla Galactic Outflow Simulation Suite

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2018-06-01

    We present the Cholla Galactic OutfLow Simulations (CGOLS) suite, a set of extremely high resolution global simulations of isolated disk galaxies designed to clarify the nature of multiphase structure in galactic winds. Using the GPU-based code Cholla, we achieve unprecedented resolution in these simulations, modeling galaxies over a 20 kpc region at a constant resolution of 5 pc. The simulations include a feedback model designed to test the effects of different mass- and energy-loading factors on galactic outflows over kiloparsec scales. In addition to describing the simulation methodology in detail, we also present the results from an adiabatic simulation that tests the frequently adopted analytic galactic wind model of Chevalier & Clegg. Our results indicate that the Chevalier & Clegg model is a good fit to nuclear starburst winds in the nonradiative region of parameter space. Finally, we investigate the role of resolution and convergence in large-scale simulations of multiphase galactic winds. While our largest-scale simulations show convergence of observable features like soft X-ray emission, our tests demonstrate that simulations of this kind with resolutions coarser than 10 pc are not yet converged, confirming the need for extreme resolution in order to study the structure of winds and their effects on the circumgalactic medium.
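For reference, the Chevalier & Clegg model tested above ties the asymptotic wind speed to the energy and mass input rates, v_inf = sqrt(2 Edot/Mdot). A sketch using commonly quoted normalizations for supernova energy and mass injection per unit star formation; these numbers are assumptions for illustration, not values from the paper:

```python
import math

# Assumed normalizations (commonly quoted, not from the CGOLS paper):
E_SN_PER_SFR = 3.0e41    # erg/s of SN energy per (Msun/yr) of star formation
G_PER_MSUN_YR = 6.3e25   # grams per second in 1 Msun/yr

def terminal_velocity(alpha, beta, sfr=1.0):
    """v_inf = sqrt(2 * Edot / Mdot) in km/s for energy-loading factor alpha
    and mass-loading factor beta at star formation rate sfr (Msun/yr)."""
    edot = alpha * E_SN_PER_SFR * sfr     # erg/s
    mdot = beta * G_PER_MSUN_YR * sfr     # g/s
    return math.sqrt(2.0 * edot / mdot) / 1.0e5   # cm/s -> km/s

# Full thermalization and full mass loading give the familiar ~1000 km/s
# hot-wind speed; larger beta (heavier loading) slows the wind.
v = terminal_velocity(alpha=1.0, beta=1.0)
```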

  11. Advanced radiometric and interferometric milimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect and identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures, and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited by wavelength and aperture size, so where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made high-resolution applications on military platforms possible. Interferometry, or synthetic aperture radiometry, allows the creation of a high-resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  12. GFDL's unified regional-global weather-climate modeling system with variable resolution capability for severe weather predictions and regional climate simulations

    NASA Astrophysics Data System (ADS)

    Lin, S. J.

    2015-12-01

    The NOAA/Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable resolution capabilities that can be used for severe weather predictions (e.g., tornado outbreak events and category-5 hurricanes) and ultra-high-resolution (1-km) regional climate simulations within a consistent global modeling framework. The foundation of this flexible regional-global modeling system is the non-hydrostatic extension of the vertically Lagrangian dynamical core (Lin 2004, Monthly Weather Review), known in the community as FV3 (finite-volume on the cubed-sphere). Because of its flexibility and computational efficiency, FV3 is one of the final candidates for NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched (single) grid capability, a two-way (regional-global) multiple nested grid capability, and the combination of the stretched and two-way nests, so as to make convection-resolving regional climate simulation within a consistent global modeling system feasible on today's high-performance computing systems. One of our main scientific goals is to enable simulations of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, something previously regarded as impossible. In this presentation I will demonstrate that it is computationally feasible to simulate not only supercell thunderstorms but also the subsequent genesis of tornadoes using a global model that was originally designed for century-long climate simulations. As a unified weather-climate modeling system, the model has been evaluated at horizontal resolutions ranging from 1 km to as low as 200 km. 
    In particular, for downscaling studies, we have developed various tests to ensure that the large-scale circulation within the global variable-resolution system is well simulated while, at the same time, small-scale features are accurately captured within the targeted high-resolution region.

  13. Monte Carlo simulations of neutron-scattering instruments using McStas

    NASA Astrophysics Data System (ADS)

    Nielsen, K.; Lefmann, K.

    2000-06-01

    Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in instrument design now defeats purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt the program to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in areas such as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
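The flavor of such Monte Carlo instrument simulation can be conveyed by a toy example (not McStas itself): tracing neutrons through a uniform absorber by sampling exponentially distributed free paths and comparing the transmitted fraction to the analytic value.

```python
import math
import random

def transmitted_fraction(thickness, mfp, n=100_000, seed=42):
    """Monte Carlo transmission through a uniform slab: each neutron draws
    an exponentially distributed free path (mean free path mfp) and is
    transmitted if the path exceeds the slab thickness. The estimate
    converges to the analytic result exp(-thickness/mfp)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.expovariate(1.0 / mfp) > thickness)
    return hits / n

# One mean free path of material transmits about e^-1 ~ 36.8% of neutrons.
frac = transmitted_fraction(thickness=1.0, mfp=1.0)
```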

  14. North Atlantic Tropical Cyclones: historical simulations and future changes with the new high-resolution Arpege AGCM.

    NASA Astrophysics Data System (ADS)

    Pilon, R.; Chauvin, F.; Palany, P.; Belmadani, A.

    2017-12-01

    A new version of the variable high-resolution Meteo-France Arpege atmospheric general circulation model (AGCM) has been developed for tropical cyclone (TC) studies, with a focus on the North Atlantic basin, where the model horizontal resolution is 15 km. Ensemble historical AMIP (Atmospheric Model Intercomparison Project)-type simulations (1965-2014) and future projections (2020-2080) under the IPCC (Intergovernmental Panel on Climate Change) representative concentration pathway (RCP) 8.5 scenario have been produced. A TC-like vortex tracking algorithm is used to investigate TC activity and variability. TC frequency, genesis, geographical distribution, and intensity are examined. Historical simulations are compared to best-track and reanalysis datasets. Model TC frequency is generally realistic but tends to be too high during the first decade of the historical simulations. Biases appear to originate from both the tracking algorithm and the model climatology. Nevertheless, the model simulates extremely well the intense TCs corresponding to category-5 hurricanes in the North Atlantic, where the grid resolution is highest. Interaction between developing TCs and vertical wind shear is shown to be a contributing factor for TC variability. Future changes in TC activity and properties are also discussed.

  15. Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.

    2005-09-01

    We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET, and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed ``entropy conserving'' formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative (``adiabatic'') hydrodynamic gaseous component. We perform multiple simulations using both codes with varying spatial and mass resolution with identical initial conditions. The dark matter-only runs agree generally quite well provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement, otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree-algorithm of GADGET at early times before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gasdynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. 
However, there are also some significant differences in the same quantities for gas of lower overdensity. For example, at z=3 the fraction of cosmic gas that has temperature logT>0.5 is ~80% for both Enzo ZEUS and GADGET, while it is 40%-60% for Enzo PPM. We argue that these discrepancies are due to differences in the shock-capturing abilities of the different methods. In particular, we find that the ZEUS implementation of artificial viscosity in Enzo leads to some unphysical heating at early times in preshock regions. While this is apparently a significantly weaker effect in GADGET, its use of an artificial viscosity technique may also make it prone to some excess generation of entropy that should be absent in Enzo PPM. Overall, the hydrodynamical results for GADGET are bracketed by those for Enzo ZEUS and Enzo PPM but are closer to Enzo ZEUS.

  16. Los Angeles megacity: a high-resolution land–atmosphere modelling system for urban CO2 emissions

    DOE PAGES

    Feng, Sha; Lauvaux, Thomas; Newman, Sally; ...

    2016-07-22

    Megacities are major sources of anthropogenic fossil fuel CO2 (FFCO2) emissions. The spatial extents of these large urban systems cover areas of 10 000 km² or more with complex topography and changing landscapes. We present a high-resolution land–atmosphere modelling system for urban CO2 emissions over the Los Angeles (LA) megacity area. The Weather Research and Forecasting (WRF)-Chem model was coupled to a very high-resolution FFCO2 emission product, Hestia-LA, to simulate atmospheric CO2 concentrations across the LA megacity at spatial resolutions as fine as ~1 km. We evaluated multiple WRF configurations, selecting one that minimized errors in wind speed, wind direction, and boundary layer height as evaluated by its performance against meteorological data collected during the CalNex-LA campaign (May–June 2010). Our results show no significant difference between moderate-resolution (4 km) and high-resolution (1.3 km) simulations when evaluated against surface meteorological data, but the high-resolution configurations better resolved planetary boundary layer heights and vertical gradients in the horizontal mean winds. We coupled our WRF configuration with the Vulcan 2.2 (10 km resolution) and Hestia-LA (1.3 km resolution) fossil fuel CO2 emission products to evaluate the impact of the spatial resolution of the CO2 emission products and the meteorological transport model on the representation of spatiotemporal variability in simulated atmospheric CO2 concentrations. We find that high spatial resolution in the fossil fuel CO2 emissions is more important than in the atmospheric model for capturing CO2 concentration variability across the LA megacity. Finally, we present a novel approach that employs simultaneous correlations of the simulated atmospheric CO2 fields to qualitatively evaluate the greenhouse gas measurement network over the LA megacity. 
    Spatial correlations in the atmospheric CO2 fields reflect the coverage of individual measurement sites when a statistically significant number of sites observe emissions from a specific source or location. We conclude that elevated atmospheric CO2 concentrations over the LA megacity are composed of multiple fine-scale plumes rather than a single homogeneous urban dome. Furthermore, we conclude that FFCO2 emissions monitoring in the LA megacity requires FFCO2 emissions modelling with ~1 km resolution, because coarser-resolution emissions modelling tends to overestimate the observational constraints on the emissions estimates.
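The "simultaneous correlations" idea, judging overlap in network coverage from how strongly simulated CO2 signals at different locations co-vary, reduces at its core to correlating site time series. A toy sketch with hypothetical numbers (not data from the study):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equally sampled time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical hourly CO2 enhancements (ppm) at three sites: site_b sees a
# scaled version of the same plume as site_a; site_c sees unrelated sources.
site_a = [2.0, 5.0, 9.0, 4.0, 1.0]
site_b = [4.1, 10.2, 17.9, 8.0, 2.1]
site_c = [3.0, 1.0, 2.0, 6.0, 5.0]
r_ab = pearson(site_a, site_b)   # near 1: the sites' coverage overlaps
r_ac = pearson(site_a, site_c)   # low: the sites constrain different sources
```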

  17. Los Angeles megacity: a high-resolution land–atmosphere modelling system for urban CO2 emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Sha; Lauvaux, Thomas; Newman, Sally

Megacities are major sources of anthropogenic fossil fuel CO₂ (FFCO₂) emissions. The spatial extents of these large urban systems cover areas of 10,000 km² or more with complex topography and changing landscapes. We present a high-resolution land–atmosphere modelling system for urban CO₂ emissions over the Los Angeles (LA) megacity area. The Weather Research and Forecasting (WRF)-Chem model was coupled to a very high-resolution FFCO₂ emission product, Hestia-LA, to simulate atmospheric CO₂ concentrations across the LA megacity at spatial resolutions as fine as ~1 km. We evaluated multiple WRF configurations, selecting the one that minimized errors in wind speed, wind direction, and boundary layer height against meteorological data collected during the CalNex-LA campaign (May–June 2010). Our results show no significant difference between moderate-resolution (4 km) and high-resolution (1.3 km) simulations when evaluated against surface meteorological data, but the high-resolution configurations better resolved planetary boundary layer heights and vertical gradients in the horizontal mean winds. We coupled our WRF configuration with the Vulcan 2.2 (10 km resolution) and Hestia-LA (1.3 km resolution) fossil fuel CO₂ emission products to evaluate the impact of the spatial resolution of the emission products and of the meteorological transport model on the representation of spatiotemporal variability in simulated atmospheric CO₂ concentrations. We find that high spatial resolution in the fossil fuel CO₂ emissions is more important than in the atmospheric model for capturing CO₂ concentration variability across the LA megacity. Finally, we present a novel approach that employs simultaneous correlations of the simulated atmospheric CO₂ fields to qualitatively evaluate the greenhouse gas measurement network over the LA megacity.
Spatial correlations in the atmospheric CO₂ fields reflect the coverage of individual measurement sites when a statistically significant number of sites observe emissions from a specific source or location. We conclude that elevated atmospheric CO₂ concentrations over the LA megacity are composed of multiple fine-scale plumes rather than a single homogeneous urban dome. Furthermore, we conclude that FFCO₂ emissions monitoring in the LA megacity requires FFCO₂ emissions modelling at ~1 km resolution, because coarser-resolution emissions modelling tends to overestimate the observational constraints on the emissions estimates.
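The network evaluation idea above, correlating simulated CO₂ signals between candidate measurement sites to find overlapping coverage, can be illustrated with a minimal sketch. All names are hypothetical and the paper's actual statistical procedure is not reproduced here:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def redundant_pairs(series_by_site, threshold=0.8):
    """Site pairs whose simulated CO2 signals are strongly correlated,
    i.e. candidates for overlapping coverage of the same sources."""
    sites = sorted(series_by_site)
    return [(a, b) for i, a in enumerate(sites) for b in sites[i + 1:]
            if pearson(series_by_site[a], series_by_site[b]) >= threshold]
```

In this toy form, a pair of sites that see the same plume would report a correlation near 1 and be flagged as jointly constraining one source region.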

  18. Nowcasting for a high-resolution weather radar network

    NASA Astrophysics Data System (ADS)

    Ruzanski, Evan

Short-term prediction (nowcasting) of high-impact weather events can lead to significant improvements in warnings and advisories and is of great practical importance. Nowcasting using weather radar reflectivity data has been shown to be particularly useful. The Collaborative Adaptive Sensing of the Atmosphere (CASA) radar network provides high-resolution reflectivity data amenable to producing valuable nowcasts. The high-resolution nature of CASA data requires an efficient nowcasting approach, which motivated the development of the Dynamic Adaptive Radar Tracking of Storms (DARTS) and sinc kernel-based advection nowcasting methodology. This methodology was implemented operationally in the CASA Distributed Collaborative Adaptive Sensing (DCAS) system, in the robust and efficient manner demanded by the high-resolution data and by the distributed environment in which the nowcasting system operates. The system currently provides nowcasts up to 10 min ahead to support emergency-manager decision-making, and 1--5 min nowcasts to steer the CASA radar nodes toward advecting storm patterns for forecasters and researchers. Results of nowcasting performance during the 2009 CASA IP experiment are presented. Additionally, current state-of-the-art scale-based filtering methods were adapted and evaluated for use in the CASA DCAS to provide a scale-based analysis of nowcasting. DARTS was also incorporated into the Weather Support to Deicing Decision Making system to provide more accurate and efficient snow water equivalent nowcasts for aircraft deicing decision support, relative to the radar-based nowcasting method currently used in the operational system. Results of an evaluation using data collected from 2007--2008 by the Weather Service Radar-1988 Doppler (WSR-88D) located near Denver, Colorado, and at the National Center for Atmospheric Research Marshall Test Site near Boulder, Colorado, are presented.
DARTS was also used to study the short-term predictability of precipitation patterns depicted by high-resolution reflectivity data observed at microalpha (0.2--2 km) to mesobeta (20--200 km) scales by the CASA radar network. Additionally, DARTS was used to investigate the performance of nowcasting rainfall fields derived from specific differential phase estimates, which have been shown to provide more accurate and robust rainfall estimates compared to those made from radar reflectivity data.
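DARTS itself fits a linear flow model in the discrete Fourier domain and advects reflectivity with sinc-kernel interpolation; as a much simpler illustration of the advect-and-extrapolate idea behind such nowcasting, the sketch below (all names hypothetical, not the DARTS algorithm) estimates a single integer displacement by brute-force cross-correlation and extrapolates it forward:

```python
def shift(field, dy, dx):
    """Shift a 2-D field by (dy, dx) grid cells with periodic wrap-around."""
    h, w = len(field), len(field[0])
    return [[field[(i - dy) % h][(j - dx) % w] for j in range(w)] for i in range(h)]

def estimate_motion(prev, curr, max_d=3):
    """Estimate the integer displacement between two frames by maximizing
    the cross-correlation over candidate shifts."""
    h, w = len(prev), len(prev[0])
    best_score, best = None, (0, 0)
    for dy in range(-max_d, max_d + 1):
        for dx in range(-max_d, max_d + 1):
            s = shift(prev, dy, dx)
            score = sum(s[i][j] * curr[i][j] for i in range(h) for j in range(w))
            if best_score is None or score > best_score:
                best_score, best = score, (dy, dx)
    return best

def nowcast(curr, motion, steps=1):
    """Extrapolate the current frame along the estimated motion vector."""
    dy, dx = motion
    return shift(curr, dy * steps, dx * steps)
```

Operational methods estimate a spatially varying motion field and use sub-pixel interpolation; this fixed-displacement version only conveys the core extrapolation step.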

  19. Prospects for measuring supermassive black hole masses with future extremely large telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Do, Tuan; Wright, Shelley A.; Barth, Aaron J.

    2014-04-01

The next generation of giant segmented-mirror telescopes (>20 m) will enable us to observe galactic nuclei at much higher angular resolution and sensitivity than ever before. These capabilities will introduce a revolutionary shift in our understanding of the origin and evolution of supermassive black holes by enabling more precise black hole mass measurements in a mass range that is unreachable today. We present simulations and predictions of the observations of nuclei that will be made with the Thirty Meter Telescope (TMT) and the adaptive optics assisted integral-field spectrograph IRIS, which is capable of diffraction-limited spectroscopy from Z band (0.9 μm) to K band (2.2 μm). These simulations, for the first time, use realistic values for the sky, telescope, adaptive optics system, and instrument to determine the expected signal-to-noise ratio of a range of possible targets, spanning intermediate-mass black holes of ∼10^4 M_☉ to the most massive black holes known today of >10^10 M_☉. We find that IRIS will be able to observe Milky Way-mass black holes out to the distance of the Virgo Cluster, and will allow us to observe many more of the brightest cluster galaxies, where the most massive black holes are thought to reside. We also evaluate how well the kinematic moments of the velocity distributions can be constrained at the different spectral resolutions and plate scales designed for IRIS. We find that a spectral resolution of ∼8000 will be necessary to measure the masses of intermediate-mass black holes. By simulating the observations of galaxies found in Sloan Digital Sky Survey DR7, we find that over 10^5 massive black holes will be observable at distances 0.005 < z < 0.18 with the estimated sensitivity and angular resolution provided by access to Z-band (0.9 μm) spectroscopy from IRIS and the TMT adaptive optics system.
These observations will provide the most accurate dynamical measurements of black hole masses, enabling the study of the demography of massive black holes, the origin of the M_BH–σ and M_BH–L relations, and the evolution of black holes through cosmic time.

  20. A Novel Observation-Guided Approach for Evaluating Mesoscale Convective Systems Simulated by the DOE ACME Model

    NASA Astrophysics Data System (ADS)

    Feng, Z.; Ma, P. L.; Hardin, J. C.; Houze, R.

    2017-12-01

Mesoscale convective systems (MCSs) are the largest type of convective storm, developing when convection aggregates and induces mesoscale circulation features. Over North America, MCSs contribute over 60% of the total warm-season precipitation and over half of the extreme daily precipitation in the central U.S. Our recent study (Feng et al. 2016) found that the observed increases in springtime total and extreme rainfall in this region are dominated by increased frequency and intensity of long-lived MCSs*. To date, global climate models typically do not run at a resolution high enough to explicitly simulate individual convective elements and may not have adequate process representations for MCSs, resulting in a large deficiency in projecting changes in the frequency of extreme precipitation events in a future climate. In this study, we developed a novel observation-guided approach specifically designed to evaluate simulated MCSs in the Department of Energy's climate model, Accelerated Climate Modeling for Energy (ACME). The ACME model has advanced treatments for convection and subgrid variability and for this study is run at 25 km and 100 km grid spacings. We constructed a robust MCS database consisting of over 500 MCSs from three warm seasons of observations by applying a feature-tracking algorithm to 4-km resolution merged geostationary satellite and 3-D NEXRAD radar network data over the continental U.S. This high-resolution MCS database is then down-sampled to the 25 and 100 km ACME grids to re-characterize key MCS properties, and the feature-tracking algorithm is adapted to these adjusted characteristics to identify MCSs in the ACME model simulations. We demonstrate that this new analysis framework is useful for evaluating ACME's warm-season precipitation statistics associated with MCSs, and that it provides insights into the model process representations related to extreme precipitation events for future improvement. *Feng, Z., L. R. Leung, S. Hagos, R. A. Houze, C. D. Burleyson, and K. Balaguru (2016), More frequent intense and long-lived storms dominate the springtime trend in central US rainfall, Nat. Commun., 7, 13429, doi:10.1038/ncomms13429.
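Down-sampling the 4-km MCS database onto the 25 and 100 km model grids is, at heart, a regridding step. A minimal sketch of that step (simple non-overlapping block averaging, not the project's actual remapping code) might look like:

```python
def block_average(field, factor):
    """Down-sample a 2-D field by averaging non-overlapping factor x factor
    blocks, e.g. factor=6 takes a ~4 km grid toward a ~25 km grid."""
    h, w = len(field), len(field[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [field[ii][jj]
                     for ii in range(i, i + factor)
                     for jj in range(j, j + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

Block averaging preserves the domain mean, which is the property one wants when re-characterizing precipitation totals on a coarser grid; thresholded quantities (e.g. MCS area) would need the adjusted criteria the abstract describes.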

  1. Resolution requirements for aero-optical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mani, Ali; Wang Meng; Moin, Parviz

    2008-11-10

Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation; therefore, a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of an aero-optical computation for an optical beam passing through the turbulent wake of flow over a cylinder.
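The scaling at the heart of such a criterion can be conveyed with a rough sketch: for a k^(-5/3) inertial-range spectrum, the variance carried by scales smaller than the grid spacing falls off as a power of the cutoff wavenumber. This is only the scaling argument, not the paper's full error analysis, and the function name is hypothetical:

```python
def unresolved_fraction(delta, outer_scale):
    """Rough fraction of index-of-refraction variance left unresolved at grid
    spacing `delta`, assuming an inertial-range k^(-5/3) spectrum with the
    given outer scale (a scaling argument only, not a rigorous criterion)."""
    if not 0 < delta <= outer_scale:
        raise ValueError("need 0 < delta <= outer_scale")
    # The integral of k^(-5/3) from k0 to infinity scales as k0^(-2/3),
    # so the tail beyond the grid cutoff 1/delta carries ~(delta/L)^(2/3).
    return (delta / outer_scale) ** (2.0 / 3.0)
```

The point of the sketch is that halving the grid spacing only reduces the unresolved variance by a factor of 2^(2/3) ≈ 1.59, and that the fraction depends on delta/L rather than on the Reynolds number itself.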

  2. Real-time haptic cutting of high-resolution soft tissues.

    PubMed

    Wu, Jun; Westermann, Rüdiger; Dick, Christian

    2014-01-01

We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain; (2) a composite finite element formulation, which substantially reduces the number of simulation degrees of freedom and thus makes it possible to carefully balance simulation performance and accuracy; (3) a highly efficient geometric multigrid solver for the linear systems of equations arising from implicit time integration; (4) an efficient collision detection algorithm that effectively exploits the composition structure; and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, it has high potential to significantly advance the realism of surgery simulators.

  3. Dual-conjugate adaptive optics for wide-field high-resolution retinal imaging.

    PubMed

    Thaung, Jörgen; Knutsson, Per; Popovic, Zoran; Owner-Petersen, Mette

    2009-03-16

We present analysis and preliminary laboratory testing of a real-time dual-conjugate adaptive optics (DCAO) instrument for ophthalmology that will enable wide-field high resolution imaging of the retina in vivo. The setup comprises five retinal guide stars (GS) and two deformable mirrors (DM), one conjugate to the pupil and one conjugate to a plane close to the retina. The DCAO instrument has a closed-loop wavefront sensing wavelength of 834 nm and an imaging wavelength of 575 nm. It incorporates an array of collimator lenses to spatially filter the light from all guide stars using one adjustable iris, and images the Hartmann patterns of multiple reference sources on a single detector. Zemax simulations were performed at 834 nm and 575 nm with the Navarro 99 and the Liou-Brennan eye models. Two correction alternatives were evaluated: conventional single conjugate AO (SCAO, using one GS and a pupil DM) and DCAO (using multiple GS and two DM). Zemax simulations at 575 nm based on the Navarro 99 eye model show that the diameter of the corrected field of view for diffraction-limited imaging (Strehl ≥ 0.8) increases from 1.5 deg with SCAO to 6.5 deg using DCAO. The increase for the less stringent condition of a wavefront error of 1 rad or less (Strehl ≥ 0.37) is from 3 deg with SCAO to approximately 7.4 deg using DCAO. Corresponding results for the Liou-Brennan eye model are 3.1 deg (SCAO) and 8.2 deg (DCAO) for Strehl ≥ 0.8, and 4.8 deg (SCAO) and 9.6 deg (DCAO) for Strehl ≥ 0.37. Potential gain in corrected field of view with DCAO is confirmed both by laboratory experiments on a model eye and by preliminary in vivo imaging of a human eye. (c) 2009 Optical Society of America

  4. 2010 bathymetric survey and digital elevation model of Corte Madera Bay, California

    USGS Publications Warehouse

    Foxgrover, Amy C.; Finlayson, David P.; Jaffe, Bruce E.; Takekawa, John Y.; Thorne, Karen M.; Spragens, Kyle A.

    2011-01-01

A high-resolution bathymetric survey of Corte Madera Bay, California, was collected in early 2010 in support of a collaborative research project initiated by the San Francisco Bay Conservation and Development Commission and funded by the U.S. Environmental Protection Agency. The primary objective of the Innovative Wetland Adaptation in the Lower Corte Madera Creek Watershed Project is to develop shoreline adaptation strategies for future sea-level rise based upon sound science. Fundamental to this research was the development of an up-to-date, high-resolution digital elevation model (DEM) extending from the subtidal environment through the surrounding intertidal marsh. We provide bathymetric data collected by the U.S. Geological Survey and have merged the bathymetry with a 1-m resolution aerial lidar data set collected by the National Oceanic and Atmospheric Administration during the same time period to create a seamless, high-resolution DEM of Corte Madera Bay and the surrounding topography. The bathymetric and DEM surfaces are provided at both 1 m and 10 m resolution, formatted as X, Y, Z text files and ESRI Arc ASCII files, accompanied by Federal Geographic Data Committee compliant metadata.
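The merge of subtidal bathymetry with topographic lidar into one seamless surface can be sketched, in its simplest form, as a priority merge on co-registered grids. This toy version (hypothetical names; the actual USGS workflow involves interpolation and datum alignment) prefers surveyed bathymetry and falls back to lidar topography:

```python
def merge_dem(bathy, topo, nodata=-9999.0):
    """Seamless DEM sketch: take the bathymetric value where a survey sounding
    exists, otherwise fall back to the lidar topography at the same cell.
    Both inputs are co-registered 2-D grids; nodata marks unsurveyed cells."""
    return [[b if b != nodata else t for b, t in zip(brow, trow)]
            for brow, trow in zip(bathy, topo)]
```

Real merges also blend values in an overlap zone to avoid a step at the survey boundary; the hard switch here is only the conceptual core.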

  5. The fusion of satellite and UAV data: simulation of high spatial resolution band

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery acquired with satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a panchromatic band was simulated from the RGB data as a linear combination of the spectral channels. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
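The two processing steps described, simulating a panchromatic band as a linear combination of the RGB channels and then injecting its spatial detail into the multispectral bands, can be sketched as follows. Note that the detail-injection step here is a simple additive component substitution used as a stand-in for the Gram-Schmidt method the authors actually applied, and the channel weights are purely illustrative:

```python
def simulate_pan(r, g, b, w=(0.3, 0.5, 0.2)):
    """Simulate a panchromatic band as a per-pixel linear combination of the
    RGB channels (the weights here are illustrative, not calibrated)."""
    return [[w[0] * rv + w[1] * gv + w[2] * bv
             for rv, gv, bv in zip(rr, gr, br)]
            for rr, gr, br in zip(r, g, b)]

def inject_detail(ms_band, pan, pan_lowres):
    """Simplified component substitution: add the spatial detail (pan minus
    its low-resolution synthetic counterpart) to an upsampled multispectral
    band. A hedged stand-in for Gram-Schmidt pansharpening."""
    return [[m + (p - pl) for m, p, pl in zip(mr, pr, plr)]
            for mr, pr, plr in zip(ms_band, pan, pan_lowres)]
```

Where the high-resolution pan and its degraded counterpart agree, the multispectral band passes through unchanged; where the pan carries extra detail, that detail is transferred while the band's mean radiometry is preserved.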

  6. Approach to simultaneously denoise and invert backscatter and extinction from photon-limited atmospheric lidar observations.

    PubMed

    Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M

    2016-10-10

    Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
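The statistical core behind such Poisson inverse solvers is the Poisson log-likelihood of the photon counts; the actual retrieval adds spatial regularization on top of it to exploit the latent structure in images. A minimal, illustrative sketch of just the likelihood term (hypothetical names, not the paper's algorithm):

```python
import math

def poisson_nll(rate, counts):
    """Negative log-likelihood of i.i.d. Poisson photon counts at a given
    mean rate, dropping the rate-independent log(k!) term."""
    if rate <= 0:
        raise ValueError("rate must be positive")
    return sum(rate - k * math.log(rate) for k in counts)
```

For a constant rate this is minimized at the sample mean of the counts, which is why simple range-bin averaging works in uniform scenes; the paper's contribution is to keep this noise model while replacing brute averaging with image-domain regularization.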

  7. Simulations of Madden-Julian Oscillation in High Resolution Atmospheric General Circulation Model

    NASA Astrophysics Data System (ADS)

    Deng, Liping; Stenchikov, Georgiy; McCabe, Matthew; Bangalath, HamzaKunhu; Raj, Jerry; Osipov, Sergey

    2014-05-01

The simulation of tropical signals, especially the Madden-Julian Oscillation (MJO), is one of the major deficiencies in current numerical models. The unrealistic features in MJO simulations include weak amplitude, excess power at higher frequencies, displacement of the temporal and spatial distributions, eastward propagation that is too fast, and a lack of coherent structure in the eastward propagation from the Indian Ocean to the Pacific (e.g., Slingo et al. 1996). While some improvement in simulating MJO variance and coherent eastward propagation has been attributed to model physics, the model mean background state, and air-sea interaction, studies have shown that model resolution, especially higher horizontal resolution, may play an important role in producing a more realistic simulation of the MJO (e.g., Sperber et al. 2005). In this study, we employ unique high-resolution (25-km) simulations conducted using the Geophysical Fluid Dynamics Laboratory global High Resolution Atmospheric Model (HIRAM) to evaluate the MJO simulation against the European Centre for Medium-Range Weather Forecasts (ECMWF) Interim re-analysis (ERAI) dataset. We specifically focus on the ability of the model to represent the MJO-related amplitude, spatial distribution, eastward propagation, and horizontal and vertical structures. Additionally, as the HIRAM output covers not only a historical period (1979-2012) but also a future period (2012-2050), the impact of future climate change on the MJO is illustrated. The possible changes in intensity and frequency of extreme weather and climate events (e.g., strong wind and heavy rainfall) in the western Pacific, the Indian Ocean and the Middle East North Africa (MENA) region are highlighted.

  8. Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain

    NASA Astrophysics Data System (ADS)

    Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.

    2018-04-01

High resolution satellites with longer focal lengths and larger apertures have been widely used for georeferencing observed scenes in recent years. We present a consistent end-to-end model of the high resolution remote sensing satellite geometric chain, consisting of the scene, the three-line-array camera, the platform (including attitude and position information), the time system, and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is validated by rigorously simulating geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, highly consistent with the test results, and that the integrated design improves the geolocation accuracy by about 7 m. The model, combined with the simulation method, is applicable to geolocation accuracy estimation before satellite launch.

  9. Adaptive Microwave Staring Correlated Imaging for Targets Appearing in Discrete Clusters.

    PubMed

    Tian, Chao; Jiang, Zheng; Chen, Weidong; Wang, Dongjin

    2017-10-21

Microwave staring correlated imaging (MSCI) can achieve ultra-high resolution in real aperture staring radar imaging using the correlated imaging process (CIP) under all-weather and all-day circumstances. The CIP must combine the received echo signal with the temporal-spatial stochastic radiation field. However, a precondition of the CIP is that the continuous imaging region must be discretized to a fine grid and the measurement matrix accurately computed, which makes the imaging process highly complex when the MSCI system observes a wide area. This paper proposes an adaptive imaging approach for targets in discrete clusters to reduce the complexity of the CIP. The approach is divided into two main stages. First, as discrete clustered targets are distributed in different range strips of the imaging region, the transmitters of the MSCI system emit narrow-pulse waveforms to separate the echoes of targets in different strips in the time domain; using spectral entropy, a modified method robust against noise is put forward to detect the echoes of the discrete clustered targets, based on which the strips with targets can be adaptively located. Second, in a strip with targets, the matched filter reconstruction algorithm is used to locate the regions with targets, and only the regions of interest are discretized to a fine grid; sparse recovery with band exclusion is then used to maintain the non-correlation of the dictionary. Simulation results are presented to demonstrate that the proposed approach can accurately and adaptively locate the regions with targets and obtain high-quality reconstructed images.
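Spectral entropy, as used above for echo detection, measures how concentrated a signal's power spectrum is: a structured target echo concentrates power in few frequency bins (low entropy), while noise spreads it flatly (entropy near 1). A minimal sketch, using an O(n^2) DFT for clarity and not reproducing the paper's modified, noise-robust variant:

```python
import cmath
import math

def spectral_entropy(x):
    """Normalized spectral entropy of a real signal: near 1 for a flat
    (noise-like) spectrum, near 0 when power sits in a few frequency bins."""
    n = len(x)
    power = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                     for t in range(n))) ** 2 for k in range(n)]
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log(p) for p in probs) / math.log(n)
```

A detector along these lines would threshold the entropy of each range strip's echo: strips whose entropy drops well below the noise-floor value of ~1 are flagged as containing targets.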

  10. High-Resolution Structure and Mechanism of an F/V-Hybrid Rotor Ring in a Na+-coupled ATP Synthase

    PubMed Central

    Matthies, Doreen; Zhou, Wenchang; Klyszejko, Adriana L.; Anselmi, Claudio; Yildiz, Özkan; Brandt, Karsten; Müller, Volker; Faraldo-Gómez, José D.; Meier, Thomas

    2014-01-01

    All rotary ATPases catalyze the interconversion of ATP and ADP-Pi through a mechanism that is coupled to the transmembrane flow of H+ or Na+. Physiologically, however, F/A-type enzymes specialize in ATP synthesis driven by downhill ion diffusion, while eukaryotic V-type ATPases function as ion pumps. To begin to rationalize the molecular basis for this functional differentiation, we solved the crystal structure of the Na+-driven membrane rotor of the Acetobacterium woodii ATP synthase, at 2.1 Å resolution. Unlike known structures, this rotor ring is a 9:1 heteromer of F- and V-type c-subunits, and therefore features a hybrid configuration of ion-binding sites along its circumference. Molecular and kinetic simulations are used to dissect the mechanisms of Na+ recognition and rotation of this c-ring, and to explain the functional implications of the V-type c-subunit. These structural and mechanistic insights indicate an evolutionary path between synthases and pumps involving adaptations in the rotor ring. PMID:25381992

  11. Coronagraphic mask design using Hermite functions.

    PubMed

    Cagigal, Manuel P; Canales, Vidal F; Valle, Pedro J; Oti, José E

    2009-10-26

We introduce a stellar coronagraph that uses a coronagraphic mask described by a Hermite function or a combination of them. It allows the detection of exoplanets providing both deep starlight extinction and high angular resolution. This angular resolution depends on the order of the Hermite function used. An analysis of the coronagraph performance is carried out for different even order masks. Numerical simulations of the ideal case, with no phase errors and perfect telescope pointing, show that on-axis starlight is reduced to very low intensity levels corresponding to a gain of at least 25 magnitudes (a 10^-10 reduction in light intensity). The coronagraphic throughput depends on the Hermite function or combination selected. The proposed mask series presents the same advantages as band limited masks along with the benefit of reducing the light diffracted by the mask border thanks to its particular shape. Nevertheless, for direct detection of Earth-like exoplanets it requires the use of adaptive optics facilities for compensating the perturbations introduced by the atmosphere and by the optical system.
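The masks in question are built from Hermite functions, i.e. Hermite polynomials weighted by a Gaussian and normalized. A minimal sketch for evaluating them (physicists' convention, via the standard three-term recurrence; not the authors' mask-design code):

```python
import math

def hermite_poly(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term recurrence
    H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)."""
    h_prev, h_curr = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    return h_curr

def hermite_function(n, x):
    """Normalized Hermite function psi_n(x) = H_n(x) exp(-x^2/2) /
    sqrt(2^n n! sqrt(pi)); even n gives the symmetric profiles used for masks."""
    norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return hermite_poly(n, x) * math.exp(-x * x / 2.0) / norm
```

Even orders are symmetric about the optical axis, which is why the abstract analyzes even order masks; the Gaussian factor is what tapers the mask edge and reduces diffraction from the border.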

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang

Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed to both the lack of spatial resolution in the models, and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environment (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach, and of very recently developed measures, the finite-amplitude wave activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence of extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.

  13. Automated segmentation of retinal pigment epithelium cells in fluorescence adaptive optics images.

    PubMed

    Rangel-Fonseca, Piero; Gómez-Vieyra, Armando; Malacara-Hernández, Daniel; Wilson, Mario C; Williams, David R; Rossi, Ethan A

    2013-12-01

    Adaptive optics (AO) imaging methods allow the histological characteristics of retinal cell mosaics, such as photoreceptors and retinal pigment epithelium (RPE) cells, to be studied in vivo. The high-resolution images obtained with ophthalmic AO imaging devices are rich with information that is difficult and/or tedious to quantify using manual methods. Thus, robust, automated analysis tools that can provide reproducible quantitative information about the cellular mosaics under examination are required. Automated algorithms have been developed to detect the position of individual photoreceptor cells; however, most of these methods are not well suited for characterizing the RPE mosaic. We have developed an algorithm for RPE cell segmentation and show its performance here on simulated and real fluorescence AO images of the RPE mosaic. Algorithm performance was compared to manual cell identification and yielded better than 91% correspondence. This method can be used to segment RPE cells for morphometric analysis of the RPE mosaic and speed the analysis of both healthy and diseased RPE mosaics.

  14. Towards Direct Simulation of Future Tropical Cyclone Statistics in a High-Resolution Global Atmospheric Model

    DOE PAGES

    Wehner, Michael F.; Bala, G.; Duffy, Phillip; ...

    2010-01-01

We present a set of high-resolution global atmospheric general circulation model (AGCM) simulations focusing on the model's ability to represent tropical storms and their statistics. We find that the model produces storms of hurricane strength with realistic dynamical features. We also find that tropical storm statistics are reasonable, both globally and in the north Atlantic, when compared to recent observations. The sensitivity of simulated tropical storm statistics to increases in sea surface temperature (SST) is also investigated, revealing that a credible late 21st century SST increase produced increases in simulated tropical storm numbers and intensities in all ocean basins. While this paper supports previous high-resolution model and theoretical findings that the frequency of very intense storms will increase in a warmer climate, it differs notably from previous medium and high-resolution model studies that show a global reduction in total tropical storm frequency. However, we are quick to point out that this particular model finding remains speculative due to a lack of radiative forcing changes in our time-slice experiments as well as a focus on the Northern hemisphere tropical storm seasons.

  15. Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2002-09-01

We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that includes a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss that occurs when imaging in wide-field-of-view (FOV) modes.

  16. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  17. Nested high-resolution large-eddy simulations in WRF to support wind power

    NASA Astrophysics Data System (ADS)

    Mirocha, J.; Kirkil, G.; Kosovic, B.; Lundquist, J. K.

    2009-12-01

    The WRF model’s grid nesting capability provides a potentially powerful framework for simulating flow over a wide range of scales. One such application is computation of realistic inflow boundary conditions for large eddy simulations (LES) by nesting LES domains within mesoscale domains. While nesting has been widely and successfully applied at GCM to mesoscale resolutions, the WRF model’s nesting behavior at the high-resolution (Δx < 1000m) end of the spectrum is less well understood. Nesting LES within mesoscale domains can significantly improve turbulent flow prediction at the scale of a wind park, providing a basis for superior site characterization, or for improved simulation of turbulent inflows encountered by turbines. We investigate WRF’s grid nesting capability at high mesh resolutions using nested mesoscale and large-eddy simulations. We examine the spatial scales required for flow structures to equilibrate to the finer mesh as flow enters a nest, and how the process depends on several parameters, including grid resolution, turbulence subfilter stress models, relaxation zones at nest interfaces, flow velocities, surface roughnesses, terrain complexity and atmospheric stability. Guidance on appropriate domain sizes and turbulence models for LES in light of these results is provided. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-416482).

  18. Structure of High Latitude Currents in Magnetosphere-Ionosphere Models

    NASA Astrophysics Data System (ADS)

    Wiltberger, M.; Rigler, E. J.; Merkin, V.; Lyon, J. G.

    2017-03-01

    Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model we examine the structure of the high latitude field-aligned current patterns. Each resolution was run for the entire Whole Heliosphere Interval which contained two high speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking the patterns obtained agree well with results obtained from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases the currents become more intense and narrow. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths also results in the cross polar cap potential (CPCP) pattern being concentrated at higher latitudes. Current-voltage relationships between the R1 currents and the CPCP are quite similar at the higher resolutions, indicating that the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.

  19. Structure of high latitude currents in global magnetospheric-ionospheric models

    USGS Publications Warehouse

    Wiltberger, M; Rigler, E. J.; Merkin, V; Lyon, J. G

    2016-01-01

    Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model we examine the structure of the high latitude field-aligned current patterns. Each resolution was run for the entire Whole Heliosphere Interval which contained two high speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking the patterns obtained agree well with results obtained from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases the currents become more intense and narrow. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths also results in the cross polar cap potential (CPCP) pattern being concentrated at higher latitudes. Current-voltage relationships between the R1 currents and the CPCP are quite similar at the higher resolutions, indicating that the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.

  20. Computational adaptive optics for broadband optical interferometric tomography of biological tissue

    NASA Astrophysics Data System (ADS)

    Boppart, Stephen A.

    2015-03-01

    High-resolution real-time tomography of biological tissues is important for many areas of biological investigations and medical applications. Cellular level optical tomography, however, has been challenging because of the compromise between transverse imaging resolution and depth-of-field, the system and sample aberrations that may be present, and the low imaging sensitivity deep in scattering tissues. The use of computed optical imaging techniques has the potential to address several of these long-standing limitations and challenges. Two related techniques are interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO). Through three-dimensional Fourier-domain resampling, in combination with high-speed OCT, ISAM can be used to achieve high-resolution in vivo tomography with enhanced depth sensitivity over a depth-of-field extended by more than an order-of-magnitude, in real-time. Subsequently, aberration correction with CAO can be performed in a tomogram, rather than to the optical beam of a broadband optical interferometry system. Based on principles of Fourier optics, aberration correction with CAO is performed on a virtual pupil using Zernike polynomials, offering the potential to augment or even replace the more complicated and expensive adaptive optics hardware with algorithms implemented on a standard desktop computer. Interferometric tomographic reconstructions are characterized with tissue phantoms containing sub-resolution scattering particles, and in both ex vivo and in vivo biological tissue. This review will collectively establish the foundation for high-speed volumetric cellular-level optical interferometric tomography in living tissues.

  1. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means either an enormous number of grid cells or a coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  2. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically-derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.
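    The propagation step described above can be illustrated with a deliberately minimal stand-in: plain Monte Carlo sampling through a two-element Windkessel lumped-parameter model. This is far simpler than the coronary LPNs and multi-resolution chaos expansion the abstract describes, and the parameter distributions below are invented purely for illustration.

    ```python
    import numpy as np

    def windkessel_pressure(R, C, Q=5.0, P0=80.0, T=100.0, dt=0.05):
        """Two-element Windkessel ODE, C dP/dt = Q - P/R, integrated with
        forward Euler; a toy stand-in for a full coronary LPN."""
        P = P0
        for _ in range(int(T / dt)):
            P += dt * (Q - P / R) / C
        return P

    rng = np.random.default_rng(0)
    # Illustrative (invented) uncertainty in resistance and compliance.
    R_samples = rng.normal(20.0, 2.0, 1000)
    C_samples = rng.normal(1.5, 0.15, 1000)

    # Propagate the input uncertainty to the output pressure.
    P_out = np.array([windkessel_pressure(R, C)
                      for R, C in zip(R_samples, C_samples)])
    mean_P, std_P = P_out.mean(), P_out.std()
    ```

    A chaos-expansion approach would replace the brute-force sampling loop with a polynomial surrogate, which is what makes steep stochastic responses affordable.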

  3. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitation events to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
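    The three interpolation schemes can be sketched on synthetic data. The mass values and times below are hypothetical stand-ins for significant mass changes selected by an AWAT-style threshold; the point is only how step, linear, and spline interpolation differ when fluxes are needed at fine output resolution.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical significant mass changes (kg) at their detection times (min).
    t_sig = np.array([0.0, 30.0, 60.0, 90.0])
    m_sig = np.array([100.0, 100.2, 99.9, 99.7])

    t_out = np.linspace(0.0, 90.0, 91)  # 1 min output resolution

    # Step scheme: hold each significant value until the next one.
    idx = np.searchsorted(t_sig, t_out, side="right") - 1
    m_step = m_sig[idx]

    # Linear scheme: straight segments between significant points.
    m_lin = np.interp(t_out, t_sig, m_sig)

    # Spline scheme: continuously differentiable, so flux rates
    # (dm/dt, i.e. P or ET) are defined at any output resolution.
    spl = CubicSpline(t_sig, m_sig)
    m_spl = spl(t_out)
    flux = spl(t_out, 1)  # first derivative = net mass flux rate
    ```

    All three schemes reproduce the significant points exactly; the differences appear only in the inferred flux rates between them, which is why the step scheme breaks down at sub-daily resolution.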

  4. Evaluation of unrestrained replica-exchange simulations using dynamic walkers in temperature space for protein structure refinement.

    PubMed

    Olson, Mark A; Lee, Michael S

    2014-01-01

    A central problem of computational structural biology is the refinement of modeled protein structures taken from either comparative modeling or knowledge-based methods. Simulations are commonly used to achieve higher resolution of the structures at the all-atom level, yet methodologies that consistently yield accurate results remain elusive. In this work, we provide an assessment of an adaptive temperature-based replica exchange simulation method where the temperature clients dynamically walk in temperature space to enrich their population and exchanges near steep energetic barriers. This approach is compared to earlier work of applying the conventional method of static temperature clients to refine a dataset of conformational decoys. Our results show that, while an adaptive method has many theoretical advantages over a static distribution of client temperatures, only limited improvement was gained from this strategy in excursions of the downhill refinement regime leading to an increase in the fraction of native contacts. To illustrate the sampling differences between the two simulation methods, energy landscapes are presented along with their temperature client profiles.
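    Both the static and the adaptive variants compared above rest on the same underlying Metropolis swap criterion between neighboring temperature replicas; only the placement of the temperatures differs. A minimal sketch of that shared criterion (k_B = 1 units; the function name and injectable random number are illustrative):

    ```python
    import math
    import random

    def exchange_accepted(E_i, E_j, T_i, T_j, u=None):
        """Metropolis criterion for swapping configurations between replicas
        at temperatures T_i and T_j: accept with probability
        min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
        beta_i, beta_j = 1.0 / T_i, 1.0 / T_j
        delta = (beta_i - beta_j) * (E_i - E_j)
        if delta >= 0.0:
            return True  # swap always accepted when it lowers no ensemble's energy
        u = random.random() if u is None else u  # u injectable for testing
        return u < math.exp(delta)
    ```

    An adaptive scheme, as assessed in this paper, moves the T_i values during the run to concentrate replicas near steep energetic barriers where this acceptance probability would otherwise collapse.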

  5. Simulating Future Changes in Spatio-temporal Precipitation by Identifying and Characterizing Individual Rainstorm Events

    NASA Astrophysics Data System (ADS)

    Chang, W.; Stein, M.; Wang, J.; Kotamarthi, V. R.; Moyer, E. J.

    2015-12-01

    A growing body of literature suggests that human-induced climate change may cause significant changes in precipitation patterns, which could in turn influence future flood levels and frequencies and water supply and management practices. Although climate models produce full three-dimensional simulations of precipitation, analyses of model precipitation have focused either on time-averaged distributions or on individual time series with no spatial information. We describe here a new approach based on identifying and characterizing individual rainstorms in either data or model output. Our approach enables us to readily characterize important spatio-temporal aspects of rainstorms including initiation location, intensity (mean and patterns), spatial extent, duration, and trajectory. We apply this technique to high-resolution precipitation over the continental U.S. both from radar-based observations (NCEP Stage IV QPE product, 1-hourly, 4 km spatial resolution) and from model runs with dynamical downscaling (WRF regional climate model, 3-hourly, 12 km spatial resolution). In the model studies we investigate the changes in storm characteristics under a business-as-usual warming scenario to 2100 (RCP 8.5). We find that in these model runs, rainstorm intensity increases as expected with rising temperatures (approximately 7%/K, following increased atmospheric moisture content), while total precipitation increases by a lesser amount (3%/K), consistent with other studies. We identify for the first time the necessary compensating mechanism: in these model runs, individual precipitation events become smaller. Other aspects are approximately unchanged in the warmer climate. Because these spatio-temporal changes in rainfall patterns would impact regional hydrology, it is important that they be accurately incorporated into any impacts assessment.
For this purpose we have developed a methodology for producing scenarios of future precipitation that combine observational data and model-projected changes. We statistically describe the future changes in rainstorm characteristics suggested by the WRF model and apply those changes to observational data. The resulting high spatial and temporal resolution scenarios have immediate applications for impacts assessment and adaptation studies.
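    The compensating mechanism identified above follows from simple arithmetic on the two quoted scaling rates. A back-of-envelope check, assuming the scalings combine multiplicatively and taking an illustrative warming of 4 K:

    ```python
    # If mean event intensity rises ~7 %/K (Clausius-Clapeyron-like) while
    # total precipitation rises only ~3 %/K, the product of event frequency
    # and spatial extent must shrink by a factor (1.03/1.07) per kelvin.
    dT = 4.0  # assumed warming (K); illustrative, not from the abstract
    intensity_factor = 1.07 ** dT
    total_factor = 1.03 ** dT
    extent_freq_factor = total_factor / intensity_factor
    change_pct = (extent_freq_factor - 1.0) * 100.0  # roughly -14% at dT = 4 K
    ```

    Since the study reports that event frequency is roughly unchanged, this shrinkage shows up as smaller individual events, exactly as the abstract states.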

  6. Separation of Evans and Hiro currents in VDE of tokamak plasma

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.; Svidzinski, V. A.; Zakharov, L. E.

    2014-10-01

    Progress on the Disruption Simulation Code (DSC-3D) development and benchmarking will be presented. DSC-3D is a one-fluid nonlinear time-dependent MHD code, which utilizes fully 3D toroidal geometry for the first wall, pure vacuum and plasma itself, with adaptation to the moving plasma boundary and accurate resolution of the plasma surface current. Suppression of the fast magnetosonic time scale by neglecting plasma inertia will be demonstrated. Due to the code's adaptive nature, self-consistent plasma surface current modeling during the non-linear dynamics of the Vertical Displacement Event (VDE) is accurately provided. Separation of the plasma surface current into Evans and Hiro currents during simulation of a fully developed VDE, when the plasma touches the in-vessel tiles, will be discussed. Work is supported by the US DOE SBIR Grant # DE-SC0004487.

  7. High resolution observations: The state of the art and beyond

    NASA Technical Reports Server (NTRS)

    Title, A.; Tarbell, T.; Shine, R.; Topka, K.; Frank, Z.

    1992-01-01

    The meaning of high resolution and its scientific importance with regard to solar observations is discussed. The state of the art is reviewed, looking into Solar Optical Universal Polarimeter (SOUP) observations, image selection techniques, and adaptive optics. It is concluded that until there are observations in space, complete understanding of processes in the solar photosphere, chromosphere, transition region, and corona will be impossible. The importance of high resolution is considered with regard to solar surface and convection, solar photosphere inside and outside magnetic fields, and sunspot geometry.

  8. Simulations of kinetic electrostatic electron nonlinear (KEEN) waves with variable velocity resolution grids and high-order time-splitting

    NASA Astrophysics Data System (ADS)

    Afeyan, Bedros; Casas, Fernando; Crouseilles, Nicolas; Dodhy, Adila; Faou, Erwan; Mehrenberger, Michel; Sonnendrücker, Eric

    2014-10-01

    KEEN waves are non-stationary, nonlinear, self-organized asymptotic states in Vlasov plasmas. They lie outside the precepts of linear theory or perturbative analysis, unlike electron plasma waves or ion acoustic waves. Steady state, nonlinear constructs such as BGK modes also do not apply. The range in velocity that is strongly perturbed by KEEN waves depends on the amplitude and duration of the ponderomotive force generated by two crossing laser beams, for instance, used to drive them. Smaller amplitude drives manage to devolve into multiple highly-localized vorticlets, after the drive is turned off, and may eventually succeed to coalesce into KEEN waves. Fragmentation once the drive stops, and potential eventual remerger, is a hallmark of the weakly driven cases. A fully formed (more strongly driven) KEEN wave has one dominant vortical core. But it also involves fine scale complex dynamics due to shedding and merging of smaller vortical structures with the main one. Shedding and merging of vorticlets are involved in either case, but at different rates and with different relative importance. The narrow velocity range in which one must maintain sufficient resolution in the weakly driven cases, challenges fixed velocity grid numerical schemes. What is needed is the capability of resolving locally in velocity while maintaining a coarse grid outside the highly perturbed region of phase space. We here report on a new Semi-Lagrangian Vlasov-Poisson solver based on conservative non-uniform cubic splines in velocity that tackles this problem head on. An additional feature of our approach is the use of a new high-order time-splitting scheme which allows much longer simulations per computational effort. This is needed for low amplitude runs. There, global coherent structures take a long time to set up, such as KEEN waves, if they do so at all. The new code's performance is compared to uniform grid simulations and the advantages are quantified. 
The birth pains associated with weakly driven KEEN waves are captured in these simulations. Canonical KEEN waves with ample drive are also treated using these advanced techniques. They will allow the efficient simulation of KEEN waves in multiple dimensions, which will be tackled next, as well as generalizations to Vlasov-Maxwell codes. These are essential for pursuing the impact of KEEN waves in high energy density plasmas and in inertial confinement fusion applications. More generally, one needs a fully-adaptive grid-in-phase-space method which could handle all small vorticlet dynamics whether peeling off or remerging. Such fully adaptive grids would have to be computed sparsely in order to be viable. This two-velocity grid method is a concrete and fruitful step in that direction. Contribution to the Topical Issue "Theory and Applications of the Vlasov Equation", edited by Francesco Pegoraro, Francesco Califano, Giovanni Manfredi and Philip J. Morrison.
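    The core idea of a locally refined velocity grid with spline-based semi-Lagrangian advection can be sketched in one dimension. This is an interpolatory toy, not the conservative non-uniform cubic-spline scheme of the paper, and the window location and grid sizes below are assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Non-uniform velocity grid: fine only where KEEN activity is expected
    # (an assumed window around a hypothetical phase velocity), coarse elsewhere.
    coarse = np.linspace(-6.0, 6.0, 61)   # dv = 0.2 background grid
    fine = np.linspace(1.0, 2.0, 101)     # dv = 0.01 inside the window
    v = np.sort(np.concatenate(
        [coarse[(coarse < 1.0 - 1e-9) | (coarse > 2.0 + 1e-9)], fine]))

    # One backward semi-Lagrangian step of df/dt + E df/dv = 0: trace the
    # characteristic back to v - E*dt and interpolate f there with a cubic
    # spline built on the non-uniform grid.
    f = np.exp(-v**2 / 2.0)  # initial Maxwellian
    E, dt = 0.2, 0.1
    f_new = CubicSpline(v, f)(v - E * dt)
    ```

    The refinement buys fine resolution only over the strongly perturbed velocity range, which is exactly what fixed uniform grids cannot do affordably in the weakly driven cases.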

  9. Mesoscale eddies in a high resolution OGCM and coupled ocean-atmosphere GCM

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Liu, H.; Lin, P.

    2017-12-01

    The present study described high-resolution climate modeling efforts including oceanic, atmospheric and coupled general circulation model (GCM) development at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics (IAP). The high-resolution OGCM is established based on the latest version of the LASG/IAP Climate system Ocean Model (LICOM2.1), but its horizontal resolution and vertical resolution are increased to 1/10° and 55 layers, respectively. Forced by the surface fluxes from the reanalysis and observed data, the model has been integrated for more than 80 model years. Compared with the simulation of the coarse-resolution OGCM, the eddy-resolving OGCM not only better simulates the spatial-temporal features of mesoscale eddies and the paths and positions of western boundary currents but also reproduces the large meander of the Kuroshio Current and its interannual variability. In addition, the complex structures of the equatorial Pacific currents and the currents in the coastal ocean of China are better captured due to the increased horizontal and vertical resolution. We then coupled the high resolution OGCM to NCAR CAM4 at 25 km resolution, in which the mesoscale air-sea interaction processes are better captured.

  10. Does Explosive Nuclear Burning Occur in Tidal Disruption Events of White Dwarfs by Intermediate-mass Black Holes?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanikawa, Ataru; Sato, Yushi; Hachisu, Izumi

    We investigate nucleosynthesis in tidal disruption events (TDEs) of white dwarfs (WDs) by intermediate-mass black holes. We consider various types of WDs with different masses and compositions by means of three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations. We model these WDs with different numbers of SPH particles, N, from a few 10^4 to a few 10^7 in order to check mass resolution convergence, where SPH simulations with N > 10^7 (or a space resolution of several 10^6 cm) have unprecedentedly high resolution in this kind of simulation. We find that nuclear reactions become less active with increasing N and that these nuclear reactions are excited by spurious heating due to low resolution. Moreover, we find no shock wave generation. In order to investigate the reason for the absence of a shock wave, we additionally perform one-dimensional (1D) SPH and mesh-based simulations with a space resolution ranging from 10^4 to 10^7 cm, using a characteristic flow structure extracted from the 3D SPH simulations. We find shock waves in these 1D high-resolution simulations, one of which triggers a detonation wave. However, we must be careful of the fact that, if the shock wave emerged in an outer region, it could not trigger the detonation wave due to low density. Note that the 1D initial conditions lack the accuracy to precisely determine where a shock wave emerges. We need to perform 3D simulations with ≲10^6 cm space resolution in order to conclude that WD TDEs become optical transients powered by radioactive nuclei.

  11. High resolution simulations of orographic flow over a complex terrain on the Southeast coast of Brazil

    NASA Astrophysics Data System (ADS)

    Chou, S. C.; Zolino, M. M.; Gomes, J. L.; Bustamante, J. F.; Lima-e-Silva, P. P.

    2012-04-01

    The Eta Model has been used operationally by CPTEC to produce weather forecasts over South America since 1997, and has gone through several upgrades. In order to prepare the model for operational higher resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil, with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting: it is driven by Eta-15km, which in its turn is driven by Era-Interim reanalyses. The model domain includes the two Brazilian cities Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain and coastline. Mountains rise up to about 700 m, and the region suffers frequent events of floods and landslides. The objective of this work is to evaluate high resolution simulations of wind and temperature in this complex area. Verification of model runs uses observations taken from the nuclear power plant; accurate near-surface wind direction and magnitude are needed for the plant emergency plan, and winds are highly sensitive to model spatial resolution and atmospheric stability. Verification of two cases during summer shows that the model has a clear diurnal cycle signal for wind in that region. The area is characterized by weak winds, which makes the simulation more difficult. The simulated wind magnitude is about 1.5 m/s, which is close to observations of about 2 m/s; however, the observed change of wind direction of the sea breeze is fast whereas it is slow in the simulations. Nighttime katabatic flow is captured by the simulations. Comparison against Eta-5km runs shows that the valley circulation is better described in the 2-km resolution run. Simulated temperatures follow closely the observed diurnal cycle. Experiments improving some surface conditions, such as the surface temperature and land cover, show reduced simulation error and an improved diurnal cycle.

  12. High resolution regional climate simulation of the Hawaiian Islands - Validation of the historical run from 2003 to 2012

    NASA Astrophysics Data System (ADS)

    Xue, L.; Newman, A. J.; Ikeda, K.; Rasmussen, R.; Clark, M. P.; Monaghan, A. J.

    2016-12-01

    A high-resolution (a 1.5 km grid spacing domain nested within a 4.5 km grid spacing domain) 10-year regional climate simulation over the entire Hawaiian archipelago is being conducted at the National Center for Atmospheric Research (NCAR) using the Weather Research and Forecasting (WRF) model version 3.7.1. Numerical sensitivity simulations of the Hawaiian Rainband Project (HaRP, a field experiment conducted from July to August 1990) showed that the simulated precipitation properties are sensitive to initial and lateral boundary conditions, sea surface temperature (SST), land surface models, vertical resolution and cloud droplet concentration. The validation of model simulated statistics of the trade wind inversion, temperature, wind field, cloud cover, and precipitation over the islands against various observations from soundings, satellites, weather stations and rain gauges during the period from 2003 to 2012 will be presented at the meeting.

  13. Computational Models of Protein Kinematics and Dynamics: Beyond Simulation

    PubMed Central

    Gipson, Bryant; Hsu, David; Kavraki, Lydia E.; Latombe, Jean-Claude

    2016-01-01

    Physics-based simulation represents a powerful method for investigating the time-varying behavior of dynamic protein systems at high spatial and temporal resolution. Such simulations, however, can be prohibitively difficult or lengthy for large proteins or when probing the lower-resolution, long-timescale behaviors of proteins generally. Importantly, not all questions about a protein system require full space and time resolution to produce an informative answer. For instance, by avoiding the simulation of uncorrelated, high-frequency atomic movements, a larger, domain-level picture of protein dynamics can be revealed. The purpose of this review is to highlight the growing body of complementary work that goes beyond simulation. In particular, this review focuses on methods that address kinematics and dynamics, as well as those that address larger organizational questions and can quickly yield useful information about the long-timescale behavior of a protein. PMID:22524225

  14. Middle atmosphere simulated with high vertical and horizontal resolution versions of a GCM: Improvements in the cold pole bias and generation of a QBO-like oscillation in the tropics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, K.; Wilson, R.J.; Hemler, R.S.

    1999-11-15

    The large-scale circulation in the Geophysical Fluid Dynamics Laboratory SKYHI troposphere-stratosphere-mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of ~35 km and another with ~100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in the early Southern Hemisphere winter the South Pole near 1 mb in the model is colder than observed, but the bias is reduced with improved horizontal resolution (from ~70 C in a version with ~300 km grid spacing to less than 10 C in the ~35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere of ~1.5 km, the lower stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and with formation of strong vertical shear layers. This appears to be a spontaneous internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.

  15. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method, based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
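The telescoping idea behind MLMC-type estimators can be sketched in a few lines. The model below is a hypothetical stand-in (a scalar response with an O(h) resolution bias), not the transport problem of the abstract; the point is that correction terms between coupled resolutions have low variance and therefore need only a few expensive high-resolution samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, h):
    """Toy model: a quantity of interest computed at grid spacing h, with an
    O(h) discretization bias; theta is the random input."""
    return np.exp(-theta) + 0.5 * h * theta

def mlmc_estimate(h_levels, n_samples):
    """Telescoping estimator  E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
    with each correction term computed from coupled (shared-input) samples."""
    total = 0.0
    for level, (h, n) in enumerate(zip(h_levels, n_samples)):
        thetas = rng.standard_normal(n) ** 2      # random model inputs
        fine = simulate(thetas, h)
        if level == 0:
            total += fine.mean()                  # many cheap coarse samples
        else:
            coarse = simulate(thetas, h_levels[level - 1])
            total += (fine - coarse).mean()       # low-variance correction
    return total

# Many samples at the coarsest level, few at the finest:
q = mlmc_estimate(h_levels=[0.5, 0.25, 0.125], n_samples=[4000, 1000, 250])
```

Because each correction term is a difference of coupled runs, its variance shrinks with level, which is what lets the sample counts decay so quickly.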

  16. The viability of ADVANTG deterministic method for synthetic radiography generation

    NASA Astrophysics Data System (ADS)

    Bingham, Andrew; Lee, Hyoung K.

    2018-07-01

Fast simulation techniques that generate high-resolution synthetic radiographic images are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. We investigated the viability of a deterministic approach to synthetic radiography image generation, with the aim of quantifying its computational speedup over the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high-resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.

  17. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
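The slowly converging baseline the abstract refers to, Newton-Gauss-Seidel relaxation of a nonlinear field equation, can be illustrated on a 1-D toy problem. The equation and manufactured source below are purely for the demo and are not the actual f(R) equations: each sweep visits every node and applies one Newton update to the local residual.

```python
import numpy as np

def newton_gauss_seidel(f, n, sweeps):
    """Relax the discretised nonlinear problem u'' = u**3 - f on [0, 1]
    (u(0) = u(1) = 0) by visiting every interior node and applying one
    Newton update to its local residual -- a miniature analogue of the
    slowly converging scheme discussed above."""
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            r = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2 - u[i] ** 3 + f[i]
            dr = -2.0 / h**2 - 3.0 * u[i] ** 2   # d(residual)/d(u_i)
            u[i] -= r / dr
    return u

n = 32
x = np.linspace(0.0, 1.0, n)
u_exact = np.sin(np.pi * x)
f = np.pi**2 * u_exact + u_exact**3   # manufactured source: exact u = sin(pi x)
# Hundreds of sweeps are needed even on this tiny grid -- the poor
# convergence rate that motivates a better solver.
u = newton_gauss_seidel(f, n=n, sweeps=1000)
err = np.max(np.abs(u - u_exact))
```

The iteration count needed to reach discretization-level accuracy grows rapidly with grid size, which is exactly the bottleneck the paper's analytically solvable reformulation removes.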

  18. Adaptive change in self-concept and well-being during conjugal loss in later life.

    PubMed

    Montpetit, Mignon A; Bergeman, C S; Bisconti, Toni L; Rausch, Joseph R

    2006-01-01

    The present study examines the association between the self-concept and adaptation to conjugal loss; the primary aim was to explore whether those individuals high in self-esteem, environmental mastery, and optimism have more adaptive resources with which to ameliorate the detrimental sequelae of bereavement. Analyses were conducted on data collected from 58 widows every four months over a two-year period. One goal of the research was to explore the adequacy of the theoretically chosen operational definition of the self-concept; another goal was to analyze how changes in the level of self-concept components correlated with changes in levels of depression, health, and grief resolution as individuals adjusted to their losses. Analyses revealed that trajectories of depression and grief resolution were more highly related than health to changes in self-concept.

  19. Assessment of summer rainfall forecast skill in the Intra-Americas in GFDL high and low-resolution models

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Lakshmi; Muñoz, Ángel G.; Vecchi, Gabriel A.; Msadek, Rym; Wittenberg, Andrew T.; Stern, Bill; Gudgel, Rich; Zeng, Fanrong

    2018-05-01

The Caribbean low-level jet (CLLJ) is an important component of the atmospheric circulation over the Intra-Americas Sea (IAS), which impacts the weather and climate both locally and remotely. It influences the rainfall variability in the Caribbean, Central America, northern South America, the tropical Pacific and the continental United States through the transport of moisture. We make use of high-resolution coupled and uncoupled models from the Geophysical Fluid Dynamics Laboratory (GFDL) to investigate the simulation of the CLLJ and its teleconnections, and further compare with low-resolution models. The high-resolution coupled model FLOR shows improvements in the simulation of the CLLJ and its teleconnections with rainfall and SST over the IAS compared to the low-resolution coupled model CM2.1. The CLLJ is better represented in uncoupled models (AM2.1 and AM2.5) forced with observed sea-surface temperatures (SSTs), emphasizing the role of SSTs in the simulation of the CLLJ. Further, we determine the forecast skill for observed rainfall using both high- and low-resolution predictions of rainfall and SSTs for the July-August-September season. We determine the role of statistical correction of model biases, coupling, and horizontal resolution in forecast skill. Statistical correction dramatically improves area-averaged forecast skill, but analysis of the spatial distribution of skill indicates that the improvement after statistical correction is region dependent. Forecast skill is sensitive to coupling in parts of the Caribbean, Central and northern South America, and it is mostly insensitive over North America. Comparison of forecast skill between the high- and low-resolution coupled models does not show any dramatic difference. However, the uncoupled models show an improvement in area-averaged skill in the high-resolution atmospheric model compared to the lower-resolution model. Understanding and improving the forecast skill over the IAS has important implications for highly vulnerable nations in the region.

  20. Assimilation of Sea Surface Temperature in a doubly, two-way nested primitive equation model of the Ligurian Sea

    NASA Astrophysics Data System (ADS)

    Barth, A.; Alvera-Azcarate, A.; Rixen, M.; Beckers, J.-M.; Testut, C.-E.; Brankart, J.-M.; Brasseur, P.

    2003-04-01

The GHER 3D primitive equation model is implemented at three different resolutions: a low-resolution model (1/4°) covering the whole Mediterranean Sea, an intermediate-resolution model (1/20°) of the Liguro-Provençal basin, and a high-resolution model (1/60°) simulating the fine mesoscale structures in the Ligurian Sea. Boundary conditions and averaged fields (feedback) are exchanged between successive nesting levels. The model of the Ligurian Sea is also coupled with the assimilation package SESAM, which allows the assimilation of satellite data and in situ observations using the local adaptive SEEK (Singular Evolutive Extended Kalman) filter. Instead of evolving the error space with the numerically expensive Lyapunov equation, a simplified algebraic equation depending on the misfit between observation and model forecast is used. Starting from 1 January 1998, the low- and intermediate-resolution models are spun up for 18 months. The initial conditions for the Ligurian Sea are interpolated from the intermediate-resolution model. The three models are then integrated until August 1999. During this period, AVHRR Sea Surface Temperature of the Ligurian Sea is assimilated. The results are validated using CTD and XBT profiles from the SIRENA cruise of the SACLANT Center. The overall objective of this study is pre-operational: it should help to identify limitations and weaknesses of forecasting methods and to suggest improvements to existing operational models.
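A reduced-rank analysis step of the general SEEK flavor can be sketched as follows. This is a generic textbook formulation with hypothetical dimensions, not SESAM's actual implementation: the forecast error covariance is approximated by a few modes, P ≈ SSᵀ, so the Kalman-style correction is computed in the small mode space rather than the full state space.

```python
import numpy as np

def analysis_update(x_f, S, H, y, r_var):
    """x_f: forecast state (n,); S: error-covariance modes (n, k) with
    P ~= S @ S.T; H: observation operator (m, n); y: observations (m,);
    r_var: scalar observation-error variance.  Returns the analysis state
    x_a = x_f + K (y - H x_f), with the gain evaluated in mode space."""
    HS = H @ S
    innov = y - H @ x_f                          # observation-minus-forecast misfit
    A = HS.T @ HS / r_var + np.eye(S.shape[1])   # k x k system, k << n
    w = np.linalg.solve(A, HS.T @ innov / r_var)
    return x_f + S @ w

rng = np.random.default_rng(1)
n, m, k = 50, 10, 3
S = rng.standard_normal((n, k))               # three error modes
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, 5)] = 1.0     # observe every 5th state variable
x_true = S @ np.array([1.0, -0.5, 0.25])      # truth lies in the mode space
x_f = np.zeros(n)                             # forecast
x_a = analysis_update(x_f, S, H, y=H @ x_true, r_var=1e-4)
```

Because the k×k system replaces the m×m innovation covariance, the cost per analysis scales with the (small) number of retained error modes.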

  1. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI

    DOE PAGES

    Baum, K. G.; Menezes, G.; Helguera, M.

    2011-01-01

Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³ voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.

  2. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI.

    PubMed

    Baum, K G; Menezes, G; Helguera, M

    2011-01-01

Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³ voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.

  3. The North American Regional Climate Change Assessment Program (NARCCAP): Status and results

    NASA Astrophysics Data System (ADS)

    Gutowski, W. J.

    2009-12-01

NARCCAP is a multi-institutional program that is systematically investigating the uncertainties in regional-scale simulations of contemporary climate and projections of future climate. NARCCAP is supported by multiple federal agencies. It is producing an ensemble of high-resolution climate-change scenarios by nesting multiple RCMs in reanalyses and in multiple atmosphere-ocean GCM simulations of contemporary and future-scenario climates. The RCM domains cover the contiguous U.S., northern Mexico, and most of Canada. The simulation suite also includes time-slice, high-resolution GCMs that use sea-surface temperatures from parent atmosphere-ocean GCMs. The baseline resolution of the RCMs and time-slice GCMs is 50 km. Simulations use three sources of boundary conditions: the National Centers for Environmental Prediction (NCEP)/Department of Energy (DOE) AMIP-II Reanalysis, GCMs simulating contemporary climate, and GCMs using the A2 SRES emission scenario for the twenty-first century. Simulations cover 1979-2004 and 2038-2060, with the first 3 years discarded for spin-up. The resulting RCM and time-slice simulations offer the opportunity for extensive analysis of RCM simulations as well as a basis for multiple high-resolution climate scenarios for climate-change impacts assessments. Geophysical statisticians are developing measures of uncertainty from the ensemble. To enable very high-resolution simulations of specific regions, both the RCM and high-resolution time-slice simulations are saving the output needed for further downscaling. All output is publicly available to the climate analysis and climate impacts assessment communities through an archiving and data-distribution plan. Some initial results show that the models closely reproduce ENSO-related precipitation variations in coastal California, where the correlation between the simulated and observed monthly time series exceeds 0.94 for all models. The strong El Niño events of 1982-83 and 1997-98 are well reproduced for the Pacific coastal region of the U.S. in all models. ENSO signals are less well reproduced in other regions. The models also reproduce extreme monthly precipitation well in coastal California and the Upper Midwest. Model performance tends to deteriorate from west to east across the domain, or roughly from the inflow boundary toward the outflow boundary. This deterioration with distance from the inflow boundary is ameliorated to some extent in models formulated such that large-scale information is included in the model solution, whether implemented by spectral nudging or by use of a perturbation form of the governing equations.

  4. Super-resolution imaging using multi-electrode CMUTs: theoretical design and simulation using point targets.

    PubMed

    You, Wei; Cretu, Edmond; Rohling, Robert

    2013-11-01

    This paper investigates a low computational cost, super-resolution ultrasound imaging method that leverages the asymmetric vibration mode of CMUTs. Instead of focusing on the broadband received signal on the entire CMUT membrane, we utilize the differential signal received on the left and right part of the membrane obtained by a multi-electrode CMUT structure. The differential signal reflects the asymmetric vibration mode of the CMUT cell excited by the nonuniform acoustic pressure field impinging on the membrane, and has a resonant component in immersion. To improve the resolution, we propose an imaging method as follows: a set of manifold matrices of CMUT responses for multiple focal directions are constructed off-line with a grid of hypothetical point targets. During the subsequent imaging process, the array sequentially steers to multiple angles, and the amplitudes (weights) of all hypothetical targets at each angle are estimated in a maximum a posteriori (MAP) process with the manifold matrix corresponding to that angle. Then, the weight vector undergoes a directional pruning process to remove the false estimation at other angles caused by the side lobe energy. Ultrasound imaging simulation is performed on ring and linear arrays with a simulation program adapted with a multi-electrode CMUT structure capable of obtaining both average and differential received signals. Because the differential signals from all receiving channels form a more distinctive temporal pattern than the average signals, better MAP estimation results are expected than using the average signals. The imaging simulation shows that using differential signals alone or in combination with the average signals produces better lateral resolution than the traditional phased array or using the average signals alone. This study is an exploration into the potential benefits of asymmetric CMUT responses for super-resolution imaging.
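The per-angle MAP amplitude estimation can be illustrated with a linear-Gaussian sketch: under a Gaussian prior on the weights, the MAP estimate reduces to ridge-regularised least squares against the manifold matrix. All names, dimensions, and the random stand-in manifold below are hypothetical, not the paper's actual CMUT responses.

```python
import numpy as np

def map_weights(A, y, noise_var, prior_var):
    """MAP estimate of target amplitudes under a Gaussian prior: minimise
    ||y - A w||^2 / noise_var + ||w||^2 / prior_var  (ridge regression),
    where columns of A are responses of hypothetical point targets."""
    lam = noise_var / prior_var
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

rng = np.random.default_rng(2)
n_samples, n_targets = 200, 8
A = rng.standard_normal((n_samples, n_targets))  # stand-in manifold matrix
w_true = np.zeros(n_targets)
w_true[2], w_true[5] = 1.0, 0.4                  # two point targets present
y = A @ w_true + 0.01 * rng.standard_normal(n_samples)
w_hat = map_weights(A, y, noise_var=1e-4, prior_var=1.0)
```

The more distinctive the columns of A (as argued for the differential signals), the better conditioned this inversion is, and the sharper the recovered amplitudes.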

  5. Real-time control system for adaptive resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flath, L; An, J; Brase, J

    2000-07-24

    Sustained operation of high average power solid-state lasers currently requires an adaptive resonator to produce the optimal beam quality. We describe the architecture of a real-time adaptive control system for correcting intra-cavity aberrations in a heat capacity laser. Image data collected from a wavefront sensor are processed and used to control phase with a high-spatial-resolution deformable mirror. Our controller takes advantage of recent developments in low-cost, high-performance processor technology. A desktop-based computational engine and object-oriented software architecture replaces the high-cost rack-mount embedded computers of previous systems.

  6. High accuracy binary black hole simulations with an extended wave zone

    NASA Astrophysics Data System (ADS)

    Pollney, Denis; Reisswig, Christian; Schnetter, Erik; Dorband, Nils; Diener, Peter

    2011-02-01

We present results from a new code for binary black hole evolutions using the moving-puncture approach, implementing finite differences in generalized coordinates, and allowing the spacetime to be covered with multiple communicating nonsingular coordinate patches. Here we consider a regular Cartesian near zone, with adapted spherical grids covering the wave zone. The efficiencies resulting from the use of adapted coordinates allow us to maintain sufficient grid resolution out to an artificial outer boundary location which is causally disconnected from the measurement. For the well-studied test case of the inspiral of an equal-mass nonspinning binary (evolved for more than 8 orbits before merger), we determine the phase and amplitude to numerical accuracies better than 0.010% and 0.090% during inspiral, respectively, and 0.003% and 0.153% during merger. The waveforms, including the resolved higher harmonics, are convergent and can be consistently extrapolated to r→∞ throughout the simulation, including the merger and ringdown. Ringdown frequencies for these modes (up to (ℓ,m)=(6,6)) match perturbative calculations to within 0.01%, providing a strong confirmation that the remnant settles to a Kerr black hole with irreducible mass M_irr = 0.884355 ± 20×10⁻⁶ and spin S_f/M_f² = 0.686923 ± 10×10⁻⁶.
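Extrapolation of finite-radius measurements to r → ∞ is commonly done by fitting a low-order polynomial in 1/r; a minimal sketch (with synthetic data standing in for the paper's waveform quantities) looks like this:

```python
import numpy as np

def extrapolate_to_infinity(radii, values, order=2):
    """Fit values(r) = v_inf + a1/r + ... + a_order/r**order and return
    v_inf, the constant term evaluated at 1/r = 0."""
    coeffs = np.polyfit(1.0 / np.asarray(radii), values, order)
    return coeffs[-1]   # polyfit returns highest degree first

radii = np.array([100.0, 200.0, 300.0, 500.0, 1000.0])   # extraction radii
# Synthetic measurement with a known asymptotic value of 0.25:
values = 0.25 + 3.0 / radii - 40.0 / radii**2
v_inf = extrapolate_to_infinity(radii, values, order=2)
```

In practice one checks that v_inf is stable as the fit order and the set of extraction radii are varied, which is the sense in which such extrapolations are called "consistent."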

  7. The Prodiguer Messaging Platform

    NASA Astrophysics Data System (ADS)

    Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.

    2015-12-01

CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime, libIGCM (library for IPSL Global Climate Modeling group), has recently been enhanced to support hitherto impossible real-time use cases such as simulation monitoring, data publication, metrics collection, simulation control, and visualization. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queuing Protocol) based, event-driven, asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to databases; launching rollback jobs upon simulation failure; notifying downstream applications; and automating visualization pipelines. We will describe and/or demonstrate the platform's technical implementation, its inherent ease of scalability, its adaptiveness in supervising simulations, and a web portal receiving simulation notifications in real time.

  8. a Methodology to Adapt Photogrammetric Models to Virtual Reality for Oculus Gear VR

    NASA Astrophysics Data System (ADS)

    Colmenero Fdez, A.

    2017-11-01

In this paper, we present the process of adapting a high-resolution model (laser and photogrammetry) into a virtual reality application for mobile phones. It is a virtual archeology project carried out on the site of Lugo's Mitreo, Spain.

  9. High-resolution observations of the near-surface wind field over an isolated mountain and in a steep river canyon

    Treesearch

    B. W. Butler; N. S. Wagenbrenner; J. M. Forthofer; B. K. Lamb; K. S. Shannon; D. Finn; R. M. Eckman; K. Clawson; L. Bradshaw; P. Sopko; S. Beard; D. Jimenez; C. Wold; M. Vosburgh

    2015-01-01

    A number of numerical wind flow models have been developed for simulating wind flow at relatively fine spatial resolutions (e.g., 100 m); however, there are very limited observational data available for evaluating these high-resolution models. This study presents high-resolution surface wind data sets collected from an isolated mountain and a steep river canyon. The...

  10. Adaptive Modeling and Real-Time Simulation

    DTIC Science & Technology

    1984-01-01

Artificial Intelligence, Vol. 13, pp. 27-39 (1980). Describes circumscription, which is just the assumption that everything that is known to have a particular... Keywords: Artificial Intelligence, Truth Maintenance, Planning, Resolution, Modeling, World Models. ...represents a marriage of (1) the procedural-network planning technology developed in artificial intelligence with (2) the PERT/CPM technology developed in

  11. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  12. Challenges in the development of very high resolution Earth System Models for climate science

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun

    2017-04-01

The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms, at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in the development of the atmosphere component of this model, and the strategies we have been using to tune and debug a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high-resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute-force long climate simulations.

  13. Adding the third dimension on adaptive optics retina imager thanks to full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Blavier, Marie; Blanco, Leonardo; Glanc, Marie; Pouplard, Florence; Tick, Sarah; Maksimovic, Ivan; Mugnier, Laurent; Chènegros, Guillaume; Rousset, Gérard; Lacombe, François; Pâques, Michel; Le Gargasson, Jean-François; Sahel, José-Alain

    2009-02-01

Retinal pathologies, such as ARMD or glaucoma, need to be detected early, requiring imaging instruments with resolution at a cellular scale. However, in vivo retinal cell studies and early diagnoses are severely limited by the lack of resolution in eye-fundus images from classical ophthalmologic instruments. We built a 2D retina imager using adaptive optics to improve lateral resolution. This imager is currently used in a clinical environment. We are now developing a time-domain full-field optical coherence tomograph. The first step was to design the image reconstruction algorithms; validation was performed on non-biological samples. Ex vivo retinas are currently being imaged. The final step will consist of coupling both setups to acquire high-resolution retina cross-sections.

  14. Adaptive temporal refinement in injection molding

    NASA Astrophysics Data System (ADS)

    Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek

    2018-05-01

Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. It therefore plays an important role in determining the quality of the produced parts. Our goal is temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in the spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to higher simulation precision while preserving computational efficiency. A 3D benchmark case, concerning the filling of a plate-shaped geometry, is used to verify our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves to provide reliable timing measurements and to assess the efficiency of the filling simulation of complex 3D molds when adaptive temporal refinement is applied.
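Local time-stepping, the payoff of the space-time refinement described above, can be caricatured in one dimension: cells flagged as near the front advance with several small steps per global step, while the rest of the domain takes one large step. The decay model below is purely illustrative and unrelated to the actual mold-filling equations.

```python
import numpy as np

def advance(u, rates, dt, refine_mask, substeps=4):
    """One global step: coarse cells take a single forward-Euler step of
    size dt; flagged cells take `substeps` smaller steps of dt/substeps."""
    u = u.copy()
    coarse = ~refine_mask
    u[coarse] += dt * rates[coarse] * u[coarse]
    for _ in range(substeps):
        u[refine_mask] += (dt / substeps) * rates[refine_mask] * u[refine_mask]
    return u

u = np.ones(10)                      # initial field
rates = np.full(10, -1.0)            # simple exponential decay everywhere
refine = np.zeros(10, dtype=bool)
refine[:3] = True                    # pretend the front sits in cells 0..2
for _ in range(20):                  # integrate to t = 1 with dt = 0.05
    u = advance(u, rates, dt=0.05, refine_mask=refine)
# Refined cells track the exact solution exp(-t) more closely than coarse cells.
```

The extra accuracy is bought only where the mask is set, which is the efficiency argument for refining near the melt front rather than everywhere.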

  15. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
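A greedy 1-D caricature of spatial adaptive sampling: real fine-scale evaluations are inserted only where linear interpolation between existing samples fails a tolerance test, and the full field is then reconstructed by interpolation. The fine-scale response below is a hypothetical stand-in, not the elastodynamic model of the abstract.

```python
import numpy as np

n_calls = 0

def fine_scale(x):
    """Stand-in for an expensive fine-scale simulation returning a smooth
    constitutive response; the counter tracks real evaluations."""
    global n_calls
    n_calls += 1
    return np.sin(3.0 * x) + 0.1 * x * x

def refine(x0, y0, x1, y1, tol, out):
    """Recursively bisect until linear interpolation predicts the fine-scale
    midpoint value within `tol`; accepted samples accumulate in `out`."""
    xm = 0.5 * (x0 + x1)
    ym = fine_scale(xm)
    if abs(0.5 * (y0 + y1) - ym) > tol and (x1 - x0) > 1e-3:
        refine(x0, y0, xm, ym, tol, out)
        refine(xm, ym, x1, y1, tol, out)
    out[xm] = ym

samples = {0.0: fine_scale(0.0), 10.0: fine_scale(10.0)}
refine(0.0, samples[0.0], 10.0, samples[10.0], tol=1e-2, out=samples)
xs = np.array(sorted(samples))
ys = np.array([samples[x] for x in xs])

# Reconstruct the field on the full macro grid from the sparse samples:
grid = np.linspace(0.0, 10.0, 1000)
field = np.interp(grid, xs, ys)
max_err = np.max(np.abs(field - (np.sin(3.0 * grid) + 0.1 * grid**2)))
```

Only a few hundred fine-scale calls are needed to reconstruct the field at all 1000 macro grid points, the same kind of sublinear scaling the abstract reports.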

  16. Applying multi-resolution numerical methods to geodynamics

    NASA Astrophysics Data System (ADS)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. 
In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly resourceful in terms of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU-time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its resourcefulness in terms of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle. The simulations presented are therefore at superior vigor to the highest attained, to date, in 3-D spherical geometry, achieving Rayleigh numbers of order 10⁹. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. 
Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigor convection. Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher vigors. However, such configurations have recently been observed. With hot-spots widely-regarded as the surface expression of deep mantle plumes, it is of great importance to ascertain whether or not these conclusions are valid at the dynamical regime of Earth's mantle. Results demonstrate that at these high vigors, steady plumes do arise. However, they do not dominate the planform as in lower vigor cases: they coexist with mobile and ephemeral plumes and display a range of characteristics, consistent with hot-spot observations on Earth. Those plumes that do remain steady alter in intensity throughout the simulation, strengthening and weakening over time. Such behavior is caused by an irregular supply of cold material to the core-mantle boundary region, suggesting that subducting slabs are partially responsible for episodic plume magmatism on Earth. With this in mind, the influence of the upper boundary condition upon the planform of mantle convection is further examined. With the modified code, the CPU-time needed to solve a given problem is reduced and, hence, several simulations can be run efficiently, allowing a relatively rapid parameter space mapping of various upper boundary conditions. Results, in accordance with the investigations on upwelling structures, demonstrate that the surface exerts a profound control upon internal dynamics, manifesting itself not only in convective structures, but also in thermal profiles, Nusselt numbers and velocity patterns. Since the majority of geodynamical simulations incorporate a surface condition that is not at all representative of Earth, this is a worrying, yet important conclusion.
By failing to address the surface appropriately, geodynamical models, regardless of their sophistication, cannot be truly applicable to Earth. In summary, the techniques developed herein, in both 2- and 3-D, are extremely practical and highly efficient, yielding significant advantages for geodynamical simulations. Indeed, they allow one to solve problems that would otherwise be unfeasible.
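    The core idea of the 2-D technique, placing resolution where solution gradients are steep, can be illustrated with a minimal sketch. The gradient-threshold criterion below is a simplified stand-in for the goal-orientated/error-guided estimators the thesis actually employs; the function name and threshold rule are illustrative assumptions.

```python
import numpy as np

def flag_cells_for_refinement(field, dx, frac=0.5):
    """Mark grid cells whose solution-gradient magnitude exceeds a fraction
    of the maximum gradient. A crude stand-in for the goal-orientated,
    error-guided estimators used in the thesis (illustrative only)."""
    gy, gx = np.gradient(field, dx)      # gradients along each grid axis
    mag = np.hypot(gx, gy)               # gradient magnitude per cell
    return mag > frac * mag.max()        # True = refine this cell

# A sharp thermal front: refinement concentrates around it.
x = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
temperature = np.tanh((X - 0.5) / 0.05)
mask = flag_cells_for_refinement(temperature, x[1] - x[0])
```

    An adaptive code would then split the flagged cells and interpolate the solution onto the new mesh before the next step, coarsening again once the front has passed.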

  17. An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation

    PubMed Central

    Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin

    2014-01-01

    In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving target detection is proposed, based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. Moreover, we then present a knowledge-aided block-size detection algorithm that can discriminate between the moving targets and the clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035
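    The per-Doppler-bin sparse angle estimate at the heart of such a method can be sketched with a hand-rolled orthogonal matching pursuit over a uniform-linear-array steering dictionary. This is an illustrative assumption about the solver; the paper's reduced-dimension formulation and its actual sparse-recovery algorithm may differ.

```python
import numpy as np

def angle_dictionary(n_elem, angles_deg, d=0.5):
    """ULA steering vectors (half-wavelength spacing assumed), one column
    per candidate angle."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    n = np.arange(n_elem)[:, None]
    return np.exp(2j * np.pi * d * n * np.sin(a)[None, :])

def omp(y, D, k):
    """Orthogonal matching pursuit: a k-sparse fit of snapshot y in
    dictionary D, standing in for the paper's sparsity-aware estimator."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.conj().T @ resid))))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)  # refit on support
        resid = y - Ds @ coef
    x = np.zeros(D.shape[1], dtype=complex)
    x[support] = coef
    return x

# Two sources in one Doppler bin: strong clutter patch, weaker target.
angles = np.arange(-60, 61)
D = angle_dictionary(16, angles)
y = D[:, 40] + 0.8 * D[:, 90]            # atoms at -20 deg and +30 deg
spectrum = omp(y, D, 2)
```

    Peaks in |spectrum| localize energy in angle within the bin; a knowledge-aided detector can then compare these features against the expected clutter structure.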

  18. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
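    One stage of such a DSP chain, automatic white balance, can be sketched with the classic gray-world rule: scale each channel so its mean matches the global mean. This is a textbook baseline, not necessarily the algorithm the authors implemented.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world automatic white balance for an H x W x 3 float image in
    [0, 1]: scale each channel so its mean equals the global mean."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# A frame with a strong color cast: the blue channel is suppressed.
rng = np.random.default_rng(0)
frame = rng.random((8, 8, 3)) * np.array([0.9, 0.9, 0.3])
balanced = gray_world_white_balance(frame)
```

    In a hardware pipeline the gains would typically be folded into fixed-point multipliers and updated slowly across frames to avoid flicker.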

  19. Data collection and simulation of high range resolution laser radar for surface mine detection

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Chevalier, Tomas; Larsson, Håkan

    2006-05-01

    Rapid and efficient detection of surface mines, IEDs (Improvised Explosive Devices) and UXO (Unexploded Ordnance) is of high priority in military conflicts. High range resolution laser radars combined with passive hyper/multispectral sensors offer an interesting concept to help solve this problem. This paper reports on laser radar data collection of various surface mines in different types of terrain. In order to evaluate the capability of 3D imaging for detecting and classifying the objects of interest, a scanning laser radar was used to scan mines and surrounding terrain with high angular and range resolution. These data were then fed into a laser radar model capable of generating range waveforms for a variety of system parameters and combinations of different targets and backgrounds. We can thus simulate a potential system by downsampling to relevant pixel sizes and laser/receiver characteristics. Data, simulations and examples will be presented.
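    The range-waveform model described above can be sketched by superposing one laser pulse per scatterer inside a pixel footprint. The Gaussian pulse shape and all parameter names are simplifying assumptions; the reported model accounts for system and target characteristics omitted here.

```python
import numpy as np

def range_waveform(ranges_m, reflectances, pulse_fwhm_m, grid_m):
    """Simulated high-range-resolution return: each scatterer in the pixel
    footprint adds a Gaussian pulse, centred at its range, to the waveform."""
    sigma = pulse_fwhm_m / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    return sum(a * np.exp(-(grid_m - r) ** 2 / (2.0 * sigma ** 2))
               for r, a in zip(ranges_m, reflectances))

# Ground return at 100.0 m plus a weaker mine-top return 0.2 m closer.
grid = np.arange(99.5, 100.5, 0.005)
waveform = range_waveform([100.0, 99.8], [1.0, 0.4], 0.05, grid)
```

    Downsampling this waveform to a candidate receiver's range-bin width is one way to approximate the system-level simulation the abstract describes.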

  20. A Tool for Creating Regionally Calibrated High-Resolution Land Cover Data Sets for the West African Sahel: Using Machine Learning to Scale Up Hand-Classified Maps in a Data-Sparse Environment

    NASA Astrophysics Data System (ADS)

    Van Gordon, M.; Van Gordon, S.; Min, A.; Sullivan, J.; Weiner, Z.; Tappan, G. G.

    2017-12-01

    Using support vector machine (SVM) learning and high-accuracy hand-classified maps, we have developed a publicly available land cover classification tool for the West African Sahel. Our classifier produces high-resolution and regionally calibrated land cover maps for the Sahel, representing a significant contribution to the data available for this region. Global land cover products are unreliable for the Sahel, and accurate land cover data for the region are sparse. To address this gap, the U.S. Geological Survey and the Regional Center for Agriculture, Hydrology and Meteorology (AGRHYMET) in Niger produced high-quality land cover maps for the region via hand-classification of Landsat images. This method produces highly accurate maps, but the time and labor required constrain the spatial and temporal resolution of the data products. By using these hand-classified maps alongside SVM techniques, we successfully increase the resolution of the land cover maps by 1-2 orders of magnitude, from 2km-decadal resolution to 30m-annual resolution. These high-resolution regionally calibrated land cover datasets, along with the classifier we developed to produce them, lay the foundation for major advances in studies of land surface processes in the region. These datasets will provide more accurate inputs for food security modeling, hydrologic modeling, analyses of land cover change and climate change adaptation efforts. The land cover classification tool we have developed will be publicly available for use in creating additional West Africa land cover datasets with future remote sensing data and can be adapted for use in other parts of the world.
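    The scale-up strategy, training on hand-classified pixels and predicting everywhere else, can be sketched with a tiny linear SVM trained by hinge-loss sub-gradient descent on synthetic two-band "spectral" features. All names and parameters here are illustrative; the published tool presumably relies on a mature SVM implementation and far richer features.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Linear SVM via regularized hinge-loss SGD; labels y must be +/-1."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w + b) < 1:        # margin violation: push
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                # inside margin: shrink only
                w = (1 - lr * lam) * w
    return w, b

def classify(X, w, b):
    return np.sign(X @ w + b)

# Hand-classified pixels: two land-cover classes in a 2-band feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 0.5, (100, 2)), rng.normal(-2, 0.5, (100, 2))])
y = np.array([1.0] * 100 + [-1.0] * 100)
w, b = train_linear_svm(X, y)
```

    Applied per pixel of a 30 m Landsat scene, the trained classifier extends the coarse hand-classified labels to full sensor resolution, which is the essence of the 1-2 order-of-magnitude resolution gain reported above.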
