An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or finer for global models and 1 km or finer for regional models, and with ~60 vertical levels or more). Running NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamic core with a choice of horizontal grid configurations and vertical coordinates appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community also faces the challenge of developing physics parameterizations well suited to high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or finer, much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or finer only partially resolve the convective energy-containing eddies in the lower troposphere; parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations of air-sea interaction will be a critical component of tropical NWP models, particularly hurricane prediction models. In this review presentation, these issues are elaborated on and approaches to address them are discussed.
Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.
2015-12-01
Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces, and it is necessary to ensure that physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high-order numerical methods, which is computationally expensive. We demonstrate here the use of a novel control volume-finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field, which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher-order or discontinuous elements where existing approaches are often excessively diffusive.
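The conservative interpolation described above can be illustrated in one dimension: for piecewise-constant control-volume fields, the Galerkin (L2) projection reduces to overlap-weighted averaging, which preserves the integral exactly. The sketch below works under that simplification and is not the paper's unstructured-mesh implementation:

```python
import numpy as np

def galerkin_project_p0(src_edges, src_vals, dst_edges):
    """Project a piecewise-constant (P0) field from one 1D mesh to another.

    For P0 basis functions the Galerkin (L2) projection reduces to
    overlap-weighted averaging, which conserves the integral exactly.
    """
    dst_vals = np.zeros(len(dst_edges) - 1)
    for i in range(len(dst_edges) - 1):
        a, b = dst_edges[i], dst_edges[i + 1]
        acc = 0.0
        for j in range(len(src_edges) - 1):
            lo = max(a, src_edges[j])
            hi = min(b, src_edges[j + 1])
            if hi > lo:  # cells overlap
                acc += src_vals[j] * (hi - lo)
        dst_vals[i] = acc / (b - a)
    return dst_vals

# Coarse 4-cell mesh mapped onto a non-matching 5-cell mesh on [0, 1].
src_edges = np.linspace(0.0, 1.0, 5)
src_vals = np.array([1.0, 3.0, 2.0, 0.5])
dst_edges = np.linspace(0.0, 1.0, 6)
dst_vals = galerkin_project_p0(src_edges, src_vals, dst_edges)

# The total integral ("mass") is conserved to machine precision.
mass_src = np.sum(src_vals * np.diff(src_edges))
mass_dst = np.sum(dst_vals * np.diff(dst_edges))
print(abs(mass_src - mass_dst) < 1e-12)  # True
```

Consistent interpolation (sampling the source field at destination points) would not guarantee this conservation property, which is the motivation for the projection framework.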
NASA Astrophysics Data System (ADS)
Zhang, Jiaying; Gang, Tie; Ye, Chaofeng; Cong, Sen
2018-04-01
Linear-chirp-Golay (LCG)-coded excitation combined with pulse compression is proposed in this paper to improve time resolution and suppress sidelobes in ultrasonic testing. The LCG-coded excitation is a binary complementary Golay pair with a linear-chirp signal applied to every sub-pulse. Compared with conventional excitation, a common ultrasonic testing method that uses a brief narrow pulse as the exciting signal, the performance of LCG-coded excitation in terms of time-resolution improvement and sidelobe suppression is studied via numerical and experimental investigations. The numerical simulations are implemented using the MATLAB k-Wave toolbox. The simulation results show that the time resolution of LCG excitation is 35.5% higher and the peak sidelobe level (PSL) is 57.6 dB lower than those of linear-chirp excitation with 2.4 MHz chirp bandwidth and 3 μs time duration. In the B-scan experiment, the time resolution of LCG excitation is higher and its PSL lower than those of conventional brief-pulse excitation and chirp excitation. In terms of time resolution, the LCG-coded signal performs better than the chirp signal. Moreover, the impact of chirp bandwidth on the LCG-coded signal is less than that on the chirp signal. In addition, the sidelobe of the LCG-coded signal is lower than that of the chirp signal after pulse compression.
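The sidelobe suppression that LCG coding relies on comes from the Golay complementary property: the matched-filter autocorrelations of the two codes in a pair sum to an ideal delta. A minimal sketch with plain binary Golay codes (omitting the chirp on the sub-pulses that the paper adds) demonstrates this:

```python
import numpy as np

def golay_pair(n):
    """Generate a binary Golay complementary pair of length 2**n
    via the standard concatenation recursion."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(3)  # length-8 complementary pair

# Matched-filter (pulse-compression) outputs for each code.
ra = np.correlate(a, a, mode="full")
rb = np.correlate(b, b, mode="full")
rsum = ra + rb

# Each code alone has range sidelobes, but the complementary property
# makes the summed compression a perfect delta: 2N at zero lag and
# exactly zero elsewhere -- the cancellation a single chirp cannot give.
N = len(a)
print(rsum[N - 1])                              # 16.0 (= 2N)
print(np.max(np.abs(np.delete(rsum, N - 1))))   # 0.0
```

In the LCG scheme each ±1 element of the codes modulates a linear-chirp sub-pulse, so the two transmissions retain this cancellation while gaining the bandwidth (and hence time resolution) of the chirp.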
The optics of microscope image formation.
Wolf, David E
2013-01-01
Although geometric optics gives a good understanding of how the microscope works, it fails in one critical area: explaining the origin of microscope resolution. To accomplish this, one must consider the microscope from the viewpoint of physical optics. This chapter describes the theory of the microscope, relating resolution to the highest spatial frequency that a microscope can collect. The chapter illustrates how Huygens' principle, or construction, can be used to explain the propagation of a plane wave. It is shown that this limit increases with increasing numerical aperture (NA). As a corollary, resolution increases with decreasing wavelength because of how NA depends on wavelength; the resolution is higher for blue light than for red light. Resolution is dependent on contrast: the higher the contrast, the higher the resolution. This last point relates to issues of signal-to-noise and dynamic range. The use of video and new digital cameras has necessitated redefining classical limits such as Rayleigh's criterion. Copyright © 2007 Elsevier Inc. All rights reserved.
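The wavelength and NA dependence described above can be made concrete with the Rayleigh criterion, d = 0.61λ/NA; the objective NA below is an illustrative value, not one from the chapter:

```python
def rayleigh_resolution(wavelength_nm, na):
    """Rayleigh criterion: minimum resolvable separation d = 0.61 * lambda / NA,
    in the same units as the wavelength."""
    return 0.61 * wavelength_nm / na

na = 1.4  # a typical high-NA oil-immersion objective (illustrative)
d_blue = rayleigh_resolution(450, na)  # blue light
d_red = rayleigh_resolution(650, na)   # red light
print(round(d_blue), round(d_red))     # 196 283 (nm)
```

The shorter wavelength resolves ~196 nm versus ~283 nm for red light, matching the chapter's statement that resolution is higher for blue light than for red.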
NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansteen, V.; Pontieu, B. De; Carlsson, M.
2015-10-01
Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When the domain extends from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models with different simulated areas, magnetic field strengths and topologies, and numerical resolutions. These are compared in order to characterize the heating in the 3D MHD simulations that self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that it is episodic and highly structured in space, but occurs along loop-shaped structures and moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, a process that occurs more readily in the higher-resolution model, leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local heating shows timescales of the order of two minutes in one model and one minute in the other, higher-resolution model.
SIL-STED microscopy technique enhancing super-resolution of fluorescence microscopy
NASA Astrophysics Data System (ADS)
Park, No-Cheol; Lim, Geon; Lee, Won-sup; Moon, Hyungbae; Choi, Guk-Jong; Park, Young-Pil
2017-08-01
We have characterized a new type of STED microscope that combines a high numerical aperture (NA) optical head with a solid immersion lens (SIL); we call it the SIL-STED microscope. The advantage of the SIL-STED microscope is that the high NA of the SIL makes it superior to a general STED microscope in lateral resolution, overcoming the optical diffraction limit at the macromolecular level and enabling advanced super-resolution imaging of cell-surface or cell-membrane structure and function. This study presents the first implementation of higher-NA illumination in a STED microscope, reaching a fluorescence lateral resolution of about 40 nm. The refractive index of the SIL, which is made of KTaO3, is about 2.23 and 2.20 at wavelengths of 633 nm and 780 nm, used for excitation and depletion in STED imaging, respectively. Based on vector diffraction theory, the electric field focused by the SIL-STED microscope is numerically calculated so that the point spread function of the microscope and the expected resolution can be analyzed. For further investigation, fluorescence imaging of nano-sized fluorescent beads is performed to show the improved performance of the technique.
Spacecraft Charging Calculations: NASCAP-2K and SEE Spacecraft Charging Handbook
NASA Technical Reports Server (NTRS)
Davis, V. A.; Neergaard, L. F.; Mandell, M. J.; Katz, I.; Gardner, B. M.; Hilton, J. M.; Minor, J.
2002-01-01
For fifteen years, the NASA and Air Force Charging Analyzer Program for Geosynchronous Orbits (NASCAP/GEO) has been the workhorse of spacecraft charging calculations. Two new tools, the Space Environment and Effects (SEE) Spacecraft Charging Handbook (recently released) and Nascap-2K (under development), use improved numerical techniques and modern user interfaces to tackle the same problem. The SEE Spacecraft Charging Handbook provides first-order, lower-resolution solutions, while Nascap-2K provides higher-resolution results appropriate for detailed analysis. This paper illustrates how the improvements in the numerical techniques affect the results.
Numerical simulations of compressible mixing layers
NASA Technical Reports Server (NTRS)
Normand, Xavier
1990-01-01
Direct numerical simulations of two-dimensional temporally growing compressible mixing layers are presented. The Kelvin-Helmholtz instability is initially excited by a white-noise perturbation superimposed onto a hyperbolic tangent meanflow profile. The linear regime is studied at low resolution in the case of two flows of equal temperatures, for convective Mach numbers from 0.1 to 1 and for different values of the Reynolds number. At higher resolution, the complete evolution of a two-eddy mixing layer between two flows of different temperatures is simulated at moderate Reynolds number. Similarities and differences between flows of equal convective Mach numbers are discussed.
NASA Astrophysics Data System (ADS)
Petrou, Zisis I.; Xian, Yang; Tian, YingLi
2018-04-01
Estimation of sea ice motion at fine scales is important for a number of regional and local applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, and safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external-example learning-based super-resolution method is applied to the original images to generate higher-resolution versions. Then, an optical flow approach is applied to the higher-resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracy. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than from the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach yields accurate motion vectors with unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.
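The sparse-to-dense interpolation step can be sketched as follows. The abstract does not specify the interpolant, so linear interpolation over a triangulation (SciPy's `griddata`) is an assumption here, and the match points and motion vectors are hypothetical:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical sparse correspondences: points (x, y) with motion (u, v).
pts = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)
uv = np.array([[1.0, 0.0], [1.0, 2.0], [3.0, 0.0], [3.0, 2.0]])

# Dense grid at sub-pixel spacing (0.5 px), finer than the matches.
gx, gy = np.meshgrid(np.arange(0, 10.5, 0.5), np.arange(0, 10.5, 0.5))
dense_u = griddata(pts, uv[:, 0], (gx, gy), method="linear")
dense_v = griddata(pts, uv[:, 1], (gx, gy), method="linear")

# The interpolated field is continuous-valued: at the grid centre (5, 5)
# linear interpolation recovers the mean of the corner motions.
print(dense_u[10, 10], dense_v[10, 10])  # 2.0 1.0
```

In practice the sparse correspondences come from the optical flow matcher rather than being hand-specified, and denser match sets give a smoother field.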
2011-01-01
Background Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and a tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM-based measurement's ability to detect increases in the arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions, which may affect the tortuosity measurement, different depths of the area imaged, and different imaging artifacts that require filtering. Methods The stability and accuracy of alternative centerline algorithms were validated on numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. Results In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition, one negative control population of different ethnicity had significantly less arterial tortuosity than the other two.
Conclusions Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations. PMID:22166145
Diedrich, Karl T; Roberts, John A; Schmidt, Richard H; Kang, Chang-Ki; Cho, Zang-Hee; Parker, Dennis L
2011-10-18
Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and a tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM-based measurement's ability to detect increases in the arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions, which may affect the tortuosity measurement, different depths of the area imaged, and different imaging artifacts that require filtering. The stability and accuracy of alternative centerline algorithms were validated on numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition, one negative control population of different ethnicity had significantly less arterial tortuosity than the other two.
Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations.
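The distance factor metric itself is straightforward to compute: the arc length of the centerline divided by the chord between its endpoints (some variants report this ratio minus 1, an assumption to check against the study's exact definition). A sketch reproducing the helix test, where a tighter coil (smaller pitch) yields higher measured tortuosity:

```python
import numpy as np

def distance_factor_metric(points):
    """DFM tortuosity of a centerline: arc length along the path divided
    by the straight-line (chord) distance between its endpoints.
    A straight vessel gives 1.0; more twisted vessels give larger values."""
    pts = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord

def helix(pitch, n_turns=3, radius=1.0, samples=2000):
    """Synthetic helix centerline, analogous to the study's numerical
    models; a smaller pitch means a more tightly coiled helix."""
    t = np.linspace(0.0, 2.0 * np.pi * n_turns, samples)
    return np.column_stack([radius * np.cos(t),
                            radius * np.sin(t),
                            pitch * t / (2.0 * np.pi)])

straight = distance_factor_metric([[0, 0, 0], [0, 0, 5], [0, 0, 10]])
loose = distance_factor_metric(helix(pitch=4.0))
tight = distance_factor_metric(helix(pitch=1.0))
print(straight)       # 1.0
print(loose < tight)  # True: tighter coiling raises measured tortuosity
```

For a helix this ratio approaches the analytic value sqrt(1 + (2πr/p)²), so it depends on the coil tightness 2πr/p but not on the number of turns, which is why the helix is a convenient validation model.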
A. M. S. Smith; N. A. Drake; M. J. Wooster; A. T. Hudak; Z. A. Holden; C. J. Gibbons
2007-01-01
Accurate production of regional burned area maps is necessary to reduce uncertainty in emission estimates from African savannah fires. Numerous methods have been developed that map burned and unburned surfaces. These methods are typically applied to coarse spatial resolution (1 km) data to produce regional estimates of the area burned, while higher spatial resolution...
Hyperlens-array-implemented optical microscopy
NASA Astrophysics Data System (ADS)
Iwanaga, Masanobu
2014-08-01
The resolution limit of conventional optical microscopes has never reached below 100 nm under visible-light illumination. We show that a numerically designed high-transmittance hyperlens array (HLA) can be implemented in an optical microscope and works in practice, achieving one-shot-recorded optical images of in-situ placed objects with sub-50 nm lateral resolution. A direct resolution test employing well-defined nanopatterns proves that HLA-implemented imaging is super-resolution optical microscopy, which works even under nW/mm2 visible illumination of the objects. The HLA implementation improves the resolution of conventional microscopes by one order of scale, reaching the 1/10 illumination-wavelength range, that is, the mesoscopic range.
The Extended Pulsar Magnetosphere
NASA Technical Reports Server (NTRS)
Kalapotharakos, Constantinos; Kazanas, Demosthenes; Contopoulos, Ioannis
2012-01-01
We present the structure of the 3D ideal MHD pulsar magnetosphere out to a radius ten times that of the light cylinder, a distance about an order of magnitude larger than in any previous such numerical treatment. Its overall structure exhibits a stable, smooth, well-defined undulating current sheet, which approaches the kinematic split-monopole solution of Bogovalov (1999) only after a careful introduction of diffusivity, even in the highest-resolution simulations. It also exhibits an intriguing spiral region at the crossing of two zero-charge surfaces on the current sheet, which shows a destabilizing behavior that is more prominent in higher-resolution simulations. We discuss the possibility that this region is physically (and not numerically) unstable. Finally, we present the spiral pulsar antenna radiation pattern.
NASA Astrophysics Data System (ADS)
Ogawa, Masahiko; Shidoji, Kazunori
2011-03-01
High-resolution stereoscopic images are effective for use in virtual reality and teleoperation systems. However, the higher the image resolution, the higher the cost of computer processing and communication. To reduce this cost, numerous earlier studies have suggested the use of multi-resolution images, which have high resolution in regions of interest and low resolution elsewhere. However, observers can perceive unpleasant sensations and incorrect depth when low-resolution areas fall within their field of vision. In this study, we conducted an experiment on the relationship between the viewing field and the perception of image resolution, and determined thresholds of image-resolution perception for various positions in the viewing field. The results showed that participants could not distinguish between the high-resolution stimulus and a stimulus reduced to 63 ppi at positions more than 8 deg from the gaze point. Moreover, at positions shifted further, to 11 and 13 deg from the gaze point, participants could not distinguish between the high-resolution stimulus and stimuli whose resolution densities were reduced to 42 and 25 ppi, respectively. Hence, we propose a composition of multi-resolution images that achieves data reduction (compression) without observers perceiving unpleasant sensations or incorrect depth.
Improving PET spatial resolution and detectability for prostate cancer imaging
NASA Astrophysics Data System (ADS)
Bal, H.; Guerin, L.; Casey, M. E.; Conti, M.; Eriksson, L.; Michel, C.; Fanti, S.; Pettinato, C.; Adler, S.; Choyke, P.
2014-08-01
Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity is discussed, together with the latest developments in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state-of-the-art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high-resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%.
The importance of vertical resolution in the free troposphere for modeling intercontinental plumes
NASA Astrophysics Data System (ADS)
Zhuang, Jiawei; Jacob, Daniel J.; Eastham, Sebastian D.
2018-05-01
Chemical plumes in the free troposphere can preserve their identity for more than a week as they are transported on intercontinental scales. Current global models cannot reproduce this transport. The plumes dilute far too rapidly due to numerical diffusion in sheared flow. We show how model accuracy can be limited by either horizontal resolution (Δx) or vertical resolution (Δz). Balancing horizontal and vertical numerical diffusion, and weighing computational cost, implies an optimal grid resolution ratio (Δx/Δz)opt ~ 1000 for simulating the plumes. This is considerably higher than current global models (Δx/Δz ~ 20) and explains the rapid plume dilution in the models as caused by insufficient vertical resolution. Plume simulations with the Geophysical Fluid Dynamics Laboratory Finite-Volume Cubed-Sphere Dynamical Core (GFDL-FV3) over a range of horizontal and vertical grid resolutions confirm this limiting behavior. Our highest-resolution simulation (Δx ≈ 25 km, Δz ≈ 80 m) preserves the maximum mixing ratio in the plume to within 35% after 8 days in strongly sheared flow, a drastic improvement over current models. Adding free tropospheric vertical levels in global models is computationally inexpensive and would also improve the simulation of water vapor.
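The optimal-ratio argument implies a quick consistency check on any model grid; the current-model numbers below are illustrative round figures, not taken from a specific model:

```python
def required_dz_km(dx_km, ratio=1000.0):
    """Vertical grid spacing needed to balance horizontal and vertical
    numerical diffusion, given the paper's optimal ratio (dx/dz)_opt ~ 1000."""
    return dx_km / ratio

# An illustrative current global model: dx ~ 200 km but dz ~ 10 km in the
# free troposphere, so dx/dz ~ 20 and vertical diffusion dominates.
dx_current_km, dz_current_km = 200.0, 10.0
print(dx_current_km / dz_current_km)     # 20.0

# To balance diffusion at dx = 200 km, dz should be ~0.2 km (200 m),
# i.e. roughly 50 times finer vertical spacing than the illustrative model.
dz_balanced_m = required_dz_km(dx_current_km) * 1000.0
print(dz_balanced_m)                     # 200.0
```

This is the sense in which added vertical levels are the cheap route to better plume transport: reaching the same balance by refining Δx instead would require a ~1000-fold smaller horizontal grid spacing at fixed Δz.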
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. 
Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.
Macro-actor execution on multilevel data-driven architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Najjar, W.
1988-12-31
The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.
NASA Astrophysics Data System (ADS)
Jeon, Wonju; Lee, Sang-Hee
2012-12-01
In our previous study, we defined the branch length similarity (BLS) entropy for a simple network consisting of a single node and numerous branches. As a first application of this entropy to characterizing shapes, the BLS entropy profiles of 20 battle tank shapes were calculated from simple networks created by connecting pixels on the boundary of each shape; the profiles successfully characterized the tank shapes through comparison of their BLS entropy profiles. Following that application, the entropy was used to characterize emotional human faces, such as happy and sad expressions, and to measure the degree of complexity of termite tunnel networks. These applications indirectly indicate that the BLS entropy profile can be a useful tool for characterizing networks and shapes. However, the ability of the BLS entropy to characterize a shape depends on the image resolution, because the entropy is determined by the number of nodes on the boundary of the shape: higher resolution means more nodes. If the entropy is to be widely used in the scientific community, the effect of resolution on the entropy profile should be understood. In the present study, we mathematically investigated the BLS entropy profile of a shape at infinite resolution and numerically investigated the variation in the pattern of the entropy profile caused by changes in resolution in the finite-resolution case.
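A minimal version of the BLS entropy for a single-node network can be written directly from the definition: the Shannon entropy of the normalized branch lengths. The division by ln(n), so that n equal-length branches give exactly 1, follows the normalization used in the authors' earlier work and is an assumption of this sketch:

```python
import math

def bls_entropy(branch_lengths):
    """Branch length similarity (BLS) entropy of a single-node network:
    Shannon entropy of the branch lengths normalized to probabilities,
    divided by ln(n) so that n identical branches give exactly 1.0
    (normalization assumed here, following the authors' earlier work)."""
    total = sum(branch_lengths)
    p = [length / total for length in branch_lengths]
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / math.log(len(branch_lengths))

# Identical branches: maximally similar, entropy 1.0.
print(bls_entropy([1.0, 1.0, 1.0, 1.0]))        # 1.0
# One branch much longer than the rest: similarity, and entropy, drop.
print(bls_entropy([1.0, 1.0, 1.0, 10.0]) < 1.0)  # True
```

The resolution dependence discussed in the abstract enters through n: each boundary pixel contributes a branch, so finer images change both the number and the lengths of the branches entering this formula.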
Nonlinear ultrasonic imaging with X wave
NASA Astrophysics Data System (ADS)
Du, Hongwei; Lu, Wei; Feng, Huanqing
2009-10-01
The X wave has a large depth of field and may have important applications in ultrasonic imaging to provide a high frame rate (HFR). However, HFR systems suffer from lower spatial resolution. In this paper, a study of nonlinear imaging with X waves is presented to improve the resolution. A theoretical description of a realizable nonlinear X wave is reported. The nonlinear field is simulated by solving the KZK nonlinear wave equation with a time-domain difference method. The results show that the second-harmonic field of the X wave has a narrower mainlobe and lower sidelobes than the fundamental field. To evaluate the imaging effect of the X wave, an imaging model involving numerical solution of the KZK equation, the Rayleigh-Sommerfeld integral, band-pass filtering and envelope detection is constructed to obtain 2D fundamental and second-harmonic images of scatterers in a tissue-like medium. The results indicate that with the X wave, the harmonic image has higher spatial resolution throughout the entire imaging region than the fundamental image, but higher sidelobes occur compared to conventional focused imaging. An HFR imaging method with higher spatial resolution is thus feasible, provided an apodization method is used to suppress sidelobes.
Global tropospheric ozone modeling: Quantifying errors due to grid resolution
NASA Astrophysics Data System (ADS)
Wild, Oliver; Prather, Michael J.
2006-06-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Kumar, Sujay V.; Srikishen, Jayanthi; Jedlovec, Gary J.
2011-01-01
It is hypothesized that high-resolution, accurate representations of surface properties such as soil moisture and sea surface temperature are necessary to improve simulations of summertime pulse-type convective precipitation in high resolution models. This paper presents model verification results of a case study period from June-August 2008 over the Southeastern U.S. using the Weather Research and Forecasting numerical weather prediction model. Experimental simulations initialized with high-resolution land surface fields from the NASA Land Information System (LIS) and sea surface temperature (SST) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) are compared to a set of control simulations initialized with interpolated fields from the National Centers for Environmental Prediction 12-km North American Mesoscale model. The LIS land surface and MODIS SSTs provide a more detailed surface initialization at a resolution comparable to the 4-km model grid spacing. Soil moisture from the LIS spin-up run is shown to respond better to the extreme rainfall of Tropical Storm Fay in August 2008 over the Florida peninsula. The LIS has slightly lower errors and higher anomaly correlations in the top soil layer, but exhibits a stronger dry bias in the root zone. The model sensitivity to the alternative surface initial conditions is examined for a sample case, showing that the LIS/MODIS data substantially impact surface and boundary layer properties.
Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution
Bishara, Waheb; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan
2010-01-01
We demonstrate lensfree holographic microscopy on a chip to achieve ~0.6 µm spatial resolution corresponding to a numerical aperture of ~0.5 over a large field-of-view of ~24 mm2. By using partially coherent illumination from a large aperture (~50 µm), we acquire lower resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel shifting based super-resolution algorithm to effectively recover much higher resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire sensor chip active area, which is also equivalent to the imaging field-of-view (24 mm2) due to unit magnification. We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, as well as Caenorhabditis elegans. PMID:20588977
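The sub-pixel shifting idea behind pixel super-resolution can be illustrated with a minimal shift-and-add sketch (an illustrative simplification under idealized assumptions, not the authors' reconstruction algorithm): low-resolution frames whose relative sub-pixel shifts are known are interleaved onto a finer grid.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive sub-pixel shift-and-add super-resolution.

    frames : list of 2-D low-resolution arrays (same shape)
    shifts : list of (dy, dx) sub-pixel shifts in low-res pixel units
    factor : integer up-sampling factor
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Round each frame's shift to the nearest high-res grid offset.
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1  # leave unobserved high-res pixels at zero
    return acc / cnt
```

With four frames shifted by half a pixel in each direction and a factor of 2, every high-resolution pixel receives exactly one observation.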
Numerical MHD study for plasmoid instability in uniform resistivity
NASA Astrophysics Data System (ADS)
Shimizu, Tohru; Kondoh, Koji; Zenitani, Seiji
2017-11-01
The plasmoid instability (PI) arising under uniform resistivity is studied numerically with an MHD code based on the HLLD scheme. It is shown that the PI observed in numerical studies may often include a numerical (non-physical) tearing instability caused by numerical dissipation. As the numerical resolution is increased, the numerical tearing instability gradually disappears and the physical tearing instability remains, so that convergence of the numerical results is observed. Note that the reconnection rate observed for the numerical tearing instability can be higher than that of the physical tearing instability. Independently of whether it is numerical or physical, the tearing instability can be classified as symmetric or asymmetric. The symmetric tearing instability tends to occur when the thinning of the current sheet is stopped by physical or numerical dissipation, often resulting in drastic changes in the plasmoid chain's structure and activity. In this paper, even after eliminating the numerical tearing instability, we could not identify a critical Lundquist number Sc beyond which the PI is fully developed; this suggests that Sc does not exist, at least around S = 10^5.
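For context, the Lundquist number governing this instability is S = L v_A / η, with Alfvén speed v_A = B/√(μ0 ρ) and magnetic diffusivity η. A small helper (the input values in the usage note are hypothetical, chosen only to make the arithmetic transparent):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def lundquist_number(B, rho, L, eta):
    """Lundquist number S = L * v_A / eta.

    B   : magnetic field strength [T]
    rho : mass density [kg/m^3]
    L   : current-sheet length scale [m]
    eta : magnetic diffusivity [m^2/s]
    """
    v_alfven = B / math.sqrt(MU0 * rho)
    return L * v_alfven / eta
```

For example, with B and rho chosen so that v_A = 1 m/s, L = 1e5 m and eta = 1 m^2/s give S = 1e5, the regime discussed in the abstract.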
On a turbulent wall model to predict hemolysis numerically in medical devices
NASA Astrophysics Data System (ADS)
Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung
2017-11-01
Analyzing the degradation of red blood cells is very important for medical devices involving blood flow. Blood shear stress is recognized as the dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress in the regions near the wall, so predicting hemolysis numerically can require a very fine mesh and large computational resources. To resolve this issue, the purpose of this study is to develop a turbulent wall model that predicts hemolysis more efficiently. To decrease the numerical error of hemolysis prediction at coarse grid resolution, we divide the computational domain into two regions and apply a different approach to each. In the near-wall region with a steep velocity gradient, an analytic approach using a modeled velocity profile reduces the numerical error and allows a coarse grid resolution; we adopt the Van Driest law as the model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for turbulent flows inside a cannula and in centrifugal pumps. The results show that the proposed turbulent wall model for hemolysis improves the computational efficiency significantly for engineering applications.
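The Van Driest law mentioned above gives the mean velocity profile by damping the mixing length near the wall: integrating du+/dy+ = 2 / (1 + sqrt(1 + 4 l_m^2)) with l_m = κ y+ (1 − exp(−y+/A+)) yields u+(y+). A sketch of that integration (standard constants κ = 0.41 and A+ = 26 are assumed; this is not the authors' implementation):

```python
import numpy as np

KAPPA, A_PLUS = 0.41, 26.0  # standard model constants (assumed values)

def van_driest_profile(y_plus_max=300.0, n=30000):
    """Mean velocity profile u+(y+) from the Van Driest mixing-length model:
    du+/dy+ = 2 / (1 + sqrt(1 + 4*l_m**2)), with the damped mixing length
    l_m = KAPPA * y+ * (1 - exp(-y+/A_PLUS)).
    Returns (y+, u+) arrays via trapezoidal integration from the wall."""
    y = np.linspace(0.0, y_plus_max, n)
    damping = 1.0 - np.exp(-y / A_PLUS)
    lm = KAPPA * y * damping          # damped mixing length
    dudy = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * lm**2))
    u = np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))
    return y[1:], u
```

The profile recovers u+ ≈ y+ in the viscous sublayer and the logarithmic law further out, which is what allows a wall model to skip resolving the steep near-wall gradient.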
Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution
NASA Astrophysics Data System (ADS)
Wild, O.; Prather, M. J.
2005-12-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.
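The geometric convergence described here can be quantified with the standard observed-order and Richardson-extrapolation formulas (a generic sketch of those textbook relations, not the authors' budget diagnostic):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, ratio=2.0):
    """Observed convergence order from solutions at three successively
    refined resolutions (refinement ratio `ratio`), assuming smooth,
    monotone convergence:
        p = log(|f_c - f_m| / |f_m - f_f|) / log(ratio)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) \
        / math.log(ratio)

def richardson_extrapolate(f_medium, f_fine, p, ratio=2.0):
    """Richardson estimate of the grid-converged value from the two
    finest solutions and the observed order p."""
    return f_fine + (f_fine - f_medium) / (ratio**p - 1.0)
```

For a quantity behaving as f(h) = F + C h^2, the three solutions at h = 4, 2, 1 recover p = 2 and the extrapolated value F exactly.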
On Computations of Duct Acoustics with Near Cut-Off Frequency
NASA Technical Reports Server (NTRS)
Dong, Thomas Z.; Povinelli, Louis A.
1997-01-01
The cut-off is a unique feature of duct acoustics due to the presence of duct walls. A study of this cut-off effect on the computation of duct acoustics is performed in the present work. The results show that computing duct acoustic modes near cut-off requires higher numerical resolution than for other modes, to avoid their being numerically cut off. Duct acoustic problems in Category 2 are solved by the DRP finite-difference scheme with the selective artificial damping method, and the results are presented and compared to reference solutions.
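The cut-off behaviour itself is easy to state: for a two-dimensional hard-walled duct of height h, mode n propagates only above f_n = n c / (2h); below that frequency the axial wavenumber becomes purely imaginary and the mode is evanescent. A quick check of this textbook relation (not the paper's DRP computation):

```python
import cmath
import math

def axial_wavenumber(freq, mode_n, height, c=343.0):
    """Axial wavenumber k_x = sqrt(k^2 - (n*pi/h)^2) of mode n in a 2-D
    hard-walled duct of the given height [m]; purely imaginary
    (evanescent, i.e. cut off) below f_n = n*c/(2*height)."""
    k = 2.0 * math.pi * freq / c          # free-space wavenumber
    k_t = mode_n * math.pi / height       # transverse wavenumber
    return cmath.sqrt(complex(k**2 - k_t**2))
```

With h = 1 m and c = 343 m/s, mode 1 cuts on at about 171.5 Hz: at 200 Hz the wavenumber is real (propagating), at 100 Hz it is imaginary (cut off).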
Dictionary-based image reconstruction for superresolution in integrated circuit imaging.
Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim
2015-06-01
Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
Inversion of high frequency surface waves with fundamental and higher modes
Xia, J.; Miller, R.D.; Park, C.B.; Tian, G.
2003-01-01
The phase velocity of Rayleigh waves in a layered earth model is a function of frequency and four groups of earth parameters: compressional (P)-wave velocity, shear (S)-wave velocity, density, and layer thickness. For the fundamental mode of Rayleigh waves, analysis of the Jacobian matrix at high frequencies (2-40 Hz) provides a measure of the sensitivity of the dispersion curve to the earth model parameters. S-wave velocity is the dominant influence among the four earth model parameters, and this conclusion holds for higher modes of high-frequency Rayleigh waves as well. Our numerical modeling based on analysis of the Jacobian matrix supports at least two quite exciting higher-mode properties. First, for fundamental- and higher-mode Rayleigh-wave data with the same wavelength, higher modes can "see" deeper than the fundamental mode. Second, higher-mode data can increase the resolution of the inverted S-wave velocities. Real-world examples show that the inversion process can be stabilized and the resolution of the S-wave velocity model improved when the fundamental- and higher-mode data are inverted simultaneously. © 2002 Elsevier Science B.V. All rights reserved.
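Sensitivity analyses of this kind rest on the Jacobian of the dispersion curve with respect to model parameters, typically formed by finite differences. A generic central-difference sketch (the test function used in the usage note is a hypothetical toy, not a real Rayleigh-wave dispersion relation):

```python
import numpy as np

def jacobian(func, params, rel_step=1e-6):
    """Central-difference Jacobian of a vector-valued function (e.g.
    phase velocities at several frequencies) with respect to the model
    parameters. Returns an (n_outputs, n_params) array."""
    params = np.asarray(params, dtype=float)
    f0 = np.asarray(func(params))
    J = np.zeros((f0.size, params.size))
    for j in range(params.size):
        h = rel_step * max(abs(params[j]), 1.0)   # scaled step size
        up, dn = params.copy(), params.copy()
        up[j] += h
        dn[j] -= h
        J[:, j] = (np.asarray(func(up)) - np.asarray(func(dn))) / (2.0 * h)
    return J
```

Columns of large magnitude identify the dominant parameters, which is how the S-wave velocity dominance cited above is diagnosed.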
Dust devil characteristics and associated dust entrainment based on large-eddy simulations
NASA Astrophysics Data System (ADS)
Klose, Martina; Kwidzinski, Nick; Shao, Yaping
2015-04-01
The characteristics of dust devils, such as occurrence frequency, lifetime, size, and intensity, are usually inferred from in situ field measurements and remote sensing. Numerical models, e.g. large-eddy simulation (LES) models, have also been established as tools to investigate dust devils and their structures. However, most LES models do not contain a dust module. Here, we present results from simulations using the WRF-LES model coupled to the convective turbulent dust emission (CTDE) scheme of Klose et al. (2014). The scheme describes the stochastic process of aerodynamic dust entrainment in the absence of saltation and therefore allows for dust emission even below the threshold friction velocity for saltation. Numerical experiments have been conducted for different atmospheric stability and background wind conditions at 10 m horizontal resolution. A dust devil tracking algorithm is used to identify dust devils in the simulation results. The detected dust devils are statistically analyzed with regard to radius, pressure drop, lifetime, and turbulent wind speeds, among other properties. An additional simulation at higher horizontal resolution (2 m) is conducted for conditions that are especially favorable for dust devil development, i.e. unstable atmospheric stratification and weak mean winds. The higher resolution enables the identification of smaller dust devils and a more detailed structural analysis. Dust emission fluxes, dust concentrations, and dust mass budgets are calculated from the simulations. The results are compared to field observations reported in the literature.
Optical Analysis of an Ultra-High resolution Two-Mirror Soft X-Ray Microscope
NASA Technical Reports Server (NTRS)
Shealy, David L.; Wang, Cheng; Hoover, Richard B.
1994-01-01
This work summarizes, for a Schwarzschild microscope, the relationships between numerical aperture (NA), magnification, diameter of the primary mirror, radius of curvature of the secondary mirror, and total length of the microscope. To achieve resolutions better than the 3.3λ of a perfectly aligned and fabricated spherical Schwarzschild microscope, it is necessary to use aspherical surfaces to control higher-order aberrations. For an NA of 0.35, the aspherical Head microscope provides a diffraction-limited resolution of 1.4λ, where the aspherical surfaces differ from the best-fit spherical surface by approximately 1 micrometer. However, the angle of incidence varies significantly over the primary and secondary mirrors, which will require graded multilayer coatings to operate near peak reflectivity. For higher numerical apertures, the variation of the angle of incidence over the secondary mirror surface becomes a serious problem that must be solved before multilayer coatings can be used for this application. Tolerance analysis of the spherical Schwarzschild microscope has shown that water-window operation will require 2-3 times tighter tolerances to achieve similar performance with 130 Å radiation. Surface contour errors have been shown to have a significant impact on the MTF and must be controlled to a peak-to-valley variation of 50-100 Å with a frequency of 8 periods over the surface of a mirror.
Resolution requirements for numerical simulations of transition
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Krist, Steven E.; Hussaini, M. Yousuff
1989-01-01
The resolution requirements for direct numerical simulations of transition to turbulence are investigated. A reliable resolution criterion is determined from the results of several detailed simulations of channel and boundary-layer transition.
Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello
2008-12-01
This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in beamforming. Beamforming's main disadvantages are poor spatial resolution at low frequency and spatial aliasing at higher frequency, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement. The proposed approach is theoretically and numerically investigated by means of simple sound propagation models, demonstrating its efficiency in reducing spatial aliasing. A number of different array configurations are numerically investigated, together with the most important parameters governing the technique. A set of numerical results for the case of a planar rotating array is shown, together with a first experimental validation of the method.
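The aliasing mechanism can be seen in the delay-and-sum response of a static uniform linear array (a minimal sketch of the classical array factor; the moving-array scheme proposed in the paper is precisely what breaks this spatial periodicity):

```python
import numpy as np

def array_response(n_mics, spacing, wavelength, steer_deg, look_deg):
    """Normalized delay-and-sum response magnitude of a uniform linear
    array steered to `steer_deg`, evaluated for a plane wave arriving
    from `look_deg`. Grating lobes (spatial aliasing) appear whenever
    spacing > wavelength / 2."""
    k = 2.0 * np.pi / wavelength
    m = np.arange(n_mics)
    phase = k * spacing * m * (np.sin(np.radians(look_deg))
                               - np.sin(np.radians(steer_deg)))
    return abs(np.exp(1j * phase).sum()) / n_mics
```

With half-wavelength spacing an endfire source is rejected, but at one-wavelength spacing the same source produces a full-strength grating lobe, i.e. a false source.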
McDonnell, Liam A; Heeren, Ron M A; de Lange, Robert P J; Fletcher, Ian W
2006-09-01
To expand the role of high spatial resolution secondary ion mass spectrometry (SIMS) in biological studies, numerous developments have been reported in recent years for enhancing the molecular ion yield of high-mass molecules. These include surface modification, such as matrix-enhanced SIMS and metal-assisted SIMS, and polyatomic primary ions. Using rat brain tissue sections and a bismuth primary ion gun able to produce atomic and polyatomic primary ions, we report here how the sensitivity enhancements provided by these developments are additive. Combined surface modification and polyatomic primary ions provided approximately 15.8 times more signal than atomic primary ions on the untreated sample, whereas surface modification alone and polyatomic primary ions alone yielded approximately 3.8 and 8.4 times more signal, respectively. This higher sensitivity is used to generate chemically specific images of higher-mass biomolecules using a single molecular ion peak.
NASA Astrophysics Data System (ADS)
Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier
2018-04-01
We complete the analysis of the data obtained during the experimental campaign around the semi-circular bay of Quiberon, France, over two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains with increasing horizontal resolution down to 100 m, and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (in space and time) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited to this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library, and temporal adaptivity will be accomplished through local time stepping. In this presentation we present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
Climate simulations and projections with a super-parameterized climate model
Stan, Cristiana; Xu, Li
2014-07-01
The mean climate and its variability are analyzed in a suite of numerical experiments with a fully coupled general circulation model in which subgrid-scale moist convection is explicitly represented through embedded 2D cloud-system resolving models. Control simulations forced by the present-day, fixed atmospheric carbon dioxide concentration are conducted using two horizontal resolutions and validated against observations and reanalyses. The mean state simulated by the higher resolution configuration has smaller biases. Climate variability also shows some sensitivity to resolution, but not as uniformly as in the case of the mean state. The interannual and seasonal variability are better represented in the simulation at lower resolution, whereas the subseasonal variability is more accurate in the higher resolution simulation. The equilibrium climate sensitivity of the model is estimated from a simulation forced by an abrupt quadrupling of the atmospheric carbon dioxide concentration. The equilibrium climate sensitivity temperature of the model is 2.77 °C, slightly smaller than the mean value (3.37 °C) of contemporary models using conventional representations of cloud processes. The climate change simulation forced by the representative concentration pathway 8.5 scenario projects an increase in the frequency of severe droughts over most of North America.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; ...
2016-01-01
Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.
Subpixel target detection and enhancement in hyperspectral images
NASA Astrophysics Data System (ADS)
Tiwari, K. C.; Arora, M.; Singh, D.
2011-06-01
Hyperspectral data, with the higher information content afforded by their finer spectral resolution, are increasingly used for various remote sensing applications, including information extraction at the subpixel level. Matching fine spatial resolution data are, however, usually lacking, particularly for target detection applications. Thus, there always exists a tradeoff between spectral and spatial resolution due to considerations of the type of application, its cost, and the associated analytical and computational complexities. Typically, whenever an object, whether manmade, natural, or any ground cover class (called a target, endmember, component, or class), is spectrally resolved but not spatially, mixed pixels result in the image. Numerous disparate manmade and/or natural substances may thus occur inside such mixed pixels, giving rise to mixed pixel classification or subpixel target detection problems. Various spectral unmixing models, such as Linear Mixture Modeling (LMM), are in vogue for recovering the components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel; it does not, however, provide the spatial distribution of these abundance fractions within the pixel, which limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-Euclidean-distance-based super-resolution mapping method is presented that achieves subpixel target detection in hyperspectral images by adjusting the spatial distribution of abundance fractions within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
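The inverse-Euclidean-distance idea can be sketched as follows: each sub-pixel of a mixed pixel is scored by the inverse of its distance to the neighbouring pixel centres, weighted by the neighbours' target abundances, and the highest-scoring sub-pixels receive the target class (an illustrative toy version, not the authors' full method):

```python
import numpy as np

def allocate_subpixels(fraction, neighbor_fracs, scale=4):
    """Allocate a pixel's target abundance `fraction` among its
    scale x scale sub-pixels using inverse-Euclidean-distance
    attraction towards the 8 neighbouring pixels.

    neighbor_fracs : 3x3 array of neighbour target fractions
                     (centre entry ignored).
    Returns a boolean scale x scale map with round(fraction * scale**2)
    sub-pixels switched on."""
    attract = np.zeros((scale, scale))
    # Sub-pixel centres in pixel units; the centre pixel spans [0, 1).
    coords = (np.arange(scale) + 0.5) / scale
    for ni in range(3):
        for nj in range(3):
            if ni == 1 and nj == 1:
                continue
            cy, cx = ni - 1 + 0.5, nj - 1 + 0.5   # neighbour centre
            for i, y in enumerate(coords):
                for j, x in enumerate(coords):
                    d = np.hypot(y - cy, x - cx)
                    attract[i, j] += neighbor_fracs[ni, nj] / d
    n_on = int(round(fraction * scale * scale))
    order = np.argsort(attract, axis=None)[::-1][:n_on]
    out = np.zeros(scale * scale, dtype=bool)
    out[order] = True
    return out.reshape(scale, scale)
```

With the target abundance concentrated in the right-hand neighbour, the target sub-pixels cluster on the right side of the pixel, which is the spatial-dependence behaviour plain unmixing cannot provide.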
Obtaining high-resolution velocity spectra using weighted semblance
NASA Astrophysics Data System (ADS)
Ebrahimi, Saleh; Kahoo, Amin Roshandel; Porsani, Milton J.; Kalateh, Ali Nejati
2017-02-01
Velocity analysis measures coherency along hyperbolic or non-hyperbolic trajectory time windows to build velocity spectra. Accuracy and resolution are strictly related to the method of coherency measurement. Semblance, the most common coherence measure, has poor velocity resolution, which affects one's ability to distinguish and pick distinct peaks. Increasing the resolution of the semblance velocity spectra improves the accuracy of the velocities estimated for normal-moveout correction and stacking. The low resolution of semblance spectra stems from its low sensitivity to velocity changes. In this paper, we present a new weighted semblance method that ensures high-resolution velocity spectra. To increase the resolution of the semblance spectra, we introduce two weighting functions into the semblance equation, based on the ratio of the first to second singular values of the time window and on the position of the seismic wavelet in the time window. We test the method on both synthetic and real field data to compare the resolution of the weighted and conventional semblance methods. Numerical examples with synthetic and real seismic data indicate that the proposed weighted semblance method provides higher resolution than conventional semblance and can separate reflectors which are mixed in the conventional semblance spectrum.
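Conventional semblance, the baseline that the weighting functions modify, is the ratio of stacked energy to total energy over the trajectory window (the standard textbook definition is sketched below; the paper's SVD-based weights are not reproduced here):

```python
import numpy as np

def semblance(window):
    """Conventional semblance of an N-trace time window with shape
    (n_samples, n_traces): stacked energy divided by N times the total
    energy. Equals 1 for identical traces, 0 for perfectly
    cancelling traces."""
    num = (window.sum(axis=1) ** 2).sum()
    den = window.shape[1] * (window ** 2).sum()
    return num / den
```

Identical traces along the trial trajectory give semblance 1; traces of opposite polarity give 0, and realistic data fall in between, producing the broad peaks whose poor resolution the paper addresses.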
2016-10-27
Today's VIS image is of Palikir Crater in Terra Sirenum. The inner rim of the crater is dissected with numerous gullies. In higher resolution images from other imagers these gullies are the location of changing lineae, which appear to grow and retreat as seasons change. Orbit Number: 65311 Latitude: -41.6177 Longitude: 202.206 Instrument: VIS Captured: 2016-09-03 13:12 http://photojournal.jpl.nasa.gov/catalog/PIA21152
Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.
Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P
2015-02-20
Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
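The resolution-scaling property exploited here follows from the single-FFT Fresnel transform, whose reconstruction pixel pitch is λz/(NΔ): zero-padding the hologram increases N and therefore refines the numerical pitch, which is how holograms recorded with dissimilar parameters can be resolution-matched. A sketch (simplified single-FFT formulation with illustrative parameter values; constant phase prefactors are omitted):

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, z, pitch, pad_to=None):
    """Single-FFT Fresnel reconstruction of a square hologram.
    Zero-padding to `pad_to` samples scales the reconstruction pixel
    pitch lambda*z/(N*pitch). Returns (field, output_pitch)."""
    if pad_to:
        padded = np.zeros((pad_to, pad_to), dtype=complex)
        n = hologram.shape[0]
        o = (pad_to - n) // 2
        padded[o:o + n, o:o + n] = hologram
        hologram = padded
    n = hologram.shape[0]
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    # Quadratic chirp of the Fresnel kernel (constant prefactor omitted).
    chirp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))
    out_pitch = wavelength * z / (n * pitch)
    return field, out_pitch
```

Doubling N by zero-padding halves the numerical pixel pitch at the same reconstruction distance, the resolution-matching effect described in the abstract.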
NASA Astrophysics Data System (ADS)
Choi, Hyun-Jung; Lee, Hwa Woon; Jeon, Won-Bae; Lee, Soon-Hwan
2012-01-01
This study evaluated, in an atmospheric and air quality model, the spatial variability of low-level coastal winds and ozone concentration as affected by sea surface temperature (SST) forcing with different thermal gradients. Several numerical experiments examined the effect of SST forcing on the coastal atmosphere and air quality. The RAMS-CAMx model was used to estimate the sensitivity to two different resolutions of SST forcing during the episode day, as well as to simulate the low-level coastal winds and ozone concentration over a complex coastal area. The regional model reproduced the qualitative effect of SST forcing and thermal gradients on the coastal flow. The high-resolution SST derived from NGSST-O (New Generation Sea Surface Temperature Open Ocean) forcing, resolving the warm SST, appeared to enhance the mean response of low-level winds in coastal regions. These wind variations have important implications for coastal air quality: a higher ozone concentration was forecast when high-resolution SST data were used, with the appropriate limitation of temperature, regional wind circulation, vertical mixing height and the nocturnal boundary layer (NBL) near coastal areas.
Joint denoising, demosaicing, and chromatic aberration correction for UHD video
NASA Astrophysics Data System (ADS)
Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank
2017-09-01
High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices. Further resolution increases pose numerous challenges. Due to the reduced pixel size, the amount of captured light also decreases, raising the noise level. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high-quality lenses are used, some chromatic aberration artefacts remain. The noise level additionally increases due to the higher frame rates. To reduce the complexity and price of the camera, a single sensor captures all three colors by relying on a Color Filter Array. To obtain a full-resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method that jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we lower the overall complexity of the system and avoid introducing new artefacts. To reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower-resolution color image with a higher-resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm, based on widely available, robust and simple numerical methods, that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
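At pixel level, the simplest convex formulation trades off fidelity to the monochrome intensity against fidelity to the colour luminance with a quadratic penalty, which has a closed-form minimiser (a deliberately minimal unconstrained sketch; the paper's constrained least-squares formulation is richer):

```python
import numpy as np

def fuse(mono, color_lum, lam=4.0):
    """Pixel-wise least-squares fusion of a high-resolution monochrome
    image with the luminance of an (already upsampled) colour image:
    minimise (x - color_lum)**2 + lam * (x - mono)**2 per pixel.
    The convex objective's closed-form minimiser is the weighted
    average below, evaluated independently at every pixel."""
    return (color_lum + lam * mono) / (1.0 + lam)
```

Because each pixel is solved independently, the computation is embarrassingly parallel, the property highlighted in the abstract.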
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR), in conjunction with higher-order upwind finite-difference methods, has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of the boundary geometry is important. The complex geometry is represented using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
NASA Astrophysics Data System (ADS)
Boxi, Lin; Chao, Yan; Shusheng, Chen
2017-10-01
This work focuses on the numerical dissipation characteristics of the high-order flux reconstruction (FR) method combined with different numerical fluxes in turbulent flows. The well-known Roe and AUSM+ numerical fluxes, together with their low-dissipation variants (LMRoe, SLAU2) and higher-resolution variants (HR-LMRoe, HR-SLAU2), are incorporated into the FR framework, and the dissipation interplay of these combinations is investigated in implicit large eddy simulation. The numerical dissipation stemming from these convective numerical fluxes is quantified by simulating the inviscid Gresho vortex, the transitional Taylor-Green vortex and homogeneous decaying isotropic turbulence. The results suggest that the low-dissipation variants are preferable to their original forms in both high-order and low-order cases, whereas HR-SLAU2 yields only marginal improvements and HR-LMRoe degrades the solution at high order. At high order, the influence of the numerical flux is reduced, and its dissipation may not be sufficient to provide physically consistent turbulence when the flow is under-resolved.
Normal modes of the world's oceans: A numerical investigation using Proudman functions
NASA Technical Reports Server (NTRS)
Sanchez, Braulio V.; Morrow, Dennis
1993-01-01
The numerical modeling of the normal modes of the global oceans is addressed. The results of such modeling could be expected to serve as a guide in the analysis of observations and measurements intended to detect these modes. The numerical computation of normal modes of the global oceans is a field in which several investigations have obtained results during the past 15 years. The results seem to be model-dependent to an unsatisfactory extent. Some modeling areas, such as higher resolution of the bathymetry, inclusion of self-attraction and loading, the role of the Arctic Ocean, and systematic testing by means of diagnostic models are addressed. The results show that the present state of the art is such that a final solution to the normal mode problem still lies in the future. The numerical experiments show where some of the difficulties are and give some insight as to how to proceed in the future.
Nahmani, Marc; Lanahan, Conor; DeRosier, David; Turrigiano, Gina G.
2017-01-01
Superresolution microscopy has fundamentally altered our ability to resolve subcellular proteins, but improving on these techniques to study dense structures composed of single-molecule-sized elements has been a challenge. One possible approach to enhance superresolution precision is to use cryogenic fluorescent imaging, reported to reduce fluorescent protein bleaching rates, thereby increasing the precision of superresolution imaging. Here, we describe an approach to cryogenic photoactivated localization microscopy (cPALM) that permits the use of a room-temperature high-numerical-aperture objective lens to image frozen samples in their native state. We find that cPALM increases photon yields and show that this approach can be used to enhance the effective resolution of two photoactivatable/switchable fluorophore-labeled structures in the same frozen sample. This higher resolution, two-color extension of the cPALM technique will expand the accessibility of this approach to a range of laboratories interested in more precise reconstructions of complex subcellular targets. PMID:28348224
NASA Astrophysics Data System (ADS)
Mukherjee, A.; Shankar, D.; Chatterjee, Abhisek; Vinayachandran, P. N.
2018-06-01
We simulate the East India Coastal Current (EICC) using two numerical models (resolution 0.1° × 0.1°), an oceanic general circulation model (OGCM) called Modular Ocean Model and a simpler, linear, continuously stratified (LCS) model, and compare the simulated current with observations from moorings equipped with acoustic Doppler current profilers deployed on the continental slope in the western Bay of Bengal (BoB). We also carry out numerical experiments to analyse the processes. Both models simulate well the annual cycle of the EICC, but the performance degrades for the intra-annual and intraseasonal components. In a model-resolution experiment, both models (run at a coarser resolution of 0.25° × 0.25°) simulate well the currents in the equatorial Indian Ocean (EIO), but the performance of the high-resolution LCS model as well as the coarse-resolution OGCM, which is good in the EICC regime, degrades in the eastern and northern BoB. An experiment on forcing mechanisms shows that the annual EICC is largely forced by the local alongshore winds in the western BoB and remote forcing due to Ekman pumping over the BoB, but forcing from the EIO has a strong impact on the intra-annual EICC. At intraseasonal periods, local (equatorial) forcing dominates in the south (north) because the Kelvin wave propagates equatorward in the western BoB. A stratification experiment with the LCS model shows that changing the background stratification from EIO to BoB leads to a stronger surface EICC owing to strong coupling of higher order vertical modes with wind forcing for the BoB profiles. These high-order modes, which lead to energy propagating down into the ocean in the form of beams, are important only for the current and do not contribute significantly to the sea level.
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
Sensitivity of an Antarctic Ice Sheet Model to Sub-Ice-Shelf Melting
NASA Astrophysics Data System (ADS)
Lipscomb, W. H.; Leguy, G.; Urban, N. M.; Berdahl, M.
2017-12-01
Theory and observations suggest that marine-based sectors of the Antarctic ice sheet could retreat rapidly under ocean warming and increased melting beneath ice shelves. Numerical models of marine ice sheets vary widely in sensitivity, depending on grid resolution and the parameterization of key processes (e.g., calving and hydrofracture). Here we study the sensitivity of the Antarctic ice sheet to ocean warming and sub-shelf melting in standalone simulations of the Community Ice Sheet Model (CISM). Melt rates either are prescribed based on observations and high-resolution ocean model output, or are derived from a plume model forced by idealized ocean temperature profiles. In CISM, we vary the model resolution (between 1 and 8 km), Stokes approximation (shallow-shelf, depth-integrated higher-order, or 3D higher-order) and calving scheme to create an ensemble of plausible responses to sub-shelf melting. This work supports a broader goal of building statistical and reduced models that can translate large-scale Earth-system model projections to changes in Antarctic ocean temperatures and ice sheet discharge, thus better quantifying uncertainty in Antarctic-sourced sea-level rise.
The dimension of attractors underlying periodic turbulent Poiseuille flow
NASA Technical Reports Server (NTRS)
Keefe, Laurence; Moin, Parviz; Kim, John
1992-01-01
A lower bound on the Liapunov dimension, D-lambda, of the attractor underlying turbulent, periodic Poiseuille flow at a pressure-gradient Reynolds number of 3200 is calculated, on the basis of a coarse-grained (16x33x8) numerical solution, to be approximately 352. Comparison of Liapunov exponent spectra from this and a higher-resolution (16x33x16) simulation on the same spatial domain shows these spectra to have a universal shape when properly scaled. On the basis of these scaling properties, and a partial exponent spectrum from a still higher-resolution (32x33x32) simulation, it is argued that the actual dimension of the attractor underlying motion on the given computational domain is approximately 780. It is suggested that this periodic turbulent shear flow is deterministic chaos, and that a strange attractor does underlie solutions to the Navier-Stokes equations in such flows.
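The Liapunov dimension referred to above is conventionally computed from the exponent spectrum with the Kaplan-Yorke formula, D = j + (λ₁ + … + λⱼ)/|λⱼ₊₁|, where j is the largest index for which the partial sum of exponents is non-negative. A minimal sketch (the spectrum values below are invented, not from the simulation):

```python
def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke (Liapunov) dimension from exponents sorted in
    descending order: D = j + (sum of first j exponents) / |lambda_{j+1}|,
    where j is the largest index keeping the partial sum non-negative."""
    s = 0.0
    for j, lam in enumerate(lyap):
        if s + lam < 0:
            return j + s / abs(lam)
        s += lam
    return float(len(lyap))  # partial sums never go negative

# Hypothetical exponent spectrum (illustrative values only):
D = kaplan_yorke_dimension([0.9, 0.0, -0.5, -2.0])
```

In practice the full spectrum from the simulation is sorted in descending order and fed to this formula; the dimension estimate then scales with the resolved portion of the spectrum.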
Mesoscale Numerical Simulations of the IAS Circulation
NASA Astrophysics Data System (ADS)
Mooers, C. N.; Ko, D.
2008-05-01
Real-time nowcasts and forecasts of the IAS circulation have been made for several years with mesoscale resolution using the Navy Coastal Ocean Model (NCOM) implemented for the IAS. It is commonly called IASNFS and is driven by the lower resolution Global NCOM on the open boundaries, synoptic atmospheric forcing obtained from the Navy Global Atmospheric Prediction System (NOGAPS), and assimilated satellite-derived sea surface height anomalies and sea surface temperature. Here, examples of the model output are demonstrated; e.g., Gulf of Mexico Loop Current eddy shedding events and the meandering Caribbean Current jet and associated eddies. Overall, IASNFS is ready for further analysis, application to a variety of studies, and downscaling to even higher resolution shelf models. Its output fields are available online through NOAA's National Coastal Data Development Center (NCDDC), located at the Stennis Space Center.
Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution
Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.
2016-01-01
We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939
Performance Modeling of an Airborne Raman Water Vapor Lidar
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Schwemmer, G.; Berkoff, T.; Plotkin, H.; Ramos-Izquierdo, L.; Pappalardo, G.
2000-01-01
A sophisticated Raman lidar numerical model has been developed. The model has been used to simulate the performance of two ground-based Raman water vapor lidar systems. After tuning the model with these ground-based measurements, it is used to simulate the water vapor measurement capability of an airborne Raman lidar under both day- and night-time conditions for a wide range of water vapor conditions. The results indicate that, under many circumstances, the daytime measurements possess resolution comparable to that of an existing airborne differential absorption water vapor lidar, while the nighttime measurements have higher resolution. In addition, a Raman lidar is capable of measurements not possible with a differential absorption system.
A Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF)
NASA Astrophysics Data System (ADS)
Trotta, Francesco; Fenu, Elisa; Pinardi, Nadia; Bruciaferri, Diego; Giacomelli, Luca; Federico, Ivan; Coppini, Giovanni
2016-11-01
We present a numerical platform named Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF). The platform is developed for short-term forecasts and is designed to be embedded in any region of the large-scale Mediterranean Forecasting System (MFS) via downscaling. We employ CTD data collected during a campaign around the island of Elba to calibrate and validate SURF. The model requires an initial spin-up period of a few days in order to adapt the initial interpolated fields and the subsequent solutions to the higher-resolution nested grids adopted by SURF. Through a comparison with the CTD data, we quantify the improvement obtained by the SURF model over the coarse-resolution MFS model.
NASA Astrophysics Data System (ADS)
Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki
2017-12-01
This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher-resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that enable extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations, including the Madden-Julian oscillation (MJO), though only as case studies. Thanks to the big leap in the computational performance of the K computer, we could greatly increase the number of MJO events simulated, in addition to extending the integration time and increasing the horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with the relatively coarse operational models currently in use. The impacts of the sub-kilometer-resolution simulation and the multi-decadal simulations using NICAM are also reviewed.
Numerical Simulations of Vortex Generator Vanes and Jets on a Flat Plate
NASA Technical Reports Server (NTRS)
Allan, Brian G.; Yao, Chung-Sheng; Lin, John C.
2002-01-01
Numerical simulations of a single low-profile vortex generator vane, which is only a small fraction of the boundary-layer thickness, and a vortex-generating jet have been performed for flows over a flat plate. The numerical simulations were computed by solving for the steady-state solution of the Reynolds-averaged Navier-Stokes equations. The vortex-generating vane results were evaluated by comparing the strength and trajectory of the streamwise vortex to experimental particle image velocimetry measurements. From the numerical simulations of the vane case, it was observed that the Shear-Stress Transport (SST) turbulence model resulted in a better prediction of the streamwise peak vorticity and trajectory than the Spalart-Allmaras (SA) turbulence model. It is shown in this investigation that the estimated turbulent eddy viscosity near the vortex core, for both the vane and jet simulations, was higher for the SA model than for the SST model. Even though the numerical simulations of the vortex-generating vane were able to predict the trajectory of the streamwise vortex, the initial magnitude and decay of the peak streamwise vorticity were significantly underpredicted. A comparison of the positive circulation associated with the streamwise vortex showed that while the numerical simulations produced a more diffuse vortex, the vortex strength compared very well to the experimental observations. A grid resolution study for the vortex-generating vane was also performed, showing that the diffusion of the vortex was not a result of insufficient grid resolution. Comparisons were also made between a fully modeled trapezoidal vane with finite thickness and a simply modeled thin rectangular vane. The comparisons showed that the simply modeled rectangular vane produced a streamwise vortex whose strength and trajectory were very similar to those of the fully modeled trapezoidal vane.
High numerical aperture projection system for extreme ultraviolet projection lithography
Hudyma, Russell M.
2000-01-01
An optical system is described that is compatible with extreme ultraviolet radiation and comprises five reflective elements for projecting a mask image onto a substrate. The five optical elements are characterized in order from object to image as concave, convex, concave, convex, and concave mirrors. The optical system is particularly suited for ring field, step and scan lithography methods. The invention uses aspheric mirrors to minimize static distortion and balance the static distortion across the ring field width which effectively minimizes dynamic distortion. The present invention allows for higher device density because the optical system has improved resolution that results from the high numerical aperture, which is at least 0.14.
Multibeam interferometric illumination as the primary source of resolution in optical microscopy
NASA Astrophysics Data System (ADS)
Ryu, J.; Hong, S. S.; Horn, B. K. P.; Freeman, D. M.; Mermelstein, M. S.
2006-04-01
High-resolution images of a fluorescent target were obtained using a low-resolution optical detector by illuminating the target with interference patterns produced with 31 coherent beams. The beams were arranged in a cone with 78° half angle to produce illumination patterns consistent with a numerical aperture of 0.98. High-resolution images were constructed from low-resolution images taken with 930 different illumination patterns. Results for optical detectors with numerical apertures of 0.1 and 0.2 were similar, demonstrating that the resolution is primarily determined by the illuminator and not by the low-resolution detector. Furthermore, the long working distance, large depth of field, and large field of view of the low-resolution detector are preserved.
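The claim that the illuminator rather than the detector sets the resolution can be checked with a rough Abbe-type estimate, d ≈ λ/(NA_ill + NA_det) (an illustrative approximation, not the paper's analysis; the wavelength value is assumed):

```python
# Rough Abbe-type estimate (an assumption for illustration, not the paper's
# analysis): resolvable feature size d ~ lambda / (NA_ill + NA_det).
def abbe_resolution(wavelength_nm, na_ill, na_det):
    return wavelength_nm / (na_ill + na_det)

# With the 0.98-NA interferometric illuminator, swapping a 0.1-NA detector
# for a 0.2-NA one (a 532 nm wavelength is assumed) changes d only modestly:
d_low_na  = abbe_resolution(532.0, 0.98, 0.1)
d_high_na = abbe_resolution(532.0, 0.98, 0.2)
```

Since NA_ill dominates the denominator, doubling the detector NA shifts the estimate by under ten percent, consistent with the similar results reported for the two detectors.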
Dual-TRACER: High resolution fMRI with constrained evolution reconstruction.
Li, Xuesong; Ma, Xiaodong; Li, Lyu; Zhang, Zhe; Zhang, Xue; Tong, Yan; Wang, Lihong; Sen Song; Guo, Hua
2018-01-01
fMRI with high spatial resolution is beneficial for studies in psychology and neuroscience, but is limited by various factors such as prolonged imaging time, low signal-to-noise ratio and scarcity of advanced facilities. Compressed Sensing (CS) based methods for accelerating fMRI data acquisition are promising. Other advanced algorithms, like k-t FOCUSS or PICCS, have been developed to improve performance. This study aims to investigate a new method, Dual-TRACER, based on Temporal Resolution Acceleration with Constrained Evolution Reconstruction (TRACER), for accelerating fMRI acquisitions using a golden-angle variable-density spiral. Both numerical simulations and in vivo experiments at 3T were conducted to evaluate and characterize this method. Results show that Dual-TRACER can provide functional images with a high spatial resolution (1 × 1 mm²) under an acceleration factor of 20 while maintaining hemodynamic signals well. Compared with the other investigated methods, Dual-TRACER provides better signal recovery, higher fMRI sensitivity and more reliable activation detection. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Leguy, G.; Lipscomb, W. H.; Asay-Davis, X.
2017-12-01
Ice sheets and ice shelves are linked by the transition zone, the region where grounded ice lifts off the bedrock and begins to float. Adequate resolution of the transition zone is necessary for numerically accurate ice sheet-ice shelf simulations. In previous work we have shown that using a simple parameterization of the basal hydrology, which smooths the transition in basal water pressure between floating and grounded ice, improves the numerical accuracy of a one-dimensional, vertically integrated, fixed-grid model. We used a set of experiments based on the Marine Ice Sheet Model Intercomparison Project (MISMIP) to show that reliable grounding-line dynamics is achievable at resolutions of 1 km. In this presentation we use the Community Ice Sheet Model (CISM) to demonstrate how the representation of basal lubrication impacts three-dimensional models using the MISMIP-3D and MISMIP+ experiments. To this end we compare three different Stokes approximations: the Shallow Shelf Approximation (SSA), a depth-integrated higher-order approximation, and the Blatter-Pattyn model. The results from our one-dimensional model carry over to the 3-D models; a resolution of 1 km (and in some cases 2 km) remains sufficient to accurately simulate grounding-line dynamics.
Contact microspherical nanoscopy: from fundamentals to biomedical applications
NASA Astrophysics Data System (ADS)
Astratov, V. N.; Maslov, A. V.; Brettin, A.; Blanchette, K. F.; Nesmelov, Y. E.; Limberopoulos, N. I.; Walker, D. E.; Urbas, A. M.
2017-02-01
The mechanisms of super-resolution imaging by contact microspherical or microcylindrical nanoscopy remain an enigmatic question, since these lenses can neither amplify the near fields, as a far-field superlens does, nor do they have a hyperbolic dispersion similar to hyperlenses. In this work, we present results along two lines. First, we performed numerical modeling of the super-resolution properties of a two-dimensional (2-D) circular lens in the limit of wavelength-scale diameters, λ <= D <= 2λ, and relatively high indices of refraction, n = 2. Our preliminary results on imaging point dipoles indicate that the resolution is generally close to λ/4; however, on resonance with whispering-gallery modes it may be slightly higher. Second, experimentally, we used actin protein filaments for resolution quantification in microspherical nanoscopy. The critical feature of our approach is the use of an arrayed cladding layer with strong localized surface plasmon resonances. This layer enhances the plasmonic near-field illumination of our objects. In combination with the magnification of the virtual image, this technique resulted in a lateral resolution of actin protein filaments on the order of λ/7.
NASA Astrophysics Data System (ADS)
Chapman, Steven W.; Parker, Beth L.; Sale, Tom C.; Doner, Lee Ann
2012-08-01
It is now widely recognized that contaminant release from low-permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents, where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to characterize sites appropriately and to apply models that incorporate these effects for prediction. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients, and capturing mass distribution in low-permeability zones requires much higher resolution than is commonly practiced. This paper explores the validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high-resolution mode to simulate scenarios involving diffusion into and out of low-permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers, which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean-water flushing, and 2) the two-layer analytical solution of Sale et al. (2008), a relatively simple scenario with an aquifer and an underlying low-permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization is applied to represent the problem geometry, resolve the flow fields, capture advective transport in the sands and diffusive transfer with the low-permeability layers, and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models: capturing the style of the low-permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making.
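A minimal sketch of the kind of high-resolution diffusion calculation discussed above (not any of the cited codes; all parameters are illustrative): explicit finite-difference diffusion into a low-permeability layer with a fixed concentration at the aquifer interface. It also shows why fine discretization is costly: the explicit scheme requires D·Δt/Δx² ≤ 1/2, so halving Δx quarters the usable time step.

```python
# Minimal 1-D sketch (not HydroGeoSphere/FEFLOW/MT3DMS; illustrative values):
# explicit finite-difference diffusion into a low-permeability layer whose
# top cell is held at the aquifer concentration ("loading" phase).
def diffuse_1d(c, D, dx, dt, steps, c_boundary):
    """March the 1-D diffusion equation forward with the FTCS scheme."""
    assert D * dt / dx**2 <= 0.5, "explicit scheme unstable at this dt/dx"
    c = list(c)
    for _ in range(steps):
        c[0] = c_boundary                        # aquifer interface
        new = c[:]
        for i in range(1, len(c) - 1):           # interior cells only
            new[i] = c[i] + D * dt / dx**2 * (c[i-1] - 2*c[i] + c[i+1])
        c = new
    return c

# Hypothetical clay layer: D = 1e-9 m2/s, 1 cm cells, ~58 days simulated.
profile = diffuse_1d([0.0] * 21, D=1e-9, dx=0.01, dt=1e4,
                     steps=500, c_boundary=1.0)
```

The concentration front penetrates only a few cells in this time, which is the small-scale gradient the abstract argues must be resolved; reversing `c_boundary` to zero after loading would sketch the back-diffusion (flushing) phase.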
Tsunami hazard maps of spanish coast at national scale from seismic sources
NASA Astrophysics Data System (ADS)
Aniel-Quiroga, Íñigo; González, Mauricio; Álvarez-Gómez, José Antonio; García, Pablo
2017-04-01
Tsunamis are a moderately frequent phenomenon in the NEAM (North East Atlantic and Mediterranean) region, and consequently in Spain, as historic and recent events have affected this area. For example, the 1755 earthquake and tsunami affected the Spanish Atlantic coasts of Huelva and Cadiz, and the 2003 Boumerdès earthquake triggered a tsunami that reached the coast of the Balearic Islands in less than 45 minutes. The risk in Spain is real, and its population and tourism make it vulnerable to this kind of catastrophic event. The Indian Ocean tsunami in 2004 and the tsunami in Japan in 2011 prompted the worldwide development and application of tsunami risk reduction measures, which have been taken as a priority in this field. On November 20th, 2015, the directive of the Spanish civil protection agency on planning for tsunami emergencies was presented. As part of the Spanish National Security strategy, this document specifies the structure of the action plans at different levels: national, regional and local. The first step is the proper evaluation of the tsunami hazard at the national scale. This work deals with the assessment of the tsunami hazard in Spain by means of numerical simulations, focused on the elaboration of tsunami hazard maps at the national scale. To this end, following a deterministic approach, the seismic structures whose earthquakes could generate the worst tsunamis affecting the coast of Spain have been compiled and characterized. These worst-case sources have been propagated numerically over a reconstructed bathymetry built from the best-resolution data available. This high-resolution bathymetry was joined with a 25-m-resolution DTM to generate a continuous offshore-onshore space, allowing the calculation of the flooded areas prompted by each selected source. The numerical model applied for the calculation of the tsunami propagations was COMCOT.
The maps resulting from the numerical simulations show not only the tsunami amplitude in coastal areas but also the run-up and inundation length from the coastline. The run-up has been calculated with the numerical model, complemented by an alternative method based on interpolation over a tsunami run-up database created ad hoc. These estimated variables allow the identification of the areas most affected in case of a tsunami, and they are also the basis for local authorities to evaluate the need for new higher-resolution studies at the local scale in specific areas.
Isotropic three-dimensional T2 mapping of knee cartilage: Development and validation.
Colotti, Roberto; Omoumi, Patrick; Bonanno, Gabriele; Ledoux, Jean-Baptiste; van Heeswijk, Ruud B
2018-02-01
1) To implement a higher-resolution isotropic 3D T2 mapping technique that uses sequential T2-prepared segmented gradient-recalled echo (Iso3DGRE) images for knee cartilage evaluation, and 2) to validate it both in vitro and in vivo in healthy volunteers and patients with knee osteoarthritis. The Iso3DGRE sequence with an isotropic 0.6 mm spatial resolution was developed on a clinical 3T MR scanner. Numerical simulations were performed to optimize the pulse sequence parameters. A phantom study was performed to validate the T2 estimation accuracy. The repeatability of the sequence was assessed in healthy volunteers (n = 7). T2 values were compared with those from a clinical standard 2D multislice multiecho (MSME) T2 mapping sequence in knees of healthy volunteers (n = 13) and in patients with knee osteoarthritis (OA, n = 5). The numerical simulations resulted in 100 excitations per segment and an optimal radiofrequency (RF) excitation angle of 15°. The phantom study demonstrated a good correlation of the technique with the reference standard (slope 0.9 ± 0.05, intercept 0.2 ± 1.7 msec, R² ≥ 0.99). Repeated measurements of cartilage T2 values in healthy volunteers showed a coefficient of variation of 5.6%. Both Iso3DGRE and MSME techniques found significantly higher cartilage T2 values (P < 0.03) in OA patients. Iso3DGRE precision was equal to that of MSME T2 mapping in healthy volunteers, and significantly higher in OA (P = 0.01). This study successfully demonstrated that high-resolution isotropic 3D T2 mapping for knee cartilage characterization is feasible, accurate, repeatable, and precise. The technique allows for multiplanar reformatting and thus T2 quantification in any plane of interest. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:362-371. © 2017 International Society for Magnetic Resonance in Medicine.
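The basic model behind T2 mapping is a mono-exponential decay, S(TE) = S0·exp(−TE/T2), fitted across echo times. A hedged sketch (a simple log-linear fit, not the Iso3DGRE reconstruction; the echo times and T2 value are invented):

```python
import math

# Hedged sketch, not the paper's method: estimate T2 per voxel by a
# log-linear least-squares fit of the mono-exponential decay model
#   S(TE) = S0 * exp(-TE / T2)   =>   log S = log S0 - TE / T2,
# so the fitted slope is -1/T2.
def fit_t2(tes, signals):
    n = len(tes)
    ys = [math.log(s) for s in signals]
    mx = sum(tes) / n
    my = sum(ys) / n
    slope = sum((t - mx) * (y - my) for t, y in zip(tes, ys)) / \
            sum((t - mx) ** 2 for t in tes)
    return -1.0 / slope  # T2 in the same units as TE

tes = [10.0, 20.0, 30.0, 40.0]                  # hypothetical echo times, ms
signals = [math.exp(-te / 35.0) for te in tes]  # noiseless voxel, T2 = 35 ms
t2 = fit_t2(tes, signals)
```

With noisy data a nonlinear fit weighted by signal magnitude is usually preferred, since the log transform amplifies noise at long echo times.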
Numerical analysis of scalar dissipation length-scales and their scaling properties
NASA Astrophysics Data System (ADS)
Vaishnavi, Pankaj; Kronenburg, Andreas
2006-11-01
The scalar dissipation rate, χ, is fundamental to the description of scalar mixing in turbulent non-premixed combustion. Most contributions to the statistics of χ come from the finest turbulent mixing scales, so its adequate characterisation requires good resolution. Reliable χ-measurement is complicated by the trade-off between higher resolution and greater signal-to-noise ratio. Thus, the present numerical study utilises the error-free mixture fraction, Z, and fluid-mechanical data from the turbulent reacting jet DNS of Pantano (2004). The aim is to quantify the resolution requirements for χ-measurement in terms of easily measurable properties of the flow, such as the integral-scale Reynolds number, Reδ, using spectral and spatial-filtering [cf. Barlow and Karpetis (2005)] analyses. Analysis of the 1-D cross-stream dissipation spectra enables the estimation of the dissipation length scales. It is shown that these spectrally computed scales follow the expected Kolmogorov scaling, Reδ^(-0.75). The work also involves local smoothing of the instantaneous χ-field over non-overlapping spatial intervals (filter width, wf), to study the smoothed χ-value as a function of wf as wf is extrapolated to the smallest scale of interest. The dissipation length scales thus captured show a more stringent Reδ^(-1) scaling, compared to the usual Kolmogorov type. This concurs with the criterion of 'resolution adequacy' of the DNS, as set out by Sreenivasan (2004) using the theory of multi-fractals.
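The two scalings can be compared numerically (the Reynolds numbers below are illustrative, not from the DNS): the Kolmogorov-type estimate of the finest dissipation scale varies as Reδ^(−0.75), while the stricter filtered-field result varies as Reδ^(−1), demanding proportionally finer resolution as the Reynolds number grows.

```python
# Illustrative comparison (Reynolds numbers invented, not from the DNS) of
# the finest-scale estimates quoted above: Kolmogorov-type
# eta/delta ~ Re_delta**(-0.75) versus the stricter Re_delta**(-1) scaling
# found for the spatially filtered dissipation field.
def scale_ratio(re_delta, exponent):
    """Finest dissipation scale relative to the integral scale delta."""
    return re_delta ** exponent

kolmogorov = scale_ratio(1.0e4, -0.75)   # ~1e-3 of the integral scale
strict     = scale_ratio(1.0e4, -1.0)    # ~1e-4: a 10x finer grid needed
```

At Reδ = 10⁴ the stricter criterion calls for roughly an order of magnitude finer spacing than the Kolmogorov estimate, which is the practical import of the result.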
Immersion Gratings for Infrared High-resolution Spectroscopy
NASA Astrophysics Data System (ADS)
Sarugaku, Yuki; Ikeda, Yuji; Kobayashi, Naoto; Kaji, Sayumi; Sukegawa, Takashi; Sugiyama, Shigeru; Nakagawa, Takao; Arasaki, Takayuki; Kondo, Sohei; Nakanishi, Kenshi; Yasui, Chikako; Kawakita, Hideyo
2016-10-01
High-resolution spectroscopy in the infrared wavelength range is essential for observations of minor isotopologues, such as HDO for water, and prebiotic organic molecules like hydrocarbons and P-bearing molecules, because numerous vibrational molecular bands (including those of non-polar molecules) are located in this wavelength range. High spectral resolution enables us to detect weak lines without spectral line confusion. This technique has been widely used in planetary sciences, e.g., for cometary comae (H2O, CO, and organic molecules), the martian atmosphere (CH4, CO2, H2O, and HDO), and the upper atmospheres of gas giants (H3+ and organic molecules such as C2H6). Spectrographs with higher resolution (and higher sensitivity) still have the potential to provide plenty of new findings. However, because the size of a spectrograph scales with its spectral resolution, such instruments are difficult to realize. An immersion grating (IG), a diffraction grating whose diffraction surface is immersed in a material with a high refractive index (n > 2), provides n times higher spectral resolution than a reflective grating of the same size. Because an IG reduces the size of a spectrograph to 1/n of that of a spectrograph with the same spectral resolution using a conventional reflective grating, it is widely acknowledged as a key optical device for realizing compact spectrographs with high spectral resolution. Recently, we succeeded in fabricating a CdZnTe immersion grating with the theoretically predicted diffraction efficiency by a machining process using an ultrahigh-precision five-axis processing machine developed by Canon Inc. Using the same technique, we completed a practical germanium (Ge) immersion grating with both a reflection coating on the grating surface and an AR coating on the entrance surface. It is noteworthy that the wide wavelength range from 2 to 20 μm can be covered by the two immersion gratings. In this paper, we present the performance and applications of the immersion gratings, including the development of a long-NIR (2-5 μm) high-resolution (R = 80,000) spectrograph with a Ge immersion grating, VINROUGE, which is a prototype for the TMT MIR instrument.
Experimental and Numerical Study of Nozzle Plume Impingement on Spacecraft Surfaces
NASA Astrophysics Data System (ADS)
Ketsdever, A. D.; Lilly, T. C.; Gimelshein, S. F.; Alexeenko, A. A.
2005-05-01
An experimental and numerical effort was undertaken to assess the effects of a cold gas (T0 = 300 K) nozzle plume impinging on a simulated spacecraft surface. The nozzle flow impingement is investigated experimentally using a nano-Newton resolution force balance and numerically using the Direct Simulation Monte Carlo (DSMC) numerical technique. The Reynolds number range investigated in this study is from 0.5 to approximately 900 using helium and nitrogen propellants. The thrust produced by the nozzle was first assessed on a force balance to provide a baseline case. Subsequently, an aluminum plate was attached to the same force balance at various angles from 0° (parallel to the plume flow) to 10°. For low Reynolds number helium flow, a 16.5% decrease in thrust was measured for the plate at 0° relative to the free plume expansion case. For low Reynolds number nitrogen flow, the difference was found to be 12%. The thrust degradation was found to decrease at higher Reynolds numbers and larger plate angles.
Numerical correction of distorted images in full-field optical coherence tomography
NASA Astrophysics Data System (ADS)
Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha
2012-03-01
We propose a method that numerically corrects the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of the deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. Analysis shows that the RI mismatch separates the focal plane of the imaging system from the imaging plane of the coherence-gated system. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, which permits the distinct identification of the melanin granules inside the cortex layer of the hair shaft.
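Diffraction-based refocusing of this kind is commonly implemented by numerically propagating the complex field to the true focal plane. A sketch using the angular-spectrum method (a standard scalar-diffraction propagator closely related to the Fresnel-Kirchhoff formulation; grid size, wavelength, and defocus distance here are illustrative, not the paper's parameters):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a complex 2-D field by distance dz using the
    angular-spectrum method: multiply the spatial spectrum by the
    free-space transfer function exp(i*kz*dz). Scalar diffraction;
    evanescent components are dropped."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    fxx, fyy = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    kz_sq = k**2 - (2*np.pi*fxx)**2 - (2*np.pi*fyy)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# Round trip: defocusing then refocusing recovers the original field
f0 = np.zeros((64, 64), dtype=complex)
f0[28:36, 28:36] = 1.0
f1 = angular_spectrum_propagate(f0, wavelength=0.8e-6, dx=2e-6, dz=50e-6)
f2 = angular_spectrum_propagate(f1, wavelength=0.8e-6, dx=2e-6, dz=-50e-6)
```

In a refocusing application, dz would be set to the focal-plane offset caused by the RI mismatch.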
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1985-01-01
Methods being used to increase the horizontal and vertical resolution and to implement more sophisticated parameterization schemes for general circulation models (GCMs) run on newer, more powerful computers are described. Attention is focused on the NASA Goddard Laboratory for Atmospheres fourth-order GCM. A new planetary boundary layer (PBL) model has been developed which features explicit resolution of two or more layers. Numerical models are presented for parameterizing the turbulent vertical heat, momentum, and moisture fluxes at the earth's surface and between the layers in the PBL model. An extended Monin-Obukhov similarity scheme is applied to express the relationships between the lowest levels of the GCM and the surface fluxes. On-line weather prediction experiments are to be run to test the effects of the higher resolution thereby obtained for dynamic atmospheric processes.
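The surface-flux relationships in such schemes reduce, in the neutral limit, to the logarithmic wind profile U(z) = (u*/κ)·ln(z/z0). A minimal sketch of a bulk surface momentum flux from that profile (stability corrections of the extended Monin-Obukhov scheme are omitted; the wind speed, heights, and air density are illustrative):

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def neutral_friction_velocity(wind_speed, z, z0):
    """u* from the neutral logarithmic wind profile
    U(z) = (u*/kappa) * ln(z/z0)."""
    return KAPPA * wind_speed / np.log(z / z0)

def surface_momentum_flux(wind_speed, z, z0, rho=1.2):
    """Surface stress tau = rho * u*^2 (N/m^2)."""
    u_star = neutral_friction_velocity(wind_speed, z, z0)
    return rho * u_star ** 2

# 10 m/s wind at 10 m over a surface with roughness length 1 cm
u_star = neutral_friction_velocity(10.0, 10.0, 0.01)
tau = surface_momentum_flux(10.0, 10.0, 0.01)
```

A full Monin-Obukhov implementation adds stability functions of z/L for momentum, heat, and moisture, which is where the "extended" scheme above does its work.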
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
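The core idea behind such local-frequency methods is that the DFT sum can be evaluated directly on an arbitrarily fine frequency grid inside a narrow band of interest, instead of on the fixed 1/(N·dt) grid of the fFT. A single (non-iterative) step of that idea, with an illustrative test signal:

```python
import numpy as np

def local_dft(x, dt, f_lo, f_hi, n_freq):
    """Evaluate the DFT sum directly at n_freq frequencies inside
    [f_lo, f_hi], giving a 'zoomed' local spectrum with spacing far
    below the fFT bin width 1/(N*dt). A one-shot sketch of the idea
    the ilFT applies iteratively (this is not the published algorithm)."""
    n = len(x)
    t = np.arange(n) * dt
    freqs = np.linspace(f_lo, f_hi, n_freq)
    kernel = np.exp(-2j * np.pi * freqs[:, None] * t[None, :])
    return freqs, kernel @ x

fs = 100.0                          # Hz, illustrative sampling rate
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 12.34 * t)   # true frequency between fFT bins
freqs, spec = local_dft(x, 1.0 / fs, 12.0, 12.7, 701)
f_peak = freqs[np.argmax(np.abs(spec))]   # ~12.34 Hz at 1 mHz spacing
```

Iterating this zoom (re-centering the band on the current peak and shrinking it) is what lets the ilFT reach very fine frequency resolution at modest cost.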
Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution
Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; ...
2016-01-01
Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction-limited imaging via reduction of out-of-focus background light. Single-molecule super-resolution is also improved by the decreased background, resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single-molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.
Performance analysis of multiple PRF technique for ambiguity resolution
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
For short wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar at different PRFs in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and the arbitrary numerical values of the PRFs. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples, using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters, demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.
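The principle behind multiple-PRF ambiguity resolution is that each PRF aliases the true Doppler centroid differently, so only the true centroid is consistent with all aliased measurements at once. An illustrative brute-force sketch of that consistency search (the PRF values, centroid, and search bound are hypothetical; this is not the SIR-C/X-SAR algorithm itself, which must also handle estimation noise):

```python
def resolve_doppler_ambiguity(aliased, prfs, f_max, tol=1.0):
    """Find unambiguous Doppler centroids (Hz) within +/- f_max that
    are consistent with aliased measurements at several PRFs, by
    searching over ambiguity integers of the first PRF."""
    f0, prf0 = aliased[0], prfs[0]
    k_max = int(f_max / prf0) + 1
    candidates = []
    for k in range(-k_max, k_max + 1):
        cand = f0 + k * prf0
        if abs(cand) > f_max:
            continue
        # a valid centroid must reproduce every aliased measurement
        if all(abs(((cand - a + prf / 2) % prf) - prf / 2) < tol
               for a, prf in zip(aliased, prfs)):
            candidates.append(cand)
    return candidates

# True centroid 3210 Hz observed with (hypothetical) PRFs of 1400 and 1500 Hz
truth = 3210.0
prfs = [1400.0, 1500.0]
aliased = [truth % p for p in prfs]
cands = resolve_doppler_ambiguity(aliased, prfs, f_max=5000.0)
```

With noisy measurements several candidates may survive, which is why the paper's analysis is phrased as a probability of correct resolution.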
NASA Astrophysics Data System (ADS)
Li, Hao; Liu, Wenzhong; Zhang, Hao F.
2015-10-01
Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating the pathogenesis and therapeutic strategies. However, due to severe aberrations, the retinal image quality in rodents can be much worse than that in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that the rat retinal image quality decreased with increasing illumination bandwidth. We achieved the retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus camera for rodents. Our method is also beneficial to dispersion compensation in multiwavelength retinal imaging applications.
Wedi, Nils P
2014-06-28
The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular, when convective-scale motions start to be resolved. Blunt increases in the model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty (for each part of the model) and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in the forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in the spectral and physical space. One path is to reduce the formal accuracy by which the spectral transforms are computed. The other pathway explores the importance of the ratio used for the horizontal resolution in gridpoint space versus wavenumbers in spectral space. This is relevant for both high-resolution simulations as well as ensemble-based uncertainty estimation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
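The sensitivity of spectral transforms to reduced numerical precision can be probed with a toy numpy experiment: round the input field to single precision and compare spectra. Note this is only a sketch of the idea; numpy computes both transforms in double internally, so the measured error reflects input rounding alone, whereas a genuinely reduced-precision transform (as studied for operational NWP) would add further arithmetic error:

```python
import numpy as np

rng = np.random.default_rng(0)
x64 = rng.standard_normal(4096)     # synthetic stand-in for a model field
x32 = x64.astype(np.float32)        # reduced-precision representation

X64 = np.fft.fft(x64)
X32 = np.fft.fft(x32)

# Relative spectral error introduced by the precision reduction
rel_err = np.linalg.norm(X32 - X64) / np.linalg.norm(X64)
```

Errors of order 1e-7 are far below typical physics-parameterization and observation uncertainties, which is the intuition behind trading formal transform accuracy for speed.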
Analysis of two dimensional signals via curvelet transform
NASA Astrophysics Data System (ADS)
Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.
2007-04-01
This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared with the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a matrix with a smaller number of coefficients than the wavelet transform guarantees. Additionally, denoising simulations show that the curvelet transform can be a very good tool for removing noise from images.
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R
2017-11-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a higher-resolution, higher-quality lateral image. Layer-by-layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can jointly address the diffraction limit, lateral scan density, and background noise problems. In experiment, an ~3-fold improvement in lateral resolution, reaching 7.81 µm and 2.19 µm with sample-arm optics of 0.015 and 0.05 numerical aperture, respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprints, and retinal layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high-quality B-scan images of these in vivo tissues.
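The simplest multi-frame reconstruction is shift-and-add: place each low-resolution frame onto a finer grid at its known sub-pixel shift and average overlapping samples. A sketch with exact half-pixel shifts and no deconvolution (a deliberately idealized version of the processing described above):

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Interleave low-res frames onto a 'factor'-times finer grid
    according to their (dy, dx) sub-pixel shifts and average
    overlaps. Shifts are assumed to be multiples of 1/factor."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy, ox = int(round(dy * factor)), int(round(dx * factor))
        acc[oy::factor, ox::factor][:h, :w] += frame
        cnt[oy::factor, ox::factor][:h, :w] += 1
    cnt[cnt == 0] = 1
    return acc / cnt

# Four frames sampled from a known high-res image at half-pixel offsets
hr = np.arange(64.0).reshape(8, 8)
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [hr[int(2 * dy)::2, int(2 * dx)::2] for dy, dx in shifts]
sr = shift_and_add(frames, shifts, factor=2)   # recovers hr exactly
```

Real data require the shift estimation (here assumed known, e.g. from volume registration) and a deconvolution step to undo the optical and pixel blur.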
High resolution microphotonic needle for endoscopic imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Tadayon, Mohammad Amin; Mohanty, Aseema; Roberts, Samantha P.; Barbosa, Felippe; Lipson, Michal
2017-02-01
GRIN (graded-index) lenses have revolutionized micro-endoscopy, enabling deep tissue imaging with high resolution. The challenges of traditional GRIN lenses are their large size (when compared with the field of view) and their limited resolution, a consequence of the relatively low NA of standard graded-index lenses. Here we introduce a novel micro-needle platform for endoscopy with much higher resolution than traditional GRIN lenses and a FOV that corresponds to the whole cross-section of the needle. The platform is based on a polymeric (SU-8) waveguide integrated with a microlens, microfabricated on a silicon substrate using a unique molding process. Due to the high refractive index of the material, the NA of the needle is much higher than that of traditional GRIN lenses. We tested the probe in a fluorescent dye solution (19.6 µM Alexa Fluor 647 solution) and measured a numerical aperture of 0.25, a focal length of about 175 µm, and a minimal spot size of about 1.6 µm. We show that the platform can image a sample with a field of view corresponding to the cross-sectional area of the waveguide (80 × 100 µm²). The waveguide size can in principle be modified to vary the size of the imaging field of view. This demonstration, combined with our previous work demonstrating our ability to implant the high-NA needle in a live animal, shows that the proposed system can be used for deep tissue imaging with very high resolution and a high field of view.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Draxl, Caroline; Hopson, Thomas
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
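The analog-ensemble idea can be sketched in a few lines: for a current coarse-model prediction, find the historical times whose coarse predictions were most similar, and use the observations from those times as an ensemble. This toy version uses a single predictor and synthetic data; the operational method matches over a time window and several predictors with weighted distances:

```python
import numpy as np

def analog_ensemble(hist_coarse, hist_obs, current_coarse, n_analogs=5):
    """Return the observed values from the n_analogs historical
    records whose coarse-model predictions were closest to the
    current coarse-model prediction."""
    dist = np.abs(hist_coarse - current_coarse)
    idx = np.argsort(dist)[:n_analogs]
    return hist_obs[idx]

# Synthetic archive: coarse-model wind speed plus local "truth"
rng = np.random.default_rng(1)
hist_coarse = rng.uniform(0.0, 15.0, 1000)            # m/s
hist_obs = hist_coarse + rng.normal(0.0, 1.0, 1000)   # site observations
ens = analog_ensemble(hist_coarse, hist_obs, current_coarse=8.0)
```

The ensemble mean gives a deterministic downscaled estimate, while the ensemble spread provides the probabilistic information, at negligible computational cost compared with re-running a fine-resolution model.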
Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
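The resolution power under comparison comes from WENO's nonlinear stencil weighting. A minimal sketch of the fifth-order (WENO5) interface reconstruction in its standard textbook form with Jiang-Shu smoothness indicators (a generic illustration, not the exact code used in the study):

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Fifth-order WENO reconstruction of the interface value
    u_{i+1/2} from five cell averages v = (v_{i-2}, ..., v_{i+2}),
    left-biased, with Jiang-Shu smoothness indicators."""
    v0, v1, v2, v3, v4 = v
    # third-order candidate reconstructions on the three sub-stencils
    p0 = (2*v0 - 7*v1 + 11*v2) / 6.0
    p1 = (-v1 + 5*v2 + 2*v3) / 6.0
    p2 = (2*v2 + 5*v3 - v4) / 6.0
    # smoothness indicators (large where the sub-stencil is rough)
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
    # nonlinear weights: near the linear weights (0.1, 0.6, 0.3) in
    # smooth regions, biased away from stencils containing a shock
    g = np.array([0.1, 0.6, 0.3])
    a = g / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w @ np.array([p0, p1, p2])

# Smooth linear data: every candidate gives the exact value 2.5
u_half = weno5_reconstruct([0.0, 1.0, 2.0, 3.0, 4.0])
```

It is this adaptive down-weighting of rough stencils that keeps the numerical viscosity small in smooth regions while suppressing oscillations at shocks, the property quantified in the mesh-refinement study above.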
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 − σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 − σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
Use of MODIS Cloud Top Pressure to Improve Assimilation Yields of AIRS Radiances in GSI
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi
2014-01-01
Radiances from hyperspectral sounders such as the Atmospheric Infrared Sounder (AIRS) are routinely assimilated both globally and regionally in operational numerical weather prediction (NWP) systems using the Gridpoint Statistical Interpolation (GSI) data assimilation system. However, only thinned, cloud-free radiances from a 281-channel subset are used, so the overall percentage of these observations that are assimilated is somewhere on the order of 5%. Cloud checks are performed within GSI to determine which channels peak above cloud top; inaccuracies may lead to fewer assimilated radiances or the introduction of biases from cloud-contaminated radiances. The relatively large footprint of AIRS may not optimally represent small-scale cloud features that might be better resolved by higher-resolution imagers like the Moderate Resolution Imaging Spectroradiometer (MODIS). The objective of this project is to "swap" the MODIS-derived cloud top pressure (CTP) for that designated by the AIRS-only quality control within GSI, to test the hypothesis that better representation of cloud features will result in higher assimilated radiance yields and improved forecasts.
Chapman, Steven W; Parker, Beth L; Sale, Tom C; Doner, Lee Ann
2012-08-01
It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents, where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients, and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores the validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high-resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers, which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean-water flushing, and 2) the two-layer analytical solution of Sale et al. (2008) involving a relatively simple scenario with an aquifer and underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization is applied to represent the problem geometry, resolve flow fields, capture advective transport in the sands and diffusive transfer with low permeability layers, and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models, capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making. Copyright © 2012 Elsevier B.V. All rights reserved.
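The loading stage of such scenarios (solute diffusing from an aquifer into a low-permeability layer) reduces, in one dimension, to the diffusion equation with a fixed-concentration boundary. A toy explicit finite-difference sketch; layer thickness, diffusivity, and run time are illustrative values, not those of the tank experiment:

```python
import numpy as np

def diffuse_1d(c0, D, dx, dt, steps, c_boundary):
    """Explicit FTCS solution of 1-D diffusion into a layer with a
    fixed concentration at x = 0 and a no-flux base. Stable only for
    r = D*dt/dx^2 <= 0.5 (the resolution/cost trade-off discussed
    above in miniature)."""
    c = np.array(c0, dtype=float)
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        c_new = c.copy()
        c_new[0] = c_boundary
        c_new[1:-1] = c[1:-1] + r * (c[2:] - 2*c[1:-1] + c[:-2])
        c_new[-1] = c_new[-2]          # zero-gradient (no-flux) base
        c = c_new
    return c

# Clay layer: 0.2 m thick, D = 1e-9 m^2/s, loaded from c = 1 above
profile = diffuse_1d(np.zeros(41), D=1e-9, dx=0.005, dt=1e4,
                     steps=2000, c_boundary=1.0)
```

The stability bound shows why refining dx by 10x multiplies the number of time steps by 100x in an explicit scheme; the implicit solvers in the cited codes relax the bound but still face steep cost growth with resolution.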
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength-scanning-based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength-scanning-based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.
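The phase-unwrapping benefit of small wavelength steps can be illustrated with two-wavelength (synthetic-wavelength) interferometry: two wrapped phases measured at nearby wavelengths determine path lengths up to the much longer beat wavelength Λ = λ1·λ2/|λ1 − λ2|. A sketch with illustrative numbers (this is the generic two-wavelength principle, not the paper's full iterative algorithm):

```python
import numpy as np

def synthetic_wavelength_opl(phi1, phi2, lam1, lam2):
    """Recover an optical path length larger than either wavelength
    from two wrapped phases, using the synthetic (beat) wavelength
    Lambda = lam1*lam2/|lam1 - lam2|. Valid for opl < Lambda and
    noise-free phases."""
    lam_s = lam1 * lam2 / abs(lam1 - lam2)
    dphi = (phi1 - phi2) % (2.0 * np.pi)   # wrapped phase difference
    return lam_s * dphi / (2.0 * np.pi)

# 10 nm scan step around 515 nm; a 3.2 um path, far beyond one wavelength
lam1, lam2 = 510e-9, 520e-9
opl_true = 3.2e-6
phi1 = (2 * np.pi * opl_true / lam1) % (2 * np.pi)   # wrapped measurements
phi2 = (2 * np.pi * opl_true / lam2) % (2 * np.pi)
opl = synthetic_wavelength_opl(phi1, phi2, lam1, lam2)
```

Here Λ ≈ 26.5 µm, so the 3.2 µm path is recovered without ambiguity even though each individual phase has wrapped several times.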
Compartmentalized Low-Rank Recovery for High-Resolution Lipid Unsuppressed MRSI
Bhattacharya, Ipshita; Jacob, Mathews
2017-01-01
Purpose To introduce a novel algorithm for the recovery of high-resolution magnetic resonance spectroscopic imaging (MRSI) data with minimal lipid leakage artifacts, from dual-density spiral acquisition. Methods The reconstruction of MRSI data from dual-density spiral data is formulated as a compartmental low-rank recovery problem. The MRSI dataset is modeled as the sum of metabolite and lipid signals, each of which is support limited to the brain and extracranial regions, respectively, in addition to being orthogonal to each other. The reconstruction problem is formulated as an optimization problem, which is solved using iterative reweighted nuclear norm minimization. Results The comparisons of the scheme against dual-resolution reconstruction algorithm on numerical phantom and in vivo datasets demonstrate the ability of the scheme to provide higher spatial resolution and lower lipid leakage artifacts. The experiments demonstrate the ability of the scheme to recover the metabolite maps, from lipid unsuppressed datasets with echo time (TE)=55 ms. Conclusion The proposed reconstruction method and data acquisition strategy provide an efficient way to achieve high-resolution metabolite maps without lipid suppression. This algorithm would be beneficial for fast metabolic mapping and extension to multislice acquisitions. PMID:27851875
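The basic building block of nuclear-norm solvers like the reweighted scheme above is singular-value soft-thresholding, the proximal operator of the nuclear norm. A sketch on a synthetic low-rank matrix (illustrative sizes and noise level; this is one ingredient, not the full compartmentalized MRSI algorithm):

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: shrink every singular value
    by tau and discard those that fall below zero, returning the
    proximal operator of tau * nuclear-norm at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)
    return (U * s_thr) @ Vt

# Rank-2 matrix plus small noise: thresholding removes the noise modes
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
noisy = A + 0.01 * rng.standard_normal((40, 30))
denoised = svt(noisy, tau=0.5)   # exactly rank 2 after thresholding
```

Iteratively reweighted nuclear-norm minimization replaces the fixed threshold with singular-value-dependent weights updated between iterations, which is what lets the reconstruction separate the low-rank metabolite and lipid compartments.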
Atmospheric blocking in the Climate SPHINX simulations: the role of orography and resolution
NASA Astrophysics Data System (ADS)
Davini, Paolo; Corti, Susanna; D'Andrea, Fabio; Riviere, Gwendal; von Hardenberg, Jost
2017-04-01
The representation of atmospheric blocking in numerical simulations, especially over the Euro-Atlantic region, still represents a main concern for the climate modelling community. We here discuss the Northern Hemisphere winter atmospheric blocking representation in a set of 30-year simulations which has been performed in the framework of the PRACE project "Climate SPHINX". Simulations were run using the EC-Earth Global Climate Model with several ensemble members at 5 different horizontal resolutions (ranging from 125 km to 16 km). Results show that the negative bias in blocking frequency over Europe becomes negligible at resolutions of about 40 km and finer. However, the blocking duration is still underestimated by 1-2 days, suggesting that the correct blocking frequencies are achieved with an overestimation of the number of blocking onsets. The reasons leading to such improvements are then discussed, highlighting the role of orography in shaping the Atlantic jet stream: at higher resolution the jet is weaker and less penetrating over Europe, favoring the breaking of synoptic Rossby waves over the Atlantic stationary ridge and thus increasing the simulated blocking frequency.
Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion
NASA Astrophysics Data System (ADS)
Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas
2014-12-01
The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order-of-magnitude increase in spatial resolution can be achieved. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
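A generic ratio-based pan-sharpening step can be sketched as follows: upsample the low-resolution chemical map and modulate it by the high-resolution structural image divided by its own low-pass version, so each coarse pixel's total intensity is preserved while fine structure is injected. This is the standard remote-sensing formulation, not necessarily the exact algorithm of the paper:

```python
import numpy as np

def pan_sharpen(chem, pan, factor):
    """Ratio-based fusion of a low-res 'chemical' (SIMS-like) map
    with a high-res 'pan' (SEM-like) image of the same scene.
    Block means of the result reproduce the original chemical map."""
    h, w = chem.shape
    chem_up = np.kron(chem, np.ones((factor, factor)))
    # low-pass pan: block means re-expanded to full resolution
    pan_blocks = pan.reshape(h, factor, w, factor).mean(axis=(1, 3))
    pan_smooth = np.kron(pan_blocks, np.ones((factor, factor)))
    return chem_up * pan / np.maximum(pan_smooth, 1e-12)

chem = np.array([[1.0, 4.0], [2.0, 8.0]])     # 2x2 chemical map
pan = np.arange(1.0, 17.0).reshape(4, 4)      # 4x4 structural image
sharp = pan_sharpen(chem, pan, factor=2)
```

Because the ratio pan/pan_smooth averages to one within each block, chemical quantification is preserved at the coarse scale, which is the property the cross-correlation metric in the paper is designed to check.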
Triangulation-based 3D surveying borescope
NASA Astrophysics Data System (ADS)
Pulwer, S.; Steglich, P.; Villringer, C.; Bauer, J.; Burger, M.; Franz, M.; Grieshober, K.; Wirth, F.; Blondeau, J.; Rautenberg, J.; Mouti, S.; Schrader, S.
2016-04-01
In this work, a measurement concept based on triangulation was developed for borescopic 3D surveying of surface defects. The integration of such a measurement system into a borescope environment requires excellent space utilization. The triangulation angle, the projected pattern, the numerical apertures of the optical system, and the viewing angle were calculated using partial coherence imaging and geometric optical raytracing methods. Additionally, optical aberrations and defocus were considered through the integration of Zernike polynomial coefficients. The measurement system is able to measure objects with a size of 50 μm in all dimensions with an accuracy of ±5 μm. To manage the issue of a low depth of field while using a high-resolution optical system, a wavelength-dependent aperture was integrated. Thereby, we are able to control the depth of field and resolution of the optical system and can use the borescope in measurement mode, with high resolution and low depth of field, or in inspection mode, with low resolution and higher depth of field. First measurements of a demonstrator system are in good agreement with our simulations.
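The underlying triangulation relation can be illustrated with the textbook geometry: a surface height change displaces the projected pattern laterally, and the height follows from the shift and the triangulation angle. The function name and units are assumptions, not the authors' calibrated model:

```python
import math

def height_from_shift(pattern_shift_um, triangulation_angle_deg):
    """Textbook triangulation: a lateral pattern shift s observed at
    triangulation angle theta corresponds to a height change
    h = s / tan(theta). Illustrative only."""
    return pattern_shift_um / math.tan(math.radians(triangulation_angle_deg))
```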
Visualizing the root-PDL-bone interface using high-resolution microtomography
NASA Astrophysics Data System (ADS)
Dalstra, Michel; Cattaneo, Paolo M.; Herzen, Julia; Beckmann, Felix
2008-08-01
The root/periodontal ligament/bone (RPB) interface is important for a correct understanding of the load transfer mechanism of masticatory forces and orthodontic loads. The aim of this study is to assess the three-dimensional structure of the RPB interface using high-resolution microtomography. A human posterior jaw segment, obtained at autopsy from a 22-year-old male donor, was first scanned using a tomograph at the HASYLAB/DESY synchrotron facility (Hamburg, Germany) at 31 μm resolution. Afterwards the first molar and its surrounding bone were removed with a 10 mm hollow core drill. From this cylindrical sample smaller samples were drilled out in the buccolingual direction with a 1.5 mm hollow core drill. These samples were scanned at 4 μm resolution. The scans of the entire segment showed alveolar bone with a thin lamina dura, supported by an intricate trabecular network. Although featuring numerous openings between the PDL and the bone marrow on the other side to allow blood vessels to traverse, the lamina dura appears smooth at this resolution. Only at high resolution, however, does it become evident that it is irregular, with bony spiculae and pitted surfaces. Therefore the stresses in the bone during physiological or orthodontic loading are much higher than expected from a smooth continuous alveolus.
Design and analysis of a fast, two-mirror soft-x-ray microscope
NASA Technical Reports Server (NTRS)
Shealy, D. L.; Wang, C.; Jiang, W.; Jin, L.; Hoover, R. B.
1992-01-01
During the past several years, a number of investigators have addressed the design, analysis, fabrication, and testing of spherical Schwarzschild microscopes for soft-x-ray applications using multilayer coatings. Some of these systems have demonstrated diffraction limited resolution for small numerical apertures. Rigorously aplanatic, two-aspherical mirror Head microscopes can provide near diffraction limited resolution for very large numerical apertures. The relationships between the numerical aperture, mirror radii and diameters, magnifications, and total system length for Schwarzschild microscope configurations are summarized. Also, an analysis of the characteristics of the Head-Schwarzschild surfaces will be reported. The numerical surface data predicted by the Head equations were fit by a variety of functions and analyzed by conventional optical design codes. Efforts have been made to determine whether current optical substrate and multilayer coating technologies will permit construction of a very fast Head microscope which can provide resolution approaching that of the wavelength of the incident radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.
Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background, resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.
Simulation of modern climate with the new version of the INM RAS climate model
NASA Astrophysics Data System (ADS)
Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykosov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Yakovlev, N. G.
2017-03-01
The INMCM5.0 numerical model of the Earth's climate system is presented, which is an evolution from the previous version, INMCM4.0. A higher vertical resolution for the stratosphere is applied in the atmospheric block. We also raised the upper boundary of the computational domain, added an aerosol block, modified the parameterization of clouds and condensation, and increased the horizontal resolution in the ocean block. The program implementation of the model was also updated. We consider the simulation of the current climate using the new version of the model. Attention is focused on reducing systematic errors as compared to the previous version, reproducing phenomena that could not be simulated correctly in the previous version, and modeling the problems that remain unresolved.
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.
2017-01-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, the multi-frame superresolution processing of these sets at each depth layer reconstructs a higher resolution and quality lateral image. Layer-by-layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density and background noise problems together. In experiment, an improvement of lateral resolution by ~3 times, reaching 7.81 µm and 2.19 µm using sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint and retina layer, we used the multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans due to random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues. PMID:29188089
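The core shift-and-add step of multi-frame superresolution can be sketched as follows, assuming the sub-pixel shifts are already known; the paper's full method additionally applies deconvolution and estimates the shifts by multi-modal volume registration:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Minimal shift-and-add superresolution sketch.
    frames: list of low-res 2D arrays; shifts: known sub-pixel (dy, dx)
    offsets in low-res pixels; factor: upsampling factor.
    Illustrates the principle only."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each low-res sample at its shifted position on the fine grid.
        ys = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[ys, xs] += frame
        cnt[ys, xs] += 1
    # Average where several frames contributed; empty fine pixels stay zero.
    return acc / np.maximum(cnt, 1)
```

With several frames whose shifts tile the sub-pixel grid, the fine grid fills in and the effective sampling density increases, which is what permits the ~3-fold lateral resolution gain reported above.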
High resolution global flood hazard map from physically-based hydrologic and hydraulic models.
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; McCollum, J.
2017-12-01
The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90 m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D finite-volume shallow-water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30 m SRTM DEM is now available worldwide, along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free-surface elevation inside cells and for computing inter-cell fluxes. This approach achieves nearly the computational speed typical of coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution of the available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR.
Hydrographs are scaled so that the peak corresponds to the return period of the hazard map being produced (e.g., 100 years, 500 years). Each numerical simulation models one river reach, except for the longest reaches, which are split into smaller parts. Here we show results for selected river basins worldwide.
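The volume/free-surface relationship that UST retains from the full-resolution DEM can be illustrated with a minimal sketch; the function name, the uniform fine-cell area, and the sampling of trial stages are assumptions, not the authors' implementation:

```python
import numpy as np

def stage_volume_curve(fine_dem, cell_area, stages):
    """Volume stored in one coarse cell as a function of free-surface
    elevation (stage), computed from the full-resolution DEM cells it
    aggregates. A hedged sketch of the relationship UST builds."""
    z = np.asarray(fine_dem).ravel()
    # For each trial stage s, water depth over each fine cell is max(s - z, 0);
    # summing depth * area over the fine cells gives the stored volume.
    return np.array([np.maximum(s - z, 0.0).sum() * cell_area for s in stages])
```

Inverting this curve gives the free-surface elevation for a computed cell volume, which is what lets the coarse solver keep sub-cell topographic detail.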
The impact of mesoscale convective systems on global precipitation: A modeling study
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo
2017-04-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) is conducted and results in both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Harders, Rieka; Ranero, Cesar R.; Weinrebe, Wilhelm; von Huene, Roland
2014-05-01
Subduction of kilometers-tall and tens-of-kilometers-wide seamounts causes important landsliding events at subduction zones around the world. Along the Middle America Trench, previous work based on regional swath bathymetry maps (with 100 m grids) and multichannel seismic images has shown that seamount subduction produces large-scale slumping and sliding. Some of the mass wasting events may have been catastrophic, and numerical modeling has indicated that they may have produced important local tsunamis. We have re-evaluated the structure of several active submarine landslide complexes caused by large seamount subduction using side scan sonar data. The comparison of the side scan sonar data to local high-resolution bathymetry grids indicates that the backscatter data has a resolution roughly similar to that produced by a 10 m bathymetry grid. Although this is an arbitrary comparison, the side scan sonar data provides comparatively much higher resolution information than the previously used regional multibeam bathymetry. We have mapped the geometry and relief of the head and side walls of the complexes, the distribution of scars, and the different sediment deposits to produce a new interpretation of the modes of landsliding during subduction of large seamounts. The new higher resolution information shows that landsliding processes are considerably more complex than formerly assumed. Landslides are of notably smaller dimensions than the lower resolution data had previously appeared to indicate. However, significantly large events may have occurred far more often than earlier interpretations had inferred, representing a more common threat than previously assumed.
Simulations of Madden-Julian Oscillation in High Resolution Atmospheric General Circulation Model
NASA Astrophysics Data System (ADS)
Deng, Liping; Stenchikov, Georgiy; McCabe, Matthew; Bangalath, HamzaKunhu; Raj, Jerry; Osipov, Sergey
2014-05-01
The simulation of tropical signals, especially the Madden-Julian Oscillation (MJO), is one of the major deficiencies in current numerical models. The unrealistic features in MJO simulations include weak amplitude, excess power at higher frequencies, displacement of the temporal and spatial distributions, eastward propagation that is too fast, and a lack of coherent structure in the eastward propagation from the Indian Ocean to the Pacific (e.g., Slingo et al. 1996). While some improvement in simulating MJO variance and coherent eastward propagation has been attributed to model physics, the model mean background state and air-sea interaction, studies have shown that model resolution, especially higher horizontal resolution, may play an important role in producing a more realistic simulation of the MJO (e.g., Sperber et al. 2005). In this study, we employ unique high-resolution (25-km) simulations conducted using the Geophysical Fluid Dynamics Laboratory global High Resolution Atmospheric Model (HIRAM) to evaluate the MJO simulation against the European Centre for Medium-Range Weather Forecasts (ECMWF) Interim re-analysis (ERAI) dataset. We specifically focus on the ability of the model to represent the MJO-related amplitude, spatial distribution, eastward propagation, and horizontal and vertical structures. Additionally, as the HIRAM output covers not only a historical period (1979-2012) but also a future period (2012-2050), the impact of future climate change on the MJO is illustrated. The possible changes in intensity and frequency of extreme weather and climate events (e.g., strong wind and heavy rainfall) in the western Pacific, the Indian Ocean and the Middle East North Africa (MENA) region are highlighted.
Numerical solutions of the semiclassical Boltzmann ellipsoidal-statistical kinetic model equation
Yang, Jaw-Yen; Yan, Chin-Yuan; Huang, Juan-Chen; Li, Zhihui
2014-01-01
Computations of rarefied gas dynamical flows governed by the semiclassical Boltzmann ellipsoidal-statistical (ES) kinetic model equation using an accurate numerical method are presented. The semiclassical ES model was derived through the maximum entropy principle and conserves not only the mass, momentum and energy, but also contains additional higher order moments that differ from the standard quantum distributions. A different decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. The numerical method in phase space combines the discrete-ordinate method in momentum space with the high-resolution shock-capturing method in physical space. Numerical solutions of two-dimensional Riemann problems for two configurations covering various degrees of rarefaction are presented, and various contours of the quantities unique to this new model are illustrated. When the relaxation time becomes very small, the main flow features display behavior similar to that of ideal quantum gas dynamics, and the present solutions are found to be consistent with existing calculations for a classical gas. The effect of a parameter that permits an adjustable Prandtl number in the flow is also studied. PMID:25104904
Study of compressible turbulent flows in supersonic environment by large-eddy simulation
NASA Astrophysics Data System (ADS)
Genin, Franklin
The numerical resolution of turbulent flows in high-speed environments is of fundamental importance but remains a very challenging problem. First, the capture of strong discontinuities, typical of high-speed flows, requires the use of shock-capturing schemes, which are not adapted to the resolution of turbulent structures due to their intrinsic dissipation. On the other hand, low-dissipation schemes are unable to resolve shock fronts and other sharp gradients without creating high-amplitude numerical oscillations. Second, the nature of turbulence in high-speed flows differs from its incompressible behavior, and, in the context of Large-Eddy Simulation (LES), the subgrid closure must be adapted to model the effects of compressibility and shock waves on turbulent flows. The developments described in this thesis are two-fold. First, a state-of-the-art closure approach for LES is extended to model subgrid turbulence in compressible flows. The energy transfers due to compressible turbulence and the diffusion of turbulent kinetic energy by pressure fluctuations are assessed and integrated in the Localized Dynamic ksgs model. Second, a hybrid numerical scheme is developed for the resolution of the LES equations and of the model transport equation, which combines a central scheme for turbulence resolution with a shock-capturing method. A smoothness parameter is defined and used to switch from the base smooth solver to the upwind scheme in regions of discontinuities. It is shown that the developed hybrid methodology permits a capture of shock/turbulence interactions in direct simulations that agrees well with other reference simulations, and that the LES methodology effectively reproduces the turbulence evolution and physical phenomena involved in the interaction. This numerical approach is then employed to study a problem of practical importance in high-speed mixing.
The interaction of two shock waves with a high-speed turbulent shear layer as a mixing augmentation technique is considered. It is shown that the levels of turbulence are increased through the interaction, and that the mixing is significantly improved in this flow configuration. However, the region of increased mixing is found to be localized close to the impact of the shocks, and the statistical levels of turbulence relax to their undisturbed levels a short distance downstream of the interaction. The present developments are finally applied to a practical configuration relevant to scramjet injection. The normal injection of a sonic jet into a supersonic crossflow is considered numerically, and compared to the results of an experimental study. A fair agreement in the statistics of mean and fluctuating velocity fields is obtained. Furthermore, some of the instantaneous flow structures observed in experimental visualizations are identified in the present simulation. The dynamics of the interaction for the reference case, based on the experimental study, as well as for a case of higher freestream Mach number and a case of higher momentum ratio, are examined. The classical instantaneous vortical structures are identified, and their generation mechanisms, specific to supersonic flow, are highlighted. Furthermore, two vortical structures, recently revealed in low-speed jets in crossflow but never documented for high-speed flows, are identified during the flow evolution.
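The hybrid central/shock-capturing idea can be illustrated with a toy flux function for linear advection: a non-dissipative central flux in smooth regions, switching to a dissipative first-order upwind flux where a smoothness sensor detects a sharp gradient. The sensor definition below is an illustrative assumption, not the thesis' actual switch:

```python
import numpy as np

def hybrid_flux(u, a, eps=0.1):
    """Interface fluxes for u_t + a u_x = 0 with a > 0: central flux
    where the solution is smooth, first-order upwind across detected
    discontinuities. A toy analogue of the hybrid LES scheme."""
    f = a * u
    central = 0.5 * (f[:-1] + f[1:])     # non-dissipative central flux
    upwind = f[:-1]                      # dissipative upwind flux (a > 0)
    # Smoothness sensor: normalized jump across each interface.
    jump = np.abs(u[1:] - u[:-1])
    scale = np.abs(u).max() + 1e-12
    rough = jump / scale > eps
    return np.where(rough, upwind, central)
```

Only the interfaces flagged as rough receive the upwind (dissipative) treatment, so turbulent structures away from shocks are not smeared.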
A cost-effective strategy for nonoscillatory convection without clipping
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Niknafs, H. S.
1990-01-01
Clipping of narrow extrema and distortion of smooth profiles is a well-known problem associated with so-called high-resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities, such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher order upwinding locally, in regions of rapidly changing gradients. This is highly cost-effective because the wider stencil is used only where needed: in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
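A hedged sketch of the limited third-order face value at the heart of such schemes; the adaptive extremum discriminator described above, which relaxes the clipping near physical peaks, is omitted here for brevity:

```python
def limited_face_value(uL, uC, uR):
    """Monotonic limiting of a QUICK-like third-order face estimate, in
    the spirit of a universal limiter: the unlimited value is kept in
    smooth regions and clipped to the locally monotone range near
    discontinuities. Illustrative sketch, not the paper's scheme."""
    # Third-order upwind-biased (QUICK) estimate of the face value
    # between uC and uR, with flow from left to right.
    face = (3.0 * uR + 6.0 * uC - uL) / 8.0
    # Clip to the monotone range spanned by the neighbouring cells.
    lo, hi = min(uC, uR), max(uC, uR)
    return min(max(face, lo), hi)
```

This clipping is exactly what causes the extremum-flattening ("clipping") the paper sets out to fix, which is why the authors add a discriminator that relaxes the bounds at genuine turning points.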
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, the image quality will be degraded in both the system impulse response function and the geometric fidelity. Two techniques for resolution of the DCE ambiguity for spaceborne SAR are presented: a brief review of the range cross-correlation technique and a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control the selection of the PRFs, an algorithm is devised that resolves the ambiguity using PRFs of arbitrary numerical values. The performance of this multiple-PRF technique is analyzed based on a statistical error model. An example demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.
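The multiple-PRF idea can be illustrated by a brute-force search for the absolute Doppler centroid whose aliased (wrapped) value matches the baseband estimate at every PRF. The grid step and the search range, which must stay below the combined unambiguous range of the PRFs, are assumptions of this sketch, not the paper's algorithm:

```python
def resolve_doppler_ambiguity(baseband_estimates, prfs, search_hz=6000.0):
    """Find the absolute Doppler centroid (Hz) whose wrapped value best
    matches the baseband DCE estimate at each PRF. Works for arbitrary
    PRF values, as in the multi-PRF technique described. Toy version."""
    def wrap(f, prf):
        # Alias an absolute frequency into roughly (-prf/2, prf/2].
        return f - prf * round(f / prf)

    best, best_err = None, float("inf")
    f = -search_hz
    while f <= search_hz:
        err = sum((wrap(f, prf) - b) ** 2
                  for b, prf in zip(baseband_estimates, prfs))
        if err < best_err:
            best, best_err = f, err
        f += 1.0
    return best
```

Because each PRF aliases the centroid differently, only the true absolute value is consistent with all baseband estimates at once.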
NASA Astrophysics Data System (ADS)
Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong
2018-03-01
We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back into the bus line to achieve additional N-stage spectral compression; thus single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber lengths in the architecture are numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment for the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.
rpe v5: an emulator for reduced floating-point precision in large numerical simulations
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.
2017-06-01
This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
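The essence of such emulation can be sketched in a few lines, assuming simple truncation of an IEEE-754 double's 52-bit significand; rpe itself provides proper rounding and Fortran operator overloading, so this is an illustration of the principle rather than the library's implementation:

```python
import struct

def reduce_precision(x, sbits):
    """Emulate a reduced-precision float by zeroing all but the top
    `sbits` bits of a double's 52-bit significand. Rounding mode here is
    plain truncation, an assumption made for brevity."""
    # Reinterpret the double as a 64-bit integer to reach its bit fields.
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    # Keep the sign, exponent, and the top `sbits` significand bits.
    mask = ~((1 << (52 - sbits)) - 1) & 0xFFFFFFFFFFFFFFFF
    (y,) = struct.unpack("<d", struct.pack("<Q", bits & mask))
    return y
```

Running a model with such a wrapper applied after every arithmetic operation reveals which code sections tolerate low precision and which do not.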
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkin, E. R.; Bicknell, G. V., E-mail: parkin@mso.anu.edu.au
Global three-dimensional magnetohydrodynamic (MHD) simulations of turbulent accretion disks are presented which start from fully equilibrium initial conditions in which the magnetic forces are accounted for and the induction equation is satisfied. The local linear theory of the magnetorotational instability (MRI) is used as a predictor of the growth of magnetic field perturbations in the global simulations. The linear growth estimates and global simulations diverge when nonlinear motions, perhaps triggered by the onset of turbulence, upset the velocity perturbations used to excite the MRI. The saturated state is found to be independent of the initially excited MRI mode, showing that once the disk has expelled the initially net flux field and settled into quasi-periodic oscillations in the toroidal magnetic flux, the dynamo cycle regulates the global saturation stress level. Furthermore, time-averaged measures of converged turbulence, such as the ratio of magnetic energies, are found to be in agreement with previous works. In particular, the globally averaged stress normalized to the gas pressure is ⟨α_P⟩ = 0.034, with notably higher values achieved for simulations with higher azimuthal resolution. Supplementary tests are performed using different numerical algorithms and resolutions. Convergence with resolution during the initial linear MRI growth phase is found for 23-35 cells per scale height (in the vertical direction).
Early Earth plume-lid tectonics: A high-resolution 3D numerical modelling approach
NASA Astrophysics Data System (ADS)
Fischer, R.; Gerya, T.
2016-10-01
Geological-geochemical evidence points towards a higher mantle potential temperature and a different type of tectonics (global plume-lid tectonics) in the early Earth (>3.2 Ga) compared to the present day (global plate tectonics). In order to investigate tectono-magmatic processes associated with plume-lid tectonics and crustal growth under hotter mantle temperature conditions, we conduct a series of 3D high-resolution magmatic-thermomechanical models with the finite-difference code I3ELVIS. No external plate tectonic forces are applied, in order to isolate 3D effects of various plume-lithosphere and crust-mantle interactions. Results of the numerical experiments show two distinct phases in coupled crust-mantle evolution: (1) a longer (80-100 Myr) and relatively quiet 'growth phase', marked by growth of crust and lithosphere, followed by (2) a short (∼20 Myr) and catastrophic 'removal phase', in which unstable parts of the crust and mantle lithosphere are removed by eclogitic dripping and later delamination. This modelling suggests that the early Earth plume-lid tectonic regime followed a pattern of episodic growth and removal, also called episodic overturn, with a periodicity of ∼100 Myr.
Miyazawa, Yasumasa; Guo, Xinyu; Varlamov, Sergey M.; Miyama, Toru; Yoda, Ken; Sato, Katsufumi; Kano, Toshiyuki; Sato, Keiji
2015-01-01
At present, ocean currents are operationally monitored mainly by combined use of numerical ocean nowcast/forecast models and satellite remote sensing data. Improvement in the accuracy of the ocean current nowcast/forecast requires additional measurements with higher spatial and temporal resolution than expected from the current observation network. Here we show the feasibility of assimilating high-resolution seabird and ship drift data into an operational ocean forecast system. Data assimilation of the geostrophic current contained in the observed drift leads to refinement of the gyre mode events of the Tsugaru warm current in the north-eastern sea of Japan represented by the model. Fitting the observed drift to the model depends on the ability of the drift to represent the geostrophic current rather than directly wind-driven components. A preferable horizontal scale of 50 km indicated for the seabird drift data assimilation implies their capability of capturing eddies with smaller horizontal scale than the minimum scale of 100 km resolved by satellite altimetry. The present study demonstrates that transdisciplinary approaches combining bio-/ship-logging and numerical modeling could be effective for enhancing ocean current monitoring. PMID:26633309
Development of an EMC3-EIRENE Synthetic Imaging Diagnostic
NASA Astrophysics Data System (ADS)
Meyer, William; Allen, Steve; Samuell, Cameron; Lore, Jeremy
2017-10-01
2D and 3D flow measurements are critical for validating numerical codes such as EMC3-EIRENE. Toroidal symmetry assumptions preclude tomographic reconstruction of 3D flows from single camera views. In addition, the resolution of the grids utilized in numerical code models can easily surpass the resolution of physical camera diagnostic geometries. For these reasons we have developed a Synthetic Imaging Diagnostic capability for forward-projection comparisons of EMC3-EIRENE model solutions with the line-integrated images from the Doppler Coherence Imaging diagnostic on DIII-D. The forward projection matrix is 2.8 Mpixels by 6.4 Mcells for the non-axisymmetric case we present. For flow comparisons, both simple line integral and field-aligned component matrices must be calculated. The calculation of these matrices is a massive, embarrassingly parallel problem and is performed with a custom dispatcher that allows processing platforms to join mid-problem as they become available, or drop out if resources are needed for higher priority tasks. The matrices are handled using standard sparse matrix techniques. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Office of Fusion Energy Sciences. LLNL-ABS-734800.
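A toy version of the sparse forward-projection machinery, assuming the (pixel, cell, path length) triples have already been produced by ray tracing the camera geometry; all names are illustrative, and a production code would use a compressed sparse format rather than raw COO triples:

```python
import numpy as np

def build_projection_matrix(hits):
    """Assemble a sparse line-integral matrix in COO form from
    (pixel, cell, path_length) triples. Stand-in for the 2.8 Mpixel
    by 6.4 Mcell matrix described in the abstract."""
    rows = np.array([p for p, _, _ in hits])
    cols = np.array([c for _, c, _ in hits])
    vals = np.array([w for _, _, w in hits])
    return rows, cols, vals

def project(rows, cols, vals, emissivity, n_pixels):
    """Apply the sparse matrix: each synthetic pixel is the sum over the
    grid cells on its sight line of emissivity times path length."""
    image = np.zeros(n_pixels)
    np.add.at(image, rows, vals * emissivity[cols])
    return image
```

Each (pixel, cell) entry is independent of every other, which is why the matrix assembly is embarrassingly parallel and tolerates workers joining or leaving mid-problem.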
Microsphere-assisted super-resolution imaging with enlarged numerical aperture by semi-immersion
NASA Astrophysics Data System (ADS)
Wang, Fengge; Yang, Songlin; Ma, Huifeng; Shen, Ping; Wei, Nan; Wang, Meng; Xia, Yang; Deng, Yun; Ye, Yong-Hong
2018-01-01
Microsphere-assisted imaging is an extraordinarily simple technology that can obtain optical super-resolution under white-light illumination. Here, we introduce a method to improve the resolution of a microsphere lens by increasing its numerical aperture. In our proposed structure, BaTiO3 glass (BTG) microsphere lenses are semi-immersed in a S1805 layer with a refractive index of 1.65, and then the semi-immersed microspheres are fully embedded in an elastomer with an index of 1.4. We experimentally demonstrate that this structure, in combination with a conventional optical microscope, can clearly resolve a two-dimensional 200-nm-diameter hexagonally close-packed (hcp) silica microsphere array. In contrast, the widely used structure in which BTG microsphere lenses are fully immersed in a liquid or elastomer cannot even resolve a 250-nm-diameter hcp silica microsphere array. The improvement in resolution through the proposed structure is due to an increase in the effective numerical aperture obtained by semi-immersing the BTG microsphere lenses in a high-refractive-index S1805 layer. Our results will inform the design of microsphere-based high-resolution imaging systems.
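The resolution gain follows directly from the Abbe relation d = λ / (2 NA) with NA = n sin θ: raising the immersion index from 1.4 to 1.65 raises NA and shrinks d. A small sketch with illustrative numbers, not the paper's measured values:

```python
import math

def abbe_resolution(wavelength_nm, n_medium, half_angle_deg):
    """Lateral diffraction-limited resolution d = lambda / (2 NA),
    NA = n sin(theta). Returns (d in nm, NA). Textbook relation used
    to illustrate why a higher-index immersion layer helps."""
    na = n_medium * math.sin(math.radians(half_angle_deg))
    return wavelength_nm / (2.0 * na), na
```

For example, at 550 nm and the same collection half-angle, the n = 1.65 layer gives a smaller diffraction-limited d than the n = 1.4 elastomer.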
Diffusion impact on atmospheric moisture transport
NASA Astrophysics Data System (ADS)
Moseley, C.; Haerter, J.; Göttel, H.; Hagemann, S.; Jacob, D.
2009-04-01
To ensure numerical stability, many global and regional climate models employ numerical diffusion to damp short-wavelength modes. Terrain-following sigma diffusion is known to cause unphysical effects near the surface in orographically structured regions. These can be reduced by applying z-diffusion on geopotential height levels. We investigate the effect of the diffusion scheme on atmospheric moisture transport and precipitation formation at different resolutions in the European region. With respect to a better understanding of diffusion in current and future grid-space global models, present-day regional models serve as an appropriate tool for studying the impact of diffusion schemes: results can easily be constrained to a small test region and checked against reliable observations, which are often unavailable on a global scale. Special attention is paid to the Alps, a region of strong topographic gradients and good observational coverage. Our study is further motivated by the appearance of the "summer drying problem" in South Eastern Europe. This too-warm and too-dry simulated climate is common to many regional and some global climate models, and remains an unsolved problem in the community. We perform a systematic comparison of the two diffusion schemes with respect to the hydrological cycle. In particular, we investigate how local meteorological quantities, such as the atmospheric moisture in the region east of the Alps, depend on the spatial model resolution. Higher model resolution leads to a more accurate representation of the topography and entails larger gradients in the Alps. This could lead to correspondingly stronger transport of moisture along the slopes in the case of sigma-diffusion, with subsequent orographic precipitation, whereas the effect could be qualitatively different in the case of z-diffusion.
For our study, we analyse a sequence of simulations of the regional climate model REMO employing the different diffusion methods over Europe. For these simulations, REMO was forced at the lateral boundaries with ERA40 reanalysis data for a five year period. For our higher resolution simulations we employ the double nesting technique.
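Numerical diffusion damps a wavenumber-k mode at a rate proportional to k², so the shortest resolved waves are removed fastest. A minimal 1D illustration (illustrative grid and diffusion coefficient, not REMO's actual scheme):

```python
import numpy as np

n, K, dt, steps = 128, 1.0e-3, 0.1, 200
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

def diffuse(u, steps):
    """Explicit diffusion u_t = K u_xx with periodic boundaries."""
    for _ in range(steps):
        u = u + K * dt * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u

long_wave = diffuse(np.sin(x), steps)       # k = 1: barely damped
short_wave = diffuse(np.sin(8 * x), steps)  # k = 8: damped ~64x faster
```

The same k²-selective damping is what suppresses grid-scale noise in the model, and why the choice of coordinate surfaces along which it acts (sigma versus z) matters near steep terrain.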
Magnetic Fields in Population III Star Formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turk, Matthew J.; Oishi, Jeffrey S.; Abel, Tom
2012-02-22
We study the buildup of magnetic fields during the formation of Population III star-forming regions by conducting cosmological simulations from realistic initial conditions and varying the Jeans resolution. To investigate this in detail, we start simulations from identical initial conditions, mandating 16, 32, and 64 zones per Jeans length, and study the variation in their magnetic field amplification. We find that, while compression results in some amplification, turbulent velocity fluctuations driven by the collapse can further amplify an initially weak seed field via dynamo action, provided there is sufficient numerical resolution to capture vortical motions (we find this requirement to be 64 zones per Jeans length, slightly larger than, but consistent with, previous work run with more idealized collapse scenarios). We explore saturation of the magnetic field amplification, which could potentially become dynamically important in subsequent, fully resolved calculations. We have also identified a relatively surprising phenomenon that is purely hydrodynamic: the higher-resolution simulations possess substantially different characteristics, including higher infall velocity, increased temperatures inside 1000 AU, and decreased molecular hydrogen content in the innermost region. Furthermore, we find that disk formation is suppressed in higher-resolution calculations, at least at the times that we can follow the calculation. We discuss the effect this may have on the buildup of disks over the accretion history of the first clump to form, as well as the potential for gravitational instabilities to develop and induce fragmentation.
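For flux-frozen isotropic collapse, compression alone amplifies the field as B ∝ ρ^(2/3), so dynamo amplification is identified as growth beyond that baseline. A quick sketch of the scaling (the density ratio is illustrative, not taken from the simulations):

```python
# Flux freezing under spherical compression: B/B0 = (rho/rho0)**(2/3).
rho_ratio = 1.0e6                          # illustrative density increase
b_compression = rho_ratio ** (2.0 / 3.0)   # amplification from compression alone

# Any measured amplification beyond this factor is attributed to
# turbulent dynamo action rather than simple compression.
```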
A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers
NASA Astrophysics Data System (ADS)
Bassett, Gene Marcel
1993-01-01
Instabilities affecting the propagation of supersonic gaseous jets have been studied using high-resolution computer simulations with the Piecewise-Parabolic Method (PPM). These results are discussed in relation to jets from galactic nuclei. These studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear-layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations with different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers, and densities, perturbed by incident sound waves, imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is tau_g = (546 +/- 24) (M/4)^{1.7} (A_pert/0.02)^{-0.4} delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_pert is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, v_mode = c_ext v_relative/(c_jet + c_ext). For high-resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even perturbation is initially higher in amplitude than the odd perturbation. For low-resolution simulations, the odd mode dominates even for a stronger even-mode perturbation. In high-resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet.
In low resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
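The fitted growth-time scaling quoted above can be evaluated directly. A small sketch (central fit value only, the quoted +/- 24 uncertainty is omitted):

```python
def growth_time(mach, a_pert, delta, c):
    """Kink-mode growth time from the quoted fit:
    tau_g = 546 * (M/4)**1.7 * (A_pert/0.02)**-0.4 * delta/c
    (central value of the (546 +/- 24) coefficient only)."""
    return 546.0 * (mach / 4.0) ** 1.7 * (a_pert / 0.02) ** -0.4 * delta / c

# Example: Mach 4 jet, 2% perturbation, in units where delta = c = 1.
tau = growth_time(mach=4.0, a_pert=0.02, delta=1.0, c=1.0)
```

With these reference values the prefactors are all unity, so tau equals the fitted coefficient; higher Mach numbers lengthen the growth time while stronger perturbations shorten it.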
Real-time Retrieving Atmospheric Parameters from Multi-GNSS Constellations
NASA Astrophysics Data System (ADS)
Li, X.; Zus, F.; Lu, C.; Dick, G.; Ge, M.; Wickert, J.; Schuh, H.
2016-12-01
The multi-constellation GNSS (e.g., GPS, GLONASS, Galileo, and BeiDou) brings great opportunities and challenges for the real-time retrieval of atmospheric parameters in support of numerical weather prediction (NWP), nowcasting, and severe weather event monitoring. In this study, observations from the different GNSS are combined for atmospheric parameter retrieval based on the real-time precise point positioning technique. The atmospheric parameters retrieved from multi-GNSS observations, including zenith total delay (ZTD), integrated water vapor (IWV), horizontal gradients (especially high-resolution gradient estimates), and slant total delay (STD), are carefully analyzed and evaluated against VLBI, radiosonde, water vapor radiometer, and numerical weather model data to independently validate the performance of the individual GNSS and to demonstrate the benefits of multi-constellation GNSS for real-time atmospheric monitoring. The results show that multi-GNSS processing can provide real-time atmospheric products with higher accuracy, stronger reliability, and better distribution, which would be beneficial for atmospheric sounding systems, especially for nowcasting of extreme weather.
Data Assimilation of SMAP Observations and the Impact on Weather Forecasts and Heat Stress
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Case, Jonathan; Blankenship, Clay; Crosson, William; White, Khristopher
2014-01-01
SPoRT produces real-time LIS soil moisture products for situational awareness and local numerical weather prediction over CONUS, Mesoamerica, and East Africa. We currently interact and collaborate with operational partners on the evaluation of soil moisture products for applications including drought/fire, extreme heat, convective initiation, and flood and water-borne diseases. Initial efforts to assimilate L2 soil moisture observations from SMOS (as a precursor for SMAP) have been successful. The active/passive blended product from SMAP will be assimilated similarly, and its higher spatial resolution should improve the representation of local-scale processes.
Accurate Monotonicity-Preserving Schemes With Runge-Kutta Time Stepping
NASA Technical Reports Server (NTRS)
Suresh, A.; Huynh, H. T.
1997-01-01
A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. Numerical results for linear advection in one dimension as well as for the Euler equations confirm the schemes' high accuracy, good shock resolution, and computational efficiency.
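The schemes above are the authors' MP reconstructions; as a simpler stand-in illustrating the same monotonicity-preserving idea, here is a minmod-limited MUSCL scheme with a two-stage strong-stability-preserving Runge-Kutta update for linear advection. For such schemes the total variation of the solution does not grow, so a square wave is advected without spurious oscillations:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smaller-magnitude slope, or zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def rhs(u, c, dx):
    """Upwind flux divergence (c > 0) from limited interface states."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    flux = c * (u + 0.5 * slope)          # state at each cell's right face
    return -(flux - np.roll(flux, 1)) / dx

def step(u, c, dx, dt):
    """Two-stage SSP Runge-Kutta (Heun) update."""
    u1 = u + dt * rhs(u, c, dx)
    return 0.5 * (u + u1 + dt * rhs(u1, c, dx))

n, c = 100, 1.0
dx = 1.0 / n
dt = 0.4 * dx / c                          # CFL = 0.4
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # square wave
tv0 = np.sum(np.abs(u - np.roll(u, 1)))    # periodic total variation
for _ in range(50):
    u = step(u, c, dx, dt)
tv1 = np.sum(np.abs(u - np.roll(u, 1)))
```

This is a sketch of the general TVD/MP idea only; the paper's schemes use a higher-order reconstruction with a more permissive limiter that also preserves accuracy at smooth extrema, where minmod clips to first order.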
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge adaptivity and different weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
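The paper's specific weighting rule is not reproduced here; the sketch below shows the generic edge-adaptive idea such methods share: when estimating a missing green sample, weight the horizontal and vertical neighbor averages inversely to the local gradient, so interpolation follows edges rather than crossing them:

```python
import numpy as np

def green_at_rb(cfa, y, x, eps=1e-6):
    """Estimate the missing green value at a red/blue CFA site by
    weighting the horizontal and vertical green-neighbor averages with
    the inverse of the local gradients (a generic edge-adaptive rule,
    not the paper's specific algorithm)."""
    gh = abs(cfa[y, x - 1] - cfa[y, x + 1])   # horizontal gradient
    gv = abs(cfa[y - 1, x] - cfa[y + 1, x])   # vertical gradient
    wh, wv = 1.0 / (gh + eps), 1.0 / (gv + eps)
    avg_h = 0.5 * (cfa[y, x - 1] + cfa[y, x + 1])
    avg_v = 0.5 * (cfa[y - 1, x] + cfa[y + 1, x])
    return (wh * avg_h + wv * avg_v) / (wh + wv)

# A vertical edge: left columns dark, right columns bright. The large
# horizontal gradient suppresses the horizontal average, so the
# estimate follows the edge (~80) instead of blurring across it (~45).
cfa = np.array([[10., 10., 80., 80.],
                [10., 10., 80., 80.],
                [10., 10., 80., 80.]])
g = green_at_rb(cfa, 1, 2)
```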
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdikamalov, Ernazar; Ott, Christian D.; Radice, David
2015-07-20
We conduct a series of numerical experiments into the nature of three-dimensional (3D) hydrodynamics in the postbounce stalled-shock phase of core-collapse supernovae using 3D general-relativistic hydrodynamic simulations of a 27 M⊙ progenitor star with a neutrino leakage/heating scheme. We vary the strength of neutrino heating and find three cases of 3D dynamics: (1) neutrino-driven convection, (2) initially neutrino-driven convection and subsequent development of the standing accretion shock instability (SASI), and (3) SASI-dominated evolution. This confirms previous 3D results of Hanke et al. and Couch and O'Connor. We carry out simulations with resolutions differing by up to a factor of ∼4 and demonstrate that low resolution is artificially favorable for explosion in the 3D convection-dominated case, since it decreases the efficiency of energy transport to small scales. Low resolution results in higher radial convective fluxes of energy and enthalpy, more fully buoyant mass, and stronger neutrino heating. In the SASI-dominated case, lower resolution damps SASI oscillations. In the convection-dominated case, a quasi-stationary angular kinetic energy spectrum E(ℓ) develops in the heating layer. Like other 3D studies, we find E(ℓ) ∝ ℓ^(−1) in the “inertial range,” while theory and local simulations argue for E(ℓ) ∝ ℓ^(−5/3). We argue that current 3D simulations do not resolve the inertial range of turbulence and are affected by numerical viscosity up to the energy-containing scale, creating a “bottleneck” that prevents an efficient turbulent cascade.
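The competing ℓ^(−1) and ℓ^(−5/3) scalings differ simply in their log-log slope. A minimal sketch of how such a slope is measured, using synthetic power-law spectra rather than simulation data:

```python
import numpy as np

ell = np.arange(10, 100)                  # harmonics in the "inertial range"
E_kolmogorov = ell ** (-5.0 / 3.0)        # scaling argued by theory/local sims
E_observed = ell ** (-1.0)                # scaling reported in 3D CCSN runs

def loglog_slope(ell, E):
    """Least-squares power-law exponent of E(ell) in log-log space."""
    return np.polyfit(np.log(ell), np.log(E), 1)[0]

slope_kol = loglog_slope(ell, E_kolmogorov)
slope_obs = loglog_slope(ell, E_observed)
```

In practice the fit window matters: the paper's point is precisely that numerical viscosity contaminates the range over which such a slope can meaningfully be fit.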
A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model
Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...
2016-09-16
Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
Ensuring Safety of Navigation: A Three-Tiered Approach
NASA Astrophysics Data System (ADS)
Johnson, S. D.; Thompson, M.; Brazier, D.
2014-12-01
The primary responsibility of the Hydrographic Department at the Naval Oceanographic Office (NAVOCEANO) is to support US Navy surface and sub-surface Safety of Navigation (SoN) requirements. These requirements are interpreted, surveys are conducted, and accurate products are compiled and archived for future exploitation. For a number of years NAVOCEANO has employed a two-tiered database structure to support SoN. The first tier (Data Warehouse, or DWH) provides access to the full-resolution sonar and lidar data. DWH preserves the original data such that a product of any scale can be built. The second tier (Digital Bathymetric Database - Variable resolution, or DBDB-V) serves as the final archive for SoN chart-scale gridded products compiled from source bathymetry. DBDB-V has been incorporated into numerous DoD tactical decision aids and serves as the foundation bathymetry for ocean modeling. With the evolution of higher-density survey systems and the addition of high-resolution gridded bathymetry product requirements, the two-tiered model no longer provides an efficient solution for SoN: it requires scientists to exploit full-resolution data in order to build any higher-resolution product. A new perspective on the archival and exploitation of source data was required. This new perspective has taken the form of a third tier, the Navigation Surface Database (NSDB). NSDB is an SQLite relational database populated with International Hydrographic Organization (IHO) S-102-compliant Bathymetric Attributed Grids (BAGs). BAGs archived within NSDB are developed at the highest resolution that the collection sensor system can support and contain nodal estimates for depth, uncertainty, separation values, and metadata. Gridded surface analysis efforts culminate in the generation of the source-resolution BAG files and their storage within NSDB. Exploitation of these resources eliminates the time and effort needed to re-grid and re-analyze native source file formats.
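The actual NSDB schema is not given in the abstract; as a minimal illustration of an SQLite archive of per-BAG metadata, consider the sketch below (table and column names are hypothetical):

```python
import sqlite3

# Toy stand-in for an NSDB-style archive: one row of metadata per BAG.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bag_archive (
        bag_id       INTEGER PRIMARY KEY,
        survey_name  TEXT NOT NULL,
        resolution_m REAL NOT NULL,   -- highest grid spacing the sensor supports
        min_depth_m  REAL,
        max_depth_m  REAL,
        s102_version TEXT
    )""")
conn.execute(
    "INSERT INTO bag_archive VALUES (?, ?, ?, ?, ?, ?)",
    (1, "EXAMPLE_SURVEY_2014", 0.5, 12.3, 87.6, "S-102 Ed. 1"))
row = conn.execute(
    "SELECT survey_name, resolution_m FROM bag_archive").fetchone()
```

In the real system the gridded depth/uncertainty surfaces live in the BAG files themselves (an HDF5-based format); the relational layer is what makes them queryable without re-reading native source formats.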
Observed and modeled mesoscale variability near the Gulf Stream and Kuroshio Extension
NASA Astrophysics Data System (ADS)
Schmitz, William J.; Holland, William R.
1986-08-01
Our earliest intercomparisons between western North Atlantic data and eddy-resolving two-layer quasi-geostrophic symmetric-double-gyre steady wind-forced numerical model results focused on the amplitudes and largest horizontal scales in patterns of eddy kinetic energy, primarily abyssal. Here, intercomparisons are extended to recent eight-layer model runs and new data which allow expansion of the investigation to the Kuroshio Extension and throughout much of the water column. Two numerical experiments are shown to have realistic zonal, vertical, and temporal eddy scales in the vicinity of the Kuroshio Extension in one case and the Gulf Stream in the other. Model zonal mean speeds are larger than observed, but vertical shears are in general agreement with the data. A longitudinal displacement between the maximum intensity in surface and abyssal eddy fields as observed for the North Atlantic is not found in the model results. The numerical simulations examined are highly idealized, notably with respect to basin shape, topography, wind-forcing, and of course dissipation. Therefore the zero-order agreement between modeled and observed basic characteristics of mid-latitude jets and their associated eddy fields suggests that such properties are predominantly determined by the physical mechanisms which dominate the models, where the fluctuations are the result of instability processes. The comparatively high vertical resolution of the model is needed to compare with new higher-resolution data as well as for dynamical reasons, although the precise number of layers required either kinematically or dynamically (or numerically) has not been determined; we estimate four to six when no attempt is made to account for bottom- or near-surface-intensified phenomena.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
Latychevskaia, T; Chushkin, Y; Fink, H-W
2016-10-01
In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could be in principle other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary and that of phase objects together with a discussion of the optimal extrapolation procedure. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko
A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.
Taxonomic and Numerical Resolutions of Nepomorpha (Insecta: Heteroptera) in Cerrado Streams
Giehl, Nubia França da Silva; Dias-Silva, Karina; Juen, Leandro; Batista, Joana Darc; Cabette, Helena Soares Ramos
2014-01-01
Transformations of natural landscapes and their biodiversity have become increasingly dramatic and intense, creating a demand for rapid and inexpensive methods to assess and monitor ecosystems, especially the most vulnerable ones, such as aquatic systems. The speed with which surveys can collect, identify, and describe ecological patterns is much slower than that of the loss of biodiversity. Thus, there is a tendency for higher-level taxonomic identification to be used, a practice that is justified by factors such as the cost-benefit ratio, and the lack of taxonomists and reliable information on species distributions and diversity. However, most of these studies do not evaluate the degree of representativeness obtained by different taxonomic resolutions. Given this demand, the present study aims to investigate the congruence between species-level and genus-level data for the infraorder Nepomorpha, based on taxonomic and numerical resolutions. We collected specimens of aquatic Nepomorpha from five streams of first to fourth order in the Pindaíba River Basin in the Cerrado of the state of Mato Grosso, Brazil, totaling 20 sites. A principal coordinates analysis (PCoA) applied to the data indicated that species-level and genus-level abundances were relatively similar (>80% similarity), although this similarity was reduced when compared with the presence/absence of genera (R = 0.77). The presence/absence ordinations of species and genera were similar to those recorded for their abundances (R = 0.95 and R = 0.74, respectively). The results indicate that analyses at the genus level may be used instead of species, given a loss of information of 11 to 19%, although congruence is higher when using abundance data instead of presence/absence.
This analysis confirms that the use of the genus level data is a safe shortcut for environmental monitoring studies, although this approach must be treated with caution when the objectives include conservation actions, and faunal complementarity and/or inventories. PMID:25083770
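The species-versus-genus congruence test amounts to aggregating the abundance matrix to genus level and correlating the resulting between-site dissimilarity matrices. A toy sketch with made-up abundances (not the study's data):

```python
import numpy as np

# Sites x species abundance matrix; species 0-1 belong to genus A,
# species 2-3 to genus B (toy data for illustration only).
species = np.array([[10, 2, 0, 1],
                    [8, 1, 1, 0],
                    [0, 0, 9, 4],
                    [1, 0, 7, 6]], dtype=float)
genus = np.stack([species[:, :2].sum(axis=1),
                  species[:, 2:].sum(axis=1)], axis=1)

def bray_curtis(m):
    """Pairwise Bray-Curtis dissimilarities (condensed upper triangle)."""
    n = m.shape[0]
    return np.array([np.abs(m[i] - m[j]).sum() / (m[i] + m[j]).sum()
                     for i in range(n) for j in range(i + 1, n)])

# Congruence: correlation between species- and genus-level distances.
r = np.corrcoef(bray_curtis(species), bray_curtis(genus))[0, 1]
```

The study's actual analysis uses PCoA ordinations and the R statistics quoted above; this sketch only illustrates why aggregation to genus can preserve most of the between-site pattern when congeneric species co-occur.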
NASA Astrophysics Data System (ADS)
Assassi, Charefeddine; Vandermeirsch, Frederic; Morel, Yves; Charria, Guillaume; Theetten, Sébastien; Dussin, Raphaël; Molines, Jean-Marc
2014-05-01
The aim of this study is to better understand the different overriding mechanisms that control the evolution of the temperature in the Bay of Biscay, through realistic simulations over a period of 50 years. Before performing and analyzing our own numerical experiments with a spatial resolution of 4 km, we compared two global simulations, ORCA-G70 and ORCA-GRD100 (¼° resolution), carried out with the ocean circulation model NEMO by the DRAKKAR group, against inter-annual climatologies (WOA04, Levitus et al. 2005, and Bobyclim, Michel et al. 2009). The two simulations differ in their vertical resolution (46 levels in G70 and 75 levels in GRD100) and atmospheric forcings. The comparison shows an underestimation of the absolute temperature in GRD100 of approximately 0.4°C in the first 300 meters over the entire period of the simulation (1958-2004) compared to G70, although the net air-sea heat flux is significantly higher in GRD100 (2.92 TW for GRD100 and 0.20 TW for G70). Several parameters can explain this apparent contradiction. On the one hand, the wind is more intense in GRD100 and can contribute to the penetration of heat to depth. On the other hand, the thermal balances at different depths show a great disparity between the two simulations, especially in terms of advective transport. However, the temperature anomaly in the two global simulations is very close to the observations (climatology) in the first 400 meters. The standard deviation is higher in the mixed layer (0.29°C for both ORCA simulations) and lower in the intermediate layers (0.15°C for G70 and 0.13°C for GRD100). Moreover, the surface linear trend of temperature in GRD100 (0.14°C.decade-1) is closer to the WOA04 observations (0.19°C.decade-1), while it is only 0.10°C.decade-1 in G70. The GRD100 simulation thus provides a better representation of the temperature evolution than G70.
These analyses confirm the suitability of the GRD100 simulation to drive a regional numerical experiment at higher resolution in the Bay of Biscay. The first experiments over a short period with our regional model (4 km spatial resolution, based on the MARS3D code) showed consistency in the forcings used and give realistic temperature results. This regional approach will be used to explore and understand the main mechanisms involved in the evolution of the temperature at multi-decadal scales.
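The quoted decadal trends are least-squares slopes. A minimal sketch of the calculation on a synthetic series (the 0.14°C per decade GRD100 value is used as the illustrative slope; the series itself is made up):

```python
import numpy as np

years = np.arange(1958, 2005)
# Noise-free synthetic SST series with a 0.014 degC/yr trend
# (= 0.14 degC/decade) on an arbitrary 15 degC baseline.
sst = 15.0 + 0.014 * (years - years[0])

slope_per_year = np.polyfit(years, sst, 1)[0]
trend_per_decade = 10.0 * slope_per_year
```

On real model output the same fit is applied to the (noisy) annual-mean series, and the recovered slope is compared between simulations and climatology as in the abstract.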
Uncertainties in estimates of mortality attributable to ambient PM2.5 in Europe
NASA Astrophysics Data System (ADS)
Kushta, Jonilda; Pozzer, Andrea; Lelieveld, Jos
2018-06-01
The assessment of health impacts associated with airborne particulate matter smaller than 2.5 μm in diameter (PM2.5) relies on aerosol concentrations derived either from monitoring networks, satellite observations, numerical models, or a combination thereof. When global chemistry-transport models are used for estimating PM2.5, their relatively coarse resolution has been implied to lead to underestimation of health impacts in densely populated and industrialized areas. In this study the role of spatial resolution and of vertical layering of a regional air quality model, used to compute PM2.5 impacts on public health and mortality, is investigated. We utilize grid spacings of 100 km and 20 km to calculate annual mean PM2.5 concentrations over Europe, which are in turn applied to the estimation of premature mortality by cardiovascular and respiratory diseases. Using model results at a 100 km grid resolution yields about 535 000 annual premature deaths over the extended European domain (242 000 within the EU-28), while numbers approximately 2.4% higher are derived by using the 20 km resolution. Using the surface (i.e. lowest) layer of the model for PM2.5 yields about 0.6% higher mortality rates compared with PM2.5 averaged over the first 200 m above ground. Further, the calculation of relative risks (RR) from PM2.5, using 0.1 μg m‑3 size resolution bins compared to the commonly used 1 μg m‑3, is associated with ±0.8% uncertainty in estimated deaths. We conclude that model uncertainties contribute a small part of the overall uncertainty expressed by the 95% confidence intervals, which are of the order of ±30%, mostly related to the RR calculations based on epidemiological data.
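Attributable mortality of this kind follows from applying the population attributable fraction, AF = (RR − 1)/RR, to baseline cause-specific mortality in each grid cell. A schematic calculation (the RR and baseline figures below are illustrative, not the study's values):

```python
def attributable_deaths(rr, baseline_deaths):
    """Premature deaths attributable to the exposure, via the
    population attributable fraction AF = (RR - 1) / RR."""
    af = (rr - 1.0) / rr
    return af * baseline_deaths

# Illustrative: RR = 1.25 for cardiovascular mortality at some
# annual-mean PM2.5 level, 2 million baseline deaths in the domain.
deaths = attributable_deaths(rr=1.25, baseline_deaths=2.0e6)
```

The study's sensitivity to RR bin width (0.1 versus 1 μg m-3) enters through how finely RR is looked up as a function of the modeled PM2.5 concentration before this fraction is applied.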
Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding
2017-07-01
Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers diagnose and understand various brain diseases. By its nature, EEG has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for TCREs based on a realistic-geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense parts match the small structural components of the TCRE; areas without dense meshing serve to reduce the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and assessed possible numerical errors against a low-density model. Finally, with the achieved accuracy, we present for the first time the computed forward lead field of the SL for TCREs in a realistic-geometry head model, and demonstrate that it has better spatial resolution than the SL computed from classic EEG recordings.
Spectral domain optical coherence tomography with extended depth-of-focus by aperture synthesis
NASA Astrophysics Data System (ADS)
Bo, En; Liu, Linbo
2016-10-01
We developed a spectral domain optical coherence tomography (SD-OCT) system with an extended depth-of-focus (DOF) achieved by aperture synthesis. For a Gaussian-shaped light source, the lateral resolution is determined by the numerical aperture (NA) of the objective lens and can be approximately maintained over the confocal parameter, defined as twice the Rayleigh range. The DOF, however, is proportional to the square of the lateral resolution, so a trade-off exists between the two, and researchers must weigh which matters more for a given application. In this study, three distinct optical apertures were obtained by embedding a circular phase spacer in the sample arm. Because of the optical path difference (OPD) introduced between the three apertures by the phase spacer, the three images were aligned with equal spacing along the z-axis. By correcting the OPD and the defocus-induced wavefront curvature, the three images at distinct depths were coherently summed. The system digitally refocuses the sample and yields an image with higher lateral resolution maintained over the confocal parameter, as demonstrated by imaging polystyrene calibration beads.
Blind image fusion for hyperspectral imaging with the directional total variation
NASA Astrophysics Data System (ADS)
Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane
2018-04-01
Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsically low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem in which both the fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.
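The variational machinery involved can be illustrated with a minimal sketch using plain isotropic total variation; the paper's directional TV additionally orients the regularizer along gradients taken from the high-resolution side image, and its blind problem also estimates a kernel. All parameters below are illustrative.

```python
import numpy as np

def tv_denoise(f, lam=0.1, eps=0.1, n_iter=200, step=0.1):
    """Gradient descent on the smoothed-TV functional
    E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2),
    a simplified (isotropic, non-blind) stand-in for directional TV."""
    u = f.copy()
    for _ in range(n_iter):
        # forward differences with periodic boundaries
        ux = np.roll(u, -1, 1) - u
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # discrete divergence (adjoint of the forward gradient)
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u -= step * ((u - f) - lam * div)
    return u

# Synthetic test: a noisy vertical edge, smoothed while the edge survives
rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

The edge-preserving behaviour of the TV term is what lets the fused image inherit sharp boundaries from the high-resolution modality.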
Adaptive optics with pupil tracking for high resolution retinal imaging
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-01-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577
Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion
Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas
2014-01-01
The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe used to create the images, but also by detection sensitivity. As the probe size is reduced below 1 µm, for example, the low signal in each pixel limits lateral resolution due to counting statistics. Although numerical methods can help mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated on synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution can be achieved. A cross-correlation metric is used to evaluate the reliability of the procedure. PMID:24912432
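The basic idea of pan-sharpening can be sketched in a few lines: upsample the low-resolution chemical map, then inject the high-frequency detail of the high-resolution electron image. This is a generic high-pass injection variant, not necessarily the algorithm used in the paper.

```python
import numpy as np

def block_mean(img, k):
    """k x k block average; k must divide both image dimensions."""
    ny, nx = img.shape
    return img.reshape(ny // k, k, nx // k, k).mean(axis=(1, 3))

def pansharpen(low, pan, k):
    """Fuse a low-resolution map with a k-times-finer panchromatic image
    by adding the pan image's high-frequency residual to the upsampled map."""
    up = np.kron(low, np.ones((k, k)))                    # nearest-neighbour upsample
    lowpass = np.kron(block_mean(pan, k), np.ones((k, k)))  # pan at the coarse scale
    return up + (pan - lowpass)                           # inject fine detail

# Toy example: 4x4 "SIMS" map fused with a 16x16 "SEM" image
low = np.full((4, 4), 2.0)
pan = np.full((16, 16), 7.0)
fused = pansharpen(low, pan, 4)
```

With a featureless pan image the residual is zero, so the fused result reproduces the upsampled chemical map exactly; chemical specificity is preserved because only high-frequency structure, not intensity, is borrowed from the SEM side.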
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way to minimize the Div(B) numerical error. The filter scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme-independent wavelet flow detector can be used in conjunction with spatially compact, spectral, or spectral-element base schemes.
The ACM and wavelet filter schemes use the dissipative portion of a second-order shock-capturing scheme with a sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives, together with a fourth-order Runge-Kutta time integration.
Numerical Algorithms Based on Biorthogonal Wavelets
NASA Technical Reports Server (NTRS)
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate approximation spaces for the resolution of bidimensional elliptic and parabolic problems. Under specific hypotheses relating the properties of the wavelets to the order of the operators involved, it is shown that an approximate solution can be built. This approximation is stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
A flux splitting scheme with high-resolution and robustness for discontinuities
NASA Technical Reports Server (NTRS)
Wada, Yasuhiro; Liou, Meng-Sing
1994-01-01
A flux splitting scheme is proposed for the general nonequilibrium flow equations with the aim of removing the numerical dissipation of van Leer-type flux-vector splittings at contact discontinuities. The resulting scheme can also be viewed as an improved Advection Upwind Splitting Method (AUSM) in which the slight numerical overshoot immediately behind a shock is eliminated. The proposed scheme has favorable properties: high resolution for contact discontinuities; conservation of enthalpy for steady flows; numerical efficiency; and applicability to chemically reacting flows. In fact, for a single contact discontinuity, even a moving one, the scheme gives the numerical flux of the exact solution of the Riemann problem. Various numerical experiments, including a thermo-chemical nonequilibrium flow, were performed and indicate that the scheme is oscillation-free and robust for shock/expansion waves. A cure for the carbuncle phenomenon is discussed as well.
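For context, the baseline AUSM interface flux that this work improves on can be sketched for the 1D Euler equations. The split Mach and pressure polynomials follow Liou and Steffen's original formulation; this is the starting point, not the improved scheme proposed in the paper.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (calorically perfect gas assumed)

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
    """Baseline AUSM flux for the 1D Euler equations at a cell interface.
    Returns the flux of (mass, momentum, energy)."""
    cL = np.sqrt(GAMMA * pL / rhoL)
    cR = np.sqrt(GAMMA * pR / rhoR)
    ML, MR = uL / cL, uR / cR
    # Split Mach numbers: quadratic polynomials for |M| <= 1, upwind otherwise
    Mp = 0.25 * (ML + 1.0) ** 2 if abs(ML) <= 1 else 0.5 * (ML + abs(ML))
    Mm = -0.25 * (MR - 1.0) ** 2 if abs(MR) <= 1 else 0.5 * (MR - abs(MR))
    m = Mp + Mm                      # interface Mach number
    # Split pressures
    pp = pL * 0.25 * (ML + 1.0) ** 2 * (2.0 - ML) if abs(ML) <= 1 \
        else pL * 0.5 * (ML + abs(ML)) / ML
    pm = pR * 0.25 * (MR - 1.0) ** 2 * (2.0 + MR) if abs(MR) <= 1 \
        else pR * 0.5 * (MR - abs(MR)) / MR
    # Convective vector Phi = (rho c, rho c u, rho c H), upwinded by sign(m)
    HL = GAMMA / (GAMMA - 1.0) * pL / rhoL + 0.5 * uL ** 2
    HR = GAMMA / (GAMMA - 1.0) * pR / rhoR + 0.5 * uR ** 2
    phi = (np.array([rhoL * cL, rhoL * cL * uL, rhoL * cL * HL]) if m >= 0
           else np.array([rhoR * cR, rhoR * cR * uR, rhoR * cR * HR]))
    return m * phi + np.array([0.0, pp + pm, 0.0])
```

A quick sanity check of the splittings: for any uniform state the split Mach numbers sum to M and the split pressures sum to p, so the scheme reproduces the exact Euler flux.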
2015-09-01
A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. A full understanding of the primary atomization process in diesel fuel ... For diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence and large density ratios.
Guillen Bonilla, José Trinidad; Guillen Bonilla, Alex; Rodríguez Betancourtt, Verónica M.; Guillen Bonilla, Héctor; Casillas Zamora, Antonio
2017-01-01
The application of optical fiber sensors in scientific and industrial instrumentation is very attractive due to their numerous advantages. In the civil engineering industry, for example, quasi-distributed sensors made with optical fiber are used for reliable strain and temperature measurements. Here, a quasi-distributed sensor in the frequency domain is discussed. The sensor consists of a series of low-finesse Fabry-Perot interferometers, each acting as a local sensor. The Fabry-Perot interferometers are formed by pairs of identical low-reflectivity Bragg gratings imprinted in a single-mode fiber. All interferometer sensors have different cavity lengths, enabling frequency-domain multiplexing. The optical signal is the superposition of all interference patterns, which can be decomposed using the Fourier transform. The frequency spectrum was analyzed and the sensor's properties were defined. The quasi-distributed sensor was then numerically simulated, taking into account the sensor properties, signal processing, system noise, and instrumentation. The numerical results show the behavior of resolution versus signal-to-noise ratio. The Fabry-Perot sensor exhibits both a high-resolution and a low-resolution regime, because the Fourier Domain Phase Analysis (FDPA) algorithm produces two evaluations of the Bragg wavelength shift. PMID:28420083
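The frequency-domain demultiplexing of such a sensor can be illustrated with a toy two-cavity simulation: each cavity contributes a cosine fringe in the wavenumber domain whose frequency is set by its length, so an FFT separates the local sensors. Cavity lengths, scan range, and fringe amplitudes below are invented for the illustration.

```python
import numpy as np

n = 4096
k = np.linspace(4.0e6, 4.2e6, n)          # wavenumber scan (rad/m), illustrative
L1, L2 = 1.0e-3, 2.5e-3                   # cavity lengths (m), illustrative
# Superposed low-finesse interference patterns: cos(2 k L) per cavity
signal = 0.5 * np.cos(2 * k * L1) + 0.3 * np.cos(2 * k * L2)

dk = k[1] - k[0]
spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=dk)          # fringe frequency in cycles per (rad/m)

# cos(2kL) has fringe frequency L/pi, so each spectral peak recovers a cavity
i1 = int(np.argmax(spectrum[1:])) + 1     # strongest interferometer
masked = spectrum.copy()
masked[max(i1 - 5, 0):i1 + 6] = 0.0       # suppress its main lobe
i2 = int(np.argmax(masked[1:])) + 1       # second interferometer
L_est = sorted([np.pi * freqs[i1], np.pi * freqs[i2]])
```

A shift of a grating's Bragg wavelength moves the phase of the corresponding peak, which is what the FDPA-style analysis evaluates; the spectral bin width sets the coarse (low-resolution) estimate and the peak phase the fine (high-resolution) one.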
Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation
NASA Astrophysics Data System (ADS)
Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo
2015-01-01
Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that the numerical issues are ultimately due to a drop in spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited to massive particles, able to follow their orbits quickly and accurately in highly dynamical backgrounds. The new refinement criterion forces the region around each massive particle to remain at the maximum allowed resolution, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation does not alter the physical evolution of the MBHs, while accounting for all the non-trivial physical processes taking place in violent dynamical scenarios such as the final stages of a galaxy major merger.
Surfzone alongshore advective accelerations: observations and modeling
NASA Astrophysics Data System (ADS)
Hansen, J.; Raubenheimer, B.; Elgar, S.
2014-12-01
The sources, magnitudes, and impacts of non-linear advective accelerations on alongshore surfzone currents are investigated with observations and a numerical model. Previous numerical modeling results have indicated that advective accelerations are an important contribution to the alongshore force balance, and are required to understand spatial variations in alongshore currents (which may result in spatially variable morphological change). However, most prior observational studies have neglected advective accelerations in the alongshore force balance. Using a numerical model (Delft3D) to predict optimal sensor locations, a dense array of 26 colocated current meters and pressure sensors was deployed between the shoreline and 3-m water depth over a 200 by 115 m region near Duck, NC in fall 2013. The array included 7 cross- and 3 alongshore transects. Here, observational and numerical estimates of the dominant forcing terms in the alongshore balance (pressure and radiation-stress gradients) and the advective acceleration terms will be compared with each other. In addition, the numerical model will be used to examine the force balance, including sources of velocity gradients, at a higher spatial resolution than possible with the instrument array. Preliminary numerical results indicate that at O(10-100 m) alongshore scales, bathymetric variations and the ensuing alongshore variations in the wave field and subsequent forcing are the dominant sources of the modeled velocity gradients and advective accelerations. Additional simulations and analysis of the observations will be presented. Funded by NSF and ASDR&E.
NASA Astrophysics Data System (ADS)
Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.
2018-03-01
Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient to distinguish cellular and subcellular structures. Achieving cellular and subcellular resolution requires increasing the axial and lateral resolution and compensating artifacts caused by dispersion and aberrations, including the defocus that limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible with numerical algorithms. Here we present a unified dispersion/aberration correction based on a polynomial parameterization of the phase error and an optimization of image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small-animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the Drosophila larva when computational correction of dispersion and aberrations is used. Owing to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging, and hence can be corrected in the same way by optimization of the image quality. In this way, microscopic resolution is readily achieved in OCT imaging of static biological tissues.
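The entropy-guided correction can be sketched as follows, reduced to a single defocus coefficient and a grid search instead of the paper's full polynomial parameterization and optimizer. The field, coefficient scale, and search grid are all illustrative.

```python
import numpy as np

def image_entropy(intensity):
    """Shannon entropy of the normalized intensity (sharper images score lower)."""
    p = intensity / intensity.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def apply_defocus(field, coeff):
    """Multiply the spatial-frequency spectrum of a complex field by a
    quadratic (defocus-like) phase."""
    fy = np.fft.fftfreq(field.shape[0])[:, None]
    fx = np.fft.fftfreq(field.shape[1])[None, :]
    phase = np.exp(1j * coeff * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * phase)

def refocus_coeff(field, coeffs):
    """Grid search for the defocus coefficient minimizing image entropy."""
    ents = [image_entropy(np.abs(apply_defocus(field, c)) ** 2) for c in coeffs]
    return coeffs[int(np.argmin(ents))]

# Synthetic test: point scatterers, numerically defocused and then recovered
sharp = np.zeros((64, 64), complex)
sharp[20, 20] = sharp[40, 45] = sharp[10, 50] = 1.0
blurred = apply_defocus(sharp, 30.0)
best = refocus_coeff(blurred, np.linspace(-60.0, 60.0, 25))
```

Because entropy is minimal when the scattered energy is most concentrated, the search recovers the phase that exactly cancels the applied defocus; in the paper the same image-quality metric also drives the dispersion and higher-order aberration terms.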
Exploring the nearshore marine wind profile from field measurements and numerical hindcast
NASA Astrophysics Data System (ADS)
del Jesus, F.; Menendez, M.; Guanche, R.; Losada, I.
2012-12-01
Wind power is the predominant offshore renewable energy resource. In recent years, offshore wind farms have become a technically feasible source of electrical power. The economic feasibility of offshore wind farms depends on the quality of the offshore wind conditions compared to that of onshore sites: installation and maintenance costs must be balanced against more hours and a higher quality of available resource. European offshore wind development has revealed that the optimum offshore sites are those combining a high available resource with a limited distance from the coast. Because of the growing height of turbines and the complexity of the coast, with interactions between inland wind, coastal orography, and ocean winds, field measurements and validated numerical models are needed to understand the marine wind profile near the coast. Moreover, recent studies have pointed out that the logarithmic law describing the vertical wind profile has limitations. The aim of this work is to characterize the nearshore vertical wind profile in the marine atmospheric boundary layer. The instrumental observations analyzed in this work come from the Idermar project (www.Idermar.es). Three floating masts deployed at different locations on the Cantabrian coast provide wind measurements at heights from 20 to 90 m. Wind speed and direction are measured, along with several meteorological variables at different heights of the profile. The shortest wind time series has over one year of data. A 20-year high-resolution atmospheric hindcast, using the WRF-ARW model and focusing on hourly offshore wind fields, is also analyzed. Two datasets have been evaluated: a European reanalysis with ~15 km spatial resolution, and a hybrid downscaling of wind fields with a spatial resolution of one nautical mile over the northern coast of Spain. These numerical hindcasts have been validated against the field measurements.
Several parameterizations of the vertical wind profile are evaluated and, based on this work, a particular parameterization of the wind profile is proposed.
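One standard parameterization evaluated in such studies is the neutral logarithmic profile, which is linear in ln z and can therefore be fitted to mast data by simple regression. The profile values below are synthetic, not Idermar measurements.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def fit_log_law(z, u):
    """Least-squares fit of the neutral logarithmic wind profile
    u(z) = (u*/kappa) * ln(z / z0) to measurements at heights z.
    Linear in ln z: u = a*ln(z) + b, with u* = kappa*a and z0 = exp(-b/a)."""
    a, b = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * a
    z0 = np.exp(-b / a)
    return u_star, z0

# Synthetic profile at typical mast heights (20-90 m); values are illustrative
z = np.array([20.0, 40.0, 60.0, 90.0])
u_true_star, z0_true = 0.3, 0.05          # friction velocity (m/s), roughness (m)
u = (u_true_star / KAPPA) * np.log(z / z0_true)
u_star, z0 = fit_log_law(z, u)
```

Systematic departures of observed profiles from this fit, e.g. under stable stratification, are exactly the limitations of the logarithmic law that the abstract refers to.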
NASA Astrophysics Data System (ADS)
Ma, Yulong; Liu, Heping
2017-12-01
Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is large uncertainty in the simulation of flow over complex topography, attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method to improve simulations of flow and recirculation over complex terrain. Simulations over Bolund Hill show improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.
NASA Astrophysics Data System (ADS)
Lovette, J. P.; Lenhardt, W. C.; Blanton, B.; Duncan, J. M.; Stillwell, L.
2017-12-01
The National Water Model (NWM) has provided a novel framework for near-real-time flood inundation mapping across CONUS at 10 m resolution. In many regions, this spatial scale is quickly being surpassed by the collection of high-resolution lidar (1-3 m). As one of the leading states in data collection for flood inundation mapping, North Carolina is currently improving its previously available 20 ft statewide elevation product to a Quality Level 2 (QL2) product with a nominal point spacing of 0.7 m. This QL2 elevation product increases the number of ground points roughly tenfold over the previous statewide lidar product, and by over 250 times compared to the 10 m NED elevation grid. By combining these new lidar data with the discharge estimates from the NWM, we can further improve statewide flood inundation maps and predictions of at-risk areas. In the context of flood risk management, predictions based on higher-resolution elevation models consistently improve on those from coarser products. Additionally, the QL2 lidar includes coarse land cover classification data for each point return, opening the possibility of expanding analysis beyond digital elevation models alone (e.g. improving estimates of surface roughness, identifying anthropogenic features in floodplains, characterizing riparian zones, etc.). Using the NWM Height Above Nearest Drainage approach, we compare flood inundation extents derived from multiple lidar-derived grid resolutions to assess the trade-off between precision and computational load in North Carolina's coastal river basins. The elevation data distributed through the state's new lidar collection program provide spatial resolutions ranging from 5 to 50 feet, with most inland areas also including a 3 ft product. Data storage increases by almost two orders of magnitude across this range, as does processing load.
To further assess the validity of the higher-resolution elevation products for flood inundation, we examine the NWM outputs from Hurricane Matthew, which devastated southeastern North Carolina in October 2016. Compared with numerous surveyed high water marks across the coastal plain, this assessment provides insight into the impacts of grid resolution on flood inundation extent.
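The Height Above Nearest Drainage idea can be sketched on a toy grid: each cell's flood susceptibility is its elevation above the stream cell that drains it, and cells with HAND below the water stage are flagged as inundated. A breadth-first search over the grid stands in here for proper flow-direction routing, and the DEM and stage are invented.

```python
import numpy as np
from collections import deque

def hand(dem, streams):
    """Simplified Height Above Nearest Drainage: each cell's elevation minus
    that of its nearest stream cell (grid BFS instead of flow routing)."""
    ny, nx = dem.shape
    base = np.full((ny, nx), np.nan)
    q = deque()
    for i, j in zip(*np.nonzero(streams)):
        base[i, j] = dem[i, j]        # stream cells drain to themselves
        q.append((i, j))
    while q:                           # propagate drainage elevation outward
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and np.isnan(base[ni, nj]):
                base[ni, nj] = base[i, j]
                q.append((ni, nj))
    return dem - base

def flood_extent(dem, streams, stage):
    """Cells inundated when the water surface sits `stage` above the drainage."""
    return hand(dem, streams) <= stage

# Toy DEM: a plane sloping up away from a stream along the left edge
dem = np.tile(np.arange(5.0), (4, 1))
streams = np.zeros((4, 5), bool)
streams[:, 0] = True
flooded = flood_extent(dem, streams, 2.0)
```

Grid resolution enters directly: a finer DEM changes both the HAND surface and the cell count, which is the precision-versus-computational-load trade-off the abstract examines.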
Numerical Simulation and Mechanical Design for TPS Electron Beam Position Monitors
NASA Astrophysics Data System (ADS)
Hsueh, H. P.; Kuan, C. K.; Ueng, T. S.; Hsiung, G. Y.; Chen, J. R.
2007-01-01
Comprehensive study of the mechanical design and numerical simulation of high-resolution electron beam position monitors is a key step in building the newly proposed third-generation synchrotron radiation research facility, the Taiwan Photon Source (TPS). With an advanced electromagnetic simulation tool such as MAFIA, tailored specifically for particle accelerators, the design of the high-resolution electron beam position monitors can be tested in simulation before being tested experimentally. The design goal of our high-resolution electron beam position monitors is to obtain the best resolution through sensitivity and signal optimization. The definitions of, and differences between, the resolution and sensitivity of electron beam position monitors are explained, along with the design considerations. A prototype design has been carried out and the related simulations performed with MAFIA. The results are presented here. A sensitivity as high as 200 has been achieved in the x direction at 500 MHz.
Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging
NASA Astrophysics Data System (ADS)
Micó, Vicente; Zalevsky, Zeev
2010-07-01
Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolutions in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM, based on time and angular multiplexing of the sample's spatial-frequency information, yielding the generation of a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolution in both the transverse and axial directions. The proposed approach is applicable to essentially transparent (low-concentration dilutions) and static (slow-dynamics) samples. Validation of the method on both a synthetic object (U.S. Air Force resolution test) to quantify the resolution improvement and a biological specimen (sperm cell biosample) is reported, showing the generation of high synthetic numerical aperture values without lenses.
Multiscale sensorless adaptive optics OCT angiography system for in vivo human retinal imaging.
Ju, Myeong Jin; Heisler, Morgan; Wahl, Daniel; Jian, Yifan; Sarunic, Marinko V
2017-11-01
We present a multiscale sensorless adaptive optics (SAO) OCT system capable of imaging retinal structure and vasculature with various fields-of-view (FOV) and resolutions. Using a single deformable mirror and exploiting the polarization properties of light, the SAO-OCT-A was implemented in a compact and easy-to-operate system. With the ability to adjust the beam diameter at the pupil, retinal imaging was demonstrated at two different numerical apertures with the same system. The general morphological structure and retinal vasculature could be observed with lateral resolution on the scale of a few tens of micrometers using conventional OCT and OCT-A scanning protocols with a 1.7-mm-diameter beam incident at the pupil and a large FOV (15 deg × 15 deg). Changing the system to a higher numerical aperture with a 5.0-mm-diameter beam incident at the pupil and SAO aberration correction, the FOV was reduced to 3 deg × 3 deg for fine detailed imaging of morphological structure and microvasculature such as the photoreceptor mosaic and capillaries. Multiscale functional SAO-OCT imaging was performed on four healthy subjects, demonstrating its functionality and potential for clinical utility. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
NASA Astrophysics Data System (ADS)
Mixa, T.; Fritts, D. C.; Laughman, B.; Wang, L.; Kantha, L. H.
2015-12-01
Multiple observations provide compelling evidence that gravity wave dissipation events often occur in multi-scale environments having highly-structured wind and stability profiles extending from the stable boundary layer into the mesosphere and lower thermosphere. Such events tend to be highly localized and thus yield local energy and momentum deposition and efficient secondary gravity wave generation expected to have strong influences at higher altitudes [e.g., Fritts et al., 2013; Baumgarten and Fritts, 2014]. Lidars, radars, and airglow imagers typically cannot achieve the spatial resolution needed to fully quantify these small-scale instability dynamics. Hence, we employ high-resolution modeling to explore these dynamics in representative environments. Specifically, we describe numerical studies of gravity wave packets impinging on a sheet of high stratification and shear and the resulting instabilities and impacts on the gravity wave amplitude and momentum flux for various flow and gravity wave parameters. References: Baumgarten, Gerd, and David C. Fritts (2014). Quantifying Kelvin-Helmholtz instability dynamics observed in noctilucent clouds: 1. Methods and observations. Journal of Geophysical Research: Atmospheres, 119.15, 9324-9337. Fritts, D. C., Wang, L., & Werne, J. A. (2013). Gravity wave-fine structure interactions. Part I: Influences of fine structure form and orientation on flow evolution and instability. Journal of the Atmospheric Sciences, 70(12), 3710-3734.
Analogous on-axis interference topographic phase microscopy (AOITPM).
Xiu, P; Liu, Q; Zhou, X; Xu, Y; Kuang, C; Liu, X
2018-05-01
The refractive index (RI) of a sample, as an endogenous contrast agent, plays an important role in transparent live cell imaging. In tomographic phase microscopy (TPM), 3D quantitative RI maps can be reconstructed from the measured projections of the RI in multiple directions. The resolution of the RI maps depends not only on the numerical aperture of the objective lens but also on the accuracy of the quantitative phase of the sample measured at multiple scanning illumination angles. This paper reports an analogous on-axis interference TPM, where the interference angle between the sample and reference beams is kept constant for projections in multiple directions to improve the accuracy of the phase maps and the resolution of the RI tomograms. The system has been validated with both silica beads and red blood cells. Compared with conventional TPM, the proposed system acquires quantitative RI maps with higher resolution (420 nm at λ = 633 nm) and signal-to-noise ratio, which can be beneficial for live cell imaging in biomedical applications. © 2018 The Authors. Journal of Microscopy © 2018 Royal Microscopical Society.
Multi-shot PROPELLER for high-field preclinical MRI
Pandit, Prachi; Qi, Yi; Story, Jennifer; King, Kevin F.; Johnson, G. Allan
2012-01-01
With the development of numerous mouse models of cancer, there is a tremendous need for an appropriate imaging technique to study the disease evolution. High-field T2-weighted imaging using PROPELLER MRI meets this need. The 2-shot PROPELLER technique presented here provides (a) high spatial resolution, (b) high contrast resolution, and (c) rapid and non-invasive imaging, which enables high-throughput, longitudinal studies in free-breathing mice. Unique data collection and reconstruction make this method robust against motion artifacts. The 2-shot modification introduced here retains more high-frequency information and provides higher SNR than conventional single-shot PROPELLER, making this sequence feasible at high fields, where signal loss is rapid. Results are shown in a liver metastases model to demonstrate the utility of this technique in one of the more challenging regions of the mouse, the abdomen. PMID:20572138
Applying narrowband remote-sensing reflectance models to wideband data.
Lee, Zhongping
2009-06-10
Remote sensing of coastal and inland waters requires sensors with high spatial resolution to capture the fine-scale spatial variation of biogeochemical properties. High-spatial-resolution sensors, however, are usually equipped with spectral bands that are wide in bandwidth (50 nm or wider). In this study, based on numerical simulations of hyperspectral remote-sensing reflectance of optically deep waters, and using Landsat band specifics as an example, the impact of a wide spectral channel on remote sensing is analyzed. It is found that simple adoption of a narrowband model may result in >20% underestimation in calculated remote-sensing reflectance, and conversely may result in >20% overestimation in inverted absorption coefficients even under perfect conditions, although smaller (approximately 5%) uncertainties are found for more highly absorbing waters. These results provide a cautionary note on applying narrowband models to wideband data, but also a justification for doing so in turbid coastal waters.
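The band-averaging effect quantified above can be illustrated with a small numerical sketch; the spectrum and band parameters below are invented for illustration and are not taken from the study. A smooth reflectance spectrum averaged over a wide boxcar band differs noticeably from its narrowband value at the band center:

```python
import numpy as np

# Band-averaging sketch: a smooth hypothetical reflectance spectrum averaged
# over a wide boxcar band differs noticeably from its narrowband value at
# the band center. Spectrum and band parameters are invented.
wl = np.arange(400.0, 701.0, 1.0)                      # wavelength, nm
rrs = 0.005 * np.exp(-(((wl - 490.0) / 60.0) ** 2))    # illustrative Rrs, sr^-1

def band_average(center, width):
    """Average the spectrum over a flat (boxcar) spectral response."""
    mask = np.abs(wl - center) <= width / 2.0
    return rrs[mask].mean()

center, width = 560.0, 80.0                            # a wide, Landsat-like band
rrs_wide = band_average(center, width)
rrs_narrow = np.interp(center, wl, rrs)                # narrowband value at center

bias = (rrs_wide - rrs_narrow) / rrs_narrow * 100.0
print(f"wideband vs narrowband difference: {bias:+.1f}%")
```

The sign and size of the difference depend on the spectrum's curvature across the band, which is consistent with the bias varying between clear and more highly absorbing waters.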
Guide-star-based computational adaptive optics for broadband interferometric tomography
Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Scott Carney, P.; Boppart, Stephen A.
2012-01-01
We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179
NASA Astrophysics Data System (ADS)
Lee, Yueh-Ning; Hennebelle, Patrick
2018-04-01
Context: Understanding the origin of the initial mass function (IMF) of stars is a major problem for the star formation process and beyond. Aims: We investigate the dependence of the peak of the IMF on the physics of the so-called first Larson core, which corresponds to the point where the dust becomes opaque to its own radiation. Methods: We performed numerical simulations of collapsing clouds of 1000 M⊙ for various gas equations of state (eos), paying great attention to the numerical resolution and convergence. The initial conditions of these numerical experiments are varied in the companion paper. We also develop analytical models that we compare to our numerical results. Results: When an isothermal eos is used, we show that the peak of the IMF shifts to lower masses with improved numerical resolution. When an adiabatic eos is employed, numerical convergence is obtained. The peak position varies with the eos, and using an analytical model to infer the mass of the first Larson core, we find that the peak position is about ten times this mass. By analyzing the stability of nonlinear density fluctuations in the vicinity of a point mass and then summing over a reasonable density distribution, we find that tidal forces exert a strong stabilizing effect and likely lead to a preferential mass several times higher than that of the first Larson core. Conclusions: We propose that in a sufficiently massive and cold cloud, the peak of the IMF is determined by the thermodynamics of the high-density adiabatic gas as well as the stabilizing influence of tidal forces. The resulting characteristic mass is about ten times the mass of the first Larson core, which altogether leads to a few tenths of solar masses. Since these processes are not related to the large-scale physical conditions and to the environment, our results suggest a possible explanation for the apparent universality of the peak of the IMF.
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we compute the full posterior parameter distribution only for generic nonprecessing binaries, drawing inferences beyond the discrete set of NR simulations used via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 and l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters of equal-mass, zero-spin, GW150914-like, and unequal-mass, precessing-spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand its systematic and statistical errors. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
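The interpolation step described above can be sketched as follows: tabulate the marginalized log likelihood ln L on a sparse parameter grid and interpolate to place the peak between simulation points. The parameter grid and the quadratic ln L surface below are synthetic stand-ins, not values evaluated against actual NR simulations:

```python
import numpy as np

# Interpolating a tabulated marginalized log likelihood between simulation
# points. The parameter grid and the quadratic ln L surface are synthetic
# stand-ins for values evaluated against NR simulations.
q = np.linspace(0.5, 1.0, 6)                   # mass-ratio grid (illustrative)
m = np.linspace(60.0, 80.0, 6)                 # total-mass grid, M_sun
Q, M = np.meshgrid(q, m, indexing="ij")
lnL = -((Q - 0.82) / 0.1) ** 2 - ((M - 71.0) / 4.0) ** 2

# Fit a separable quadratic in (q, m) to the tabulated ln L and locate
# its maximum, which need not lie on a grid point.
A = np.stack([Q.ravel() ** 2, M.ravel() ** 2, Q.ravel(), M.ravel(),
              np.ones(Q.size)], axis=1)
coef, *_ = np.linalg.lstsq(A, lnL.ravel(), rcond=None)
a_q, a_m, b_q, b_m, _ = coef
q_peak, m_peak = -b_q / (2.0 * a_q), -b_m / (2.0 * a_m)
print(f"interpolated peak: q = {q_peak:.3f}, M = {m_peak:.1f} M_sun")
```

Because the synthetic surface is exactly quadratic, the fit recovers the off-grid peak; with real tabulated likelihoods, a local fit or smoother interpolant plays the same role.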
Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G
2008-11-24
An eigenvector-analysis-based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method results in a much higher signal-to-noise ratio than the standard 1-D weighted-centroid dip-finding algorithm.
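A minimal sketch of an eigenvector-based estimator in the same spirit (this is a generic PCA approach on synthetic 1-D dip curves, not the authors' specific 2-D algorithm): project calibration curves onto the leading eigenvector of their covariance and map the scores linearly to refractive index.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(60.0, 75.0, 400)            # incidence angle, degrees

def spr_dip(center, width=2.0):
    """Illustrative SPR reflectance curve with a Lorentzian dip."""
    return 1.0 - 0.8 / (1.0 + ((theta - center) / width) ** 2)

# Calibration set: assume the dip angle shifts linearly with refractive
# index (100 deg per RIU here, purely illustrative).
ri = np.linspace(1.330, 1.340, 21)
centers = 65.0 + (ri - 1.330) * 100.0
cal = np.stack([spr_dip(c) for c in centers])    # one curve per row

# Eigenvector analysis: project mean-removed curves onto the leading
# right singular vector, then map scores linearly to refractive index.
mean = cal.mean(axis=0)
_, _, vt = np.linalg.svd(cal - mean, full_matrices=False)
scores = (cal - mean) @ vt[0]
slope, intercept = np.polyfit(scores, ri, 1)

# Noisy test curve at a known refractive index.
ri_true = 1.3347
test = spr_dip(65.0 + (ri_true - 1.330) * 100.0)
test += rng.normal(0.0, 0.01, theta.size)
ri_est = slope * ((test - mean) @ vt[0]) + intercept
print(f"estimated RI: {ri_est:.5f} (true {ri_true})")
```

Projecting onto the leading eigenvector averages over the full curve, which is why this family of methods tolerates far more noise than locating the dip minimum directly.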
Stochastic Optimal Prediction with Application to Averaged Euler Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bell, John; Chorin, Alexandre J.; Crutchfield, William
Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of the averaged Euler equations in two space dimensions.
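A toy illustration of the first-order OP idea: the unresolved component is replaced by its conditional expectation given the resolved one, taken under an assumed Gaussian invariant measure. The two-variable linear system and covariance below are invented for the sketch, not taken from the report.

```python
import numpy as np

# Toy first-order optimal prediction: the unresolved component u of a
# two-variable linear system is replaced by its conditional expectation
# given the resolved component v, taken under an assumed zero-mean
# Gaussian "invariant" measure with covariance C (all values invented).
C = np.array([[1.0, 0.6],
              [0.6, 2.0]])                 # cov of (v, u) under the measure

def conditional_mean_u(v):
    """E[u | v] for a zero-mean Gaussian: C_uv / C_vv * v."""
    return C[1, 0] / C[0, 0] * v

def full_rhs(v, u):
    """Full system: v' = -v + u, u' = -2u."""
    return -v + u, -2.0 * u

def op_rhs(v):
    """Reduced (first-order OP) system: v' = -v + E[u | v]."""
    return -v + conditional_mean_u(v)

# Forward-Euler integration of both systems from a consistent start.
dt, steps = 0.01, 200
v_full, u_full = 1.0, conditional_mean_u(1.0)
v_op = 1.0
for _ in range(steps):
    dv, du = full_rhs(v_full, u_full)
    v_full, u_full = v_full + dt * dv, u_full + dt * du
    v_op = v_op + dt * op_rhs(v_op)
print(f"resolved component at t=2: full = {v_full:.3f}, OP closure = {v_op:.3f}")
```

The closure is exact at t = 0 and drifts afterwards, which is the gap that the higher-order, stochastic estimators described in the abstract are designed to narrow.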
Application of Numerical Weather Models to Mitigating Atmospheric Artifacts in InSAR
NASA Astrophysics Data System (ADS)
Foster, J. H.; Kealy, J.; Businger, S.; Cherubini, T.; Brooks, B. A.; Albers, S. C.; Lu, Z.; Poland, M. P.; Chen, S.; Mass, C.
2011-12-01
A high-resolution weather "hindcasting" system to model the atmosphere at the time of SAR scene acquisitions has been established to investigate and mitigate the impact of atmospheric water vapor on InSAR deformation maps. Variations in the distributions of water vapor in the atmosphere between SAR acquisitions lead to artifacts in interferograms that can mask real ground motion signals. A database of regional numerical weather prediction model outputs generated by the University of Washington and U.C. Davis for times matching SAR acquisitions was used as "background" for higher resolution analyses of the atmosphere for Mount St Helens volcano in Washington, and Los Angeles in southern California. Using this background, we use LAPS to incrementally incorporate all other available meteorological data sets, including GPS, to explore the impact of additional observations on model accuracy. Our results suggest that, even with significant quantities of contemporaneously measured data, high-resolution atmospheric analyses are unable to model the timing and location of water vapor perturbations accurately enough to produce robust and reliable phase screens that can be directly subtracted from interferograms. Despite this, the analyses are able to reproduce the statistical character of the atmosphere with some confidence, suggesting that, in the absence of unusually dense in-situ measurements (such as is the case with GPS data for Los Angeles), weather analysis can play a valuable role in constraining the power-spectrum expected in an interferogram due to the troposphere. This could be used to provide objective weights to scenes during traditional stacking or to tune the filter parameters in time-series analyses.
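The scene-weighting idea in the last sentence can be sketched as inverse-variance weighting, with the tropospheric noise variance of each interferogram taken from the weather analysis; all numbers below are invented for illustration.

```python
import numpy as np

# Inverse-variance stacking sketch: each interferogram's deformation-rate
# estimate is weighted by the tropospheric noise variance predicted for it
# by the weather analysis. All numbers are invented for illustration.
true_rate = 5.0                                      # mm/yr
atmos_var = np.array([4.0, 25.0, 9.0, 1.0, 16.0])    # mm^2, model-predicted
noise = np.array([1.2, -4.0, 2.1, -0.3, 3.5])        # illustrative errors,
                                                     # larger where variance is larger
obs = true_rate + noise                              # per-interferogram estimates

weights = 1.0 / atmos_var                            # inverse-variance weights
weighted = np.sum(weights * obs) / np.sum(weights)
unweighted = obs.mean()
print(f"weighted {weighted:.2f} vs unweighted {unweighted:.2f} mm/yr "
      f"(true {true_rate})")
```

Down-weighting the scenes the analysis flags as atmospherically noisy pulls the stacked estimate toward the true rate, which is the role the predicted power spectrum would play in practice.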
Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster
Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi
2017-01-01
Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement by simulating the variation tendency of ocean waves. Although existing work has improved the performance of MASNUM from various aspects, there remains substantial room for further performance improvement. In this paper, we aim at improving the performance of the propagation solver and data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as algorithm redesign, load distribution optimization, parallel I/O, and data access optimization. The experimental results demonstrate that our approach achieves higher performance than the state-of-the-art work, with about a 3.5x speedup and no degradation of prediction accuracy. In addition, a parameter sensitivity analysis shows our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972
A Runge-Kutta discontinuous finite element method for high speed flows
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. T.
1991-01-01
A Runge-Kutta discontinuous finite element method is developed for hyperbolic systems of conservation laws in two space variables. The discontinuous Galerkin spatial approximation to the conservation laws results in a system of ordinary differential equations which are marched in time using Runge-Kutta methods. Numerical results for the two-dimensional Burgers equation show that the method is (p+1)-order accurate in time and space, where p is the degree of the polynomial approximation of the solution within an element, and that it is capable of capturing shocks over a single element without oscillations. Results for this problem also show that the accuracy of the solution in smooth regions is unaffected by the local projection and that the accuracy in smooth regions increases as p increases. Numerical results for the Euler equations show that the method captures shocks without oscillations and with higher resolution than a first-order scheme.
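A minimal one-dimensional analogue of the method (linear advection rather than the Burgers or Euler equations treated in the paper), with p = 1 Legendre elements, upwind numerical fluxes, and a two-stage SSP Runge-Kutta time marcher:

```python
import numpy as np

# P1 discontinuous Galerkin for u_t + a u_x = 0 on [0, 1], periodic,
# with upwind flux and a two-stage SSP Runge-Kutta time marcher.
a, n = 1.0, 64
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                    # element centers

def rhs(c0, c1):
    """Semi-discrete DG operator for the Legendre coefficients (c0, c1)."""
    flux = a * (c0 + c1)                        # upwind flux (a > 0): left trace
    f_right = flux                              # flux at face j+1/2
    f_left = np.roll(flux, 1)                   # flux at face j-1/2 (periodic)
    dc0 = -(f_right - f_left) / h
    dc1 = (2.0 * a * c0 - (f_right + f_left)) * 3.0 / h
    return dc0, dc1

# Project a smooth initial condition: cell average and linear slope.
c0 = np.sin(2 * np.pi * x)
c1 = np.pi * h * np.cos(2 * np.pi * x)          # ~ (h/2) * u_x at the center

dt = 0.3 * h / a                                # below the CFL ~ 1/(2p+1) limit
t, t_end = 0.0, 1.0                             # advect for one full period
while t < t_end - 1e-12:
    step = min(dt, t_end - t)
    k0, k1 = rhs(c0, c1)
    s0, s1 = c0 + step * k0, c1 + step * k1     # SSP-RK2 stage 1
    k0b, k1b = rhs(s0, s1)
    c0 = 0.5 * (c0 + s0 + step * k0b)           # SSP-RK2 stage 2
    c1 = 0.5 * (c1 + s1 + step * k1b)
    t += step

err = np.abs(c0 - np.sin(2 * np.pi * x)).max()
print(f"max cell-average error after one period: {err:.2e}")
```

The element-local structure of the residual (only face fluxes couple neighbors) is what makes the method well suited to capturing a shock within a single element.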
Resolution power in digital in-line holography
NASA Astrophysics Data System (ADS)
Garcia-Sucerquia, J.; Xu, W.; Jericho, S. K.; Jericho, M. H.; Klages, P.; Kreuzer, H. J.
2006-01-01
Digital in-line holographic microscopy (DIHM) can achieve wavelength resolution both laterally and in depth with the simple optical setup consisting of a laser illuminating a wavelength-sized pinhole and a CCD camera for recording the hologram. The reconstruction is done numerically on the basis of the Kirchhoff-Helmholtz transform which yields a three-dimensional image of the objects throughout the sample volume. Resolution in DIHM depends on several controllable factors or parameters: (1) pinhole size controlling spatial coherence, (2) numerical aperture given by the size and positioning of the recording CCD chip, (3) pixel density and dynamic range controlling fringe resolution and noise level in the hologram and (4) wavelength. We present a detailed study of the individual and combined effects of these factors by doing an analytical analysis coupled with numerical simulations of holograms and their reconstruction. The result of this analysis is a set of criteria, also in the form of graphs, which can be used for the optimum design of the DIHM setup. We will also present a series of experimental results that test and confirm our theoretical analysis. The ultimate resolution to date is the imaging of the motion of submicron spheres and bacteria, a few microns apart, with speeds of hundreds of microns per second.
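Factor (2), the geometry-set numerical aperture, translates into resolution estimates along the lines sketched below; the setup dimensions are illustrative, and λ/(2NA) and λ/(2NA²) are the commonly used lateral and depth estimates rather than the paper's exact criteria.

```python
import math

# Geometry-set NA of an in-line holographic recording and the resulting
# diffraction-limited resolution estimates. Dimensions are illustrative.
wavelength = 0.532e-6        # m, green laser
chip_half_width = 12.0e-3    # m, half-size of the CCD chip
distance = 20.0e-3           # m, pinhole-to-chip distance

# NA from the half-angle subtended by the chip at the pinhole.
na = chip_half_width / math.hypot(chip_half_width, distance)

lateral = wavelength / (2.0 * na)        # common lateral-resolution estimate
depth = wavelength / (2.0 * na ** 2)     # common depth-resolution estimate
print(f"NA = {na:.3f}, lateral ≈ {lateral * 1e6:.2f} µm, "
      f"depth ≈ {depth * 1e6:.2f} µm")
```

Moving the chip closer or using a larger chip raises the NA, which tightens both estimates; the paper's criteria additionally fold in pinhole coherence and pixel-limited fringe resolution.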
NASA Astrophysics Data System (ADS)
Deng, Hongping; Mayer, Lucio; Meru, Farzana
2017-09-01
We carry out simulations of gravitationally unstable disks using smoothed particle hydrodynamics (SPH) and the novel Lagrangian meshless finite mass (MFM) scheme in the GIZMO code. Our aim is to understand the cause of the nonconvergence of the cooling boundary for fragmentation reported in the literature. We run SPH simulations with two different artificial viscosity implementations and compare them with MFM, which does not employ any artificial viscosity. With MFM we demonstrate convergence of the critical cooling timescale for fragmentation at β_crit ≈ 3. Nonconvergence persists in SPH codes. We show how the nonconvergence problem is caused by artificial fragmentation triggered by excessive dissipation of angular momentum in domains with large velocity derivatives. With increased resolution, such domains become more prominent. Vorticity lags behind density, due to numerical viscous dissipation in these regions, promoting collapse with longer cooling times. This effect is shown to be dominant over the competing tendency of artificial viscosity to diminish with increasing resolution. When the initial conditions are first relaxed for several orbits, the flow is more regular, with lower shear and vorticity in nonaxisymmetric regions, aiding convergence. Yet MFM is the only method that converges exactly. Our findings are of general interest, as numerical dissipation via artificial viscosity or advection errors can also occur in grid-based codes. Indeed, for the FARGO code, values of β_crit significantly higher than our converged estimate have been reported in the literature. Finally, we discuss implications for giant planet formation via disk instability.
Parameter uncertainty in simulations of extreme precipitation and attribution studies.
NASA Astrophysics Data System (ADS)
Timmermans, B.; Collins, W. D.; O'Brien, T. A.; Risser, M. D.
2017-12-01
The attribution of extreme weather events, such as heavy rainfall, to anthropogenic influence involves the analysis of their probability in simulations of climate. The climate models used, however, such as the Community Atmosphere Model (CAM), employ approximate physics that gives rise to "parameter uncertainty"—uncertainty about the most accurate or optimal values of numerical parameters within the model. In particular, approximate parameterisations for convective processes are well known to be influential in the simulation of precipitation extremes. Towards examining the impact of this source of uncertainty on attribution studies, we investigate the importance of components—through their associated tuning parameters—of parameterisations relating to deep and shallow convection, and cloud and aerosol microphysics in CAM. We hypothesise that as numerical resolution is increased, the change in the proportion of variance induced by perturbed parameters associated with the respective components is consistent with the decreasing applicability of the underlying hydrostatic assumptions. For example, the relative influence of deep convection should diminish as resolution approaches that at which convection can be resolved numerically (~10 km). We quantify the relationship between the relative proportion of variance induced and numerical resolution by conducting computer experiments that examine precipitation extremes over the contiguous U.S. In order to mitigate the enormous computational burden of running ensembles of long climate simulations, we use variable-resolution CAM and employ both extreme value theory and surrogate modelling techniques ("emulators"). We discuss the implications of the relationship between parameterised convective processes and resolution, both in the context of attribution studies and in the progression towards models that fully resolve convection.
NASA Astrophysics Data System (ADS)
Jeon, Seungwan; Park, Jihoon; Kim, Chulhong
2018-02-01
Photoacoustic microscopy (PAM) is a hybrid imaging technology using optical illumination and acoustic detection. PAM is divided into two types: optical-resolution PAM (OR-PAM) and acoustic-resolution PAM (AR-PAM). Of the two, AR-PAM has a great advantage in penetration depth compared to OR-PAM, because AR-PAM relies on the acoustic focus, which is much less scattered in biological tissue than the optical focus. However, because the acoustic focus is not as tight as the optical focus at the same numerical aperture (NA), AR-PAM requires an acoustic NA higher than the optical NA. The high NA of the acoustic focus produces good image quality in the focal zone, but significantly degrades spatial resolution and signal-to-noise ratio (SNR) in the out-of-focus zone. To overcome this problem, the synthetic aperture focusing technique (SAFT) has been introduced. SAFT improves the degraded image quality in terms of both SNR and spatial resolution in the out-of-focus zone by calculating the time delays of the corresponding signals and combining them. To extend the dimension of the correction effect, several 2D SAFTs have been introduced, but the conventional 2D SAFTs cannot improve the degraded SNR and resolution as well as 1D SAFT can. In this study, we propose a new 2D SAFT that can compensate the distorted signals in the x and y directions while maintaining the correction performance of 1D SAFT.
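The delay-and-sum core of SAFT can be sketched in one lateral dimension (geometry, pulse shape, and sampling below are invented for illustration): each pixel's samples are summed at the delays its position implies, so the sum peaks at the true target location.

```python
import numpy as np

# Toy delay-and-sum SAFT in one lateral dimension. A point target's echo
# arrives later at scan positions farther from it; summing each pixel's
# geometrically delayed samples refocuses the target. All parameters are
# invented for illustration.
c = 1500.0                                   # m/s, speed of sound
fs = 50e6                                    # Hz, sampling rate
f0 = 5e6                                     # Hz, pulse center frequency
xs = np.linspace(-2e-3, 2e-3, 41)            # scan positions, m
x0, z0 = 0.0, 5e-3                           # point target position, m
t = np.arange(2048) / fs                     # time axis, s

# Simulated A-lines: Gaussian-windowed cosine pulse at the two-way delay.
alines = np.zeros((xs.size, t.size))
for i, x in enumerate(xs):
    tau = 2.0 * np.hypot(x - x0, z0) / c
    alines[i] = (np.exp(-((t - tau) * f0 * 2.0) ** 2)
                 * np.cos(2 * np.pi * f0 * (t - tau)))

def saft_pixel(xp, zp):
    """Delay-and-sum the A-lines at the delays implied by pixel (xp, zp)."""
    total = 0.0
    for i, x in enumerate(xs):
        tau = 2.0 * np.hypot(x - xp, zp) / c
        total += np.interp(tau, t, alines[i])
    return total

on_target = saft_pixel(x0, z0)
off_target = saft_pixel(x0 + 1e-3, z0)
print(f"focused amplitude {on_target:.1f} vs off-target {off_target:.2f}")
```

Extending the delay geometry to two scan axes is the essence of the 2D variants the abstract discusses; the difficulty is preserving this on/off-target contrast in both directions at once.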
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Goswami, Mayank; Pugh, Edward N.; Zawadzki, Robert J.
2016-03-01
Scanning laser ophthalmoscopy (SLO) is a very important imaging tool in ophthalmology research. By combining it with adaptive optics (AO), AO-SLO can correct for ocular aberrations, resulting in cellular-level resolution and allowing longitudinal studies of single-cell morphology in the living eye. The numerical aperture (NA) sets the optical resolution that can be achieved in "classical" imaging systems. The mouse eye has more than twice the NA of the human eye, thus offering theoretically higher resolution. However, in most SLO-based imaging systems the imaging beam size at the mouse pupil sets the NA of the instrument, while most AO-SLO systems use almost the full NA of the mouse eye. In this report, we first simulated the theoretical resolution that can be achieved in vivo for different imaging beam sizes (different NA), assuming two cases: no aberrations, and aberrations based on published mouse ocular wavefront data. Then we imaged mouse retinas with our custom-built SLO system using different beam sizes to compare these results with theory. Further experiments include a comparison of the SLO and AO-SLO systems for imaging different types of fluorescently labeled cells (microglia, ganglion cells, photoreceptors, etc.). By comparing those results and taking into account system complexity and ease of use, the benefits and drawbacks of the two imaging systems will be discussed.
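The beam-size/NA trade-off can be sketched with the standard Rayleigh estimate; the focal length and wavelength below are illustrative assumptions, not the paper's measured values.

```python
# Diffraction-limited lateral resolution versus pupil beam diameter, using
# the Rayleigh estimate 0.61 * lambda / NA. The focal length and wavelength
# are illustrative assumptions, not the paper's measured values.
wavelength = 0.8e-6          # m, typical SLO imaging wavelength
focal_length = 2.0e-3        # m, illustrative mouse-eye effective focal length

for beam_diameter_mm in (0.5, 1.0, 2.0):
    na = (beam_diameter_mm * 1e-3 / 2.0) / focal_length   # small-angle NA
    res = 0.61 * wavelength / na
    print(f"{beam_diameter_mm:.1f} mm beam: NA = {na:.3f}, "
          f"resolution ≈ {res * 1e6:.1f} µm")
```

Doubling the beam diameter halves the diffraction-limited spot, but in a real eye the larger pupil also admits more aberration, which is why the aberrated case in the simulations matters.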
On controlling nonlinear dissipation in high order filter methods for ideal and non-ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjogreen, B.
2004-01-01
The newly developed adaptive numerical dissipation control in spatially high order filter schemes for the compressible Euler and Navier-Stokes equations has been recently extended to the ideal and non-ideal magnetohydrodynamics (MHD) equations. These filter schemes are applicable to complex unsteady MHD high-speed shock/shear/turbulence problems. They also provide a natural and efficient way for the minimization of Div(B) numerical error. The adaptive numerical dissipation mechanism consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. The numerical dissipation considered consists of high order linear dissipation for the suppression of high frequency oscillation and the nonlinear dissipative portion of high-resolution shock-capturing methods for discontinuity capturing. The applicable nonlinear dissipative portion of high-resolution shock-capturing methods is very general. The objective of this paper is to investigate the performance of three commonly used types of nonlinear numerical dissipation for both the ideal and non-ideal MHD.
USDA-ARS?s Scientific Manuscript database
Many societal applications of soil moisture data products require high spatial resolution and numerical accuracy. Current thermal geostationary satellite sensors (GOES Imager and GOES-R ABI) could produce 2-16km resolution soil moisture proxy data. Passive microwave satellite radiometers (e.g. AMSR...
An integrated model for Jupiter's dynamo action and mean jet dynamics
NASA Astrophysics Data System (ADS)
Gastine, Thomas; Wicht, Johannes; Duarte, Lucia; Heimpel, Moritz
2014-05-01
Data from various spacecraft have revealed that Jupiter's large-scale interior magnetic field is very Earth-like. This is surprising, since numerical simulations have demonstrated that, for example, the radial dependence of density, electrical conductivity, and other physical properties, which is only mild in the iron cores of terrestrial planets but very drastic in gas planets, can significantly affect the interior dynamics. Jupiter's dynamo action is thought to take place in the deeper envelope, where hydrogen, the main constituent of Jupiter's atmosphere, assumes metallic properties. The potential interaction between the observed zonal jets and the deeper dynamo region is an unresolved problem with important consequences for the magnetic field generation. Here we present the first numerical simulation that is based on recent interior models and covers 99% of the planetary radius (below the 1 bar level). A steep decrease in the electrical conductivity over the outer 10% in radius allowed us to model both the deeper metallic region and the outer molecular layer in an integrated approach. The magnetic field very closely reproduces Jupiter's known large-scale field. A strong equatorial zonal jet remains confined to the molecular layer, while higher-latitude jets are suppressed by Lorentz forces. This suggests that Jupiter's higher-latitude jets remain shallow and are driven by an additional effect not captured in our deep convection model. The dynamo action of the equatorial jet produces a band of magnetic field located around the equator. The unprecedented magnetic field resolution expected from the Juno mission will make it possible to resolve this feature, allowing a direct detection of the equatorial jet dynamics at depth. Typical secular variation timescales amount to around 750 yr for the dipole contribution but decrease to about 5 yr at the expected Juno resolution (spherical harmonic degree 20). At a nominal mission duration of one year, Juno should therefore be able to directly detect secular variation effects in the higher field harmonics.
Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.
Bhattacharya, Ujjwal; Chaudhuri, B B
2009-03-01
This article primarily concerns the problem of isolated handwritten numeral recognition in major Indian scripts. The principal contributions presented here are (a) the pioneering development of two databases of handwritten numerals for the two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet-based multiresolution representations and multilayer perceptron classifiers, and (c) the application of (b) to the recognition of mixed handwritten numerals in three scripts: Devanagari, Bangla, and English. The present databases include, respectively, 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations, and these can be made available free of cost to researchers at other academic institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurs even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of the three classifiers of the previous stages. This scheme has been extended to the situation where the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mail and table-form documents.
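The cascade's control flow (classify at each resolution, accept on high confidence, otherwise escalate, then combine) can be sketched as below; the stage classifiers are deterministic stand-ins for the trained multilayer perceptrons, so only the accept/escalate/combine logic is meaningful.

```python
import numpy as np

# Control flow of a coarse-to-fine cascade with rejection. The stage
# "classifiers" are stand-ins for trained multilayer perceptrons; only
# the accept/escalate/combine logic is the point here.
THRESHOLD = 0.8
N_CLASSES = 10

def stage_classifier(features, sharpness):
    """Stand-in MLP: returns a deterministic class-probability vector."""
    rng = np.random.default_rng(int(features.sum() * 1000) % (2 ** 32))
    logits = rng.normal(0.0, 1.0, N_CLASSES)
    logits[int(features.sum() * 10) % N_CLASSES] += sharpness
    p = np.exp(logits - logits.max())                 # softmax
    return p / p.sum()

def cascaded_recognize(sample):
    outputs = []
    for level, sharpness in enumerate((2.0, 4.0, 8.0)):   # coarse -> fine
        coarse = sample[:: 2 ** (2 - level)]              # multiresolution view
        p = stage_classifier(coarse, sharpness)
        outputs.append(p)
        if p.max() >= THRESHOLD:                          # accept at this level
            return int(p.argmax()), level
    combined = np.mean(outputs, axis=0)                   # final combining stage
    return int(combined.argmax()), len(outputs)

label, stage = cascaded_recognize(np.linspace(0.0, 1.0, 64))
print(f"predicted class {label} (decided at stage {stage})")
```

The design choice is economy: most samples are accepted at a coarse, cheap level, and only ambiguous ones pay for the finer resolutions and the final combiner.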
Subwavelength resolution Fourier ptychography with hemispherical digital condensers
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Li, Maosen; Zhou, Meiling; Lei, Ming; Yao, Baoli
2018-02-01
Fourier ptychography (FP) is a promising computational imaging technique that overcomes the physical space-bandwidth product (SBP) limit of a conventional microscope by applying angular diversity illuminations. However, to date, the effective imaging numerical aperture (NA) achievable with a commercial LED board is still limited to the range of 0.3-0.7 with a 4×/0.1NA objective, due to the constraint of the planar geometry with weak illumination brightness and attenuated signal-to-noise ratio (SNR). Thus the highest achievable half-pitch resolution is usually constrained to between 500 and 1000 nm, which cannot fulfill the needs of some high-resolution biomedical imaging applications. Although it is possible to improve the resolution by using a higher-magnification objective with larger NA instead of enlarging the illumination NA, the SBP is suppressed to some extent, making the FP technique less appealing, since the reduction of the field-of-view (FOV) is much larger than the improvement of resolution in this FP platform. Herein, we present a subwavelength-resolution Fourier ptychography (SRFP) platform with a hemispherical digital condenser that provides high-angle programmable plane-wave illuminations of 0.95NA, endowing a 4×/0.1NA objective with a final effective imaging NA of 1.05 at a half-pitch resolution of 244 nm at a wavelength of 465 nm across a wide FOV of 14.60 mm², corresponding to an SBP of 245 megapixels. Our work provides an essential step for FP towards high-NA imaging applications without sacrificing the FOV, making it more practical and appealing.
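The quoted numbers are mutually consistent, as a quick calculation shows; the NA-addition rule for coherent FP and the Nyquist-style SBP count below are standard back-of-the-envelope forms, not the paper's exact derivation.

```python
# Back-of-the-envelope check of the abstract's numbers: in coherent FP the
# effective NA is approximately the sum of objective and illumination NAs,
# and the SBP follows from the FOV and the achieved half-pitch resolution.
na_objective = 0.10
na_illumination = 0.95
na_effective = na_objective + na_illumination         # 1.05, as quoted

wavelength = 465e-9                                   # m
ideal_half_pitch = wavelength / (2.0 * na_effective)  # coherent limit, ~221 nm
measured_half_pitch = 244e-9                          # m, reported value
fov = 14.60e-6                                        # m^2 (14.60 mm^2)
sbp = fov / measured_half_pitch ** 2                  # resolvable elements
print(f"ideal half-pitch {ideal_half_pitch * 1e9:.0f} nm, "
      f"SBP ≈ {sbp / 1e6:.0f} megapixels")
```

The reported 244 nm sits close to, but above, the ideal coherent limit, and dividing the FOV by the measured half-pitch squared recovers the quoted ~245-megapixel SBP.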
NASA Astrophysics Data System (ADS)
Michael, Scott A.; Steiman-Cameron, T.; Durisen, R.; Boley, A.
2008-05-01
Using 3D simulations of a cooling disk undergoing gravitational instabilities (GIs), we compute the effective Shakura and Sunyaev (1973) alphas due to gravitational torques and compare them to predictions from an analytic local theory for thin disks by Gammie (2001). Our goal is to determine how accurately a locally defined alpha can characterize mass and angular momentum transport by GIs in disks. Cases are considered both with cooling by an imposed constant global cooling time (Mejia et al. 2005) and with realistic radiative transfer (Boley et al. 2007). Grid spacing in the azimuthal direction is varied to investigate how the computed alpha is affected by numerical resolution. The azimuthal direction is particularly important, because higher resolution in azimuth allows GI power to spread to higher-order (multi-armed) modes that behave more locally. We find that, in many important respects, the transport of mass and angular momentum by GIs is an intrinsically global phenomenon. Effective alphas are variable on a dynamic time scale over global spatial scales. Nevertheless, preliminary results at the highest resolutions for an imposed cooling time show that our computed alphas, though systematically higher, tend on average to follow Gammie's prediction to within perhaps a factor of two. Our computed alphas include only gravitational stresses, while in Gammie's treatment the effective alpha is due equally to hydrodynamic (Reynolds) and gravitational stresses. So Gammie's prediction may significantly underestimate the true average stresses in a GI-active disk. Our effective alphas appear to be reasonably well converged for 256 and 512 azimuthal zones. We also have a high-resolution simulation under way to test the extent of radial mixing by GIs of gas and its entrained dust for comparison with Stardust observations. Results will be presented if available at the time of the meeting.
Low-Cost Sensor Units for Measuring Urban Air Quality
NASA Astrophysics Data System (ADS)
Popoola, O. A.; Mead, M.; Stewart, G.; Hodgson, T.; McLoed, M.; Baldovi, J.; Landshoff, P.; Hayes, M.; Calleja, M.; Jones, R.
2010-12-01
Measurements of selected key air quality gases (CO, NO, and NO2) have been made with a range of miniature low-cost sensors based on electrochemical gas-sensing technology, incorporating GPS and GPRS for position and communication, respectively. Two types of simple-to-operate sensor units have been designed to be deployed in relatively large numbers. Mobile handheld sensor units designed for operation by members of the public have been deployed on numerous occasions, including in Cambridge, London, and Valencia. Static sensor units have also been designed for long-term autonomous deployment on existing street furniture. A study was recently completed in which 45 sensor units were deployed in the Cambridge area for a period of 3 months. Results from these studies indicate that air quality varies widely both spatially and temporally. The widely varying concentrations found suggest that the urban environment cannot be fully understood using limited static-site (AURN) networks and that a higher-resolution, more dispersed network is required to better define air quality in the urban environment. The results also suggest that higher spatial and temporal resolution measurements could improve knowledge of the levels of individual exposure in the urban environment.
NASA Astrophysics Data System (ADS)
Mailhot, J.; Milbrandt, J. A.; Giguère, A.; McTaggart-Cowan, R.; Erfani, A.; Denis, B.; Glazer, A.; Vallée, M.
2014-01-01
Environment Canada ran an experimental numerical weather prediction (NWP) system during the Vancouver 2010 Winter Olympic and Paralympic Games, consisting of nested high-resolution (down to 1-km horizontal grid-spacing) configurations of the GEM-LAM model, with improved geophysical fields, cloud microphysics and radiative transfer schemes, and several new diagnostic products such as density of falling snow, visibility, and peak wind gust strength. The performance of this experimental NWP system has been evaluated in these winter conditions over complex terrain using the enhanced mesoscale observing network in place during the Olympics. As compared to the forecasts from the operational regional 15-km GEM model, objective verification generally indicated significant added value of the higher-resolution models for near-surface meteorological variables (wind speed, air temperature, and dewpoint temperature) with the 1-km model providing the best forecast accuracy. Appreciable errors were noted in all models for the forecasts of wind direction and humidity near the surface. Subjective assessment of several cases also indicated that the experimental Olympic system was skillful at forecasting meteorological phenomena at high-resolution, both spatially and temporally, and provided enhanced guidance to the Olympic forecasters in terms of better timing of precipitation phase change, squall line passage, wind flow channeling, and visibility reduction due to fog and snow.
NASA Astrophysics Data System (ADS)
Prince, Alyssa; Trout, Joseph; di Mercurio, Alexis
2017-01-01
The Weather Research and Forecasting (WRF) Model is a nested-grid, mesoscale numerical weather prediction system maintained by the Developmental Testbed Center. The model simulates the atmosphere by integrating partial differential equations expressing the conservation of horizontal momentum, thermal energy, and mass, along with the ideal gas law. This research investigated the possible use of WRF for studying the effects of weather on wing-tip wake turbulence. This poster shows the results of an investigation into the accuracy of WRF using different grid resolutions. Several atmospheric conditions were modeled at different grid resolutions. In general, the higher the grid resolution, the better the simulation, but the longer the model run time. This research was supported by Dr. Manuel A. Rios, Ph.D. (FAA) and the grant ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA'' (13-G-006).
A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes
With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model's horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model's terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
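A minimal one-dimensional sketch of the immersed boundary idea described above: the grid extends beneath the terrain, and a body-force term relaxes the velocity at points under the immersed boundary toward the boundary condition (here u = 0). This is an illustration under simple assumptions, not the WRF-IBM implementation.

```python
# Toy 1-D column: force velocity toward the boundary value at every grid
# point that lies beneath the immersed boundary (the terrain surface).

def apply_ibm_forcing(u, z, terrain_height, dt, target=0.0, tau=1.0):
    """Relax velocity toward `target` at grid points below the terrain."""
    forced = list(u)
    for i, (ui, zi) in enumerate(zip(u, z)):
        if zi < terrain_height:                  # point beneath the boundary
            force = (target - ui) / tau          # restoring body-force term
            forced[i] = ui + dt * force
    return forced
```

Points above the terrain are untouched; with dt = tau the sub-terrain points are driven exactly to the boundary value in one step.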
NASA Astrophysics Data System (ADS)
Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.
2013-12-01
In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, the versatility of the model dynamics, numerical formulation, and physical parameterizations to function across a range of length scales over the region of interest was also explored. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolutions (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of atmospheric circulation from 2.5-degree resolution NCEP reanalyses at 6-hourly intervals, with SSTs from NCEP reanalysis data used as lower boundary forcing. The 60 km resolution CCAM output was downscaled to 8 km (Schmidt factor 24.75), and the 8 km simulation was in turn downscaled to 1 km (Schmidt factor 200) over an area of approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high-resolution model runs was to obtain a deeper understanding of the linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.
NASA Astrophysics Data System (ADS)
Moura, R. C.; Sherwin, S. J.; Peiró, J.
2016-02-01
This study addresses linear dispersion-diffusion analysis for the spectral/hp continuous Galerkin (CG) formulation in one dimension. First, numerical dispersion and diffusion curves are obtained for the advection-diffusion problem, and the role of multiple eigencurves peculiar to spectral/hp methods is discussed. From the eigencurves' behaviour, we observe that CG might feature potentially undesirable non-smooth dispersion/diffusion characteristics for under-resolved simulations of problems strongly dominated by either convection or diffusion. Subsequently, the linear advection equation augmented with spectral vanishing viscosity (SVV) is analysed. Dispersion and diffusion characteristics of CG with SVV-based stabilization are verified to display similar non-smooth features in flow regions where convection is much stronger than dissipation or vice versa, owing to a dependency of the standard SVV operator on a local Péclet number. A modification to the traditional SVV scaling is therefore proposed that enforces a globally constant Péclet number so as to avoid these issues. In addition, a new SVV kernel function is suggested and shown to provide a more regular behaviour for the eigencurves along with a consistent increase in resolution power for higher-order discretizations, as measured by the extent of the wavenumber range where numerical errors are negligible. The dissipation characteristics of CG with the suggested SVV modifications are then verified to be broadly equivalent to those obtained through upwinding in the discontinuous Galerkin (DG) scheme. Nevertheless, for the kernel function proposed, the full upwind DG scheme is found to have a slightly higher resolution power for the same dissipation levels. These results show that improved CG-SVV characteristics can be pursued via different kernel functions with the aid of optimization algorithms.
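For context, a sketch of the classical exponential SVV kernel (the Maday-Tadmor form): modes at or below a cutoff M receive no artificial viscosity, while higher modes are damped smoothly up to the maximum order N. The paper proposes modified kernels; this is only the standard baseline against which such modifications are made.

```python
import math

# Classical spectral-vanishing-viscosity kernel: 0 for low modes,
# rising smoothly toward 1 at the highest resolved mode.

def svv_kernel(k, m_cut, n_max):
    """Damping factor for mode k with cutoff m_cut and max order n_max."""
    if k <= m_cut:
        return 0.0                    # no artificial viscosity on low modes
    return math.exp(-((k - n_max) / (k - m_cut)) ** 2)
```

The kernel equals 1 at k = n_max and decays rapidly toward the cutoff, so dissipation is concentrated in the highest, least-resolved modes.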
Performance of Low Dissipative High Order Shock-Capturing Schemes for Shock-Turbulence Interactions
NASA Technical Reports Server (NTRS)
Sandham, N. D.; Yee, H. C.
1998-01-01
Accurate and efficient direct numerical simulation of turbulence in the presence of shock waves represents a significant challenge for numerical methods. The objective of this paper is to evaluate the performance of high order compact and non-compact central spatial differencing employing total variation diminishing (TVD) shock-capturing dissipations as characteristic based filters for two model problems combining shock wave and shear layer phenomena. A vortex pairing model evaluates the ability of the schemes to cope with shear layer instability and eddy shock waves, while a shock wave impingement on a spatially-evolving mixing layer model studies the accuracy of computation of vortices passing through a sequence of shock and expansion waves. A drastic increase in accuracy is observed if a suitable artificial compression formulation is applied to the TVD dissipations. With this modification to the filter step the fourth-order non-compact scheme shows improved results in comparison to second-order methods, while retaining the good shock resolution of the basic TVD scheme. For this characteristic based filter approach, however, the benefits of compact schemes or schemes with higher than fourth order are not sufficient to justify the higher complexity near the boundary and/or the additional computational cost.
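A standard building block of the TVD dissipations mentioned above is a slope limiter such as minmod, which switches off the high-order correction at extrema. This is a generic illustration of the limiter family, not the specific characteristic-based filter of the paper.

```python
# Minmod limiter: returns zero when the two slopes disagree in sign
# (a local extremum, where limiting prevents oscillations), otherwise
# the smaller-magnitude slope.

def minmod(a, b):
    """Classic minmod slope limiter used in TVD shock-capturing schemes."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b
```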
NREL: International Activities - Pakistan Resource Maps
The high-resolution (1-km) annual wind power maps were developed using a numerical modeling approach along with NREL's empirical validation methodology. High-resolution (10-km) annual and seasonal maps and 40-km resolution annual maps are available for download in low- and high-resolution image formats.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomeroy, J. W., E-mail: James.Pomeroy@Bristol.ac.uk; Kuball, M.
2015-10-14
Solid immersion lenses (SILs) are shown to greatly enhance optical spatial resolution when measuring AlGaN/GaN High Electron Mobility Transistors (HEMTs), taking advantage of the high refractive index of the SiC substrates commonly used for these devices. Solid immersion lenses can be applied to techniques such as electroluminescence emission microscopy and Raman thermography, aiding the development of device physics models. Focused ion beam milling is used to fabricate solid immersion lenses in SiC substrates with a numerical aperture of 1.3. A lateral spatial resolution of 300 nm is demonstrated at an emission wavelength of 700 nm, and an axial spatial resolution of 1.7 ± 0.3 μm at a laser wavelength of 532 nm is demonstrated; this is an improvement of 2.5× and 5×, respectively, when compared with a conventional 0.5 numerical aperture objective lens without a SIL. These results highlight the benefit of applying the solid immersion lens technique to the optical characterization of GaN HEMTs. Further improvements may be gained through aberration compensation and increasing the SIL numerical aperture.
Explicit filtering in large eddy simulation using a discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Brazell, Matthew J.
The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework, in which the solution is approximated using a polynomial basis where the higher modes of the solution correspond to a higher-order polynomial basis. By removing high-order modes, the filtered solution contains low-frequency content, much like the output of an explicit low-pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution and removes energy accumulated in the higher modes from aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant-coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant-coefficient Smagorinsky model is overly dissipative; this is generally not desirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region, as expected. The dynamic Heinz model handles the laminar-turbulent transition region well while also showing additional robustness.
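The modal cutoff filtering described above can be sketched in a few lines: represent the element solution by its polynomial (modal) coefficients and zero every mode above a cutoff order, leaving the low-order content untouched. Basis handling is simplified here; a real DG code would filter per element in its own basis.

```python
# Sharp low-pass filter on modal coefficients: modes 0..cutoff are kept,
# higher modes (the ones that accumulate aliasing energy) are removed.

def explicit_modal_filter(modal_coeffs, cutoff):
    """Keep modes up to `cutoff`, zero all higher modes."""
    return [c if k <= cutoff else 0.0 for k, c in enumerate(modal_coeffs)]
```

In practice a smooth roll-off (e.g. an exponential decay over the retained modes) is often preferred to a sharp cutoff, but the principle is the same.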
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
NASA Astrophysics Data System (ADS)
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in locations of fine-scale development and removing them in locations of smooth solution behavior. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used, in conjunction with an appropriate threshold criterion, to adapt the non-uniform grid. Other essential features of the numerical algorithm include an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid.
The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the strong numerical plume dilution caused by numerical diffusion combined with the non-uniformity of the atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order-of-magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
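A toy illustration of the wavelet-thresholding idea behind such adaptive refinement: one level of a Haar transform yields detail (wavelet) amplitudes, and cells whose detail exceeds a threshold are flagged for refinement while smooth regions are not. This is one-dimensional and single-level; the WAMR method itself is multilevel and multidimensional.

```python
# Flag pairs of cells for refinement where the Haar detail amplitude
# (half the difference of neighboring values) exceeds a threshold.
# Large detail = fine-scale structure; small detail = smooth solution.

def flag_for_refinement(values, threshold):
    """Return one refinement flag per cell pair based on Haar detail size."""
    flags = []
    for i in range(0, len(values) - 1, 2):
        detail = abs(values[i] - values[i + 1]) / 2.0   # Haar detail amplitude
        flags.append(detail > threshold)
    return flags
```

Because the wavelet amplitudes double as local error estimates, the same quantity that drives refinement also bounds the accuracy of the adapted grid.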
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
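A minimal sketch of greedy local community expansion with a fitness function, in the spirit of the algorithm described above. The fitness used here, k_in/(k_in + k_out)^alpha, is a common local-fitness form; the paper's actual fitness function and analytic hierarchy construction are more elaborate.

```python
# Grow a community from a seed by repeatedly adding the frontier node
# that most improves the local fitness; stop when no addition helps.

def expand_community(adj, seed, alpha=1.0):
    """Greedy expansion of a seed's natural community in a weighted graph."""
    community = {seed}

    def fitness(comm):
        k_in = k_out = 0.0
        for node in comm:
            for nbr, w in adj[node].items():
                if nbr in comm:
                    k_in += w          # internal edge weight (counted twice)
                else:
                    k_out += w         # boundary edge weight
        total = k_in + k_out
        return k_in / total ** alpha if total else 0.0

    improved = True
    while improved:
        improved = False
        frontier = {n for v in community for n in adj[v] if n not in community}
        base = fitness(community)
        best, best_gain = None, 0.0
        for cand in frontier:
            gain = fitness(community | {cand}) - base
            if gain > best_gain:
                best, best_gain = cand, gain
        if best is not None:
            community.add(best)
            improved = True
    return community
```

On a graph of two triangles joined by a single edge, expansion from a node in one triangle stops at that triangle's boundary, as expected of a local method.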
Key issues review: numerical studies of turbulence in stars
NASA Astrophysics Data System (ADS)
Arnett, W. David; Meakin, Casey
2016-10-01
Three major problems of single-star astrophysics are convection, magnetic fields and rotation. Numerical simulations of convection in stars now have sufficient resolution to be truly turbulent, with effective Reynolds numbers of Re > 10^4, and some turbulent boundary layers have been resolved. Implications of these developments are discussed for stellar structure, evolution and explosion as supernovae. Methods for three-dimensional (3D) simulations of stars are compared and discussed for 3D atmospheres, solar rotation, core-collapse and stellar boundary layers. Reynolds-averaged Navier-Stokes (RANS) analysis of the numerical simulations has been shown to provide a novel and quantitative estimate of resolution errors. Present treatments of stellar boundaries require revision, even for early burning stages (e.g. for mixing regions during He-burning). As stellar core-collapse is approached, asymmetry and fluctuations grow, rendering spherically symmetric models of progenitors more unrealistic. The numerical resolution of several different types of 3D stellar simulations is compared; it is suggested that core-collapse simulations may be under-resolved. The Rayleigh-Taylor instability in explosions has a deep connection to convection, for which the abundance structure in supernova remnants may provide evidence.
SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip
Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems—thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways for testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.
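The higher-order interpolation mentioned above generalizes the standard hierarchy of PIC particle shape functions. As a concrete example, the second-order triangular-shaped-cloud (TSC) weighting spreads a particle's charge over three cells, more smoothly than nearest-grid-point or linear weighting; SHARP extends this idea up to fifth order.

```python
# 1-D TSC (second-order) deposit weights for a particle at position
# x/dx in cell units: three weights around the nearest cell centre,
# summing to one so that charge is conserved exactly.

def tsc_weights(x_over_dx):
    """Return (nearest cell index, weights for cells i-1, i, i+1)."""
    i = int(round(x_over_dx))            # index of the nearest cell centre
    d = x_over_dx - i                    # offset in [-0.5, 0.5]
    w_left = 0.5 * (0.5 - d) ** 2
    w_mid = 0.75 - d ** 2
    w_right = 0.5 * (0.5 + d) ** 2
    return i, (w_left, w_mid, w_right)
```

The weights always sum to one regardless of the particle's sub-cell position, which is what makes the deposit charge-conserving.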
NASA Astrophysics Data System (ADS)
Sperber, K. R.; Palmer, T. N.
1996-11-01
The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil has been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño-Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations.
In this model, the Nordeste rainfall variability was also best reproduced. However, for all regions the skill was less than that of the ECMWF model. The relationships of the all-India and Sahel rainfall/SST teleconnections with horizontal resolution, convection scheme closure, and numerics have been evaluated. Models with resolution T42 performed more poorly than lower-resolution models. The higher-resolution models were predominantly spectral. At low resolution, spectral versus gridpoint numerics performed with nearly equal verisimilitude. At low resolution, moisture convergence closure was slightly more preferable than other convective closure techniques. At high resolution, the models that used moisture convergence closure performed very poorly, suggesting that moisture convergence may be problematic for models with horizontal resolution T42.
Everall, Neil J; Priestnall, Ian M; Clarke, Fiona; Jayes, Linda; Poulter, Graham; Coombs, David; George, Michael W
2009-03-01
This paper describes preliminary investigations into the spatial resolution of macro attenuated total reflection (ATR) Fourier transform infrared (FT-IR) imaging and the distortions that arise when imaging intact, convex domains, using spheres as an extreme example. The competing effects of shallow evanescent wave penetration and blurring due to finite spatial resolution meant that spheres within the range 20-140 microm all appeared to be approximately the same size (approximately 30-35 microm) when imaged with a numerical aperture (NA) of approximately 0.2. A very simple model was developed that predicted this extreme insensitivity to particle size. On the basis of these studies, it is anticipated that ATR imaging at this NA will be insensitive to the size of intact highly convex objects. A higher numerical aperture device should give a better estimate of the size of small spheres, owing to superior spatial resolution, but large spheres should still appear undersized due to the shallow sampling depth. An estimate of the point spread function (PSF) was required in order to develop and apply the model. The PSF was measured by imaging a sharp interface; assuming an Airy profile, the PSF width (distance from central maximum to first minimum) was estimated to be approximately 20 and 30 microm for IR bands at 1600 and 1000 cm(-1), respectively. This work has two significant limitations. First, underestimation of domain size only arises when imaging intact convex objects; if surfaces are prepared that randomly and representatively section through domains, the images can be analyzed to calculate parameters such as domain size, area, and volume. Second, the model ignores reflection and refraction and assumes weak absorption; hence, the predicted intensity profiles are not expected to be accurate; they merely give a rough estimate of the apparent sphere size. Much further work is required to place the field of quantitative ATR-FT-IR imaging on a sound basis.
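A rough numerical sketch of why intact spheres of very different sizes can all appear roughly the same size. The assumed model here is deliberately crude (and is not the paper's model): the evanescent wave samples only a shallow spherical cap, whose chord diameter is then blurred by adding the PSF width in quadrature.

```python
import math

# Apparent size of an intact sphere under ATR imaging, under two crude
# assumptions: (1) only a cap of depth `penetration_um` is sampled, so the
# visible contact patch has chord diameter 2*sqrt(2*r*d - d^2); (2) finite
# spatial resolution blurs this patch in quadrature with the PSF width.

def apparent_diameter_um(sphere_diameter_um, penetration_um=1.0, psf_um=25.0):
    r = sphere_diameter_um / 2.0
    d = min(penetration_um, r)
    chord = 2.0 * math.sqrt(2.0 * r * d - d * d)   # sampled contact diameter
    return math.sqrt(chord ** 2 + psf_um ** 2)     # quadrature blur estimate
```

With a ~1 micron sampling depth and a ~25 micron PSF, spheres from 20 to 140 microns all come out near 30 microns apparent size, consistent with the insensitivity reported above.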
Neuronal foundations of human numerical representations.
Eger, E
2016-01-01
The human species has developed complex mathematical skills which likely emerge from a combination of multiple foundational abilities. One of them seems to be a preverbal capacity to extract and manipulate the numerosity of sets of objects, which is shared with other species and in humans is thought to be integrated with symbolic knowledge to result in a more abstract representation of numerical concepts. Regarding the functional neuroanatomy of this capacity, neuropsychology and functional imaging have localized key substrates of numerical processing in parietal and frontal cortex. However, traditional fMRI mapping relying on a simple subtraction approach to compare numerical and nonnumerical conditions is too limited to tackle, with sufficient precision and detail, the issue of the underlying code for number, a question which more easily lends itself to investigation by methods with higher spatial resolution, such as neurophysiology. In recent years, progress has been made through the introduction of approaches sensitive to within-category discrimination in combination with fMRI (adaptation and multivariate pattern recognition), and the present review summarizes what these have revealed so far about the neural coding of individual numbers in the human brain, the format of these representations, and parallels between human and monkey neurophysiology findings. © 2016 Elsevier B.V. All rights reserved.
Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations
NASA Astrophysics Data System (ADS)
Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa
2017-05-01
We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. Recent ALMA observations have revealed early phases of binary and multiple star formation at high spatial resolution. These observations should be compared with theoretical models of correspondingly high spatial resolution. We present two theoretical models for (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto a protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiswell, S
2009-01-11
Assimilation of radar velocity and precipitation fields into high-resolution model simulations can improve precipitation forecasts with decreased 'spin-up' time and improve short-term simulation of boundary layer winds (Benjamin, 2004 & 2007; Xiao, 2008), which is critical to improving plume transport forecasts. Accurate description of wind and turbulence fields is essential to useful atmospheric transport and dispersion results, and any improvement in the accuracy of these fields will make consequence assessment more valuable during both routine operation and potential emergency situations. During 2008, the United States National Weather Service (NWS) radars implemented a significant upgrade which increased the real-time level II data resolution to 8 times their previous 'legacy' resolution, from 1 km range gate and 1.0 degree azimuthal resolution to 'super resolution' 250 m range gate and 0.5 degree azimuthal resolution (Fig 1). These radar observations provide reflectivity, velocity and returned power spectra measurements at a range of up to 300 km (460 km for reflectivity) at a frequency of 4-5 minutes and yield up to 13.5 million point observations per level in super-resolution mode. The migration of NWS WSR-88D radars to super resolution is expected to improve warning lead times by detecting small scale features sooner with increased reliability; however, current operational mesoscale model domains utilize grid spacing several times larger than the legacy data resolution, and therefore the added resolution of radar data is not fully exploited. The assimilation of super resolution reflectivity and velocity data into high resolution numerical weather model forecasts, where grid spacing is comparable to the radar data resolution, is investigated here to determine the impact of the improved data resolution on model predictions.
B. W. Butler; N. S. Wagenbrenner; J. M. Forthofer; B. K. Lamb; K. S. Shannon; D. Finn; R. M. Eckman; K. Clawson; L. Bradshaw; P. Sopko; S. Beard; D. Jimenez; C. Wold; M. Vosburgh
2015-01-01
A number of numerical wind flow models have been developed for simulating wind flow at relatively fine spatial resolutions (e.g., 100 m); however, there are very limited observational data available for evaluating these high-resolution models. This study presents high-resolution surface wind data sets collected from an isolated mountain and a steep river canyon. The...
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; auBuchon, M.
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial or circumferential resolution within a given component (e.g. a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables Numerical Zooming between the NPSS Version I (0-dimension) and higher order 1-, 2- and 3-dimensional analysis codes.
The NPSS Version I preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to a NPSS engine 0-dimension simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion
NASA Astrophysics Data System (ADS)
Hesser, T.; Farthing, M. W.; Brodie, K.
2016-02-01
The bathymetry from the surfzone to the shoreline incurs frequent, active movement due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar, to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and in turn increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and therefore could reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys as well as alternative direct in-situ measurements using sonic altimeters.
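The ensemble-based update at the heart of such a bathymetric inversion can be sketched in a few lines. The following is a minimal stochastic Ensemble Kalman Filter analysis step on synthetic data, not the Duck, NC system itself; the state vector, linear observation operator, and error level are all illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err, H):
    """One stochastic EnKF analysis step.

    ensemble : (n_state, n_ens) prior ensemble (e.g. depths at grid points)
    obs      : (n_obs,) observation vector
    obs_err  : observation error standard deviation (scalar)
    H        : (n_obs, n_state) linear observation operator
    """
    n_state, n_ens = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                      # ensemble anomalies
    HA = H @ A                               # anomalies in observation space
    # Sample forecast covariances
    Pf_Ht = A @ HA.T / (n_ens - 1)
    S = HA @ HA.T / (n_ens - 1) + obs_err**2 * np.eye(len(obs))
    K = Pf_Ht @ np.linalg.inv(S)             # Kalman gain
    # Perturbed observations, one realization per member
    rng = np.random.default_rng(0)
    obs_pert = obs[:, None] + rng.normal(0.0, obs_err, (len(obs), n_ens))
    return ensemble + K @ (obs_pert - H @ ensemble)
```

For example, observing a single depth pulls the whole posterior ensemble mean at that point toward the observed value, with the spread of the prior controlling how strongly correlated neighbors respond.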
NASA Astrophysics Data System (ADS)
López-Venegas, Alberto M.; Horrillo, Juan; Pampell-Manis, Alyssa; Huérfano, Victor; Mercado, Aurelio
2015-06-01
The most recent tsunami observed along the coast of the island of Puerto Rico occurred on October 11, 1918, after a magnitude 7.2 earthquake in the Mona Passage. The earthquake was responsible for initiating a tsunami that mostly affected the northwestern coast of the island. Runup values from a post-tsunami survey indicated the waves reached up to 6 m. A controversy regarding the source of the tsunami has resulted in several numerical simulations involving either fault rupture or a submarine landslide as the most probable cause of the tsunami. Here we follow up on previous simulations of the tsunami from a submarine landslide source off the western coast of Puerto Rico as initiated by the earthquake. Improvements on our previous study include: (1) higher-resolution bathymetry; (2) a 3D-2D coupled numerical model specifically developed for the tsunami; (3) use of the non-hydrostatic numerical model NEOWAVE (non-hydrostatic evolution of ocean WAVE) featuring two-way nesting capabilities; and (4) comprehensive energy analysis to determine the time of full tsunami wave development. The three-dimensional Navier-Stokes model tsunami solution using the Navier-Stokes algorithm with multiple interfaces for two fluids (water and landslide) was used to determine the initial wave characteristic generated by the submarine landslide. Use of NEOWAVE enabled us to solve for coastal inundation, wave propagation, and detailed runup. Our results were in agreement with previous work in which a submarine landslide is favored as the most probable source of the tsunami, and improvement in the resolution of the bathymetry yielded inundation of the coastal areas that compare well with values from a post-tsunami survey. Our unique energy analysis indicates that most of the wave energy is isolated in the wave generation region, particularly at depths near the landslide, and once the initial wave propagates from the generation region its energy begins to stabilize.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-04-01
The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on a few two-phase flow problems with phase appearance/disappearance phenomena considered, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and the second-order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
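The JFNK approach described above can be illustrated on a much simpler problem. The sketch below (a nonlinear boundary value problem, not the two-phase flow system itself) uses SciPy's `newton_krylov`, which approximates Jacobian-vector products inside the Krylov iteration by finite differences of the residual, so no analytical Jacobian is ever derived or assembled.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Nonlinear two-point boundary value problem -u'' + u^3 = 1 on (0, 1)
# with u(0) = u(1) = 0, discretized by second-order central differences.
N = 50
h = 1.0 / (N + 1)

def residual(u):
    # Pad with the Dirichlet boundary values before differencing.
    upad = np.concatenate(([0.0], u, [0.0]))
    return -(upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2 + u**3 - 1.0

# JFNK: Newton outer iteration with Krylov (GMRES) inner solves; the
# Jacobian-vector products are approximated matrix-free.
u = newton_krylov(residual, np.zeros(N), f_tol=1e-10)
```

The same pattern scales to large discretized systems: only a residual function is needed, which is why JFNK is attractive when an analytical Jacobian of a complex two-phase flow model would be error-prone to derive.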
NASA Astrophysics Data System (ADS)
Brasseur, P.; Verron, J. A.; Djath, B.; Duran, M.; Gaultier, L.; Gourdeau, L.; Melet, A.; Molines, J. M.; Ubelmann, C.
2014-12-01
The upcoming high-resolution SWOT altimetry satellite will provide an unprecedented description of the ocean dynamic topography for studying sub- and meso-scale processes in the ocean. But there is still much uncertainty on the signal that will be observed. Many scientific questions remain unresolved about the observability of altimetry at very high resolution and about the dynamical role of the ocean meso- and submesoscales. In addition, SWOT data will raise specific problems due to the size of the data flows. These issues will probably impact the data assimilation approaches for future scientific or operational oceanography applications. In this work, we propose to use a high-resolution numerical model of the Western Pacific Solomon Sea as a regional laboratory to explore such observability and dynamical issues, as well as new data assimilation challenges raised by SWOT. The Solomon Sea connects subtropical water masses to the equatorial ones through the low latitude western boundary currents and could potentially modulate the tropical Pacific climate. In the South Western Pacific, the Solomon Sea exhibits very intense eddy kinetic energy levels, while relatively little is known about the mesoscale and submesoscale activities in this region. The complex bathymetry of the region, complicated by the presence of narrow straits and numerous islands, raises specific challenges. So far, a Solomon Sea model configuration has been set up at 1/36° resolution. Numerical simulations have been performed to explore the meso- and submesoscale dynamics. The numerical solutions, which have been validated against available in situ data, show the development of small scale features, eddies, fronts and filaments. Spectral analysis reveals a behavior that is consistent with the SQG theory. There is clear evidence of an energy cascade from the small scales including the submesoscales, although those submesoscales are only partially resolved by the model.
In parallel, investigations have been conducted using image assimilation approaches in order to explore the richness of high-resolution altimetry missions. These investigations illustrate the potential benefit of combining tracer fields (SST, SSS and spiciness) with high-resolution SWOT data to estimate the fine-scale circulation.
New developments in super-resolution for GaoFen-4
NASA Astrophysics Data System (ADS)
Li, Feng; Fu, Jie; Xin, Lei; Liu, Yuhong; Liu, Zhijia
2017-10-01
In this paper, the application of super resolution (SR, restoring a high spatial resolution image from a series of low resolution images of the same scene) techniques to remote sensing images from GaoFen(GF)-4, the most advanced geostationary-orbit earth observing satellite in China, is investigated and tested. SR has been a hot research area for decades, but one of the barriers to applying SR in the remote sensing community is the time slot between the low resolution (LR) image acquisitions. In general, the longer the time slot, the less reliable the reconstruction. GF-4 has the unique advantage of capturing a sequence of LR images of the same region within minutes, i.e., it works as a staring camera from the point of view of SR. This is the first experiment applying super resolution to a sequence of low resolution images captured by GF-4 within a short time period. In this paper, we use Maximum a Posteriori (MAP) estimation to solve the ill-conditioned problem of SR. Both the wavelet transform and the curvelet transform are used to set up a sparse prior for remote sensing images. By combining several images of both the BeiJing and DunHuang regions captured by GF-4, our method can improve spatial resolution both visually and numerically. Experimental tests show that much detail that cannot be observed in the captured LR images can be seen in the super-resolved high resolution (HR) images. To aid the evaluation, Google Earth imagery can also be referenced. Moreover, our experimental tests also show that the higher the temporal resolution, the better the HR images can be resolved. The study illustrates that applying SR to geostationary-orbit earth observation data is feasible and worthwhile, and it holds potential for all other geostationary-orbit based earth observing systems.
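A toy version of the MAP reconstruction can be sketched in one dimension. Everything below is an illustrative assumption, not the paper's method: the forward model is circular shift plus decimation, the noise level is arbitrary, and a quadratic smoothness prior stands in for the wavelet/curvelet sparsity prior. The point is only that two sub-pixel-shifted low-resolution frames jointly determine a higher-resolution estimate.

```python
import numpy as np

def shift_matrix(n, s):
    """Circular shift by s samples as an (n, n) matrix."""
    return np.roll(np.eye(n), s, axis=1)

def downsample_matrix(n, f):
    """Keep every f-th sample: an (n//f, n) decimation matrix."""
    return np.eye(n)[::f]

n, f = 32, 2
x_true = np.sin(2 * np.pi * np.arange(n) / n) + (np.arange(n) > n // 2)
rng = np.random.default_rng(0)
shifts = [0, 1]                      # sub-HR-pixel offsets between frames
ops, frames = [], []
for s in shifts:
    A = downsample_matrix(n, f) @ shift_matrix(n, s)
    ops.append(A)
    frames.append(A @ x_true + 0.01 * rng.normal(size=n // f))

# MAP estimate with a Gaussian smoothness prior:
# minimize sum_k ||A_k x - y_k||^2 + lam * ||G x||^2  (normal equations)
G = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # first-difference operator
lam = 0.05
lhs = sum(A.T @ A for A in ops) + lam * G.T @ G
rhs = sum(A.T @ y for A, y in zip(ops, frames))
x_map = np.linalg.solve(lhs, rhs)
```

With the two shifted frames covering complementary sample positions, `x_map` recovers the high-resolution signal far better than simply replicating pixels of a single low-resolution frame.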
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters.
Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task, and it is fundamentally limited by the turnaround times required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints, and then minimizing it through numerical optimization, can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.
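The adjoint-based gradient computation described above can be made concrete with a small linear-algebra example. The model `A(m) = K + diag(m)` and the misfit are illustrative assumptions, not the mantle-convection system; the point is that a single extra (adjoint) solve yields the gradient of the misfit with respect to all parameters at once, instead of one forward solve per parameter.

```python
import numpy as np

# Forward model: A(m) u = f with A(m) = K + diag(m),
# where K is a fixed diffusion-like tridiagonal matrix and
# m is the parameter vector to be identified.
n = 8
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

def solve_forward(m):
    return np.linalg.solve(K + np.diag(m), f)

def misfit_and_gradient(m, d):
    """J(m) = 0.5 ||u(m) - d||^2 and its gradient via one adjoint solve.

    Since dA/dm_i = e_i e_i^T, the gradient reduces to -lam_i * u_i,
    with lam solving the adjoint equation A^T lam = u - d.
    """
    A = K + np.diag(m)
    u = np.linalg.solve(A, f)
    J = 0.5 * np.sum((u - d) ** 2)
    lam = np.linalg.solve(A.T, u - d)   # adjoint equation
    grad = -lam * u
    return J, grad
```

A finite-difference check of each gradient component against the misfit confirms the adjoint formula; in a PDE setting the same structure holds with the adjoint PDE in place of `A.T`.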
Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L
2015-03-01
Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte, which is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity varies with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be achieved with a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical effort, which can be reduced significantly with the help of analytical models for design optimization and guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
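The focusing geometry behind such models can be sketched with elementary formulas: an analyte collects where its total velocity crosses zero, and at steady state the balance of convective focusing against diffusion gives a Gaussian peak of width sigma = sqrt(D/g), where g is the local slope of the velocity profile. The bilinear profile, the parameter values, and the resolution definition below are illustrative assumptions, not the paper's full scanning model.

```python
import numpy as np

def focus_position(u0, x_break, g1, g2):
    """Zero crossing of a bilinear total-velocity profile:
    u(x) = u0 - g1*x                       for x <= x_break,
    u(x) = u(x_break) - g2*(x - x_break)   beyond the break.
    Returns the focus position and the local gradient slope there."""
    x1 = u0 / g1
    if x1 <= x_break:
        return x1, g1
    u_b = u0 - g1 * x_break
    return x_break + u_b / g2, g2

def peak_sigma(D, g):
    # Steady-state Gaussian width of a focused peak.
    return np.sqrt(D / g)

def resolution(xa, sa, xb, sb):
    # Conventional separation resolution between two Gaussian peaks.
    return abs(xb - xa) / (2.0 * (sa + sb))
```

For instance, with a break at x = 0.5 and slopes g1 = 1, g2 = 4, an analyte with u0 = 1.0 focuses in the steep segment (narrow peak) while one with u0 = 0.4 focuses in the shallow segment, and the resulting peak spacing and widths determine the resolution.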
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to general analysis problems.
Thermodynamical effects and high resolution methods for compressible fluid flows
NASA Astrophysics Data System (ADS)
Li, Jiequan; Wang, Yue
2017-08-01
One of the fundamental differences of compressible fluid flows from incompressible fluid flows is the involvement of thermodynamics. This difference should be manifested in the design of numerical schemes. Unfortunately, the role of entropy, expressing irreversibility, is often neglected even though the entropy inequality, as a conceptual derivative, is verified for some first order schemes. In this paper, we refine the GRP solver to illustrate how the thermodynamical variation is integrated into the design of high resolution methods for compressible fluid flows and demonstrate numerically the importance of thermodynamic effects in the resolution of strong waves. As a by-product, we show that the GRP solver works for generic equations of state, and is independent of technical arguments.
Fercher, A; Hitzenberger, C; Sticker, M; Zawadzki, R; Karamata, B; Lasser, T
2001-12-03
Dispersive samples introduce a wavelength-dependent phase distortion to the probe beam. This leads to a noticeable loss of depth resolution in high resolution OCT using broadband light sources. The standard technique to avoid this consequence is to balance the dispersion of the sample by arranging a dispersive material in the reference arm. However, the impact of dispersion is depth dependent, and a corresponding depth dependent dispersion balancing technique is difficult to implement. Here we present a numerical dispersion compensation technique for Partial Coherence Interferometry (PCI) and Optical Coherence Tomography (OCT) based on numerical correlation of the depth scan signal with a depth variant kernel. It can be used a posteriori and provides depth dependent dispersion compensation. Examples of dispersion compensated depth scan signals obtained from microscope cover glasses are presented.
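The core operation, correlating the depth-scan signal with a kernel that varies with depth, can be sketched as a depth-dependent sliding inner product. The skeleton below leaves the kernel as a user-supplied function; in the OCT application it would encode the depth-proportional phase distortion, a detail this sketch does not model.

```python
import numpy as np

def depth_variant_compensation(scan, z_axis, kernel_fn, half_width):
    """Correlate a depth-scan signal with a depth-dependent kernel:
    out[i] = sum_k scan[i + k] * kernel_fn(z_axis[i], k),
    for k in [-half_width, half_width], zero outside the signal."""
    out = np.zeros(len(scan))
    n = len(scan)
    for i in range(n):
        for k in range(-half_width, half_width + 1):
            j = i + k
            if 0 <= j < n:
                out[i] += scan[j] * kernel_fn(z_axis[i], k)
    return out
```

Unlike an ordinary (depth-invariant) convolution, the kernel here is re-evaluated at every depth, which is what allows the compensation to grow with depth a posteriori; with a delta kernel the operation reduces to the identity.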
NASA Technical Reports Server (NTRS)
Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco
2010-01-01
Tropospheric delays caused by the neutral atmosphere are an important limitation on the precision of results obtained by space geodetic techniques such as VLBI and GPS, see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial resolution (0.2 deg. x 0.2 deg.) and high temporal resolution (three hours in prediction mode).
Development of the GEOS-5 Atmospheric General Circulation Model: Evolution from MERRA to MERRA2.
NASA Technical Reports Server (NTRS)
Molod, Andrea; Takacs, Lawrence; Suarez, Max; Bacmeister, Julio
2014-01-01
The Modern-Era Retrospective Analysis for Research and Applications-2 (MERRA2) version of the GEOS-5 (Goddard Earth Observing System Model - 5) Atmospheric General Circulation Model (AGCM) is currently in use in the NASA Global Modeling and Assimilation Office (GMAO) at a wide range of resolutions for a variety of applications. Details of the changes in parameterizations subsequent to the version in the original MERRA reanalysis are presented here. Results of a series of atmosphere-only sensitivity studies are shown to demonstrate changes in simulated climate associated with specific changes in physical parameterizations, and the impact of the newly implemented resolution-aware behavior on simulations at different resolutions is demonstrated. The GEOS-5 AGCM presented here is the model used as part of the GMAO's MERRA2 reanalysis, the global mesoscale "nature run", the real-time numerical weather prediction system, and for atmosphere-only, coupled ocean-atmosphere and coupled atmosphere-chemistry simulations. The seasonal mean climate of the MERRA2 version of the GEOS-5 AGCM represents a substantial improvement over the simulated climate of the MERRA version at all resolutions and for all applications. Fundamental improvements in simulated climate are associated with the increased re-evaporation of frozen precipitation and cloud condensate, resulting in a wetter atmosphere. Improvements in simulated climate are also shown to be attributable to changes in the background gravity wave drag, and to upgrades in the relationship between the ocean surface stress and the ocean roughness. The series of "resolution aware" parameters related to the moist physics were shown to result in improvements at higher resolutions, and result in AGCM simulations that exhibit seamless behavior across different resolutions and applications.
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Case, Jonathan L.; LaFontaine, Frank J.; Kumar, Sujay V.
2012-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the SPoRT-MODIS GVF dataset on a land surface model (LSM) apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. In the West, higher latent heat fluxes prevailed, which enhanced the rates of evapotranspiration and soil moisture depletion in the LSM. By late summer and autumn, both the average sensible and latent heat fluxes increased in the West as a result of the more rapid soil drying and higher coverage of GVF. The impact of the SPoRT GVF dataset on NWP was also examined for a single severe weather case study using the Weather Research and Forecasting (WRF) model. Two separate coupled LIS/WRF model simulations were made for the 17 July 2010 severe weather event in the Upper Midwest using the NCEP and SPoRT GVFs, with all other model parameters remaining the same.
Based on the sensitivity results, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and lower direct surface heating, which typically resulted in lower (higher) predicted 2-m temperatures (2-m dewpoint temperatures). Portions of the Northern Plains states experienced substantial increases in convective available potential energy as a result of the higher SPoRT/MODIS GVFs. These differences produced subtle yet quantifiable differences in the simulated convective precipitation systems for this event.
Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil
NASA Astrophysics Data System (ADS)
Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo
2013-04-01
The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997. The model has gone through upgrades over the years. In order to prepare the model for operational higher resolution forecasts, the model is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes two Brazilian cities, Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. Besides, the region suffers from frequent events of floods and landslides, so accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high resolution simulations in this complex area. Verification of model runs uses observations taken from the nuclear power plant and higher resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km run is a second nesting: it is driven by Eta-15km, which in its turn is driven by Era-Interim reanalyses. The series of experiments consists of replacing the surface layer stability function, adjusting cloud microphysics scheme parameters, and further increasing vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially strengthened the katabatic winds, which verified better against the tower wind data. Precipitation produced by the model was excessive in the region. Increasing vertical resolution to 60 layers caused a further increase in precipitation production.
This excessive precipitation was reduced by adjusting some parameters in the cloud microphysics scheme. Precipitation overestimation still occurs and further tests are still necessary. The increase of horizontal resolution to 1 km required adjusting model diffusion parameters and refining divergence calculations. The limited availability of observations in the region is a major constraint on a thorough evaluation.
NASA Astrophysics Data System (ADS)
Duncan, D.; Kummerow, C. D.; Meier, W.
2016-12-01
Over the lifetime of AMSR-E, operational retrieval algorithms were developed and run for precipitation, ocean suite (SST, wind speed, cloud liquid water path, and column water vapor over ocean), sea ice, snow water equivalent, and soil moisture. With a separate algorithm for each group, the retrievals were never interactive or integrated in any way despite many co-sensitivities. AMSR2, the follow-on mission to AMSR-E, retrieves the same parameters at a slightly higher spatial resolution. We have combined the operational algorithms for AMSR2 in a way that facilitates sharing information between the retrievals. Difficulties that arose were mainly related to calibration, spatial resolution, coastlines, and order of processing. The integration of all algorithms for AMSR2 has numerous benefits, including better detection of light precipitation and sea ice, fewer screened out pixels, and better quality flags. Integrating the algorithms opens up avenues for investigating the limits of detectability for precipitation from a passive microwave radiometer and the impact of spatial resolution on sea ice edge detection; these are investigated using CloudSat and MODIS coincident observations from the A-Train constellation.
GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation
Jiang, Bo; Liang, Shunlin; Ma, Han; ...
2016-03-09
Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions, and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. Validation of the GLASS Rn product against high-quality in situ measurements in the United States shows a coefficient of determination of 0.879, an average root mean square error of 31.61 W m-2, and an average bias of 17.59 W m-2. Furthermore, we compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.
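For readers unfamiliar with MARS, the model builds piecewise-linear regressions out of hinge basis functions max(0, ±(x − t)). The toy sketch below illustrates only the basis form; the knots and coefficients are invented for illustration and are not the GLASS algorithm's fitted values:

```python
import numpy as np

def hinge(x, knot, sign=+1):
    """MARS hinge basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

def predict_rn(sw):
    """Toy piecewise-linear mapping from shortwave radiation (W m-2)
    to net radiation. Knots/coefficients are invented, illustrative only."""
    return (20.0
            + 0.55 * hinge(sw, 200.0, +1)    # slope above the 200 W m-2 knot
            - 0.30 * hinge(sw, 200.0, -1))   # different slope below the knot

sw = np.array([100.0, 200.0, 600.0])
rn = predict_rn(sw)
```

A real MARS fit selects the knots and coefficients automatically by forward selection and backward pruning; the point here is only the hinge-function structure.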
Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; Hirata, Akimasa
2012-12-01
In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and hence the affected brain areas, depend on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less, even on an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in fewer than ten iterations, independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s at the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.
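A geometric multigrid V-cycle of the kind described can be sketched on the 1-D model problem −u″ = f with homogeneous Dirichlet boundaries. This is an illustrative toy, not the paper's matrix-free FEM implementation:

```python
import numpy as np

def jacobi(u, f, h, iters=3, w=2.0/3.0):
    """Weighted-Jacobi smoother for -u'' = f with u(0) = u(1) = 0."""
    for _ in range(iters):
        u[1:-1] = ((1.0 - w) * u[1:-1]
                   + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction; fine grid has 2^k + 1 points."""
    rc = np.zeros((len(r) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(e):
    """Linear interpolation back to the fine grid."""
    ef = np.zeros(2 * (len(e) - 1) + 1)
    ef[::2] = e
    ef[1::2] = 0.5 * (e[:-1] + e[1:])
    return ef

def v_cycle(u, f, h):
    if len(u) <= 3:                  # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = jacobi(u, f, h)              # pre-smooth
    ec = v_cycle(np.zeros((len(u) + 1) // 2),
                 restrict(residual(u, f, h)), 2.0 * h)
    u += prolong(ec)                 # coarse-grid correction
    return jacobi(u, f, h)           # post-smooth

# Solve -u'' = pi^2 sin(pi x); the exact solution is u = sin(pi x).
n = 129
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / (n - 1))
```

As in the paper, the iteration count needed for convergence is essentially independent of the grid resolution, which is the property that makes multigrid attractive for near-real-time field computation.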
Multi-scale imaging and elastic simulation of carbonates
NASA Astrophysics Data System (ADS)
Faisal, Titly Farhana; Awedalkarim, Ahmed; Jouini, Mohamed Soufiane; Jouiad, Mustapha; Chevalier, Sylvie; Sassi, Mohamed
2016-05-01
Digital Rock Physics (DRP) is an emerging technology that can be used to generate high-quality, fast and cost-effective special core analysis (SCAL) properties compared to conventional experimental and modeling techniques. The primary workflow of DRP consists of three elements: 1) image the rock sample using high-resolution 3D scanning techniques (e.g. micro-CT, FIB/SEM); 2) process and digitize the images by segmenting the pore and matrix phases; 3) simulate the desired physical properties of the rocks, such as elastic moduli and wave propagation velocities. A Finite Element Method based algorithm developed by Garboczi and Day [1], which discretizes the basic Hooke's law equation of linear elasticity and solves it numerically using a fast conjugate gradient solver, is used for the mechanical and elastic property simulations. This elastic algorithm works directly on the digital images by treating each pixel as an element. The images are assumed to have a periodic constant-strain boundary condition. The bulk and shear moduli of the different phases are required inputs. For standard 1.5" diameter cores, however, the micro-CT scanning resolution (around 40 μm) does not reveal the smaller micro- and nano-pores beyond the resolution limit. This results in an unresolved "microporous" phase, the moduli of which are uncertain. Knackstedt et al. [2] assigned effective elastic moduli to the microporous phase based on self-consistent theory (which gives good estimates of velocities for well-cemented granular media). Jouini et al. [3] segmented the core plug CT scan image into three phases and assumed that the microporous phase is represented by a sub-extracted micro plug (which was also scanned using micro-CT). Currently, elastic numerical simulations based on CT images alone largely overpredict the bulk, shear and Young's moduli compared to laboratory acoustic tests of the same rocks.
For greater accuracy of the numerical simulation predictions, better estimates of the moduli inputs for this unresolved phase are important. In this work we take a multi-scale imaging approach, first extracting a smaller 0.5" core and scanning it at approximately 13 μm, then further extracting a 5 mm diameter core scanned at 5 μm. From this last scale, regions of interest (containing unresolved areas) are identified for scanning at higher resolutions with the Focused Ion Beam (FIB/SEM) technique, reaching 50 nm resolution. Numerical simulation is run on such a small unresolved section to obtain a better estimate of the effective moduli, which is then used as input for the simulations performed on the CT images. Results are compared with experimental acoustic test moduli obtained at two scales: 1.5" and 0.5" diameter cores.
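A standard first step when bracketing the effective moduli of an unresolved microporous phase (before committing to self-consistent estimates like those of Knackstedt et al.) is to compute Voigt and Reuss bounds and their Hill average. The sketch below uses illustrative calcite/water values, not the study's actual inputs:

```python
def voigt(fractions, moduli):
    """Upper bound: volume-fraction-weighted arithmetic mean."""
    return sum(f * m for f, m in zip(fractions, moduli))

def reuss(fractions, moduli):
    """Lower bound: volume-fraction-weighted harmonic mean."""
    return 1.0 / sum(f / m for f, m in zip(fractions, moduli))

def hill(fractions, moduli):
    """Voigt-Reuss-Hill average: midpoint of the two bounds."""
    return 0.5 * (voigt(fractions, moduli) + reuss(fractions, moduli))

# Illustrative two-phase mix: calcite matrix with water-filled micro-pores.
# Bulk moduli in GPa (textbook values, not this study's measurements).
frac = [0.8, 0.2]
K = [76.8, 2.25]
K_est = hill(frac, K)
```

The wide gap between the bounds for fluid-filled porosity is precisely why a direct FIB/SEM-based simulation of the microporous region, as done in this work, is preferable to bound-based guesses.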
NASA Astrophysics Data System (ADS)
Tang, Tingting
In this dissertation, we develop structured population models to examine how changes in the environment affect population processes. In Chapter 2, we develop a general continuous-time size-structured model describing a susceptible-infected (SI) population coupled with the environment. This model applies to problems arising in ecology, epidemiology, and cell biology. The model consists of a system of quasilinear hyperbolic partial differential equations coupled with a system of nonlinear ordinary differential equations that represent the environment. We develop a second-order high resolution finite difference scheme to numerically solve the model. Convergence of this scheme to a weak solution with bounded total variation is proved. We numerically compare the second-order high resolution scheme with a first-order finite difference scheme; the higher order of convergence and the high resolution property are observed in the second-order scheme. In addition, we apply our model to a multi-host wildlife disease problem and numerically explore questions regarding the impact of the initial population structure and the transition rate within each host. In Chapter 3, we use a stage-structured matrix model for a wildlife population to study the recovery process of the population after an environmental disturbance. We focus on the time it takes for the population to recover to its pre-event level and develop general formulas to calculate the sensitivity or elasticity of the recovery time to changes in the initial population distribution, the vital rates, and the event severity. Our results suggest that the recovery time is independent of the initial population size but is sensitive to the initial population structure. Moreover, it is more sensitive to the proportional reduction in the vital rates caused by the catastrophic event than to the duration of the event's impact.
We present potential applications of our model to amphibian population dynamics and the recovery of a certain plant population. In addition, we explore in detail the application of the model to the sperm whale population in the Gulf of Mexico after the Deepwater Horizon oil spill. In Chapter 4, we summarize the results from Chapters 2 and 3 and explore some further avenues of our research.
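The scale-invariance claim (recovery time independent of initial population size) follows from the linearity of matrix projection and can be illustrated with a toy two-stage model. The vital rates below are invented for illustration and are not from the dissertation:

```python
import numpy as np

def recovery_time(A, n0, reduction):
    """Projection steps until total abundance regains its pre-event level,
    after an event that removes fraction `reduction` of every stage."""
    target = n0.sum()
    n = (1.0 - reduction) * n0
    t = 0
    while n.sum() < target:
        n = A @ n        # one projection interval
        t += 1
    return t

# Illustrative juvenile/adult projection matrix (invented rates,
# dominant eigenvalue ~1.43, so the population grows back).
A = np.array([[0.0, 1.5],    # adult fecundity
              [0.5, 0.9]])   # juvenile survival, adult survival
n0 = np.array([100.0, 100.0])
t_rec = recovery_time(A, n0, reduction=0.5)
```

Because the projection is linear, scaling n0 by any constant leaves the recovery time unchanged, whereas changing the stage distribution of n0 generally changes it, matching the sensitivity results summarized above.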
Multiscale modelling of hydraulic conductivity in vuggy porous media
Daly, K. R.; Roose, T.
2014-01-01
Flow in both saturated and non-saturated vuggy porous media, i.e. soil, is inherently multiscale. The complex microporous structure of the soil aggregates and the wider vugs provides a multitude of flow pathways and has received significant attention from the X-ray computed tomography (CT) community with a constant drive to image at higher resolution. Using multiscale homogenization, we derive averaged equations to study the effects of the microscale structure on the macroscopic flow. The averaged model captures the underlying geometry through a series of cell problems and is verified through direct comparison to numerical simulations of the full structure. These methods offer significant reductions in computation time and allow us to perform three-dimensional calculations with complex geometries on a desktop PC. The results show that the surface roughness of the aggregate has a significantly greater effect on the flow than the microstructure within the aggregate. Hence, this is the region in which the resolution of X-ray CT for image-based modelling has the greatest impact. PMID:24511248
NASA Astrophysics Data System (ADS)
Masunaga, Eiji; Uchiyama, Yusuke; Suzue, Yota; Yamazaki, Hidekatsu
2018-04-01
This study investigates the dynamics of tidally induced internal waves over a shallow ridge, the Izu-Ogasawara Ridge off the Japanese mainland, using a downscaled high-resolution regional ocean numerical model. Both the Kuroshio and tides contribute to the current field in the study area. The model results show strong internal tidal energy fluxes over the ridge, exceeding 3.5 kW m-1, which are higher than the fluxes along the Japanese mainland. The flux on the upstream side of the Kuroshio is enhanced by the interaction of internal waves and currents. Tidal forcing induces 92% of the total internal wave energy flux, demonstrating the considerable dominance of tides in generating internal waves. Tidal forcing enhances the kinetic energy particularly in the northern area of the ridge, where the Kuroshio does not have a direct influence, and contributes roughly 30% of the total kinetic energy in the study area.
NASA Astrophysics Data System (ADS)
Lu, Tong; Wang, Yihan; Gao, Feng; Zhao, Huijuan; Ntziachristos, Vasilis; Li, Jiao
2018-02-01
Photoacoustic mesoscopy (PAMe), offering high-resolution (sub-100-μm), high-optical-contrast imaging at depths of 1-10 mm, generally acquires large volumes of data using a high-frequency focused ultrasonic transducer. The spatial impulse response (SIR) of this focused transducer distorts the measured signals in both duration and amplitude. Thus, a reconstruction method that accounts for the SIR needs to be investigated in a computationally economical way for PAMe. Here, we present a modified back-projection algorithm that introduces an SIR-dependent calibration process using a non-stationary convolution method. The proposed method is evaluated on numerical simulations and phantom experiments with microspheres of 50 μm and 100 μm diameter, and the improvement in image fidelity is evident in the evaluation metrics. The results demonstrate that images reconstructed with the transducer SIR accounted for have a higher contrast-to-noise ratio and more accurate spatial resolution than those from the common back-projection algorithm.
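The authors' calibration is non-stationary, but the underlying idea of undoing an impulse-response blur can be sketched in the simpler stationary case with a generic Wiener filter. This is an illustration only; the toy SIR and the `noise_reg` regularizer are invented, not the paper's parameters:

```python
import numpy as np

def wiener_deconvolve(measured, sir, noise_reg=1e-3):
    """Recover a source signal blurred by a (stationary) impulse
    response via frequency-domain Wiener deconvolution. The paper's
    calibration is non-stationary; this only shows the core idea."""
    n = len(measured)
    H = np.fft.rfft(sir, n)                       # transfer function
    Y = np.fft.rfft(measured, n)
    W = np.conj(H) / (np.abs(H)**2 + noise_reg)   # regularized inverse
    return np.fft.irfft(W * Y, n)

# Toy example: a single spike blurred by a short smoothing SIR.
true = np.zeros(128)
true[40] = 1.0
sir = np.array([0.2, 0.6, 0.2])
measured = np.convolve(true, sir)[:128]
recovered = wiener_deconvolve(measured, sir)
```

The regularizer prevents division by near-zero spectral components of the SIR, which is the same trade-off (fidelity versus noise amplification) that any SIR-corrected reconstruction must manage.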
NASA Astrophysics Data System (ADS)
Marx, Alain; Lütjens, Hinrich
2017-03-01
A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130], which solves the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method, has been developed. The present work shows that the code has been parallelized efficiently despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low-resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it makes it possible to perform simulations at higher resolutions, previously out of reach because of memory limitations.
Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-08-01
Tomography is a very cost-effective method to study physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations of the Iranian Permanent GPS Network (IPGN) are used. A smoothed TEC approach is used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the International Reference Ionosphere 2012 (IRI-2012) are used as the object function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 10^11 electrons/m3 is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.
NASA Astrophysics Data System (ADS)
Schnitzler, H.; Zimmer, Klaus-Peter
2008-09-01
Similar to human binocular vision, stereomicroscopes comprise two optical paths under a convergence angle, providing a full perspective insight into the world's microstructure. The numerical aperture of stereomicroscopes has continuously increased over the years, reaching the point where the lenses of the left and right perspective paths touched each other. This constraint appeared to be an upper limit for the resolution of stereomicroscopes, as the resolution of a stereomicroscope was deduced from the numerical apertures of the two equally sized perspective channels. We present the optical design and the advances in resolution of the world's first asymmetrical stereomicroscope, a technological breakthrough in many respects. This unique approach uses a large numerical aperture, and thus a hitherto unachievable lateral resolution, in one path, and a small aperture providing a high depth of field in the other ("Fusion Optics"). This new concept is a technical challenge for the optical design of the zoom system as well as for the common main objectives. Furthermore, it makes use of the particular way in which perspective information from binocular vision is formed in the human brain. In conjunction with a research project at the University of Zurich, Leica Microsystems consolidated the functionality of this concept into a new generation of stereomicroscopes.
NASA Astrophysics Data System (ADS)
Lucas-Serrano, A.; Font, J. A.; Ibáñez, J. M.; Martí, J. M.
2004-12-01
We assess the suitability of a recent high-resolution central scheme developed by Kurganov and Tadmor for the solution of the relativistic hydrodynamic equations. The novelty of this approach lies in the absence of Riemann solvers in the solution procedure. The computations we present are performed in one and two spatial dimensions in Minkowski spacetime. Standard numerical experiments such as shock tubes and the relativistic flat-faced step test are performed. As an astrophysical application, the article includes two-dimensional simulations of the propagation of relativistic jets using both Cartesian and cylindrical coordinates. The simulations reported clearly show that the numerical scheme yields satisfactory results, with an accuracy comparable to that obtained by the so-called high-resolution shock-capturing schemes based upon Riemann solvers (Godunov-type schemes), even well inside the ultrarelativistic regime. Such a central scheme can be straightforwardly applied to hyperbolic systems of conservation laws for which the characteristic structure is not explicitly known, or in cases where a numerical computation of the exact solution of the Riemann problem is prohibitively expensive. Finally, we present comparisons with results obtained using various Godunov-type schemes as well as with other high-resolution central schemes recently reported in the literature.
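For context, the key ingredient of the Kurganov-Tadmor approach is a Riemann-solver-free numerical flux at each cell interface; in semi-discrete form it reads (standard form quoted from the general literature, not from this article):

```latex
H_{j+1/2} \;=\; \frac{f\!\left(u^{+}_{j+1/2}\right) + f\!\left(u^{-}_{j+1/2}\right)}{2}
\;-\; \frac{a_{j+1/2}}{2}\left(u^{+}_{j+1/2} - u^{-}_{j+1/2}\right),
```

where u^±_{j+1/2} are the reconstructed interface values from the right and left, and a_{j+1/2} is the maximal local propagation speed. Only this one speed estimate is needed, which is why no characteristic decomposition or Riemann solution enters the procedure.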
Zhao, Dong-Jie; Wang, Zhong-Yi; Huang, Lan; Jia, Yong-Peng; Leng, John Q.
2014-01-01
Damaging thermal stimuli trigger long-lasting variation potentials (VPs) in higher plants. Owing to limitations in conventional plant electrophysiological recording techniques, recorded signals are composed of signals originating from all of the cells that are connected to an electrode. This limitation does not enable detailed spatio-temporal distributions of transmission and electrical activities in plants to be visualised. Multi-electrode array (MEA) enables the recording and imaging of dynamic spatio-temporal electrical activities in higher plants. Here, we used an 8 × 8 MEA with a polar distance of 450 μm to measure electrical activities from numerous cells simultaneously. The mapping of the data that were recorded from the MEA revealed the transfer mode of the thermally induced VPs in the leaves of Helianthus annuus L. seedlings in situ. These results suggest that MEA can enable recordings with high spatio-temporal resolution that facilitate the determination of the bioelectrical response mode of higher plants under stress. PMID:24961469
A divergence-cleaning scheme for cosmological SPMHD simulations
NASA Astrophysics Data System (ADS)
Stasyszyn, F. A.; Dolag, K.; Beck, A. M.
2013-01-01
In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerical magnetic divergence. For instabilities arising from ∇·B related errors, we find the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. to give good results and prevent numerical artefacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic field regularizations give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergenceless by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. However, at extremely high resolution, a cleaning scheme is needed to prevent the growth of numerical ∇·B errors at small scales.
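The Dedner et al. cleaning referenced above couples a scalar field ψ to the induction equation; in its standard mixed hyperbolic/parabolic form (quoted for context from the general literature):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \left(\mathbf{v} \times \mathbf{B}\right) - \nabla \psi,
\qquad
\frac{\partial \psi}{\partial t}
  = -\,c_h^2\, \nabla \cdot \mathbf{B} \;-\; \frac{c_h^2}{c_p^2}\,\psi,
```

so that divergence errors are transported away at the cleaning speed c_h and damped on the timescale c_p^2/c_h^2, rather than accumulating at small scales.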
Two-Fluid Extensions to the M3D CDX-U Validation Study
NASA Astrophysics Data System (ADS)
Breslau, J.; Strauss, H.; Sugiyama, L.
2005-10-01
As part of a cross-code verification and validation effort, both the M3D code [1] and the NIMROD code [2] have qualitatively reproduced the nonlinear behavior of a complete sawtooth cycle in the CDX-U tokamak, chosen for the study because its low temperature and small size put it in a parameter regime easily accessible to both codes. Initial M3D studies of this problem used a resistive MHD model with a large, empirical perpendicular heat transport value and with modest toroidal resolution (24 toroidal planes). The success of this study prompted the pursuit of more quantitatively accurate predictions through the application of more sophisticated physical models and higher numerical resolution. The results of two consequent follow-up studies are presented here. In the first, the toroidal resolution of the original run is doubled to 48 planes. The behavior of the sawtooth in this case is essentially the same as in the lower-resolution study. The sawtooth study has also been repeated using a two-fluid plasma model, with the effects of the ω*i term emphasized. The resulting mode rotation, as well as the effects on the reconnection rate (sawtooth crash time), sawtooth period, and overall stability, are presented. [1] W. Park, et al., Phys. Plasmas 6, 1796 (1999). [2] C. Sovinec, et al., J. Comp. Phys. 195, 355 (2004).
Increasing the temporal resolution of direct normal solar irradiance forecasted series
NASA Astrophysics Data System (ADS)
Fernández-Peruchena, Carlos M.; Gastón, Martin; Schroedter-Homscheidt, Marion; Marco, Isabel Martínez; Casado-Rubio, José L.; García-Moya, José Antonio
2017-06-01
A detailed knowledge of the solar resource is a critical point in the design and control of Concentrating Solar Power (CSP) plants. In particular, accurate forecasting of solar irradiance is essential for the efficient operation of solar thermal power plants, the management of energy markets, and the widespread implementation of this technology. Numerical weather prediction (NWP) models are commonly used for solar radiation forecasting. In the ECMWF deterministic forecasting system, all forecast parameters are commercially available worldwide at 3-hourly intervals. Unfortunately, as Direct Normal solar Irradiance (DNI) exhibits great variability due to the dynamic effects of passing clouds, a 3-h time resolution is insufficient for accurate simulations of CSP plants, whose response to DNI is nonlinear and governed by various thermal inertias. DNI series of hourly or sub-hourly resolution are normally used for accurate modeling and analysis of transient processes in CSP technologies. In this context, the objective of this study is to propose a methodology for generating synthetic DNI time series at 1-h (or higher) temporal resolution from 3-h DNI series. The methodology is based upon patterns defined with the help of the clear-sky envelope approach together with a forecast of the maximum DNI value, and it has been validated against high-quality measured DNI data.
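One common way to exploit a clear-sky envelope when increasing temporal resolution is to interpolate the clear-sky index kt = DNI/DNI_clear rather than DNI itself, then rescale by the higher-frequency envelope. The sketch below is a generic illustration of that idea, not the authors' exact patterning method, and the envelope/forecast values are invented:

```python
import numpy as np

def downscale_dni(dni_3h, clear_3h, clear_1h):
    """Interpolate the clear-sky index from 3-h samples to 1-h points,
    then rescale by the 1-h clear-sky envelope."""
    t3 = np.arange(len(dni_3h)) * 3.0      # hours of the 3-h forecast samples
    t1 = np.arange(len(clear_1h)) * 1.0    # hours of the target 1-h series
    kt3 = np.divide(dni_3h, clear_3h,
                    out=np.zeros_like(dni_3h), where=clear_3h > 0)
    kt1 = np.interp(t1, t3, kt3)           # clear-sky index, hourly
    return kt1 * clear_1h

# Invented clear-sky envelope and 3-h forecast values for one morning.
clear_1h = np.array([0.0, 100.0, 300.0, 500.0, 650.0, 750.0, 800.0])
clear_3h = clear_1h[::3]                   # hours 0, 3, 6
dni_3h = np.array([0.0, 400.0, 640.0])
dni_1h = downscale_dni(dni_3h, clear_3h, clear_1h)
```

The hourly series reproduces the 3-h forecast at the original time stamps while following the smooth diurnal shape of the envelope in between; a realistic generator would additionally inject the sub-hourly cloud variability the abstract describes.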
Mumcuoglu, Tarkan; Wollstein, Gadi; Wojtkowski, Maciej; Kagemann, Larry; Ishikawa, Hiroshi; Gabriele, Michelle L.; Srinivasan, Vivek; Fujimoto, James G.; Duker, Jay S.; Schuman, Joel S.
2009-01-01
Purpose To test if improving optical coherence tomography (OCT) resolution and scanning speed improves the visualization of glaucomatous structural changes as compared with conventional OCT. Design Prospective observational case series. Participants Healthy and glaucomatous subjects in various stages of disease. Methods Subjects were scanned at a single visit with commercially available OCT (StratusOCT) and high-speed ultrahigh-resolution (hsUHR) OCT. The prototype hsUHR OCT had an axial resolution of 3.4 μm (3 times higher than StratusOCT), with an A-scan rate of 24 000 hertz (60 times faster than StratusOCT). The fast scanning rate allowed the acquisition of novel scanning patterns such as raster scanning, which provided dense coverage of the retina and optic nerve head. Main Outcome Measures Discrimination of retinal tissue layers and detailed visualization of retinal structures. Results High-speed UHR OCT provided a marked improvement in tissue visualization as compared with StratusOCT. This allowed the identification of numerous retinal layers, including the ganglion cell layer, which is specifically prone to glaucomatous damage. Fast scanning and the enhanced A-scan registration properties of hsUHR OCT provided maps of the macula and optic nerve head with unprecedented detail, including en face OCT fundus images and retinal nerve fiber layer thickness maps. Conclusion High-speed UHR OCT improves visualization of the tissues relevant to the detection and management of glaucoma. PMID:17884170
NASA Astrophysics Data System (ADS)
Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José; Fink, Mathias; Boccara, Claude
2018-02-01
Despite obvious improvements in visualization of the in vivo cornea through faster imaging speeds and higher axial resolutions, cellular imaging remains an unresolved task for OCT, as en face viewing with high lateral resolution is required. The latter is possible with FFOCT, a method that relies on a camera, moderate-numerical-aperture (NA) objectives, and an incoherent light source to provide en face images with micrometer-level resolution. Recently, we demonstrated for the first time the ability of FFOCT to capture images from the in vivo human cornea [1]. In the current paper we present an extensive study of the appearance of healthy in vivo human corneas under FFOCT examination. En face corneal images with micrometer-level resolution were obtained from three healthy subjects. For each subject it was possible to acquire images through the entire corneal depth and to visualize the epithelium structures, Bowman's layer, sub-basal nerve plexus (SNP) fibers, the anterior, middle and posterior stroma, and endothelial cells with nuclei. The dimensions and densities of the structures visible with FFOCT agree with those seen by other corneal imaging methods. The cellular-level detail in the images, together with the relatively large field of view (FOV) and contactless imaging, makes this device a promising candidate for becoming a new tool in ophthalmological diagnostics.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen
1999-01-01
Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum theory (BEM) and Vortex Lattice (VL) to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies in inboard stall and stall-delay models. The RaNS methodologies show promise in predicting blade stall. However, inaccurate rotor vortex wake convection, boundary-layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best-endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation remains limited. This paper presents experimental data extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotors. In addition, the comparisons will show data that has been further reduced to steady-wind and zero-yaw conditions suitable for comparison to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.
NASA Astrophysics Data System (ADS)
Toigo, Anthony D.; Lee, Christopher; Newman, Claire E.; Richardson, Mark I.
2012-09-01
We investigate the sensitivity of the circulation and thermal structure of the martian atmosphere to numerical model resolution in a general circulation model (GCM) using the martian implementation (MarsWRF) of the planetWRF atmospheric model. We provide a description of the MarsWRF GCM and use it to study the global atmosphere at horizontal resolutions from 7.5° × 9° to 0.5° × 0.5°, encompassing the range from standard Mars GCMs to global mesoscale modeling. We find that while most of the gross-scale features of the circulation (the rough location of jets, the qualitative thermal structure, and the major large-scale features of the surface level winds) are insensitive to horizontal resolution over this range, several major features of the circulation are sensitive in detail. The northern winter polar circulation shows the greatest sensitivity, showing a continuous transition from a smooth polar winter jet at low resolution, to a distinct vertically “split” jet as resolution increases. The separation of the lower and middle atmosphere polar jet occurs at roughly 10 Pa, with the split jet structure developing in concert with the intensification of meridional jets at roughly 10 Pa and above 0.1 Pa. These meridional jets appear to represent the separation of lower and middle atmosphere mean overturning circulations (with the former being consistent with the usual concept of the “Hadley cell”). Further, the transition in polar jet structure is more sensitive to changes in zonal than meridional horizontal resolution, suggesting that representation of small-scale wave-mean flow interactions is more important than fine-scale representation of the meridional thermal gradient across the polar front. Increasing the horizontal resolution improves the match between the modeled thermal structure and the Mars Climate Sounder retrievals for northern winter high latitudes. 
While increased horizontal resolution also improves the simulation of the northern high latitudes at equinox, even the lowest model resolution considered here appears to do a good job for the southern winter and southern equinoctial pole (although in detail some discrepancies remain). These results suggest that studies of the northern winter jet (e.g., transient waves and cyclogenesis) will be more sensitive to global model resolution than those of the south (e.g., the confining dynamics of the southern polar vortex relevant to studies of argon transport). For surface winds, the major effect of increased horizontal resolution is in the superposition of circulations forced by local-scale topography upon the large-scale surface wind patterns. While passive predictions of dust lifting are generally insensitive to model horizontal resolution when no lifting threshold is considered, increasing the stress threshold produces significantly more lifting in higher resolution simulations with the generation of finer-scale, higher-stress winds due primarily to better-resolved topography. Considering the positive feedbacks expected for radiatively active dust lifting, we expect this bias to increase when such feedbacks are permitted.
Explosive Products EOS: Adjustment for detonation speed and energy release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
2014-09-05
Propagating detonation waves exhibit a curvature effect in which the detonation speed decreases with increasing front curvature. The curvature effect is due to the width of the wave profile. Numerically, the wave profile depends on resolution. With coarse resolution, the wave width is too large and results in a curvature effect that is too large. Consequently, the detonation speed decreases as the cell size is increased. We propose a modification to the products equation of state (EOS) to compensate for the effect of numerical resolution; i.e., to increase the CJ pressure in order that a simulation propagates a detonation wave with a speed that is on average correct. The EOS modification also adjusts the release isentrope to correct the energy release.
Fang, Yishan; Huang, Xinjian; Wang, Lishi
2015-01-06
Discrimination and quantification of electroactive species are traditionally realized by a potential difference, which is mainly determined by thermodynamics. However, the resolution of this approach is limited to tens of millivolts. In this paper, we describe an application of Fourier transformed sinusoidal voltammetry (FT-SV) that provides a new approach for discrimination and quantitative evaluation of electroactive species, especially thermodynamically similar ones. Numerical simulation indicates that differences in electron-transfer kinetics between electroactive species can be revealed by the phase angle of the higher order harmonics of FT-SV, and that the difference is amplified order by order. Thus, even a very subtle kinetics difference can be amplified until it is distinguishable at a certain order of harmonics. This method was verified with structurally similar ferrocene derivatives chosen as model systems. Although these molecules have very close redox potentials (<10 mV), discrimination and selective detection were achieved at harmonics as high as the thirteenth. The results demonstrate the feasibility and reliability of the method. They also imply that combining the traditional thermodynamic method with this kinetics-based method can form a two-dimensionally resolved detection method, with the potential to extend the resolution of voltammetric techniques to a new level.
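The harmonic phase read-out underlying FT-SV can be sketched numerically: a periodic response is Fourier transformed and the phase angle at each integer multiple of the excitation frequency is extracted. The signal below is a hypothetical stand-in (a sum of harmonics with known phases), not a solved electrochemical model; frequencies and phases are illustrative assumptions.

```python
import numpy as np

fs = 1000.0            # sampling rate, Hz
f0 = 5.0               # excitation (fundamental) frequency, Hz
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s record -> whole number of periods

# Hypothetical response containing the 1st, 3rd and 5th harmonics.
true_phases = {1: 0.2, 3: 0.9, 5: -1.1}   # radians
i_t = sum(np.cos(2 * np.pi * k * f0 * t + p) for k, p in true_phases.items())

spectrum = np.fft.rfft(i_t)

def harmonic_phase(order):
    """Phase angle of the given harmonic of f0 (record spans whole periods)."""
    bin_idx = int(round(order * f0 * len(t) / fs))
    return np.angle(spectrum[bin_idx])

for k in true_phases:
    print(k, harmonic_phase(k))
```

Because the record length is an integer number of excitation periods, each harmonic falls exactly on an FFT bin and its phase is read off directly.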
Experimental and numerical investigation of tissue harmonic imaging (THI)
NASA Astrophysics Data System (ADS)
Jing, Yuan; Yang, Xinmai; Cleveland, Robin O.
2003-04-01
In THI the probing ultrasonic pulse has enough amplitude that it undergoes nonlinear distortion and energy shifts from the fundamental frequency of the pulse into its higher harmonics. Images generated from the second harmonic (SH) have superior quality to the images formed from the fundamental frequency. Experiments with a single-element focused ultrasound transducer were used to compare a line target embedded in a tissue phantom using either fundamental or SH imaging. SH imaging showed an improvement in both the axial resolution (0.70 mm vs 0.92 mm) and the lateral resolution (1.02 mm vs 2.70 mm) of the target. In addition, the contrast-to-tissue ratio of the target was 2 dB higher with SH imaging. A three-dimensional model of the forward propagation has been developed to simulate the experimental system. The model is based on a time-domain code for solving the KZK equation and accounts for arbitrary spatial variations in all tissue properties. The code was used to determine the impact of a nearfield layer of fat on the fundamental and second harmonic signals. For a 15 mm thick layer the SH side-lobes remained the same but the fundamental side-lobes increased by 2 dB. [Work supported by the NSF through the Center for Subsurface Sensing and Imaging Systems.]
Bianchi, S; Rajamanickam, V P; Ferrara, L; Di Fabrizio, E; Liberale, C; Di Leonardo, R
2013-12-01
The use of individual multimode optical fibers in endoscopy applications has the potential to provide highly miniaturized and noninvasive probes for microscopy and optical micromanipulation. A few different strategies have been proposed recently, but they all suffer from intrinsically low resolution related to the low numerical aperture of multimode fibers. Here, we show that two-photon polymerization allows for direct fabrication of micro-optics components on the fiber end, resulting in an increase of the numerical aperture to a value that is close to 1. Coupling light into the fiber through a spatial light modulator, we were able to optically scan a submicrometer spot (300 nm FWHM) over an extended region, facing the opposite fiber end. Fluorescence imaging with improved resolution is also demonstrated.
Research highlights: June 1990 - May 1991
NASA Technical Reports Server (NTRS)
1991-01-01
Linear instability calculations at MSFC have suggested that the Geophysical Fluid Flow Cell (GFFC) should exhibit classic baroclinic instability at accessible parameter settings. Interest centered on the mechanisms of transition to temporal chaos and the evolution of spatio-temporal chaos. In order to understand more about such transitions, high resolution numerical experiments for the physically simplest model of two-layer baroclinic instability were conducted. This model has the advantage that the numerical code is exponentially convergent and can be efficiently run for very long times, enabling the study of chaotic attractors without the often devastating effects of low-order truncation found in many previous studies. Numerical algorithms for implementing an empirical orthogonal function (EOF) analysis of the high resolution numerical results were completed. Under conditions of rapid rotation and relatively low differential heating, convection in a spherical shell takes place as columnar banana cells wrapped around the annular gap, but with axes oriented along the axis of rotation; these were clearly evident in the GFFC experiments. The results of recent numerical simulations of columnar convection and future research plans are presented.
The Effects of Dissipation and Coarse Grid Resolution for Multigrid in Flow Problems
NASA Technical Reports Server (NTRS)
Eliasson, Peter; Engquist, Bjoern
1996-01-01
The objective of this paper is to investigate the effects of numerical dissipation and of the resolution of the solution on coarser grids for multigrid with the Euler equation approximations. Convergence is accomplished by multi-stage explicit time-stepping to steady state, accelerated by FAS multigrid. A theoretical investigation is carried out for linear hyperbolic equations in one and two dimensions. The spectra reveal that, for stability and hence robustness of spatial discretizations with a small amount of numerical dissipation, the grid transfer operators have to be sufficiently accurate and the smoother of low temporal accuracy. Numerical results give grid-independent convergence in one dimension. For two-dimensional problems with a small amount of numerical dissipation, however, only a few grid levels contribute to an increased speed of convergence. This is explained by the small numerical dissipation leading to dispersion. Increasing the mesh density, and hence making the problem over-resolved, increases the number of mesh levels contributing to an increased speed of convergence. If the steady state equations are elliptic, all grid levels contribute to the convergence regardless of the mesh density.
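The role of the coarse grid in a multigrid cycle can be illustrated on a model problem. The sketch below is a two-grid correction scheme for the 1D Poisson equation -u'' = f with zero Dirichlet boundaries (pre-smooth, restrict the residual, exact coarse solve, prolong, correct, post-smooth); it is a linear model problem, not the FAS/Euler setting of the paper, and all parameters (grid size, smoothing sweeps) are illustrative choices.

```python
import numpy as np

n = 63                       # interior fine-grid points (odd -> nested coarse grid)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x)

def residual(u, f, h):
    up = np.pad(u, 1)        # zero Dirichlet boundary values
    return f - (-(up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2)

def weighted_jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    for _ in range(sweeps):
        u = u + w * (h * h / 2.0) * residual(u, f, h)
    return u

def two_grid_cycle(u, f, h):
    u = weighted_jacobi(u, f, h, sweeps=3)                       # pre-smooth
    r = residual(u, f, h)
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]    # full weighting
    nc, hc = rc.size, 2.0 * h
    A = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    ec = np.linalg.solve(A, rc)                                  # exact coarse solve
    e = np.zeros_like(u)                                         # linear prolongation
    e[1::2] = ec
    ecp = np.pad(ec, 1)
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    return weighted_jacobi(u + e, f, h, sweeps=3)                # post-smooth

u = np.zeros(n)
r0 = np.linalg.norm(residual(u, f, h))
u = two_grid_cycle(u, f, h)
r1 = np.linalg.norm(residual(u, f, h))
print(r0, r1)   # the residual norm drops sharply after one cycle
```

The smoother removes the oscillatory error components that the coarse grid cannot represent, which is why the interplay between smoothing and coarse-grid resolution governs the convergence rate.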
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. This framework enables hydrogeologists to determine the optimum physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated.
We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
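The trade-off described above, a discretization error that shrinks with grid spacing h while the statistical error shrinks with the number of realizations N under a fixed computational budget, can be sketched with a toy error model. All constants, exponents, and the cost model below are hypothetical placeholders, not quantities from the paper.

```python
import numpy as np

C_disc, p = 1.0, 2.0      # discretization error ~ C_disc * h**p (assumed)
C_stat = 1.0              # statistical error ~ C_stat / sqrt(N) (assumed)
L, d = 1.0, 2             # domain size and dimension; cost per run ~ (L/h)**d
budget = 1e6              # total cell-updates we can afford (illustrative)

def total_error(h):
    n_runs = budget / (L / h) ** d          # realizations affordable at this h
    if n_runs < 1:
        return np.inf
    return C_disc * h**p + C_stat / np.sqrt(n_runs)

hs = np.logspace(-3, -0.5, 400)
errs = np.array([total_error(h) for h in hs])
h_opt = hs[np.argmin(errs)]
print(h_opt, errs.min())
```

Refining the grid too far starves the Monte Carlo sampling, while a coarse grid wastes realizations on a biased solution; the minimizer balances the two error sources, which is the essence of the joint allocation argument.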
ULTRA-SHARP solution of the Smith-Hutton problem
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1992-01-01
Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First order methods showed significant artificial diffusion, whereas higher order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternate strategy of using non-artificially diffusive high order methods, while maintaining strictly monotonic transitions through the use of simple flux limited constraints. Limited third order upwinding is usually found to be the most cost effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.
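The flux-limiting idea described above can be sketched on linear advection of a step profile: a higher-order flux is constrained by a limiter so that the transition stays strictly monotonic (no overshoot or oscillation). The sketch uses a standard minmod-limited Lax-Wendroff correction as a generic stand-in, not the limited third-order upwind scheme of the paper; grid size and Courant number are illustrative.

```python
import numpy as np

n, c = 200, 0.5                                  # cells and Courant number a*dt/dx
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)    # step profile, periodic domain

def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

for _ in range(100):
    up = np.roll(u, 1)               # u[i-1]
    upp = np.roll(u, 2)              # u[i-2]
    # limited slope in the upwind cell of face i-1/2 (advection speed > 0)
    slope = minmod(up - upp, u - up)
    flux = up + 0.5 * (1.0 - c) * slope          # flux/a at face i-1/2
    u = u - c * (np.roll(flux, -1) - flux)       # conservative update
print(u.min(), u.max())              # stays within [0, 1] up to round-off
```

Without the limiter the same correction term reproduces Lax-Wendroff, which overshoots at the step; with it, the scheme is TVD, so the discontinuity is transported with limited smearing and no new extrema.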
Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J
2014-01-01
We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step, which can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
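The CFL restriction mentioned above ties the maximum stable time step to the mesh spacing and the advection speed, and hence fixes the number of steps (and part of the cost) of a run. A minimal sketch, with illustrative CFL numbers for two textbook scheme pairs rather than the spectral/hp limits of the paper:

```python
import math

a = 2.0              # advection speed
dx = 0.01            # mesh spacing
T = 1.0              # integration interval

# Hypothetical example pairs; the CFL limit depends on both the spatial
# discretisation and the time integrator.
for name, cfl in [("upwind + forward Euler", 1.0), ("RK4 + central", 2.8)]:
    dt = cfl * dx / a                    # largest stable time step
    steps = math.ceil(T / dt)            # steps needed to reach T
    print(f"{name}: dt_max = {dt:.4g}, steps for T={T}: {steps}")
```

A scheme with a larger stability region permits fewer, larger steps, but the accuracy per step then decides whether the saved runtime comes at the price of a larger error, which is the trade-off the paper quantifies.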
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers
NASA Astrophysics Data System (ADS)
Siegel, Nisan; Lupashin, Vladimir; Storrie, Brian; Brooker, Gary
2016-12-01
Fresnel incoherent correlation holography (FINCH) microscopy is a promising approach for high-resolution biological imaging but has so far been limited to use with low-magnification, low-numerical-aperture configurations. We report the use of in-line incoherent interferometers made from uniaxial birefringent α-barium borate (α-BBO) or calcite crystals that overcome the aberrations and distortions present with previous implementations that employed spatial light modulators or gradient refractive index lenses. FINCH microscopy incorporating these birefringent elements and high-numerical-aperture oil immersion objectives could outperform standard wide-field fluorescence microscopy, with, for example, a 149 nm lateral point spread function at a wavelength of 590 nm. Enhanced resolution was confirmed with sub-resolution fluorescent beads. Taking the Golgi apparatus as a biological example, three different proteins labelled with GFP and two other fluorescent dyes in HeLa cells were resolved with an image quality that is comparable to similar samples captured by structured illumination microscopy.
Resolving the fine-scale structure in turbulent Rayleigh-Benard convection
NASA Astrophysics Data System (ADS)
Scheel, Janet; Emran, Mohammad; Schumacher, Joerg
2013-11-01
Results from high-resolution direct numerical simulations of turbulent Rayleigh-Benard convection in a cylindrical cell with an aspect ratio of one will be presented. We focus on the finest scales of convective turbulence, in particular the statistics of the kinetic energy and thermal dissipation rates in the bulk and the whole cell. These dissipation rates as well as the local dissipation scales are compared for different Rayleigh and Prandtl numbers. We have also investigated the convergence properties of our spectral element method and have found that both dissipation fields are very sensitive to insufficient resolution. We further demonstrate that global transport properties, such as the Nusselt number and the energy balances, are partly insensitive to insufficient resolution and yield consistent results even when the dissipation fields are under-resolved. Our present numerical framework is also compared with high-resolution simulations which use a finite difference method. For most of the compared quantities the agreement is found to be satisfactory.
Optical path difference microscopy with a Shack-Hartmann wavefront sensor.
Gong, Hai; Agbana, Temitope E; Pozzi, Paolo; Soloviev, Oleg; Verhaegen, Michel; Vdovin, Gleb
2017-06-01
In this Letter, we show that a Shack-Hartmann wavefront sensor can be used for the quantitative measurement of the specimen optical path difference (OPD) in an ordinary incoherent optical microscope, if the spatial coherence of the illumination light in the plane of the specimen is larger than the microscope resolution. To satisfy this condition, the illumination numerical aperture should be smaller than the numerical aperture of the imaging lens. This principle has been successfully applied to build a high-resolution reference-free instrument for the characterization of the OPD of micro-optical components and microscopic biological samples.
Conflict Resolution in the Genome: How Transcription and Replication Make It Work.
Hamperl, Stephan; Cimprich, Karlene A
2016-12-01
The complex machineries involved in replication and transcription translocate along the same DNA template, often in opposing directions and at different rates. These processes routinely interfere with each other in prokaryotes, and mounting evidence now suggests that RNA polymerase complexes also encounter replication forks in higher eukaryotes. Indeed, cells rely on numerous mechanisms to avoid, tolerate, and resolve such transcription-replication conflicts, and the absence of these mechanisms can lead to catastrophic effects on genome stability and cell viability. In this article, we review the cellular responses to transcription-replication conflicts and highlight how these inevitable encounters shape the genome and impact diverse cellular processes. Copyright © 2016 Elsevier Inc. All rights reserved.
Notes on integral identities for 3d supersymmetric dualities
NASA Astrophysics Data System (ADS)
Aghaei, Nezhla; Amariti, Antonio; Sekiguchi, Yuta
2018-04-01
Four dimensional N=2 Argyres-Douglas theories have been recently conjectured to be described by N=1 Lagrangian theories. Such models, once reduced to 3d, should be mirror dual to Lagrangian N=4 theories. This has been numerically checked through the matching of the partition functions on the three sphere. In this article, we provide an analytic derivation for this result in the A_{2n-1} case via hyperbolic hypergeometric integrals. We study the D_4 case as well, commenting on some open questions and possible resolutions. In the second part of the paper we discuss other integral identities leading to the matching of the partition functions in 3d dual pairs involving higher monopole superpotentials.
Super-resolution differential interference contrast microscopy by structured illumination.
Chen, Jianling; Xu, Yan; Lv, Xiaohua; Lai, Xiaomin; Zeng, Shaoqun
2013-01-14
We propose structured illumination differential interference contrast (SI-DIC) microscopy, breaking the diffraction resolution limit of differential interference contrast (DIC) microscopy. SI-DIC extends the bandwidth of the coherent transfer function of the DIC imaging system, and thus the resolution is improved. With a 0.8 numerical aperture condenser and objective, the reconstructed SI-DIC image of 53 nm polystyrene beads reveals a lateral resolution of approximately 190 nm, double that of the conventional DIC image. We also demonstrate biological observations of label-free cells with improved spatial resolution. SI-DIC microscopy can provide sub-diffraction resolution and high-contrast images of marker-free specimens, and has the potential to achieve sub-diffraction-resolution quantitative phase imaging.
Design and evaluation of a THz time domain imaging system using standard optical design software.
Brückner, Claudia; Pradarutti, Boris; Müller, Ralf; Riehemann, Stefan; Notni, Gunther; Tünnermann, Andreas
2008-09-20
A terahertz (THz) time domain imaging system is analyzed and optimized with standard optical design software (ZEMAX). Special requirements for the illumination and imaging optics are presented. In the optimized system, off-axis parabolic mirrors and lenses are combined. The system has a numerical aperture of 0.4 and is diffraction limited for field points up to 4 mm and wavelengths down to 750 µm. ZEONEX is used as the lens material. Higher aspherical coefficients are used for correction of spherical aberration and reduction of lens thickness. The lenses were manufactured by ultraprecision machining. For optimization of the system, ray tracing and wave-optical methods were combined. We show how the ZEMAX Gaussian beam analysis tool can be used to evaluate illumination optics. The resolution of the THz system was tested with a wire and a slit target, line gratings of different period, and a Siemens star. The behavior of the temporal line spread function can be modeled with the polychromatic coherent line spread function feature in ZEMAX. The spectral and temporal resolutions of the line gratings are compared with the respective modulation transfer function of ZEMAX. For maximum resolution, the system has to be diffraction limited down to the smallest wavelength of the spectrum of the THz pulse. Then, the resolution on time domain analysis of the pulse maximum can be estimated with the spectral resolution of the center of gravity wavelength. The system resolution near the optical axis on time domain analysis of the pulse maximum is 1 line pair/mm with an intensity contrast of 0.22. The Siemens star is used for estimation of the resolution of the whole system. An eight channel electro-optic sampling system was used for detection. The resolution on time domain analysis of the pulse maximum of all eight channels could be determined with the Siemens star to be 0.7 line pairs/mm.
Hydrologic downscaling of soil moisture using global data without site-specific calibration
USDA-ARS?s Scientific Manuscript database
Numerous applications require fine-resolution (10-30 m) soil moisture patterns, but most satellite remote sensing and land-surface models provide coarse-resolution (9-60 km) soil moisture estimates. The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales soil moisture ...
Numerical Hydrodynamics in Special Relativity.
Martí, J M; Müller, E
1999-01-01
This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results obtained with different numerical SRHD methods are compared, and two astrophysical applications of SRHD flows are discussed. An evaluation of the various numerical methods is given and future developments are analyzed. Supplementary material is available for this article at 10.12942/lrr-1999-3.
Dances with Membranes: Breakthroughs from Super-resolution Imaging
Curthoys, Nikki M.; Parent, Matthew; Mlodzianoski, Michael; Nelson, Andrew J.; Lilieholm, Jennifer; Butler, Michael B.; Valles, Matthew; Hess, Samuel T.
2017-01-01
Biological membrane organization mediates numerous cellular functions and has also been connected with an immense number of human diseases. However, until recently, experimental methodologies have been unable to directly visualize the nanoscale details of biological membranes, particularly in intact living cells. Numerous models explaining membrane organization have been proposed, but testing those models has required indirect methods; the desire to directly image proteins and lipids in living cell membranes is a strong motivation for the advancement of technology. The development of super-resolution microscopy has provided powerful tools for quantification of membrane organization at the level of individual proteins and lipids, and many of these tools are compatible with living cells. Previously inaccessible questions are now being addressed, and the field of membrane biology is developing rapidly. This chapter discusses how the development of super-resolution microscopy has led to fundamental advances in the field of biological membrane organization. We summarize the history and some models explaining how proteins are organized in cell membranes, and give an overview of various super-resolution techniques and methods of quantifying super-resolution data. We discuss the application of super-resolution techniques to membrane biology in general, and also with specific reference to the fields of actin and actin-binding proteins, virus infection, mitochondria, immune cell biology, and phosphoinositide signaling. Finally, we present our hopes and expectations for the future of super-resolution microscopy in the field of membrane biology. PMID:26015281
Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca; Palmer, Kevin; Deutsch, Clayton V.
High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molenkamp, C.R.; Grossman, A.
1999-12-20
A network of small balloon-borne transponders which gather very high resolution wind and temperature data for use by modern numerical weather prediction models has been proposed to improve the reliability of long-range weather forecasts. The global distribution of an array of such transponders is simulated using LLNL's atmospheric parcel transport model (GRANTOUR) with winds supplied by two different general circulation models. An initial study used winds from CCM3 with a horizontal resolution of about 3 degrees in latitude and longitude, and a second study used winds from NOGAPS with a 0.75 degree horizontal resolution. Results from both simulations show that reasonable global coverage can be attained by releasing balloons from an appropriate set of launch sites.
Ultra high energy resolution focusing monochromator for inelastic X-ray scattering spectrometer
Suvorov, Alexey; Cunsolo, Alessandro; Chubar, Oleg; ...
2015-11-25
Further development of a focusing monochromator concept for X-ray energy resolution of 0.1 meV and below is presented. Theoretical analysis of several optical layouts based on this concept was supported by numerical simulations performed in the “Synchrotron Radiation Workshop” software package using the physical-optics approach and careful modeling of partially-coherent synchrotron (undulator) radiation. Along with the energy resolution, the spectral shape of the energy resolution function was investigated. We show that under certain conditions the decay of the resolution function tails can be faster than that of the Gaussian function.
Dual-axis confocal microscope for high-resolution in vivo imaging
Wang, Thomas D.; Mandella, Michael J.; Contag, Christopher H.; Kino, Gordon S.
2007-01-01
We describe a novel confocal microscope that uses separate low-numerical-aperture objectives with the illumination and collection axes crossed at angle θ from the midline. This architecture collects images in scattering media with high transverse and axial resolution, long working distance, large field of view, and reduced noise from scattered light. We measured transverse and axial (FWHM) resolution of 1.3 and 2.1 μm, respectively, in free space, and confirm subcellular resolution in excised esophageal mucosa. The optics may be scaled to millimeter dimensions and fiber coupled for collection of high-resolution images in vivo. PMID:12659264
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-01-01
Photoacoustic imaging (PAI) is an emerging medical imaging modality that combines the high spatial resolution of ultrasound (US) imaging with the high contrast of optical imaging. Delay-and-sum (DAS) is the most common beamforming algorithm in PAI. However, the DAS beamformer leads to low-resolution images and a considerable contribution from off-axis signals. A newer paradigm, delay-multiply-and-sum (DMAS), originally used as a reconstruction algorithm in confocal microwave imaging, was introduced to overcome these challenges. DMAS has been used in PAI systems, where it was shown to improve resolution and reduce sidelobes. However, DMAS remains sensitive to high levels of noise, and its resolution improvement is not fully satisfactory. Here, we propose a novel algorithm based on DAS algebra inside the DMAS formula expansion, double-stage DMAS (DS-DMAS), which improves image resolution and sidelobe levels, and is much less sensitive to high levels of noise than DMAS. The performance of the DS-DMAS algorithm is evaluated numerically and experimentally. The resulting images are assessed qualitatively and quantitatively using established quality metrics including signal-to-noise ratio (SNR), full-width-half-maximum (FWHM) and contrast ratio (CR). It is shown that DS-DMAS outperforms DAS and DMAS at the expense of a higher computational load. DS-DMAS reduces the lateral valley by about 15 dB and improves the SNR and FWHM by more than 13% and 30%, respectively. Moreover, sidelobe levels are reduced by about 10 dB in comparison with those of DMAS.
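The DAS/DMAS pair that DS-DMAS builds on can be sketched on synthetic, already-delayed channel data (i.e. after the focusing delays have been applied). DAS simply sums across channels; DMAS combines channel pairs after a signed square root, which suppresses incoherent contributions. The data below are hypothetical, and this sketch does not reproduce the paper's double-stage DS-DMAS algorithm, only the two baseline beamformers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_t = 16, 200
t = np.arange(n_t)
pulse = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)     # coherent target signal
channels = pulse[None, :] + 0.5 * rng.standard_normal((n_ch, n_t))

def das(x):
    # Delay-and-sum: plain coherent sum across channels.
    return x.sum(axis=0)

def dmas(x):
    # Signed square root keeps the combined pair products in the
    # original signal dimensionality before summing over pairs i < j.
    y = np.sign(x) * np.sqrt(np.abs(x))
    out = np.zeros(x.shape[1])
    for i in range(x.shape[0]):
        for j in range(i + 1, x.shape[0]):
            out += y[i] * y[j]
    return out

b_das, b_dmas = das(channels), dmas(channels)
print(b_das[100], b_dmas[100])    # both peak near the coherent target
```

Pair products of incoherent noise average toward zero while the coherent pulse reinforces across all pairs, which is the mechanism behind the sidelobe reduction reported for DMAS-type beamformers.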
Two-photon speckle illumination for super-resolution microscopy.
Negash, Awoke; Labouesse, Simon; Chaumet, Patrick C; Belkebir, Kamal; Giovannini, Hugues; Allain, Marc; Idier, Jérôme; Sentenac, Anne
2018-06-01
We present a numerical study of a microscopy setup in which the sample is illuminated with uncontrolled speckle patterns and the two-photon excitation fluorescence is collected on a camera. We show that, using a simple deconvolution algorithm for processing the speckle low-resolution images, this wide-field imaging technique exhibits resolution significantly better than that of two-photon excitation scanning microscopy or one-photon excitation bright-field microscopy.
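The abstract does not specify which "simple deconvolution algorithm" is used. As a hedged illustration only, a basic Wiener-type frequency-domain deconvolution of a single low-resolution image might look like this (the noise-to-signal parameter `nsr` and the OTF convention are assumptions, not details from the paper):

```python
import numpy as np

def wiener_deconvolve(img, otf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution.
    img: blurred 2-D image; otf: optical transfer function, same shape as
    the image FFT; nsr: assumed noise-to-signal power ratio (regularizer)."""
    spec = np.fft.fft2(img)
    filt = np.conj(otf) / (np.abs(otf) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(filt * spec))
```

With `nsr = 0` and a flat OTF the filter is the identity, which is a convenient sanity check; in practice `nsr > 0` is what keeps the inversion stable where the OTF is small.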
NASA Astrophysics Data System (ADS)
Berntsen, Jarle; Alendal, Guttorm; Avlesen, Helge; Thiem, Øyvind
2018-05-01
The flow of dense water along continental slopes is considered. There is a large literature on the topic based on observations and laboratory experiments, as well as many analytical and numerical studies of dense water flows. In particular, there is a sequence of numerical investigations using the dynamics of overflow mixing and entrainment (DOME) setup. In these papers, the sensitivity of the solutions to numerical parameters such as grid size and numerical viscosity coefficients, and to the choices of methods and models, is investigated. In earlier DOME studies, three different bottom boundary conditions and a range of vertical grid sizes are applied. In other parts of the literature on numerical studies of oceanic gravity currents, there are statements that appear to contradict choices made on bottom boundary conditions in some of the DOME papers. In the present study, we therefore address the effects of the bottom boundary condition and vertical resolution in numerical investigations of dense water cascading on a slope. The main finding of the present paper is that the bottom Ekman layer dynamics can be captured adequately and cost-efficiently with a terrain-following model system and a quadratic drag law whose drag coefficient is computed to give near-bottom velocity profiles in agreement with the logarithmic law of the wall. Many studies of dense water flows are performed with a quadratic bottom drag law and a constant drag coefficient. It is shown that with this bottom boundary condition, Ekman drainage is not adequately represented. In other studies of gravity flow, a no-slip bottom boundary condition is applied. With no-slip and very fine resolution near the seabed, the solutions are essentially equal to those obtained with a quadratic drag law and a drag coefficient computed to produce velocity profiles matching the logarithmic law of the wall. However, with coarser resolution near the seabed, there may be a substantial artificial blocking effect when using no-slip.
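Matching the quadratic drag law to the logarithmic law of the wall at the lowest grid point leads to the standard expression C_d = (κ / ln(z_ref/z_0))². A minimal sketch (the variable names and the roughness-length value are illustrative assumptions, not values from the paper):

```python
import math

def drag_coefficient(z_ref, z0, kappa=0.4):
    """Quadratic-law drag coefficient chosen so that the modeled near-bottom
    velocity matches the logarithmic law of the wall.
    z_ref: height of the lowest velocity point above the bed (m)
    z0:    bottom roughness length (m)
    kappa: von Karman constant (~0.4)."""
    return (kappa / math.log(z_ref / z0)) ** 2
```

Note that C_d grows both with bed roughness and as the matching height z_ref moves closer to the bed, which is why a single constant drag coefficient cannot be right across a range of near-bottom grid resolutions.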
NASA Technical Reports Server (NTRS)
Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)
1985-01-01
Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Case, Jonathan L.; Molthan, Andrew L.
2011-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center develops new products and techniques that can be used in operational meteorology. The majority of these products are derived from NASA polar-orbiting satellite imagery from the Earth Observing System (EOS) platforms. One such product is a Greenness Vegetation Fraction (GVF) dataset, which is produced from Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA EOS Aqua and Terra satellites. NASA SPoRT began generating daily real-time GVF composites at 1-km resolution over the Continental United States (CONUS) on 1 June 2010. The purpose of this study is to compare the National Centers for Environmental Prediction (NCEP) climatology GVF product (currently used in operational weather models) to the SPoRT-MODIS GVF during June to October 2010. The NASA Land Information System (LIS) was employed to study the impacts of the new SPoRT-MODIS GVF dataset on land surface models apart from a full numerical weather prediction (NWP) model. For the 2010 warm season, the SPoRT GVF in the western portion of the CONUS was generally higher than the NCEP climatology. The eastern CONUS GVF had variations both above and below the climatology during the period of study. These variations in GVF led to direct impacts on the rates of heating and evaporation from the land surface. The second phase of the project is to examine the impacts of the SPoRT GVF dataset on NWP using the Weather Research and Forecasting (WRF) model. Two separate WRF model simulations were made for individual severe weather case days using the NCEP GVF (control) and SPoRT GVF (experimental), with all other model parameters remaining the same. Based on the sensitivity results in these case studies, regions with higher GVF in the SPoRT model runs had higher evapotranspiration and lower direct surface heating, which typically resulted in lower (higher) predicted 2-m temperatures (2-m dewpoint temperatures). 
The opposite was true for areas with lower GVF in the SPoRT model runs. These differences in heating and evaporation rates produced subtle yet quantifiable differences in the simulated convective precipitation for each severe weather case examined.
Quality evaluation of pansharpened hyperspectral images generated using multispectral images
NASA Astrophysics Data System (ADS)
Matsuoka, Masayuki; Yoshioka, Hiroki
2012-11-01
Hyperspectral remote sensing can provide a smooth spectral curve of a target by using a set of detectors with higher spectral resolution. The spatial resolution of hyperspectral images, however, is generally much lower than that of multispectral images due to the lower energy of the incident radiation. Pansharpening is an image-fusion technique that generates higher-spatial-resolution multispectral images by combining lower-resolution multispectral images with higher-resolution panchromatic images. In this study, higher-resolution hyperspectral images were generated by pansharpening simulated lower-resolution hyperspectral data with higher-resolution multispectral data. The spectral and spatial qualities of the pansharpened images were then assessed in relation to the spectral bands of the multispectral images. Airborne hyperspectral data from AVIRIS were used and pansharpened using six methods. Quantitative evaluation of the pansharpened images was performed using two frequently used indices, ERGAS and the Q index.
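For reference, the ERGAS index mentioned above has a standard definition, ERGAS = 100 · (h/l) · sqrt((1/N) Σ_k (RMSE_k/μ_k)²), where h/l is the ratio of high- to low-resolution pixel sizes and μ_k is the mean of band k. A sketch under the assumption that images are stored as (bands, rows, cols) arrays:

```python
import numpy as np

def ergas(reference, fused, ratio):
    """ERGAS (relative dimensionless global error in synthesis).
    reference, fused: arrays of shape (bands, rows, cols)
    ratio: high-resolution pixel size / low-resolution pixel size (h/l)."""
    bands = reference.shape[0]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((reference[k] - fused[k]) ** 2))
        acc += (rmse / reference[k].mean()) ** 2   # band-wise relative error
    return 100.0 * ratio * np.sqrt(acc / bands)
```

A perfect fusion gives ERGAS = 0; lower values indicate better spectral fidelity.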
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, from second- to twelfth-order (3- to 13-point stencils), and dispersion-relation-preserving (DRP) schemes, from 7- to 13-point stencils, are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow, using four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly. The high-resolution numerics allow the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics, and turbulent spectra are reported. Good agreement with experimental data is shown for the mean flow and first-order turbulent statistics.
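The central-difference schemes mentioned above all follow the same template: a symmetric stencil of coefficients applied across each grid point. As a sketch, here is the classical 5-point fourth-order first-derivative stencil on a periodic grid (the optimized 13-point DRP coefficients of Bogey and Bailly trade formal order for low dispersion error and are tabulated in their paper, not reproduced here):

```python
import numpy as np

def central_diff(u, dx, coeffs=(1/12, -2/3, 0, 2/3, -1/12)):
    """First derivative of a periodic 1-D field u with grid spacing dx.
    coeffs are the stencil weights for offsets -2, -1, 0, +1, +2;
    the defaults give the standard fourth-order central scheme."""
    du = np.zeros_like(u)
    half = len(coeffs) // 2
    for m, c in enumerate(coeffs):
        # np.roll(u, half - m) aligns u[i + (m - half)] with index i
        du += c * np.roll(u, half - m)
    return du / dx
```

Higher-order (or DRP) variants differ only in the length of `coeffs` and the weight values, which is what makes the accuracy user-selectable in a code like the one described.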
VizieR Online Data Catalog: Abundances in the local region. II. F, G, and K dwarfs (Luck+, 2017)
NASA Astrophysics Data System (ADS)
Luck, R. E.
2017-06-01
The McDonald Observatory 2.1-m Telescope and Sandiford Cassegrain Echelle Spectrograph provided much of the observational data for this study. High-resolution spectra were obtained during numerous observing runs from 1996 to 2010. The spectra cover a continuous wavelength range from about 484 to 700nm, with a resolving power of about 60000. The wavelength range used demands two separate observations--one centered at about 520nm, and the other at about 630nm. Typical S/N values per pixel for the spectra are more than 150. Spectra of 57 dwarfs were obtained using the Hobby-Eberly Telescope and High-Resolution Spectrograph. The spectra have a resolution of 30000, spanning the wavelength range of 400 to 785nm. They also have very high signal-to-noise ratios, >300 per resolution element in numerous cases. The last set of spectra was obtained from the ELODIE Archive (Moultaka et al. 2004PASP..116..693M). These spectra are fully processed, including order co-addition, and have a continuous wavelength span of 400 to 680nm and a resolution of 42000. The ELODIE spectra utilized here all have S/N>75 per pixel. (6 data files).
Reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning.
Song, Ying; Zhu, Zhen; Lu, Yang; Liu, Qiegen; Zhao, Jun
2014-03-01
To improve the magnetic resonance imaging (MRI) data acquisition speed while maintaining the reconstruction quality, a novel method is proposed for multislice MRI reconstruction from undersampled k-space data based on compressed-sensing theory using dictionary learning. There are two aspects to improve the reconstruction quality. One is that spatial correlation among slices is used by extending the atoms in dictionary learning from patches to blocks. The other is that the dictionary-learning scheme is used at two resolution levels; i.e., a low-resolution dictionary is used for sparse coding and a high-resolution dictionary is used for image updating. Numerical experiments are carried out on in vivo 3D MR images of brains and abdomens with a variety of undersampling schemes and ratios. The proposed method (dual-DLMRI) achieves better reconstruction quality than conventional reconstruction methods, with the peak signal-to-noise ratio being 7 dB higher. The advantages of the dual dictionaries are obvious compared with the single dictionary. Parameter variations ranging from 50% to 200% only bias the image quality within 15% in terms of the peak signal-to-noise ratio. Dual-DLMRI effectively uses the a priori information in the dual-dictionary scheme and provides dramatically improved reconstruction quality. Copyright © 2013 Wiley Periodicals, Inc.
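The peak signal-to-noise ratio used as a quality metric above has the standard definition PSNR = 10 · log10(peak² / MSE). A minimal sketch (the peak value is an assumption about the image normalization):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction; peak is the maximum possible pixel value."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A "7 dB higher" PSNR, as reported for dual-DLMRI, corresponds to roughly a five-fold reduction in mean squared error, since 10·log10(5) ≈ 7.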
NASA Astrophysics Data System (ADS)
Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.
2017-12-01
The rugged topography of the Andes Cordillera, along with strong coastal upwelling, strong sea surface temperature (SST) gradients, and extensive but geometrically thin stratocumulus decks, makes the Southeast Pacific (SEP) a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are 0.1° ocean with 0.5° atmosphere, and 1.125° ocean with 0.9° atmosphere. The SST error growth in the first six days of integration (fast errors) and the errors resulting from model drift (saturated errors) are assessed and compared to evaluate the model processes responsible for SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3°C) near the continental margins and negative offshore (-0.1°C). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds, and a reduction of latent heat flux. The saturated errors have a similar spatial pattern but are larger and more spatially concentrated. This suggests that the processes driving the errors become established within the first week, in contrast to the low-resolution simulations. These, instead, show too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than in the high-resolution one due to higher cloud cover. Throughout the integration, saturated SST errors become positive and can reach values up to +4°C, accompanied by damped upwelling and a decrease in cloud cover. The high- and low-resolution models show notable differences in how SST error variability drives atmospheric changes, in particular because the high-resolution model is sensitive to upwelling regions, allowing it to resolve cloud heights and establish different radiative feedbacks.
NASA Astrophysics Data System (ADS)
Nunes, Ana
2015-04-01
Extreme meteorological events have played an important role in catastrophic occurrences observed in the past over densely populated areas of Brazil. This motivated the proposal of an integrated system for the analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach centered on a regional modeling system, consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and the assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, allowing the precipitation assimilation procedure to correct moisture convergence, in order to reconstruct the current climate of South America within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually provide improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. The reconstruction of past extreme events also supports the development of a decision-making system for natural and social disasters and the reduction of their impacts. Numerical experiments using this regional modeling system successfully reproduced severe weather events in Brazil. Comparisons with NCEP Climate Forecast System Reanalysis outputs were made at regional climate model resolutions of about 40 and 25 km.
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency, as well as enhanced reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime is a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constrains the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element methods. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose time stepsizes small enough that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
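Local time-stepsize control based on an embedded pair, as described above, typically follows the standard controller dt_new = dt · safety · (tol/err)^(1/(p+1)), where err is the a posteriori local error estimate and p the lower order of the pair. A generic sketch (the safety factor and clipping bounds are conventional choices, not values from the paper):

```python
def adapt_step(dt, err, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Propose the next time stepsize from a local error estimate.
    dt: current stepsize; err: estimated local error; tol: tolerance;
    order: lower order p of the embedded pair. The factor is clipped to
    [fac_min, fac_max] to avoid erratic stepsize changes."""
    if err > 0.0:
        fac = safety * (tol / err) ** (1.0 / (order + 1))
    else:
        fac = fac_max  # error estimate vanished: allow maximal growth
    return dt * min(fac_max, max(fac_min, fac))
```

If `err > tol`, the same formula yields `fac < 1`, so the step is shrunk (and in a full integrator the rejected step would be recomputed).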
Magnetic resonance imaging of the inner ear by using a hybrid radiofrequency coil at 7 T
NASA Astrophysics Data System (ADS)
Kim, Kyoung-Nam; Heo, Phil; Kim, Young-Bo; Han, Gyu-Cheol
2015-01-01
Visualization of the membranous structures of the inner ear has been limited to the detection of the normal fluid signal intensity within the bony labyrinth by using magnetic resonance imaging (MRI) equipped with a 1.5 Tesla (T) magnet. High-field (HF) MRI has been available for more than a decade, and numerous studies have documented its significant advantages over conventional MRI with regard to basic scientific research and routine clinical assessment. No previous studies of the inner ear using HF MRI have been reported, in part because high-quality resolution of mastoid pneumatization is challenging due to artifacts generated in the HF environment and the insufficient performance of radiofrequency (RF) coils. Therefore, a hybrid RF coil with integrated circuitry was developed at 7 T and targeted for anatomical imaging to achieve a high-resolution image of the structure of the human inner ear, excluding the bony portion. The inner ear's structure is composed of soft tissues containing hydrogen ions and includes the membranous labyrinth, endolymphatic space, perilymphatic space, and cochlear-vestibular nerves. Visualization of the inner ear's anatomy was performed in vivo with a custom-designed hybrid RF coil and a specific imaging protocol based on an interpolated breath-held examination sequence. The comparative signal intensity at 30 mm from the side of the phantom was 88% higher for the hybrid RF coil, and 24% higher for the 8-channel transmit/receive (Tx/Rx) coil, than for the commercial birdcage coil. The optimized MRI protocol employed the hybrid RF coil because it enabled high-resolution imaging of the inner ear's anatomy and accurate mapping of structures including the cochlea and the semicircular canals. These results indicate that 7 T MRI achieves high-spatial-resolution visualization of the inner ear's anatomy. Therefore, MRI using a hybrid RF coil at 7 T could provide a powerful tool for clinical investigation of petrous pathologies of the inner ear.
NASA Astrophysics Data System (ADS)
Naidu, S.; Benner, L.; Brozovic, M.; Giorgini, J. D.; Jao, J. S.; Lee, C. G.; Busch, M.; Ghigo, F. D.; Ford, A.; Kobelski, A.; Marshall, S.
2015-12-01
We present new results from bistatic Goldstone to Green Bank Telescope (GBT) high-resolution radar imaging of near-Earth asteroids (NEAs). Previously, most radar observations used either the 305-m Arecibo radar or the 70-m DSS-14 radar at Goldstone. Following the installation of new data-taking equipment at the GBT in late 2014, the number of bistatic Goldstone/GBT observations has increased substantially. Receiving Goldstone radar echoes at the 100-m GBT improves the signal-to-noise ratios (SNRs) two- to three-fold relative to monostatic reception at DSS-14. The higher SNRs allow us to obtain higher resolution images than is possible with DSS-14 both transmitting and receiving. Thus far in 2015, we have used the GBT receiver in combination with the 450-kW DSS-14 antenna and a new low-power 80-kW transmitter on the 34-m DSS-13 antenna at the Goldstone complex to image five and two NEAs, respectively. Asteroids 2005 YQ96, 2004 BL86, and 1994 AW1 are binary systems. 2011 UW158 has a spin period of 36 minutes, which is unusually fast among asteroids of its size (~500 m). 1999 JD6 is a deeply bifurcated double-lobed object. 2015 HM10 is an elongated 80-m asteroid with a spin period of 22 minutes. Our best images of these objects resolve the surface with resolutions of 3.75 m and reveal numerous features. Such images are useful for estimating the 3D shape, spin state, and other physical and dynamical properties of the objects. This knowledge is of particular interest for spacecraft mission planning, impact threat assessment, and resource utilization. Over the long term, such observations will help answer fundamental questions regarding the origin of the diversity in asteroid morphologies, the importance of spin-up mechanisms and collisional influences, the interior structure and thermal properties of asteroids, and the variety of dynamical states.
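The quoted two- to three-fold SNR improvement is consistent with the scaling of received echo power with the collecting area of the receiving antenna. A back-of-the-envelope sketch, ignoring differences in system temperature and aperture efficiency (which also matter in practice):

```python
def relative_snr(d_rx_new_m, d_rx_old_m):
    """Approximate SNR ratio when only the receiving antenna changes:
    received power scales with the effective collecting area, i.e. with
    the dish diameter squared (transmit side held fixed)."""
    return (d_rx_new_m / d_rx_old_m) ** 2
```

For the 100-m GBT versus the 70-m DSS-14 this gives about a factor of 2, at the lower end of the quoted range; system-temperature differences plausibly account for the rest.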
NASA Astrophysics Data System (ADS)
Tarquini, S.; Nannipieri, L.; Favalli, M.; Fornaciai, A.; Vinci, S.; Doumaz, F.
2012-04-01
Digital elevation models (DEMs) are fundamental in any kind of environmental or morphological study. DEMs are obtained from a variety of sources and generated in several ways. Nowadays, a few global-coverage elevation datasets are available for free (e.g., SRTM, http://www.jpl.nasa.gov/srtm; ASTER, http://asterweb.jpl.nasa.gov/). When the matrix of a DEM is also used for computational purposes, the choice of the elevation dataset that best suits the target of the study is crucial. Recently, the increasing use of DEM-based numerical simulation tools (e.g., for gravity-driven mass flows) has created a demand for topography of higher resolution and higher accuracy than is available at planetary scale. Such elevation datasets are neither easily nor freely available for all countries worldwide. Here we introduce a new web resource that makes freely available (for research purposes only) a 10-m-resolution DEM for the whole Italian territory. The creation of this elevation dataset was presented by Tarquini et al. (2007). The DEM was obtained in triangular irregular network (TIN) format starting from heterogeneous vector datasets, mostly consisting of elevation contour lines and elevation points derived from several sources. The input vector database was carefully cleaned to obtain an improved seamless TIN, refined using the DEST algorithm to improve the Delaunay tessellation. The whole TINITALY/01 DEM was converted to grid format (10-m cell size) according to a tiled structure composed of 193 square tiles, each 50 km on a side. The grid database consists of more than 3 billion cells and occupies almost 12 GB of disk space. A web GIS has been created (http://tinitaly.pi.ingv.it/) where a seamless layer of full-resolution (10 m) images obtained from the whole DEM (in both color-shaded and anaglyph mode) is open for browsing. Accredited users are allowed to download the elevation dataset.
NASA Astrophysics Data System (ADS)
Bastin, Sophie; Champollion, Cédric; Bock, Olivier; Drobinski, Philippe; Masson, Frédéric
2005-03-01
Global Positioning System (GPS) tomography analyses of water vapor, complemented by high-resolution numerical simulations are used to investigate a Mistral/sea breeze event in the region of Marseille, France, during the ESCOMPTE experiment. This is the first time GPS tomography has been used to validate the three-dimensional water vapor concentration from numerical simulation, and to analyze a small-scale meteorological event. The high spatial and temporal resolution of GPS analyses provides a unique insight into the evolution of the vertical and horizontal distribution of water vapor during the Mistral/sea-breeze transition.
Dynamic Moss Observed with Hi-C
NASA Technical Reports Server (NTRS)
Alexander, Caroline; Winebarger, Amy; Morton, Richard; Savage, Sabrina
2014-01-01
The High-resolution Coronal Imager (Hi-C), flown on 11 July 2012, has revealed an unprecedented level of detail and substructure within the solar corona. Hi-C imaged a large active region (AR11520) with 0.2-0.3'' spatial resolution and 5.5-s cadence over a 5-minute period. An additional dataset with a smaller FOV and the same spatial resolution, but with a higher temporal cadence (1 s), was also taken during the rocket flight. This dataset was centered on a large patch of 'moss' emission that initially seemed to show very little variability. Image processing revealed this region to be much more dynamic than first thought, with numerous bright and dark features observed to appear, move, and disappear over the 5-minute observation. Moss is thought to be emission from the upper-transition-region component of hot loops, so studying its dynamics and the relation between the bright/dark features and the underlying magnetic features is important for tying together the interaction of the different atmospheric layers. Hi-C allows us to study the coronal emission of the moss at the smallest scales, while data from SDO/AIA and HMI are used to give information on these structures at different heights/temperatures. Using the high temporal and spatial resolution of Hi-C, the observed moss features were tracked, and the distributions of displacements, speeds, and sizes were measured. This allows us to comment on both the physical processes occurring within the dynamic moss and the scales at which these changes occur.
Can we trust climate models to realistically represent severe European windstorms?
NASA Astrophysics Data System (ADS)
Trzeciak, Tomasz M.; Knippertz, Peter; Pirret, Jennifer S. R.; Williams, Keith D.
2016-06-01
Cyclonic windstorms are one of the most important natural hazards for Europe, but robust climate projections of the position and strength of the North Atlantic storm track are not yet possible, posing significant risks to European societies and the (re)insurance industry. Previous studies addressing the problem of climate model uncertainty through statistical comparisons of simulations of the current climate with (re-)analysis data show large disagreement between different climate models, between different ensemble members of the same model, and with observed climatologies of intense cyclones. One weakness of such evaluations lies in the difficulty of separating the influence of the climate model's basic state from the influence of fast processes on the development of the most intense storms, which could create compensating effects and therefore suggest higher reliability than there really is. This work aims to shed new light on this problem through a cost-effective "seamless" approach of hindcasting 20 historical severe storms with two global climate models, ECHAM6 and the GA4 configuration of the Met Office Unified Model, run in numerical weather prediction mode with different lead times and different horizontal and vertical resolutions. These runs are then compared to re-analysis data.
The main conclusions from this work are: (a) objectively identified cyclone tracks are represented satisfactorily by most hindcasts; (b) sensitivity to vertical resolution is low; (c) cyclone depth is systematically under-predicted at the coarse resolution of T63 by both climate models; (d) no systematic bias is found at the higher resolution of T127 out to about three days, demonstrating that climate models are in fact able to represent the complex dynamics of explosively deepening cyclones well, if given the correct initial conditions; (e) an analysis using a recently developed diagnostic tool based on the surface pressure tendency equation points to diabatic processes that are too weak, mainly latent heating, as the main source of the under-prediction in the coarse-resolution runs. Finally, an interesting implication of these results is that the low number of deep cyclones in many free-running climate simulations may be related to an insufficient number of storm-prone initial conditions. This question will be addressed in future work.
Faithful replication of grating patterns in polymer through electrohydrodynamic instabilities
NASA Astrophysics Data System (ADS)
Li, H.; Yu, W.; Wang, T.; Zhang, H.; Cao, Y.; Abraham, E.; Desmulliez, M. P. Y.
2014-07-01
Electrohydrodynamic instability patterning (EHDIP) has attracted a great deal of attention over the past decade as an alternative patterning method. This article demonstrates the faithful transfer of high-aspect-ratio patterns onto a polymer film via electrohydrodynamic instabilities for a given patterned grating mask. We perform a simple mathematical analysis to determine the influence of the process parameters on the pressure difference ΔP. Numerical simulation demonstrates that thick films subject to large electric fields are essential for faithful replication. In particular, the influence of the material properties of the polymer on pattern replication is discussed in detail. It is found that, to achieve smaller periodic patterns at higher resolution, a film with a larger dielectric constant and a smaller surface tension should be chosen. In addition, an ideal replication of the mask pattern with a short evolution time is possible by reducing the viscosity of the polymer liquid. Finally, pattern-replication experiments with and without defects are presented for comparison with the numerical simulations. The experiments are in good agreement with the simulation results, confirming that numerical simulation provides an effective way to predict faithful replication.
Multi-slice ptychography with large numerical aperture multilayer Laue lenses
Ozturk, Hande; Yan, Hanfei; He, Yan; ...
2018-05-09
Here, the highly convergent x-ray beam focused by multilayer Laue lenses with large numerical apertures is used as a three-dimensional (3D) probe to image layered structures with an axial separation larger than the depth of focus. Instead of collecting weakly scattered high-spatial-frequency signals, the depth-resolving power is provided purely by the intense central cone diverged from the focused beam. Using the multi-slice ptychography method combined with the on-the-fly scan scheme, two layers of nanoparticles separated by 10 μm are successfully reconstructed with 8.1 nm lateral resolution and with a dwell time as low as 0.05 s per scan point. This approach obtains high-resolution images with extended depth of field, which paves the way for multi-slice ptychography as a high-throughput technique for high-resolution 3D imaging of thick samples.
On the application of subcell resolution to conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Chang, Shih-Hung
1989-01-01
LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling reacting flow problems and discovered that, in the very stiff case, most of the current finite difference methods developed for non-reacting flows produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used, and time evolutions are done by advancing along the characteristics. Numerical experiments using this scheme show excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
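The splitting framework referred to above can be illustrated with a minimal sketch (not the ENO/SRCD scheme itself): first-order upwind advection combined with Strang time-splitting of a LeVeque-Yee-type source term s(u) = -μu(u-1)(u-1/2). The grid size, μ, and the explicit substepping of the source ODE are illustrative assumptions.

```python
import numpy as np

def advect_upwind(u, dt, dx):
    # first-order upwind step for u_t + u_x = 0 on a periodic grid
    return u - dt / dx * (u - np.roll(u, 1))

def source_step(u, dt, mu, nsub=50):
    # integrate the stiff reaction ODE u' = -mu*u*(u-1)*(u-1/2)
    # (the LeVeque-Yee model source) with small explicit substeps
    h = dt / nsub
    for _ in range(nsub):
        u = u + h * (-mu * u * (u - 1.0) * (u - 0.5))
    return u

def strang_step(u, dt, dx, mu):
    # Strang splitting: half source step, full advection step, half source step
    u = source_step(u, 0.5 * dt, mu)
    u = advect_upwind(u, dt, dx)
    return source_step(u, 0.5 * dt, mu)

# advect a step profile with a moderately stiff source
nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u = np.where(x < 0.3, 1.0, 0.0)
dt = 0.5 * dx                      # CFL number 0.5
for _ in range(100):
    u = strang_step(u, dt, dx, mu=100.0)
# the source keeps u pinned near the stable states 0 and 1 while the
# discontinuity propagates; with very stiff mu the captured front can
# move at a spurious speed, which is the failure mode the paper addresses
```

The sketch reproduces the structure of the method (splitting plus a stiff reaction solve); the ENO reconstruction, subcell resolution, and characteristic tracing of the actual scheme are not shown.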
NASA Astrophysics Data System (ADS)
Weng, Jiawen; Clark, David C.; Kim, Myung K.
2016-05-01
A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to the ASM, reconstruction by CS can improve the axial resolution of SIDH and achieve sectional imaging. The proposed method may be useful for the 3D analysis of dynamic systems.
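The abstract does not specify the recovery algorithm; as a generic stand-in, a small iterative soft-thresholding (ISTA) sparse-recovery sketch shows the kind of CS reconstruction step involved. The random sensing matrix and all sizes below are illustrative assumptions, not the SIDH operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical small sparse-recovery problem standing in for the SIDH
# sensing operator: measurements y = A @ x_true with x_true sparse
n, m, k = 128, 64, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)
y = A @ x_true

def ista(A, y, lam=0.01, n_iter=2000):
    # iterative soft-thresholding (ISTA): a basic l1-regularized recovery
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))   # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
# x_hat recovers the sparse object from fewer measurements than unknowns
```

Any proximal-gradient or greedy solver could play the same role; the point is only that a sparsity prior lets an underdetermined sensing operator be inverted.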
A Species-Level Phylogeny of Extant Snakes with Description of a New Colubrid Subfamily and Genus.
Figueroa, Alex; McKelvy, Alexander D; Grismer, L Lee; Bell, Charles D; Lailvaux, Simon P
2016-01-01
With over 3,500 species encompassing a diverse range of morphologies and ecologies, snakes make up 36% of squamate diversity. Despite several attempts at estimating higher-level snake relationships and numerous assessments of generic- or species-level phylogenies, a large-scale species-level phylogeny focusing solely on snakes has not been completed. Here, we provide the largest-yet estimate of the snake tree of life using maximum likelihood on a supermatrix of 1745 taxa (1652 snake species + 7 outgroup taxa) and 9,523 base pairs from 10 loci (5 nuclear, 5 mitochondrial), including previously unsequenced genera (2) and species (61). Increased taxon sampling resulted in a phylogeny with a new higher-level topology and corroborated many lower-level relationships, strengthened by high nodal support values (> 85%) down to the species level (73.69% of nodes). Although the majority of families and subfamilies were strongly supported as monophyletic with > 88% support values, some families and numerous genera were paraphyletic, primarily due to limited taxon and locus sampling leading to a sparse supermatrix and minimal sequence overlap between some closely related taxa. With all rogue taxa and incertae sedis species eliminated, higher-level relationships and support values remained relatively unchanged, except in five problematic clades. Our analyses resulted in new topologies at higher and lower levels; resolved several previous topological issues; established novel paraphyletic affiliations; designated a new subfamily, Ahaetuliinae, for the genera Ahaetulla, Chrysopelea, Dendrelaphis, and Dryophiops; and assigned Hemerophis (Coluber) zebrinus to a new genus, Mopanveldophis. Although we provide insight into some distinguished problematic nodes, at the deeper phylogenetic scale, resolution of these nodes may require sampling of more slowly evolving nuclear genes.
Three-axis digital holographic microscopy for high speed volumetric imaging.
Saglimbeni, F; Bianchi, S; Lepore, A; Di Leonardo, R
2014-06-02
Digital holographic microscopy allows three-dimensional information encoded in a single 2D snapshot of the coherent superposition of a reference and a scattered beam to be retrieved numerically. Since no mechanical scans are involved, holographic techniques have a superior performance in terms of achievable frame rates. Unfortunately, numerical reconstruction of the scattered field by back-propagation leads to poor axial resolution. Here we show that overlapping the three numerical reconstructions obtained from tilted red, green and blue beams greatly improves the axial resolution and sectioning capabilities of holographic microscopy. A strong reduction in the coherent background noise is also observed when combining the volumetric reconstructions of the light fields at the three different wavelengths. We discuss the performance of our technique with two test objects: an array of four glass beads stacked along the optical axis and a freely diffusing rod-shaped E. coli bacterium.
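The numerical back-propagation step mentioned above is commonly implemented with the FFT-based angular spectrum method; a minimal sketch follows (the grid size, wavelength, and propagation distance are illustrative assumptions).

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (z < 0 back-propagates)
    using the scalar angular spectrum method; evanescent waves are dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = (1.0 / wavelength) ** 2 - fxx**2 - fyy**2   # (k_z / 2*pi)^2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# simple demo: a circular aperture propagated forward, then back-propagated
n, dx, wavelength, z = 64, 1e-6, 633e-9, 50e-6
coords = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(coords, coords, indexing="ij")
aperture = (np.hypot(xx, yy) < 10e-6).astype(complex)
forward = angular_spectrum(aperture, wavelength, dx, z)
recovered = angular_spectrum(forward, wavelength, dx, -z)
# recovered equals the aperture up to the dropped evanescent components
```

Back-propagating a hologram this way refocuses each axial plane, but, as the abstract notes, a single wavelength gives poor axial discrimination; the paper's contribution is to combine three such reconstructions from tilted RGB beams.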
Effects of sounding temperature assimilation on weather forecasting - Model dependence studies
NASA Technical Reports Server (NTRS)
Ghil, M.; Halem, M.; Atlas, R.
1979-01-01
In comparing various methods for the assimilation of remote sounding information into numerical weather prediction (NWP) models, the problem of the model dependence of the different results obtained becomes important. The paper investigates two aspects of the model dependence question: (1) the effect of increasing horizontal resolution within a given model on the assimilation of sounding data, and (2) the effect of using two entirely different models with the same assimilation method and sounding data. The tentative conclusions reached are: first, that model improvement, as exemplified by increased resolution, can act in the same direction as judicious 4-D assimilation of remote sounding information to improve 2-3 day numerical weather forecasts; second, that the time-continuous 4-D methods developed at GLAS have similar beneficial effects when used in the assimilation of remote sounding information into NWP models with very different numerical and physical characteristics.
A priori and a posteriori analysis of the flow around a rectangular cylinder
NASA Astrophysics Data System (ADS)
Cimarelli, A.; Leonforte, A.; Franciolini, M.; De Angelis, E.; Angeli, D.; Crivellini, A.
2017-11-01
The definition of a correct mesh resolution and modelling approach for the Large Eddy Simulation (LES) of the flow around a rectangular cylinder is recognized to be a rather elusive problem, as shown by the large scatter of LES results present in the literature. In the present work, we address this issue by performing an a priori analysis of Direct Numerical Simulation (DNS) data of the flow. This approach allows us to measure the ability of the LES field to reproduce the main flow features as a function of the resolution employed. Based on these results, we define a mesh resolution that balances the competing needs of reducing the computational cost and of adequately resolving the flow dynamics. The effectiveness of the proposed resolution is then verified by means of an a posteriori analysis of actual LES data obtained with the implicit LES approach given by the numerical properties of the Discontinuous Galerkin spatial discretization technique. The present work represents a first step towards a best practice for LES of separating and reattaching flows.
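The a priori idea, filtering DNS data to a candidate LES resolution and measuring how much of the flow survives, can be caricatured in one dimension; the synthetic "DNS" field, its spectrum, and the top-hat filter widths below are all illustrative assumptions.

```python
import numpy as np

def box_filter(u, width):
    # periodic top-hat filter of odd width (in cells), mimicking an LES mesh
    half = width // 2
    return sum(np.roll(u, s) for s in range(-half, half + 1)) / (2 * half + 1)

# synthetic "DNS" field with a decaying energy spectrum
rng = np.random.default_rng(1)
n = 256
spectrum = np.exp(-np.arange(n // 2 + 1) / 8.0)
u = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * spectrum)

# resolved-energy fraction at increasingly coarse "mesh resolutions"
fractions = [np.sum(box_filter(u, w) ** 2) / np.sum(u ** 2) for w in (3, 9, 27)]
# fractions decreases: a coarser mesh resolves less of the flow energy
```

The actual study applies this logic to 3D DNS fields of the cylinder flow, choosing the coarsest resolution whose filtered field still reproduces the flow features of interest.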
Micro-computed tomography pore-scale study of flow in porous media: Effect of voxel resolution
NASA Astrophysics Data System (ADS)
Shah, S. M.; Gray, F.; Crawshaw, J. P.; Boek, E. S.
2016-09-01
A fundamental understanding of flow in porous media at the pore scale is necessary to be able to upscale average displacement processes from core to reservoir scale. The study of fluid flow in porous media at the pore scale consists of two key procedures: imaging, the reconstruction of three-dimensional (3D) pore space images; and modelling, such as single- and two-phase flow simulations with Lattice-Boltzmann (LB) or Pore-Network (PN) methods. Here we analyse pore-scale results to predict petrophysical properties such as porosity, single-phase permeability and multi-phase properties at different length scales. The fundamental issue is to understand the image resolution dependency of transport properties, in order to upscale the flow physics from pore to core scale. In this work, we use a high-resolution micro-computed tomography (micro-CT) scanner to image and reconstruct three-dimensional pore-scale images of five sandstones (Bentheimer, Berea, Clashach, Doddington and Stainton) and five complex carbonates (Ketton, Estaillades, Middle Eastern sample 3, Middle Eastern sample 5 and Indiana Limestone 1) at four different voxel resolutions (4.4 μm, 6.2 μm, 8.3 μm and 10.2 μm), scanning the same physical field of view. Implementing three-phase segmentation (macro-pore phase, intermediate phase and grain phase) on the pore-scale images helps to understand the importance of connected macro-porosity to fluid flow in the samples studied. We then compute the petrophysical properties for all the samples using PN and LB simulations in order to study the influence of voxel resolution on these properties. Finally, we introduce a numerical coarsening scheme which is used to coarsen the high voxel resolution image (4.4 μm) to lower resolutions (6.2 μm, 8.3 μm and 10.2 μm) and study the impact of coarsening the data on macroscopic and multi-phase properties.
Numerical coarsening of high-resolution data is found to be superior to using a lower-resolution scan because it avoids the problem of partial volume effects and reduces the scaling effect by preserving the pore-space properties that influence transport. This is demonstrated by comparing several pore-network properties, such as the number of pores and throats, the average pore and throat radius, and the coordination number, between the scan-based and numerically coarsened data.
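The paper's coarsening scheme is not specified in the abstract; the simplest version of the idea is block-averaging of the voxel image, sketched below for an integer coarsening factor (the actual 4.4 μm to 6.2 μm step implies non-integer resampling, so this is only an illustration).

```python
import numpy as np

def coarsen(volume, factor):
    """Block-average a 3D voxel image by an integer factor per axis,
    emulating a lower scan resolution from high-resolution data."""
    nx, ny, nz = (s - s % factor for s in volume.shape)
    v = volume[:nx, :ny, :nz].astype(float)
    return v.reshape(nx // factor, factor,
                     ny // factor, factor,
                     nz // factor, factor).mean(axis=(1, 3, 5))

# synthetic binary pore-space image: 1 = pore, 0 = grain
rng = np.random.default_rng(42)
pores = (rng.random((60, 60, 60)) < 0.2).astype(float)
coarse = coarsen(pores, 2)   # e.g. 4.4 um voxels -> 8.8 um voxels
# block-averaging preserves porosity (the mean voxel value) exactly
```

Unlike an actual low-resolution scan, each coarse voxel here is an exact average of known high-resolution voxels, which is why partial-volume ambiguity at pore-grain boundaries is avoided.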
38 CFR 4.86 - Exceptional patterns of hearing impairment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... determine the Roman numeral designation for hearing impairment from either Table VI or Table VIa, whichever results in the higher numeral. That numeral will then be elevated to the next higher Roman numeral. Each...
Mariappan, Leo; Hu, Gang; He, Bin
2014-02-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality that reconstructs the electrical conductivity of biological tissue from acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm for performing a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with ∼1.5 mm spatial resolution, corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI in a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI in a high static magnetic field environment provides improved imaging contrast for biological tissue conductivity reconstruction.
NASA Astrophysics Data System (ADS)
Kumpová, I.; Vavřík, D.; Fíla, T.; Koudelka, P.; Jandejsek, I.; Jakůbek, J.; Kytýř, D.; Zlámal, P.; Vopálenský, M.; Gantar, A.
2016-02-01
To overcome certain limitations of contemporary materials used for bone tissue engineering, such as inflammatory response after implantation, a whole new class of materials based on polysaccharide compounds is being developed. Here, nanoparticulate bioactive-glass-reinforced gellan gum (GG-BAG) has recently been proposed for the production of bone scaffolds. This material offers promising biocompatibility properties, including bioactivity and biodegradability, with the possibility of producing scaffolds with directly controlled microgeometry. However, to utilize such a scaffold with application-optimized properties, large sets of complex numerical simulations using the real microgeometry of the material have to be carried out during the development process. Because GG-BAG is a material with intrinsically very low attenuation to X-rays, its radiographical imaging, including tomographical scanning and reconstruction, at the resolution required by numerical simulations can be a very challenging task. In this paper, we present a study on X-ray imaging of GG-BAG samples. High-resolution volumetric images of the investigated specimens were generated on the basis of micro-CT measurements using a large-area flat-panel detector and a large-area photon-counting detector. The photon-counting detector was composed of a 10 × 1 matrix of Timepix edgeless silicon pixelated detectors with tiling based on overlaying rows (i.e. assembled so that no gap is present between individual rows of detectors). We compare the results from both detectors with scanning electron microscopy on selected slices in the transversal plane. It is shown that the photon-counting detector can provide approximately 3× better resolution of detail in low-attenuating materials than the integrating flat-panel detector. We demonstrate that employment of a large-area photon-counting detector is a good choice for imaging low-attenuating materials at a resolution sufficient for numerical simulations.
Numerical modeling of landslide-generated tsunami using adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Wilson, Cian; Collins, Gareth; Desousa Costa, Patrick; Piggott, Matthew
2010-05-01
Landslides impacting into or occurring under water generate waves, which can have devastating environmental consequences. Depending on the characteristics of the landslide the waves can have significant amplitude and potentially propagate over large distances. Linear models of classical earthquake-generated tsunamis cannot reproduce the highly nonlinear generation mechanisms required to accurately predict the consequences of landslide-generated tsunamis. Also, laboratory-scale experimental investigation is limited to simple geometries and short time-scales before wave reflections contaminate the data. Computational fluid dynamics models based on the nonlinear Navier-Stokes equations can simulate landslide-tsunami generation at realistic scales. However, traditional chessboard-like structured meshes introduce superfluous resolution and hence the computing power required for such a simulation can be prohibitively high, especially in three dimensions. Unstructured meshes allow the grid spacing to vary rapidly from high resolution in the vicinity of small scale features to much coarser, lower resolution in other areas. Combining this variable resolution with dynamic mesh adaptivity allows such high resolution zones to follow features like the interface between the landslide and the water whilst minimising the computational costs. Unstructured meshes are also better suited to representing complex geometries and bathymetries allowing more realistic domains to be simulated. Modelling multiple materials, like water, air and a landslide, on an unstructured adaptive mesh poses significant numerical challenges. Novel methods of interface preservation must be considered and coupled to a flow model in such a way that ensures conservation of the different materials. Furthermore this conservation property must be maintained during successive stages of mesh optimisation and interpolation. 
In this paper we validate a new multi-material adaptive unstructured fluid dynamics model against the well-known Lituya Bay landslide-generated wave experiment and case study [1]. In addition, we explore the effect of physical parameters, such as the shape, velocity and viscosity of the landslide, on wave amplitude and run-up, to quantify their influence on the landslide-tsunami hazard. As well as reproducing the experimental results, the model is shown to have excellent conservation and bounding properties. It also requires fewer nodes than an equivalent-resolution fixed-mesh simulation, therefore minimising at least one aspect of the computational cost. These computational savings are directly transferable to higher dimensions, and some initial three-dimensional results are also presented. These reproduce the experiments of Di Risio et al. [2], where an 80 cm long landslide analogue was released from the side of an 8.9 m diameter conical island in a 50 × 30 m tank of water. The resulting impact between the landslide and the water generated waves with an amplitude of 1 cm at wave gauges around the island. The range of scales that must be considered in any attempt to numerically reproduce this experiment makes it an ideal case study for our multi-material adaptive unstructured fluid dynamics model. [1] FRITZ, H. M., MOHAMMED, F., & YOO, J. 2009. Lituya Bay Landslide Impact Generated Mega-Tsunami 50th Anniversary. Pure and Applied Geophysics, 166(1), 153-175. [2] DIRISIO, M., DEGIROLAMO, P., BELLOTTI, G., PANIZZO, A., ARISTODEMO, F.,
Measurements of hot electrons in the Extrap T1 reversed-field pinch
NASA Astrophysics Data System (ADS)
Welander, A.; Bergsåker, H.
1998-02-01
The presence of an anisotropic energetic electron population in the edge region is a characteristic feature of reversed-field pinch (RFP) plasmas. In the Extrap T1 RFP, the anisotropic, parallel heat flux in the edge region measured by calorimetry was typically several hundred [units not recovered]. To gain more insight into the origin of the hot electron component and to achieve time resolution of the hot electron flow during the discharge, a target probe with a soft x-ray monitor was designed, calibrated and implemented. The x-ray emission from the target was measured with a surface barrier detector covered with a set of different x-ray filters to achieve energy resolution. A calibration in the range 0.5-2 keV electron energy was performed on the same target and detector assembly using a [cathode type not recovered] cathode electron gun. The calibration data are interpolated and extrapolated numerically. A directional asymmetry of more than a factor of 100 for the higher energy electrons is observed. The hot electrons are estimated to constitute 10% of the total electron density at the edge and their energy distribution is approximated by a half-Maxwellian with a temperature slightly higher than the central electron temperature. Scalings with plasma current, as well as correlations with local [quantity not recovered] measurements and radial dependences, are presented.
NASA Astrophysics Data System (ADS)
Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul
2015-09-01
Localization microscopy techniques such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging densely activated molecules can improve the temporal resolution, which has been considered a major limitation of localization microscopy. However, such higher-density imaging requires advanced localization algorithms that can deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON [1, 2], which uses a quasi-continuous localization model with a sparsity prior in image space and was demonstrated in both 2D and 3D live-cell imaging. However, it leaves several aspects to be improved. Here, we propose a new localization algorithm based on the annihilating-filter low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, our new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all of these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live-cell imaging experiment. The results confirmed that it achieves higher localization performance in both cases in terms of accuracy and detection rate.
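The sparsity/low-rank duality invoked here can be checked numerically: the Fourier coefficients of a k-sparse signal are a sum of k complex exponentials, so a Hankel matrix built from them has rank k. A small sketch (the signal size and the pencil split are illustrative assumptions):

```python
import numpy as np

n, k = 64, 3
rng = np.random.default_rng(0)
signal = np.zeros(n)
signal[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)  # k spikes

fhat = np.fft.fft(signal)            # Fourier-domain data
p = n // 2                           # Hankel pencil size
hankel = np.array([fhat[i:i + p] for i in range(n - p + 1)])

svals = np.linalg.svd(hankel, compute_uv=False)
# the Hankel matrix is rank-deficient: exactly k significant singular values
```

ALOHA exploits this structure in the other direction, completing or denoising the low-rank Hankel matrix in Fourier space and then reading off spike locations with a spectral estimation step, which is why the localization is grid-free.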
Neogene compressional deformation and possible thrust faulting in southwest Dominican Republic
NASA Technical Reports Server (NTRS)
Golombek, M. P.; Goreau, P.; Dixon, T. H.
1985-01-01
Analysis of regional and high-resolution remote sensing data, coupled with detailed field investigations, indicates Neogene compressional deformation in the southwest Dominican Republic. Airborne synthetic aperture radar data and high-resolution near-infrared photography show folds in Tertiary sediments and possible thrust fault scarps, implying NE to SW compression in the region. Large road cuts through the scarps allow study of otherwise poorly accessible, heavily vegetated karst terrain. Deformation increases toward the scarp fronts, where small bedding-plane thrust faults become more numerous. Analysis of mesoscopic faults with slickensides indicates compression oriented between N to S and E to W. The lowermost scarp has highly sheared fault breccia and undeformed frontal talus breccias, implying it is the basal thrust into which the higher thrust faults sole. Thus, the scarps probably formed in a regional NE to SW compressional stress regime and are the toes of thrust sheets. Previous workers have suggested that these scarps are ancient shorelines; however, the gross morphology of the scarps differs substantially from well-known erosional terraces on the north coast.
Wang, Hsiao-Fan; Hsu, Hsin-Wei
2010-11-01
With the urgency of global warming, green supply chain management, and logistics in particular, has drawn the attention of researchers. Although there are closed-loop green logistics models in the literature, most of them do not consider the uncertain environment in general terms. In this study, a generalized model is proposed in which the uncertainty is expressed by fuzzy numbers. An interval programming model is derived from the defined means and mean-square imprecision index, obtained from the integrated information of all the level cuts of the fuzzy numbers. The resolution of the interval program is based on the decision maker's (DM's) preference. The resulting solution provides useful information on the expected solutions under a confidence level that carries a degree of risk. The results suggest that the more optimistic the DM is, the better the resulting solution, but the higher the risk of violating the resource constraints. By defining this probable risk, a solution procedure was developed and numerically illustrated, providing the DM with a mechanism for trading off logistics cost against risk.
[Optimum design of imaging spectrometer based on toroidal uniform-line-spaced (TULS) spectrometer].
Xue, Qing-Sheng; Wang, Shu-Rong
2013-05-01
Based on geometrical aberration theory, an optimum-design method for an imaging spectrometer based on a toroidal uniform-line-spaced grating spectrometer is proposed. To obtain the best optical parameters, a two-stage optimization is carried out using a genetic algorithm (GA) and the optical design software ZEMAX. A far-ultraviolet (FUV) imaging spectrometer is designed using this method. The working waveband is 110-180 nm, the slit size is 50 μm × 5 mm, and the numerical aperture is 0.1. The design result is analyzed and evaluated with ZEMAX. The results indicate that the MTF for different wavelengths is higher than 0.7 at the Nyquist frequency of 10 lp/mm, and the RMS spot radius is less than 14 μm. Good imaging quality is achieved over the whole working waveband, and the design requirements of 0.5 mrad spatial resolution and 0.6 nm spectral resolution are satisfied. This verifies that the proposed optimum-design method is feasible. The method can be applied to other wavebands and serves as a guide for designing grating-dispersion imaging spectrometers.
Femtosecond Electron Wave Packet Propagation and Diffraction: Towards Making the ``Molecular Movie"
NASA Astrophysics Data System (ADS)
Miller, R. J. Dwayne
2003-03-01
Time-resolved electron diffraction harbors great promise for achieving atomic resolution of the fastest chemical processes. The generation of sufficiently short electron pulses for this real-time view of a chemical reaction has been limited by the difficulty of maintaining short pulses with realistic electron densities at the sample. The propagation dynamics of femtosecond electron packets in the drift region of a photoelectron gun are investigated with an N-body numerical simulation and a mean-field model. This analysis shows that the redistribution of electrons inside the packet, arising from space-charge and dispersion contributions, changes the pulse envelope and leads to the development of a spatially linear axial velocity distribution. These results have been used in the design of femtosecond photoelectron guns with higher time resolution and of novel electron-optical methods of pulse characterization that are approaching 100 fs timescales. Time-resolved diffraction studies with electron pulses of approximately 500 fs have focused on solid-liquid phase transitions under far-from-equilibrium conditions. This work gives a microscopic description of the melting process and illustrates the promise of atomically resolving transition-state processes.
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine-resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine-resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and its challenges are discussed.
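The scale argument can be illustrated with a toy calculation (an assumed Gaussian correlation model, not the study's actual covariance): a single-observation analysis increment is proportional to a row of the background error covariance B, so its footprint, and hence the smallest scale the analysis can constrain, is set by the correlation length.

```python
import numpy as np

# Hypothetical Gaussian background-error correlation using the abstract's
# 75 km streamfunction length scale, on a 1-km grid.
L = 75.0                                   # correlation length, km
x = np.arange(0.0, 500.0, 1.0)             # distance from the observation, km
increment = np.exp(-0.5 * (x / L) ** 2)    # shape of a single-obs increment
# Full width at half maximum: the increment spreads over roughly 2.4*L,
# i.e. ~180 km, consistent with the ~150 km constraint quoted above.
fwhm = 2 * x[np.abs(increment - 0.5).argmin()]
```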
75 FR 44811 - Sunshine Act Meeting of the Board of Directors and Five Board Committees
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-29
... identification of possible impediments to development of numerical criteria for the measurement of LSC performance ... Consider and act on Resolution 2010-XXX regarding future amendments to the LSC Accounting Manual ...
High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves
NASA Technical Reports Server (NTRS)
Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.
2012-01-01
In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.
Time reversal through a solid-liquid interface and super-resolution
NASA Astrophysics Data System (ADS)
Tsogka, Chrysoula; Papanicolaou, George C.
2002-12-01
We present numerical computations that reproduce the time-reversal experiments of Draeger et al (Draeger C, Cassereau D and Fink M 1998 Appl. Phys. Lett. 72 1567-9), where ultrasound elastic waves are time-reversed back to their source with a time-reversal mirror in a fluid adjacent to the solid. We also show numerically that multipathing caused by random inhomogeneities improves the focusing of the back-propagated elastic waves beyond the diffraction limit seen previously in acoustic wave propagation (Dowling D R and Jackson D R 1990 J. Acoust. Soc. Am. 89 171-81, Dowling D R and Jackson D R 1992 J. Acoust. Soc. Am. 91 3257-77, Fink M 1999 Sci. Am. 91-7, Kuperman W A, Hodgkiss W S, Song H C, Akal T, Ferla C and Jackson D R 1997 J. Acoust. Soc. Am. 103 25-40, Derode A, Roux P and Fink M 1995 Phys. Rev. Lett. 75 4206-9), which is called super-resolution. A theoretical explanation of the robustness of super-resolution is given, along with several numerical computations that support this explanation (Blomgren P, Papanicolaou G and Zhao H 2002 J. Acoust. Soc. Am. 111 238-48). Time reversal with super-resolution can be used in non-destructive testing and, in a different way, in imaging with active arrays (Borcea L, Papanicolaou G, Tsogka C and Berryman J 2002 Inverse Problems 18 1247-79).
Consistent three-equation model for thin films
NASA Astrophysics Data System (ADS)
Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul
2017-11-01
Numerical simulations of thin films of Newtonian fluids flowing down an inclined plane use reduced models for reasons of computational cost. These models are usually derived by depth-averaging the physical equations of fluid mechanics with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either the momentum balance equation or the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent, and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical solution. The numerical calculations agree fairly well with experimental measurements or with direct numerical simulations for neutral stability curves, speeds of kinematic and solitary waves, and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.
Star-disc interaction in galactic nuclei: orbits and rates of accreted stars
NASA Astrophysics Data System (ADS)
Kennedy, Gareth F.; Meiron, Yohai; Shukirgaliyev, Bekdaulet; Panamarev, Taras; Berczik, Peter; Just, Andreas; Spurzem, Rainer
2016-07-01
We examine the effect of an accretion disc on the orbits of stars in the central star cluster surrounding a central massive black hole by performing a suite of 39 high-accuracy direct N-body simulations using state-of-the-art software and accelerator hardware, with particle numbers up to 128k. The primary focus is on the accretion rate of stars by the black hole (equivalent to their tidal disruption rate for black holes in the small to medium mass range) and the eccentricity distribution of these stars. Our simulations vary not only the particle number, but also the disc model (two models examined), the spatial resolution at the centre (characterized by the numerical accretion radius) and the softening length. The large parameter range and physically realistic modelling allow us for the first time to confidently extrapolate these results to real galactic centres. While in a real galactic centre both particle number and accretion radius differ by a few orders of magnitude from our models, which are constrained by numerical capability, we find that the stellar accretion rate converges for models with N ≥ 32k. The eccentricity distribution of accreted stars, however, does not converge. We find that there are two competing effects at work when improving the resolution: larger particle number leads to a smaller fraction of stars accreted on nearly circular orbits, while higher spatial resolution increases this fraction. We scale our simulations to some nearby galaxies and find that the expected boost in stellar accretion (or tidal disruption, which could be observed as X-ray flares) in the presence of a gas disc is about a factor of 10. Even with this boost, the accretion of mass from stars is still a factor of ~100 slower than the accretion of gas from the disc. Thus, it seems accretion of stars is not a major contributor to black hole mass growth.
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
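A minimal sketch of the reweighted-l1 idea (not the authors' implementation): the sparse spatial spectrum is recovered by an iterative soft-thresholding (ISTA) solver, with uniform weights standing in for the NC-MUSIC-like spectrum coefficients that the paper places on the weight matrix diagonal.

```python
import numpy as np

def steering_matrix(n_sensors, grid_deg):
    """ULA steering vectors at half-wavelength spacing, one column per angle."""
    k = np.pi * np.sin(np.deg2rad(grid_deg))
    return np.exp(1j * np.outer(np.arange(n_sensors), k))

def reweighted_l1_ista(A, y, weights, lam=0.5, n_iter=2000):
    """Minimize 0.5*||y - Ax||^2 + lam*sum(w_i*|x_i|) by proximal gradient."""
    x = np.zeros(A.shape[1], dtype=complex)
    step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1 / Lipschitz constant
    for _ in range(n_iter):
        g = x + step * (A.conj().T @ (y - A @ x))      # gradient step
        mag = np.abs(g)                                # complex soft threshold
        x = np.maximum(mag - step * lam * weights, 0.0) * np.exp(1j * np.angle(g))
    return x

grid = np.arange(-60.0, 61.0, 1.0)                     # DOA search grid, degrees
A = steering_matrix(8, grid)
true_doas = [-20.0, 30.0]
y = sum(A[:, np.flatnonzero(grid == d)[0]] for d in true_doas)  # noiseless snapshot
x_hat = reweighted_l1_ista(A, y, np.ones(grid.size))   # uniform weights (assumed)
```

Replacing the uniform weights with coefficients derived from a noncircular MUSIC-like spectrum, as the abstract describes, would further sharpen the peaks around the true directions.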
Hutchinson, James L; Rajagopal, Shalini P; Sales, Kurt J; Jabbour, Henry N
2011-07-01
Inflammatory processes are central to reproductive events including ovulation, menstruation, implantation and labour, while inflammatory dysregulation is a feature of numerous reproductive pathologies. In recent years, there has been much research into the endogenous mechanisms by which inflammatory reactions are terminated and tissue homoeostasis is restored, a process termed resolution. The identification and characterisation of naturally occurring pro-resolution mediators including lipoxins and annexin A1 has prompted a shift in the field of anti-inflammation whereby resolution is now observed as an active process, triggered as part of a normal inflammatory response. This review will address the process of resolution, discuss available evidence for expression of pro-resolution factors in the reproductive tract and explore possible roles for resolution in physiological reproductive processes and associated pathologies.
NASA Astrophysics Data System (ADS)
Zodiatis, George; Radhakrishnan, Hari; Lardner, Robin; Hayes, Daniel; Gertman, Isaac; Menna, Milena; Poulain, Pierre-Marie
2014-05-01
The general anticlockwise circulation along the coastline of the Eastern Mediterranean Levantine Basin was first proposed by Nielsen in 1912. Half a century later, the schematic of the circulation in the area was enriched with sub-basin flow structures. In the late 1980s, a more detailed picture of the circulation composed of eddies, gyres and coastal-offshore jets was defined during the POEM cruises. In 2005, Millot and Taupier-Letage used SST satellite imagery to argue for a simpler pattern similar to the one proposed almost a century ago. During the last decade, renewed in-situ multi-platform investigations under the framework of the CYBO, CYCLOPS, NEMED, GROOM, HaiSec and PERSEUS projects, as well as the development of operational ocean forecasts and hindcasts in the framework of the MFS, ECOOP, MERSEA and MyOcean projects, have made it possible to obtain an improved, higher spatial and temporal resolution picture of the circulation in the area. After some years of scientific dispute over the circulation pattern of the region, the new in-situ data sets and the operational numerical simulations confirm the relevant POEM results. The existing POM-based Cyprus Coastal Ocean Forecasting System (CYCOFOS), downscaling the MyOcean MFS, has been providing operational forecasts in the Eastern Mediterranean Levantine Basin region since early 2002. Recently, Radhakrishnan et al. (2012) parallelized the CYCOFOS hydrodynamic flow model using MPI to improve the accuracy of predictions while reducing the computational time. The parallel flow model is capable of modeling the Eastern Mediterranean Levantine Basin flow at a resolution of 500 m. The model was run in hindcast mode, during which the innovations were computed using historical data collected by gliders and cruises. Then DD-OceanVar (D'Amore et al., 2013), a data assimilation tool based on 3DVAR developed by CMCC, was used to compute the temperature and salinity field corrections.
Numerical modeling results after the data assimilation will be presented.
NASA Astrophysics Data System (ADS)
Vinod Kumar, A.; Sitaraman, V.; Oza, R. B.; Krishnamoorthy, T. M.
A one-dimensional numerical planetary boundary layer (PBL) model is developed and applied to study the vertical distribution of radon and its daughter products in the atmosphere. The meteorological model contains a parameterization for the vertical diffusion coefficient based on turbulent kinetic energy and energy dissipation (E-ε model). The concentrations of radon and its daughter products computed with the time-dependent PBL model at increased vertical resolution are compared with steady-state model results and field observations. The ratio of the radon concentration at higher levels to that at the surface has been studied to see the effects of atmospheric stability. The significant change in the vertical profile of concentration due to decoupling of the upper portion of the boundary layer from the shallow lower stable layer is explained by the PBL model. The disequilibrium ratio of ²¹⁴Bi/²¹⁴Pb broadly agrees with the observed field values. The sharp decrease in the ratio during the transition from unstable to stable atmospheric conditions is also reproduced by the model.
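The vertical-profile calculation can be caricatured with a 1-D diffusion-decay integration; the prescribed diffusivity profile below is a hypothetical stand-in for the E-ε-derived coefficient, and the surface boundary value is normalized.

```python
import numpy as np

LAMBDA_RN = 2.1e-6   # Rn-222 decay constant, 1/s

def radon_profile(nz=50, dz=20.0, dt=1.0, t_end=6 * 3600.0):
    """Explicit 1-D integration of dC/dt = d/dz(K dC/dz) - lambda*C."""
    # Assumed eddy-diffusivity profile, m^2/s (NOT the paper's E-eps closure).
    K = 1.0 + 10.0 * np.exp(-np.arange(nz) * dz / 500.0)
    C = np.zeros(nz)
    C[0] = 1.0                                    # normalized surface value
    for _ in range(int(t_end / dt)):
        flux = K[:-1] * np.diff(C) / dz           # diffusive flux between levels
        dC = np.zeros(nz)
        dC[1:-1] = (flux[1:] - flux[:-1]) / dz    # flux divergence at interior nodes
        C += dt * (dC - LAMBDA_RN * C)
        C[0] = 1.0                                # hold surface boundary condition
    return C

profile = radon_profile()   # concentration decreasing away from the surface
```

The explicit scheme is stable here because dt*K/dz² stays below 0.5 for the assumed parameters; a stability-aware choice of dt would be needed for stronger mixing.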
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher-resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code, and a few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes the development of PORFLOW models supporting the SDF PA and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.
The role of moist processes and the possibility of error cascade from cloud-scale processes affecting the intrinsic predictable time scale of a high-resolution convection-permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that error growth is significantly higher in the cloud-permitting simulation at 3.3 km resolution than in simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterization, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterization removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscale.
NASA Astrophysics Data System (ADS)
Wagenbrenner, N. S.; Forthofer, J.; Gibson, C.; Lamb, B. K.
2017-12-01
Frequent strong gap winds were measured in a deep, steep, wildfire-prone river canyon of central Idaho, USA during July-September 2013. Analysis of archived surface pressure data indicates that the gap wind events were driven by regional-scale surface pressure gradients. The events always occurred between 0400 and 1200 LT and typically lasted 3-4 hours. The timing makes these events particularly hazardous for wildland firefighting applications, since the morning is typically a period of reduced fire activity and unsuspecting firefighters could easily be endangered by the onset of strong downcanyon winds. The gap wind events were not explicitly forecast by operational numerical weather prediction (NWP) models due to the small spatial scale of the canyon (~1-2 km wide) compared to the horizontal resolution of operational NWP models (3 km or greater). Custom WRF simulations initialized with NARR data were run at 1 km horizontal resolution to assess whether higher-resolution NWP could accurately simulate the observed gap winds. Here, we show that the 1 km WRF simulations captured many of the observed gap wind events, although the strength of the events was underpredicted. We also present evidence from these WRF simulations suggesting that the Salmon River Canyon is near the threshold of WRF-resolvable terrain features when the standard WRF coordinate system and discretization schemes are used. Finally, we show that the strength of the gap wind events can be predicted reasonably well as a function of the surface pressure gradient across the gap, which could be useful in the absence of high-resolution NWP. These are important findings for wildland firefighting applications in narrow gaps where routine forecasts may not provide warning for wind effects induced by high-resolution terrain features.
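The closing point, predicting gap-wind strength from the surface pressure difference across the gap, can be sketched with a Bernoulli-type relation; the pressure difference and friction factor below are assumed illustrative values, not measurements from the study.

```python
import math

def gap_wind_speed(delta_p_pa, rho=1.2, reduction=0.7):
    """Bernoulli-type gap flow: U = c * sqrt(2*dp/rho), c < 1 crudely for friction."""
    return reduction * math.sqrt(2.0 * delta_p_pa / rho)

u = gap_wind_speed(200.0)   # assumed 2 hPa pressure difference across the gap, m/s
```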
Label-free super-resolution with coherent nonlinear structured-illumination microscopy
NASA Astrophysics Data System (ADS)
Huttunen, Mikko J.; Abbas, Aazad; Upham, Jeremy; Boyd, Robert W.
2017-08-01
Structured-illumination microscopy enables up to a two-fold lateral resolution improvement by spatially modulating the intensity profile of the illumination beam. We propose a novel way to generalize the concept of structured illumination to nonlinear widefield modalities by spatially modulating, instead of field intensities, the phase of the incident field while interferometrically measuring the complex-valued scattered field. We numerically demonstrate that for second-order and third-order processes an almost four- and six-fold increase in lateral resolution is achievable, respectively. This procedure overcomes the conventional Abbe diffraction limit and provides new possibilities for label-free super-resolution microscopy.
Resolution enhancement using simultaneous couple illumination
NASA Astrophysics Data System (ADS)
Hussain, Anwar; Martínez Fuentes, José Luis
2016-10-01
A super-resolution technique based on structured illumination created by a liquid crystal on silicon spatial light modulator (LCOS-SLM) is presented. Single and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated using numerical simulations. The resulting intensity images are recorded with a charge-coupled device (CCD) and stored in computer memory for further processing. One-dimensional enhancement can be performed with only 15 images; complete two-dimensional improvement requires 153 different images. The resolution of the optical system is extended threefold compared to the band-limited system.
High-resolution digital holography with the aid of coherent diffraction imaging.
Jiang, Zhilong; Veetil, Suhas P; Cheng, Jun; Liu, Cheng; Wang, Ling; Zhu, Jianqiang
2015-08-10
Images reconstructed with ordinary digital holography cannot deliver the desired resolution in comparison to photographic materials, making the technique less preferable for many interesting applications. A method is proposed to enhance the resolution of digital holography in all directions by placing a random phase plate between the specimen and the electronic camera and then using an iterative approach for the reconstruction. With this method, the resolution is improved remarkably in comparison to ordinary digital holography. Theoretical analysis is supported by numerical simulation, and the feasibility of the method is also studied experimentally.
Optical coherence microscope for invariant high resolution in vivo skin imaging
NASA Astrophysics Data System (ADS)
Murali, S.; Lee, K. S.; Meemon, P.; Rolland, J. P.
2008-02-01
A non-invasive, reliable and affordable imaging system capable of detecting skin pathologies such as skin cancer would be a valuable tool for pre-screening and diagnostic applications. Optical Coherence Microscopy (OCM) is emerging as a building block for in vivo optical diagnosis, in which high numerical aperture optics is introduced in the sample arm to achieve high lateral resolution. While high numerical aperture optics yields high lateral resolution at the focal point, dynamic focusing is required to maintain the target lateral resolution throughout the depth of the sample being imaged. In this paper, we demonstrate the ability to dynamically focus in real time with no moving parts to a depth of up to 2 mm in skin-equivalent tissue, achieving 3.5 μm lateral resolution throughout an 8 mm³ sample. The built-in dynamic focusing is provided by an addressable liquid lens embedded in custom-designed optics, designed for a broadband laser source of 120 nm bandwidth centered at around 800 nm. The imaging probe was designed to be low-cost and portable. Design evaluation and tolerance analysis results show that the probe is robust to manufacturing errors and produces consistent high performance throughout the imaging volume.
NASA Astrophysics Data System (ADS)
Schmitt, Rainer M.; Scott, W. Guy; Irving, Richard D.; Arnold, Joe; Bardons, Charles; Halpert, Daniel; Parker, Lawrence
2004-09-01
A new type of fingerprint sensor is presented. The sensor maps the acoustic impedance of the fingerprint pattern by estimating the electrical impedance of its sensor elements. The sensor substrate, made of 1-3 piezo-ceramic, which can be fabricated inexpensively at large scales, provides a resolution of up to 50 μm over an area of 20 × 25 mm². Using FE modeling, the paper presents the numerical validation of the basic principle, evaluates an optimized pillar aspect ratio, and estimates the spatial resolution and point spread function for 100 μm and 50 μm pitch models. In addition, first fingerprints obtained with the prototype sensor are presented.
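The sensing principle rests on acoustic impedance contrast: an element loaded by skin (ridge) reflects differently than one loaded by air (valley). A back-of-envelope check with textbook impedance values (assumed, not from the paper):

```python
def reflection_coeff(z_load, z_element=30e6):
    """Pressure reflection coefficient at the element/load interface.

    z_element ~30 MRayl is a textbook value for PZT-type ceramic (assumed).
    """
    return (z_load - z_element) / (z_load + z_element)

r_air = reflection_coeff(415.0)    # air-loaded valley, ~415 Rayl
r_skin = reflection_coeff(1.6e6)   # skin-loaded ridge, ~1.6 MRayl (soft tissue)
```

The air-loaded element reflects almost totally while the skin-loaded one does not, and this difference in acoustic load is what shifts the element's electrical impedance.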
High-resolution numerical models for smoke transport in plumes from wildland fires
Philip Cunningham; Scott Goodrick
2013-01-01
A high-resolution large-eddy simulation (LES) model is employed to examine the fundamental structure and dynamics of buoyant plumes arising from heat sources representative of wildland fires. Herein we describe several aspects of the mean properties of the simulated plumes. Mean plume trajectories are apparently well described by the traditional two-thirds law for...
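The "two-thirds law" referenced above is the classic Briggs relation for a bent-over buoyant plume, in which centerline rise grows as the two-thirds power of downwind distance. A minimal sketch with illustrative source parameters (the buoyancy flux and wind speed are assumptions, not values from the study):

```python
def briggs_plume_rise(x_m, buoyancy_flux, wind_speed):
    """Briggs two-thirds law for a bent-over buoyant plume:
    z = 1.6 * F**(1/3) * x**(2/3) / U."""
    return 1.6 * buoyancy_flux ** (1.0 / 3.0) * x_m ** (2.0 / 3.0) / wind_speed

# Assumed illustrative source: F = 1000 m^4/s^3 in a 5 m/s wind.
z = briggs_plume_rise(1000.0, 1000.0, 5.0)   # centerline rise at 1 km downwind, m
```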
Impedance Eduction in Large Ducts Containing Higher-Order Modes and Grazing Flow
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.
2017-01-01
Impedance eduction test data are acquired in ducts with small and large cross-sectional areas at the NASA Langley Research Center. An improved data acquisition system in the large duct has resulted in increased control of the acoustic energy in source modes and more accurate resolution of higher-order duct modes compared to previous tests. Two impedance eduction methods that take advantage of the improved data acquisition to educe the liner impedance in grazing flow are presented. One method measures the axial propagation constant of a dominant mode in the liner test section (by implementing the Kumaresan and Tufts algorithm) and educes the impedance from an exact analytical expression. The second method numerically solves the convected Helmholtz equation and minimizes an objective function to obtain the liner impedance. The two methods are tested first on data synthesized from an exact mode solution and then on measured data. Results show that when the methods are applied to data acquired in the larger duct with a dominant higher-order mode, the same impedance spectra are educed as those obtained in the small duct where only the plane wave mode propagates. This result holds for each higher-order mode in the large duct provided that the higher-order mode is sufficiently attenuated by the liner.
1987-10-15
... the apparent shift of this band to higher energy with increasing coverage, observed at lower resolution (but higher sensitivity) in electron energy-loss spectroscopy ... using high-resolution electron energy-loss spectroscopy (EELS), is especially intriguing. O2 dissociates on this surface to populate two types of ...
NASA Astrophysics Data System (ADS)
Deo, R. K.; Domke, G. M.; Russell, M.; Woodall, C. W.
2017-12-01
Landsat data have been widely used to support strategic forest inventory and management decisions despite the limited success of passive optical remote sensing for accurate estimation of aboveground biomass (AGB). The archive of publicly available Landsat data, available at 30-m spatial resolution since 1984, has been a valuable resource for cost-effective large-area estimation of AGB to inform national requirements such as the US national greenhouse gas inventory (NGHGI). In addition, other optical satellite data, such as MODIS imagery of wider spatial coverage and higher temporal resolution, are enriching the domain of spatial predictors for regional-scale mapping of AGB. Because NGHGIs require national-scale AGB information and there are tradeoffs in the prediction accuracy versus operational efficiency of Landsat, this study evaluated the impact of various resolutions of Landsat predictors on the accuracy of regional AGB models across three different sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We used recent national forest inventory (NFI) data with numerous Landsat-derived predictors at ten different spatial resolutions ranging from 30 to 1000 m to understand the optimal spatial resolution of the optical data for enhanced spatial inventory of AGB for NGHGI reporting. Ten generic spatial models at different spatial resolutions were developed for all sites, and large-area estimates were evaluated (i) at the county level against independent design-based estimates via the US NFI Evalidator tool and (ii) within a large number of strips (~1 km wide) predicted via LiDAR metrics at high spatial resolution. The county-level estimates by the Evalidator and Landsat models were statistically equivalent and produced coefficients of determination (R²) above 0.85 that varied with site and predictor resolution.
The mean and standard deviation of the county-level estimates followed increasing and decreasing trends, respectively, as model resolution decreased. The Landsat-based total AGB estimates within the strips did not differ significantly from the totals obtained using LiDAR metrics and were within ±15 Mg/ha for each of the sites. We conclude that optical satellite data at resolutions up to 1000 m provide acceptable accuracy for the US NGHGI.
NASA Astrophysics Data System (ADS)
Dill, Robert; Bergmann-Wolf, Inga; Thomas, Maik; Dobslaw, Henryk
2016-04-01
The global numerical weather prediction model routinely operated at the European Centre for Medium-Range Weather Forecasts (ECMWF) is typically updated about twice a year to incorporate the most recent improvements in the numerical scheme, the physical model, or the data assimilation procedures, steadily improving daily weather forecasting quality. Even though such changes frequently affect the long-term stability of meteorological quantities, data from the ECMWF deterministic model are often preferred over the alternatively available atmospheric re-analyses due to both the availability of the data in near real-time and the substantially higher spatial resolution. However, global surface pressure time-series, which are crucial for the interpretation of geodetic observables such as Earth rotation, surface deformation, and the Earth's gravity field, are in particular affected by changes in the surface orography of the model associated with every major change in horizontal resolution, as happened, e.g., in February 2006, January 2010, and May 2015 in the case of the ECMWF operational model. In this contribution, we present an algorithm to harmonize surface pressure time-series from the operational ECMWF model by projecting them onto a time-invariant reference topography under consideration of the time-variable atmospheric density structure. The effectiveness of the method is assessed globally in terms of pressure anomalies. In addition, we discuss the impact of the method on predictions of crustal deformations based on ECMWF input, which have recently been made available by GFZ Potsdam.
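The projection step can be sketched with the hypsometric relation; using a single virtual temperature is a simplification of the paper's treatment of the full time-variable density structure, and the numerical values are illustrative.

```python
import math

R_DRY = 287.05    # gas constant of dry air, J/(kg K)
G0 = 9.80665      # standard gravity, m/s^2

def project_pressure(p_surf_pa, z_model_m, z_ref_m, t_virt_k):
    """Hypsometric projection of surface pressure onto a reference height."""
    return p_surf_pa * math.exp(-G0 * (z_ref_m - z_model_m) / (R_DRY * t_virt_k))

# Model orography assumed 50 m above the reference topography.
p_ref = project_pressure(95000.0, 500.0, 450.0, 280.0)   # Pa at reference height
```

Even this 50 m orography step changes the surface pressure by several hPa, which is why resolution-driven orography changes introduce spurious jumps into geodetically relevant pressure time-series.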
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for ground-truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
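The error-balance idea behind the optimal resolution can be reproduced in a toy setting: for a central-difference derivative of a noisy sinusoidal signal, random error scales as 1/Δt while truncation error scales as Δt², so an intermediate Δt minimizes the total error. The signal and noise parameters below are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(0)
DT0 = 1e-4                                   # sample spacing of the synthetic signal, s
t = np.arange(0.0, 10.0, DT0)
OMEGA, SIGMA = 2.0 * np.pi, 1e-3             # 1 Hz velocity signal, noise std
u_true = np.sin(OMEGA * t)
a_true = OMEGA * np.cos(OMEGA * t)

def accel_rms_error(dt):
    """RMS error of a central-difference du/dt using noisy samples dt apart."""
    n = max(int(round(dt / DT0)), 1)
    u = u_true + rng.normal(0.0, SIGMA, t.size)
    a_est = (u[2 * n:] - u[:-2 * n]) / (2.0 * n * DT0)   # central difference
    return float(np.sqrt(np.mean((a_est - a_true[n:-n]) ** 2)))

# Noise dominates at the smallest separation, truncation at the largest;
# the intermediate separation gives the smallest total error.
errs = {dt: accel_rms_error(dt) for dt in (1e-4, 1e-2, 1e-1)}
```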
Numerical simulation of double‐diffusive finger convection
Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard
2005-01-01
A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double‐diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density‐dependent, saturated‐unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute‐transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute‐transport equations is handled using a sequential implicit Picard iterative scheme. High‐resolution data from a double‐diffusive Hele‐Shaw experiment, initially in a density‐stable configuration, are used to verify the numerical model. The temporal and spatial evolution of simulated double‐diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond most closely to experimental results when element sizes adequately define the spatial resolution of observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.
Probing evolutionary population synthesis models in the near infrared with early-type galaxies
NASA Astrophysics Data System (ADS)
Dahmer-Hahn, Luis Gabriel; Riffel, Rogério; Rodríguez-Ardila, Alberto; Martins, Lucimara P.; Kehrig, Carolina; Heckman, Timothy M.; Pastoriza, Miriani G.; Dametto, Natacha Z.
2018-06-01
We performed a near-infrared (NIR; ~1.0-2.4 μm) stellar population study in a sample of early-type galaxies. The synthesis was performed using five different evolutionary population synthesis libraries of models. Our main results can be summarized as follows: low-spectral-resolution libraries are not able to produce reliable results when applied to the NIR alone, with each library finding a different dominant population. The two newest higher resolution models, on the other hand, perform considerably better, finding results consistent with each other and with literature values. We also found that optical results are consistent with each other even for lower resolution models. We also compared optical and NIR results and found that lower resolution models tend to disagree in the optical and in the NIR, with a higher fraction of young populations in the NIR and a dust extinction ~1 mag higher than the optical values. For higher resolution models, optical and NIR results tend to agree much better, suggesting that a higher spectral resolution is fundamental to improve the quality of the results.
An explicit three-dimensional nonhydrostatic numerical simulation of a tropical cyclone
NASA Technical Reports Server (NTRS)
Tripoli, G. J.
1992-01-01
A nonhydrostatic numerical simulation of a tropical cyclone is performed with explicit representation of cumulus on a meso-beta scale grid and for a brief period on a meso-gamma scale grid. Individual cumulus plumes are represented by a combination of explicit resolution and a 1.5 level closure predicting turbulent kinetic energy (TKE).
NASA Astrophysics Data System (ADS)
Font, J. A.; Ibanez, J. M.; Marti, J. M.
1993-04-01
Some numerical solutions describing multidimensional flows have been obtained via a local characteristic approach. These solutions have been used as tests of a two-dimensional code which extends some high-resolution shock-capturing methods, designed recently to solve nonlinear hyperbolic systems of conservation laws. Key words: HYDRODYNAMICS - BLACK HOLE - RELATIVITY - SHOCK WAVES
A high-resolution Godunov method for compressible multi-material flow on overlapping grids
NASA Astrophysics Data System (ADS)
Banks, J. W.; Schwendeman, D. W.; Kapila, A. K.; Henshaw, W. D.
2007-04-01
A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform-pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on the Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.
Direct and Inverse Kinematics of a Novel Tip-Tilt-Piston Parallel Manipulator
NASA Technical Reports Server (NTRS)
Tahmasebi, Farhad
2004-01-01
Closed-form direct and inverse kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators are presented. The manipulator has higher resolution and precision than the existing three DOF mechanisms with extensible limbs. Since all of the manipulator actuators are base-mounted, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. The manipulator is suitable for alignment applications where only tip, tilt, and piston motions are significant. The direct kinematics of the manipulator is reduced to solving an eighth-degree polynomial in the square of the tangent of the half-angle between one of the limbs and the base plane. Hence, there are at most 16 assembly configurations for the manipulator. In addition, it is shown that the 16 solutions are eight pairs of reflected configurations with respect to the base plane. Numerical examples for the direct and inverse kinematics of the manipulator are also presented.
Kinematics of a New High Precision Three Degree-of-Freedom Parallel Manipulator
NASA Technical Reports Server (NTRS)
Tahmasebi, Farhad
2005-01-01
Closed-form direct and inverse kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators are presented. The manipulator has higher resolution and precision than the existing three DOF mechanisms with extensible limbs. Since all of the manipulator actuators are base-mounted, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. The manipulator is suitable for alignment applications where only tip, tilt, and piston motions are significant. The direct kinematics of the manipulator is reduced to solving an eighth-degree polynomial in the square of the tangent of the half-angle between one of the limbs and the base plane. Hence, there are at most sixteen assembly configurations for the manipulator. In addition, it is shown that the sixteen solutions are eight pairs of reflected configurations with respect to the base plane. Numerical examples for the direct and inverse kinematics of the manipulator are also presented.
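The final numerical step of the direct kinematics described above, solving an eighth-degree polynomial in s = tan²(θ/2) and recovering the limb angle, can be sketched with a standard root finder. The coefficients below are hypothetical placeholders; the actual coefficients depend on the manipulator geometry and are derived in the papers.

```python
import numpy as np

# Hypothetical coefficients c8..c0 of the eighth-degree polynomial in
# s = tan^2(theta/2), where theta is the angle between one limb and the
# base plane (the real coefficients follow from the manipulator geometry).
coeffs = [1.0, -3.2, 2.1, 0.5, -1.7, 0.9, 0.3, -0.4, 0.02]

roots = np.roots(coeffs)            # all 8 complex roots
# Physically meaningful solutions: real roots with s >= 0.
s = roots[np.abs(roots.imag) < 1e-9].real
s = s[s >= 0.0]
# Each valid s gives theta = 2 arctan(sqrt(s)); the mirror solution -theta
# corresponds to the reflected configuration through the base plane.
theta = 2.0 * np.arctan(np.sqrt(s))
```

The at-most-16 assembly configurations arise because each of the up to eight admissible roots yields a reflected pair of limb angles.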
Saturation of a toroidal Alfvén eigenmode due to enhanced damping of nonlinear sidebands
NASA Astrophysics Data System (ADS)
Todo, Y.; Berk, H. L.; Breizman, B. N.
2012-09-01
This paper examines nonlinear magneto-hydrodynamic effects on the energetic particle driven toroidal Alfvén eigenmode (TAE) for lower dissipation coefficients and with higher numerical resolution than in the previous simulations (Todo et al 2010 Nucl. Fusion 50 084016). The investigation is focused on a TAE mode with toroidal mode number n = 4. It is demonstrated that the mechanism of mode saturation involves generation of zonal (n = 0) and higher-n (n ⩾ 8) sidebands, and that the sidebands effectively increase the mode damping rate via continuum damping. The n = 0 sideband includes zonal flow that peaks at the TAE gap locations. It is also found that the n = 0 poloidal flow represents a balance between the nonlinear driving force from the n = 4 components and the equilibrium plasma response to the n = 0 fluctuations. The spatial profile of the n = 8 sideband peaks at the n = 8 Alfvén continuum, indicating enhanced dissipation due to continuum damping.
NASA Astrophysics Data System (ADS)
Liu, Wei; Yao, Kainan; Chen, Lu; Huang, Danian; Cao, Jingtai; Gu, Haijun
2018-03-01
Based on a previous study of the theory of the sequential pyramid wavefront sensor (SPWFS), in this paper the SPWFS is first applied to coherent free-space optical communications (FSOC); it offers more flexible spatial resolution and higher sensitivity than the Shack-Hartmann wavefront sensor, and higher uniformity of intensity distribution and a much simpler setup than the pyramid wavefront sensor. Then, the mixing efficiency (ME) and the bit error rate (BER) of the coherent FSOC link are analyzed during aberration correction through numerical simulation with binary phase shift keying (BPSK) modulation. Finally, an experimental adaptive optics (AO) system based on the SPWFS is set up, and the experimental data are used to analyze the ME and BER of homodyne detection with BPSK modulation. The results show that the AO system based on the SPWFS can increase the ME and decrease the BER effectively. The conclusions of this paper provide a new method of wavefront sensing for designing the AO system of a coherent FSOC system.
Energy Spectra of Higher Reynolds Number Turbulence by the DNS with up to 12288³ Grid Points
NASA Astrophysics Data System (ADS)
Ishihara, Takashi; Kaneda, Yukio; Morishita, Koji; Yokokawa, Mitsuo; Uno, Atsuya
2014-11-01
Large-scale direct numerical simulations (DNS) of forced incompressible turbulence in a periodic box with up to 12288³ grid points have been performed using the K computer. The maximum Taylor-microscale Reynolds number Rλ and the maximum Reynolds number Re based on the integral length scale are over 2000 and 10⁵, respectively. Our previous DNS with Rλ up to 1100 showed that the energy spectrum has a slope steeper than -5/3 (the Kolmogorov scaling law) by about 0.1 in the wavenumber range kη < 0.03, where η is the Kolmogorov length scale. Our present DNS at higher resolutions show that the energy spectra at different Reynolds numbers (Rλ > 1000) are well normalized not by the integral length scale but by the Kolmogorov length scale in the wavenumber range of the steeper slope. This result indicates that the steeper slope is not an inherent character of the inertial subrange, but is affected by viscosity.
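The quoted steepening of the spectral slope can be illustrated with a log-log least-squares fit over a wavenumber band. The sketch below uses a synthetic power-law spectrum with the reported steepening of 0.1 built in, not the DNS data; the wavenumber range is an arbitrary choice for demonstration.

```python
import numpy as np

def spectral_slope(k, E):
    """Least-squares slope of log E(k) versus log k over the given band."""
    return np.polyfit(np.log(k), np.log(E), 1)[0]

# Synthetic inertial-range spectrum, 0.1 steeper than the Kolmogorov -5/3 law.
k = np.logspace(0.0, 2.0, 50)
E = k ** (-5.0 / 3.0 - 0.1)
slope = spectral_slope(k, E)
```

On real DNS spectra the fitted slope depends on the chosen band, which is why the abstract specifies the range kη < 0.03.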
Uncertainty in temperature-based determination of time of death
NASA Astrophysics Data System (ADS)
Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan
2018-03-01
Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation since, in principle, all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that, and from prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.
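For contrast with the mechanistic models studied above, the simple phenomenological approach the abstract mentions can be sketched as the inversion of Newtonian cooling. The initial body temperature T0 and the cooling constant k below are illustrative assumptions only; in forensic practice the cooling constant depends strongly on body mass, clothing, and environment.

```python
import math

def time_since_death(t_body, t_ambient, t0=37.2, k=0.08):
    """Invert the Newtonian cooling law T(t) = T_a + (T0 - T_a) exp(-k t)
    for the elapsed time t in hours. t0 (deg C) and k (1/h) are
    illustrative values, not calibrated forensic parameters."""
    return -math.log((t_body - t_ambient) / (t0 - t_ambient)) / k

# Body at 30 C in a 20 C room: roughly 6.8 h post-mortem under this model.
t_elapsed = time_since_death(30.0, 20.0)
```

The sensitivity of `t_elapsed` to k and t0 is exactly the kind of parameter uncertainty the paper propagates through its far more detailed heat transfer model.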
NASA Astrophysics Data System (ADS)
Liu, Hailiang; Wang, Zhongming
2017-01-01
We design an arbitrary-order, free-energy-satisfying discontinuous Galerkin (DG) method for solving time-dependent Poisson-Nernst-Planck systems. Both the semi-discrete and fully discrete DG methods are shown to satisfy the corresponding discrete free energy dissipation law for positive numerical solutions. Positivity of numerical solutions is enforced by an accuracy-preserving limiter in reference to positive cell averages. Numerical examples are presented to demonstrate the high resolution of the numerical algorithm and to illustrate the proven properties of mass conservation, free energy dissipation, as well as the preservation of steady states.
Low-resolution simulations of vesicle suspensions in 2D
NASA Astrophysics Data System (ADS)
Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George
2018-03-01
Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions, and correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while being 10× to 100× faster.
A Class of High-Resolution Explicit and Implicit Shock-Capturing Methods
NASA Technical Reports Server (NTRS)
Yee, H. C.
1994-01-01
The development of shock-capturing finite difference methods for hyperbolic conservation laws has been a rapidly growing area for the last decade. Many of the fundamental concepts, state-of-the-art developments and applications to fluid dynamics problems can only be found in meeting proceedings, scientific journals and internal reports. This paper attempts to give a unified and generalized formulation of a class of high-resolution, explicit and implicit shock-capturing methods, and to illustrate their versatility in various steady and unsteady complex shock-wave computations for perfect gases, equilibrium real gases and nonequilibrium flows. These numerical methods are formulated for easy and efficient implementation in a practical computer code. The various constructions of high-resolution shock-capturing methods fall nicely into the present framework, and a computer code can be implemented with the various methods as separate modules. Included is a systematic overview of the basic design principle of the various related numerical methods. Special emphasis will be on the construction of the basic nonlinear, spatially second- and third-order schemes for nonlinear scalar hyperbolic conservation laws and the methods of extending these nonlinear scalar schemes to nonlinear systems via approximate Riemann solvers and flux-vector splitting approaches. Generalization of these methods to efficiently include real gases and large systems of nonequilibrium flows will be discussed. Extensions of these methods for hyperbolic conservation laws to problems containing stiff source terms and shock waves are also included. The performance of some of these schemes is illustrated by numerical examples for one-, two- and three-dimensional gas-dynamics problems. The use of the Lax-Friedrichs numerical flux to obtain high-resolution shock-capturing schemes is generalized.
This method can be extended to nonlinear systems of equations without the use of Riemann solvers or flux-vector splitting approaches and thus provides a large savings for multidimensional, equilibrium real gases and nonequilibrium flow computations.
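The Lax-Friedrichs numerical flux mentioned above can be sketched in its basic first-order form for a scalar conservation law. This minimal example uses Burgers' equation with periodic boundaries and is only the building block, not the high-resolution generalization the paper develops.

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt, flux=lambda q: 0.5 * q * q):
    """One conservative update of u_t + f(u)_x = 0 (Burgers' flux by
    default) using the Lax-Friedrichs numerical flux with a global
    wave-speed bound alpha, and periodic boundary conditions."""
    alpha = np.max(np.abs(u))                  # |f'(u)| = |u| for Burgers
    up, um = np.roll(u, -1), np.roll(u, 1)     # u_{i+1}, u_{i-1}
    f_right = 0.5 * (flux(u) + flux(up)) - 0.5 * alpha * (up - u)
    f_left = 0.5 * (flux(um) + flux(u)) - 0.5 * alpha * (u - um)
    return u - dt / dx * (f_right - f_left)

x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
u0 = 1.5 + np.sin(x)
u1 = lax_friedrichs_step(u0, x[1] - x[0], 0.01)
```

Because the update is in flux-difference form, total mass is conserved exactly; the dissipative alpha term is what the high-resolution schemes surveyed in the paper refine to avoid smearing shocks.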
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Redolfi, M.
2017-12-01
Accessing or acquiring high-quality, low-cost topographic data has never been easier, owing to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate the models' performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided-river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, with both high-resolution time series and long-term coverage, provided significantly improved calibration routines that refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. 
Specifically, simulations with good statistical agreement failed to represent braiding planforms (evolving toward meandering), and parameterizations that ensured braiding produced exaggerated activation and bank erosion rates. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
NASA Astrophysics Data System (ADS)
Bowden, J.; Terando, A. J.; Misra, V.; Wootten, A.
2017-12-01
Small island nations are vulnerable to changes in the hydrologic cycle because of their limited water resources. This risk to water security is likely even higher in sub-tropical regions where anthropogenic forcing of the climate system is expected to lead to a drier future (the so-called `dry-get-drier' pattern). However, high-resolution numerical modeling experiments have also shown an enhancement of existing orographically-influenced precipitation patterns on islands with steep topography, potentially mitigating subtropical drying on windward mountain sides. Here we explore the robustness of the near-term (25-45 years) subtropical precipitation decline (SPD) across two island groupings in the Caribbean, Puerto Rico and the U.S. Virgin Islands. These islands, forming the boundary between the Greater and Lesser Antilles, significantly differ in size, topographic relief, and orientation to prevailing winds. Two 2-km horizontal resolution regional climate model simulations are used to downscale a total of three different GCMs under the RCP8.5 emissions scenario. Results indicate some possibility for modest increases in precipitation at the leading edge of the Luquillo Mountains in Puerto Rico, but consistent declines elsewhere. We conclude with a discussion of potential explanations for these patterns and the attendant risks to water security that subtropical small island nations could face as the climate warms.
Haldar, Justin P.; Leahy, Richard M.
2013-01-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
The development and validation of command schedules for SeaWiFS
NASA Astrophysics Data System (ADS)
Woodward, Robert H.; Gregg, Watson W.; Patt, Frederick S.
1994-11-01
An automated method for developing and assessing spacecraft and instrument command schedules is presented for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) project. SeaWiFS is to be carried on the polar-orbiting SeaStar satellite in 1995. The primary goal of the SeaWiFS mission is to provide global ocean chlorophyll concentrations every four days by employing onboard recorders and a twice-a-day data downlink schedule. Global Area Coverage (GAC) data with about 4.5 km resolution will be used to produce the global coverage. Higher resolution (1.1 km) Local Area Coverage (LAC) data will also be recorded to calibrate the sensor. In addition, LAC will be continuously transmitted from the satellite and received by High Resolution Picture Transmission (HRPT) stations. The methods used to generate commands for SeaWiFS employ numerous hierarchical checks as a means of maximizing coverage of the Earth's surface and fulfilling the LAC data requirements. The software code is modularized and written in Fortran with constructs to mirror the pre-defined mission rules. The overall method is specifically developed for low orbit Earth-observing satellites with finite onboard recording capabilities and regularly scheduled data downlinks. Two software packages using the Interactive Data Language (IDL) for graphically displaying and verifying the resultant command decisions are presented. Displays can be generated which show portions of the Earth viewed by the sensor and spacecraft sub-orbital locations during onboard calibration activities. An IDL-based interactive method of selecting and testing LAC targets and calibration activities for command generation is also discussed.
ERIC Educational Resources Information Center
Shuval, Kerem; Pillsbury, Charles A.; Cavanaugh, Brenda; McGruder, La'rie; McKinney, Christy M.; Massey, Zohar; Groce, Nora E.
2010-01-01
Numerous schools are implementing youth violence prevention interventions aimed at enhancing conflict resolution skills without evaluating their effectiveness. Consequently, we formed a community-academic partnership between a New Haven community-based organization and Yale's School of Public Health and Prevention Research Center to examine the…
On some limitations on temporal resolution in imaging subpicosecond photoelectronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shchelev, M Ya; Andreev, S V; Degtyareva, V P
2015-05-31
Numerical modelling is used to analyse some effects restricting the enhancement of temporal resolution beyond 100 fs in streak image tubes and photoelectron guns. Particular attention is paid to the broadening of an electron bunch as a result of Coulomb interaction. Possible ways to overcome the limitations under consideration are discussed. (extreme light fields and their applications)
Direct Numerical Simulation of Cell Printing
NASA Astrophysics Data System (ADS)
Qiao, Rui; He, Ping
2010-11-01
Structural cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use a desktop printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells, similar to that in living organs. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation and understanding of cell-cell interactions in truly 3D spaces. Although the feasibility of cell printing has been demonstrated in recent years, the printing resolution and cell viability remain to be improved. In this work, we investigate one of the unit operations in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquid using direct numerical simulations. The dynamics of droplet impact (e.g., crater formation and droplet spreading and penetration) and the evolution of cell shape and internal stress are quantified in detail.
NASA Astrophysics Data System (ADS)
Gastón, Martín; Fernández-Peruchena, Carlos; Körnich, Heiner; Landelius, Tomas
2017-06-01
The present work describes the first version of a new procedure to forecast Direct Normal Irradiance (DNI): the #hashtdim, which combines ground information and Numerical Weather Predictions. The system is centred on generating predictions for the very short term. It combines the outputs of the Numerical Weather Prediction model HARMONIE with an adaptive methodology based on machine learning. The DNI predictions are generated at 15-minute and hourly temporal resolutions and are updated every three hours. Each update offers forecasts for the next 12 hours; the first nine hours are generated at 15-minute temporal resolution, while the last three hours have hourly temporal resolution. The system is tested at a site in southern Spain with an operational BSRN station (the PSA station). The #hashtdim has been implemented in the framework of the Direct Normal Irradiance Nowcasting methods for optimized operation of concentrating solar technologies (DNICast) project, under the European Union's Seventh Framework Programme for research, technological development and demonstration.
The MM5 Numerical Model to Correct PSInSAR Atmospheric Phase Screen
NASA Astrophysics Data System (ADS)
Perissin, D.; Pichelli, E.; Ferretti, R.; Rocca, F.; Pierdicca, N.
2010-03-01
In this work we present an experimental analysis of the capability of Numerical Weather Prediction (NWP) models such as MM5 to produce high-resolution (1 km-500 m) maps of Integrated Water Vapour (IWV) in the atmosphere, in order to mitigate the well-known disturbances that affect the radar signal while travelling from the sensor to the ground and back. Experiments have been conducted over the area surrounding Rome using ERS data acquired during the three-day phase in 1994 and using Envisat data acquired in recent years. By means of the PS technique, the SAR data have been processed and the Atmospheric Phase Screen (APS) of the slave images with respect to a reference master has been extracted. MM5 IWV maps have a much lower resolution than the PSInSAR APSs: the turbulent term of the atmospheric vapour field cannot be well resolved by MM5, at least with the low-resolution ECMWF inputs. However, the vapour distribution term that depends on the local topography has been found to be in good agreement.
Sparse synthetic aperture with Fresnel elements (S-SAFE) using digital incoherent holograms
Kashter, Yuval; Rivenson, Yair; Stern, Adrian; Rosen, Joseph
2015-01-01
Creating a large-scale synthetic aperture makes it possible to break the resolution boundaries dictated by the wave nature of light in common optical systems. However, its implementation is challenging, since the generation of a large, continuous mosaic synthetic aperture composed of many patterns is complicated in terms of both phase matching and time-multiplexing duration. In this study we present an advanced configuration for an incoherent holographic imaging system with super resolution qualities that creates a partial synthetic aperture. The new system, termed sparse synthetic aperture with Fresnel elements (S-SAFE), enables significantly decreasing the number of the recorded elements, and it is free from positional constraints on their location. Additionally, in order to obtain the best image quality we propose an optimal mosaicking structure derived on the basis of physical and numerical considerations, and introduce three reconstruction approaches which are compared and discussed. The super-resolution capabilities of the proposed scheme and its limitations are analyzed, numerically simulated and experimentally demonstrated. PMID:26367947
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
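The core of the reduced-precision experiment can be sketched in a few lines: integrate the same Lorenz '96 tendency in double and half precision and measure the rounding-induced divergence. This is a minimal single-tier illustration, not the paper's three-tier system; the time step, forcing, and forward-Euler integrator are assumptions chosen for brevity.

```python
import numpy as np

def lorenz96_step(x, dt=0.005, F=8.0, dtype=np.float64):
    """One forward-Euler step of the single-tier Lorenz '96 system,
    carried out entirely in the requested floating-point precision."""
    x = x.astype(dtype)
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (cyclic indices)
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return (x + dtype(dt) * dxdt).astype(dtype)

# Integrate the same initial state in 64-bit and 16-bit precision and
# measure the rounding-induced divergence between the trajectories.
rng = np.random.default_rng(0)
x0 = 8.0 + rng.standard_normal(40)
x64, x16 = x0.copy(), x0.copy()
for _ in range(100):
    x64 = lorenz96_step(x64, dtype=np.float64)
    x16 = lorenz96_step(x16, dtype=np.float16)
drift = np.max(np.abs(x64 - x16.astype(np.float64)))
```

In a multi-tier version, only the small-scale tier would be demoted to `np.float16` (a software stand-in for 16-bit hardware), leaving the large-scale variables in higher precision.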
Preliminary validation of WRF model in two Arctic fjords, Hornsund and Porsanger
NASA Astrophysics Data System (ADS)
Aniskiewicz, Paulina; Stramska, Małgorzata
2017-04-01
Our research is focused on the development of an efficient modeling system for Arctic fjords. This tool should include high-resolution meteorological data derived using a downscaling approach. In this presentation we focus on high-spatial-resolution modeling of the meteorological conditions in two Arctic fjords: Hornsund (H), located in the western part of the Svalbard archipelago, and Porsanger (P), located in the coastal waters of the Barents Sea. The atmospheric downscaling is based on the Weather Research and Forecasting model (WRF, www.wrf-model.org) with a polar stereographic projection. We have created two parent domains with grid point distances of about 3.2 km (P) and 3.0 km (H), each with nested domains of almost 5 times higher resolution than the parent domains. We tested the impact of the model's spatial resolution on the derived meteorological quantities. For both fjords the input topography data resolution is 30 arc sec. To validate the results we have used meteorological data from the Norwegian Meteorological Institute for the stations Lakselv (L) and Honningsvåg (Ho), located in the inner and outer parts of the Porsanger fjord, as well as from a station in the outer part of the Hornsund fjord. We have estimated coefficients of determination (r2), statistical errors (St), and systematic errors (Sy) between measured and modelled air temperature and wind speed at each station. This approach will allow us to create high-resolution, spatially variable meteorological fields that will serve as forcing for numerical models of the fjords. We will investigate the role of different meteorological quantities (e.g., wind, solar insolation, precipitation) in hydrographic processes in fjords. The project has been financed from the funds of the Leading National Research Centre (KNOW) received by the Centre for Polar Studies for the period 2014-2018. This work was also funded by the Norway Grants (NCBR contract No. 201985, project NORDFLUX).
Partial support comes from the Institute of Oceanology (IO PAN).
Scalar excursions in large-eddy simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matheou, Georgios; Dimotakis, Paul E.
2016-08-31
Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution: the maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (the sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.
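The two excursion statistics tracked in this study - the maximum overshoot beyond the physical bounds and the volume fraction of cells outside them - are straightforward to diagnose on any scalar field. A minimal sketch, where the bounds, the synthetic field, and the function name are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def excursion_stats(c, c_min=0.0, c_max=1.0):
    """Diagnose unphysical excursions of a passive scalar field.

    Returns the maximum overshoot beyond the physical bounds and the
    volume fraction of grid cells lying outside [c_min, c_max].
    """
    over = np.maximum(c - c_max, 0.0)
    under = np.maximum(c_min - c, 0.0)
    max_excursion = max(over.max(), under.max())
    outside_fraction = np.mean((c > c_max) | (c < c_min))
    return max_excursion, outside_fraction

# A synthetic 1-D field with dispersive ringing near a sharp interface,
# mimicking the over/undershoots produced by non-dissipative schemes.
x = np.linspace(0.0, 1.0, 1000)
c = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.01)) + 0.02 * np.sin(200.0 * x)
m, f = excursion_stats(c)
```

In an LES post-processing pipeline the same two numbers would be accumulated per snapshot and plotted against grid resolution, as done in the study.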
NASA Astrophysics Data System (ADS)
Kumkar, Yogesh V.; Sen, P. N.; Chaudhari, Hemankumar S.; Oh, Jai-Ho
2018-02-01
In this paper, an attempt has been made to conduct a numerical experiment with the high-resolution global model GME to predict the tropical storms in the North Indian Ocean during the year 2007. Numerical integrations using the icosahedral-hexagonal grid point global model GME were performed to study the evolution of the tropical cyclones Akash, Gonu, Yemyin and Sidr over the North Indian Ocean during 2007. The GME forecasts underestimate cyclone intensity, but the model captures the evolution of intensity, especially the weakening during landfall, which is primarily due to the cutoff of the water vapor supply in the boundary layer as cyclones approach the coastal region. A series of numerical simulations of tropical cyclones has been performed with GME to examine the model's capability in predicting the intensity and track of the cyclones. The model performance is evaluated by calculating root mean square errors of the cyclone tracks.
The future of EUV lithography: enabling Moore's Law in the next decade
NASA Astrophysics Data System (ADS)
Pirati, Alberto; van Schoot, Jan; Troost, Kars; van Ballegoij, Rob; Krabbendam, Peter; Stoeldraijer, Judon; Loopstra, Erik; Benschop, Jos; Finders, Jo; Meiling, Hans; van Setten, Eelco; Mika, Niclas; Dredonx, Jeannot; Stamm, Uwe; Kneer, Bernhard; Thuering, Bernd; Kaiser, Winfried; Heil, Tilmann; Migura, Sascha
2017-03-01
While EUV systems equipped with 0.33 numerical aperture (NA) lenses are readying to start volume manufacturing, ASML and Zeiss are ramping up their development activities on an EUV exposure tool with an NA greater than 0.5. The purpose of this scanner, targeting a resolution of 8 nm, is to extend Moore's law throughout the next decade. A novel anamorphic lens design has been developed to provide the required NA; this lens will be paired with new, faster stages and more accurate sensors, meeting the economic requirements of Moore's law as well as the tight focus and overlay control needed for future process nodes. The tighter focus and overlay control budgets, as well as the anamorphic optics, will drive innovations in imaging and OPC modelling, and possibly in metrology concepts. Furthermore, advances in resist and mask technology will be required to image lithography features with less than 10 nm resolution. This paper presents an overview of the key technology innovations and infrastructure requirements for the next generation of EUV systems.
NASA Technical Reports Server (NTRS)
Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)
1993-01-01
The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.
3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"
NASA Technical Reports Server (NTRS)
Douglass, A.
2005-01-01
The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
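The scenario look-up at the heart of this approach - mapping an offshore observation onto the pre-computed inshore response of the closest archived model scenario - can be sketched as follows. The nearest-neighbour distance metric, normalisation, and toy transfer relation are assumptions for illustration, not the published method:

```python
import numpy as np

def nearest_scenario(offshore_obs, scenario_params, inshore_heights):
    """Return the pre-computed inshore wave height of the archived model
    scenario whose offshore (height, period, direction) parameters lie
    closest to the observation, in normalised parameter space."""
    # Normalise so height, period, and direction contribute comparably.
    scale = scenario_params.std(axis=0)
    d = np.linalg.norm((scenario_params - offshore_obs) / scale, axis=1)
    return inshore_heights[np.argmin(d)]

# Toy archive: 50 scenarios of (Hs [m], Tp [s], direction [deg]),
# each with a stored inshore significant wave height.
rng = np.random.default_rng(1)
params = np.column_stack([rng.uniform(0.5, 6.0, 50),
                          rng.uniform(4.0, 14.0, 50),
                          rng.uniform(0.0, 360.0, 50)])
inshore = 0.7 * params[:, 0]   # placeholder offshore-to-inshore relation
hs = nearest_scenario(np.array([2.0, 8.0, 180.0]), params, inshore)
```

Repeating the look-up for every time step of an offshore record yields the continuous inshore time-series, with confidence limits drawn from the spread of oceanographic parameters within the matched scenarios.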
NASA Astrophysics Data System (ADS)
Ramsey, M.; Dehn, J.; Wessels, R.; Byrnes, J.; Duda, K.; Maldonado, L.; Dwyer, J.
2004-12-01
Numerous government agencies and university partnerships are currently utilizing orbital instruments with high-temporal/low-spatial resolution (e.g. MODIS, AVHRR) to monitor hazards. These hazards are varied and include both natural (volcanic eruptions, severe weather, wildfires, earthquake damage) and anthropogenic (environmental damage, urban terrorism). Although monitoring a hazardous situation is critical, a key strategy of NASA's Earth science program is to develop a scientific understanding of the Earth system and its responses to changes, as well as to improve prediction of hazard onset. In order to develop a quantitative scientific basis from which to model transient geological and climatological hazards, much higher spatial/spectral resolution datasets are required. Such datasets are sparse, currently available from certain government (e.g. ASTER, Hyperion) and commercial (e.g. IKONOS, QuickBird) instruments. However, only ASTER has the capability to acquire high spatial resolution data from the visible to thermal infrared (TIR) wavelength region in conjunction with digital elevation model (DEM) generation. These capabilities are particularly useful for numerous aspects of volcanic remote sensing. For example, multispectral TIR data are critical for monitoring low temperature anomalies and mapping both chemical and textural variations on volcanic surfaces. Because ASTER data are scheduled in advance and the raw data are sent to Japan for calibration processing, rapid acquisition of hazard observations becomes problematic. However, a "rapid response" mode does exist for ASTER data scheduling and processing, but its availability is limited and requires significant human interaction. A newly-funded NASA ASTER science team project seeks to link this ASTER rapid response pathway to larger-scale monitoring alerts, which are already in place and in use by other organizations.
By refining the initial event detection criteria and improving interfaces between these organizations and the ASTER project, we expect to minimize lag time and use existing monitoring tools as triggers for the emergency response of ASTER. The first phase of this project will be integrated into the Alaska Volcano Observatory's current near-real-time volcanic monitoring system, which relies on high temporal/low spatial resolution orbital data. This synergy will allow small-scale activity to be targeted for science and response, and a calibration baseline between the sensors to be established. If successful, this will be the first time that high spatial resolution, multispectral satellite data will be routinely scheduled, acquired, and analyzed in a "rapid response" mode within an existing hazard monitoring framework. Initial testing of this system is now underway using data from previous eruptions in the north Pacific region, and modifications to the rapid data flow procedure within the ASTER science and support structure have begun.
The sixteen to forty micron spectroscopy from the NASA Lear jet
NASA Technical Reports Server (NTRS)
Houck, J. R.
1982-01-01
Two cryogenically cooled infrared grating spectrometers were designed, fabricated and used on the NASA Lear Jet Observatory. The first spectrometer was used to measure continuum sources such as dust in H II regions, the galactic center and the thermal emission from Mars, Jupiter, Saturn, and Venus over the 16 to 40 micron spectral range. The second spectrometer had higher resolution and was used to measure ionic spectral lines in H II regions (S III at 18.7 microns). It was later used extensively on the NASA C-141 Observatory to make observations of numerous objects including H II regions, planetary nebulae, stars with circumstellar shells, the galactic center and extragalactic objects. The spectrometers are described, including the major innovations, along with a list of the scientific contributions.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Li, Shuguang; Liu, Qiang; Feng, Xinxing; Zhang, Shuhuan; Wang, Yujun; Wu, Junjun
2018-07-01
A groove micro-structure optical fiber refractive index sensor with a nanoscale gold film based on surface plasmon resonance (SPR) is proposed and analyzed by the finite element method (FEM). Numerical results show that the average sensitivity is 15,933 nm/refractive index unit (RIU) for analyte refractive indices ranging from 1.40 to 1.43, the maximum sensitivity is 28,600 nm/RIU, and the resolution of the sensor is 3.50 × 10^-8 RIU. The groove micro-structure sensor is a modification of the D-shaped fiber sensor; compared with the conventional D-shaped fiber sensor it has a higher sensitivity, and it is easier to fabricate than the traditional SPR sensor.
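The quoted figures are mutually consistent under the standard definitions: spectral sensitivity is S = Δλ_res/Δn, and resolution follows from the smallest resolvable wavelength shift, Δn_min = Δλ_min/S. A short check, where the implied total resonance shift and the 1 pm spectrometer resolution are assumptions inferred from the reported numbers:

```python
# Average sensitivity over the full index range: S_avg = Δλ / Δn.
delta_n = 1.43 - 1.40                 # analyte index range from the abstract
lambda_shift_nm = 477.99              # implied total resonance shift (assumed)
S_avg = lambda_shift_nm / delta_n     # ≈ 15,933 nm/RIU, the reported average

# Resolution from the peak sensitivity and an assumed 1 pm wavelength
# resolution of the interrogating spectrometer.
S_max = 28600.0                       # nm/RIU, reported maximum sensitivity
delta_lambda_min = 1e-3               # nm (1 pm, a common assumption)
delta_n_min = delta_lambda_min / S_max  # ≈ 3.50e-8 RIU, the reported resolution
```

This confirms the stated resolution corresponds to the maximum (not average) sensitivity combined with a 1 pm wavelength-interrogation limit.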
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-01-01
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345
Modal analysis of the ultrahigh finesse Haroche QED cavity
NASA Astrophysics Data System (ADS)
Marsic, Nicolas; De Gersem, Herbert; Demésy, Guillaume; Nicolet, André; Geuzaine, Christophe
2018-04-01
In this paper, we study a high-order finite element approach to simulate an ultrahigh finesse Fabry–Pérot superconducting open resonator for cavity quantum electrodynamics. Because of its high quality factor, finding a numerically converged value of the damping time requires an extremely high spatial resolution. Therefore, the use of high-order simulation techniques appears appropriate. This paper considers idealized mirrors (e.g., no surface roughness and perfect geometry), and shows that under these assumptions a damping time much longer than what is available in experimental measurements could be achieved. In addition, this work shows that both high-order discretizations of the governing equations and high-order representations of the curved geometry are mandatory for the computation of the damping time of such cavities.
Statistical Limits to Super Resolution
NASA Astrophysics Data System (ADS)
Lucy, L. B.
1992-08-01
The limits imposed by photon statistics on the degree to which Rayleigh's resolution limit for diffraction-limited images can be surpassed by applying image restoration techniques are investigated. An approximate statistical theory is given for the number of detected photons required in the image of an unresolved pair of equal point sources in order that its information content allows in principle resolution by restoration. This theory is confirmed by numerical restoration experiments on synthetic images, and quantitative limits are presented for restoration of diffraction-limited images formed by slit and circular apertures.
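The restoration experiments can be emulated with the iterative deconvolution scheme widely associated with this author (Richardson–Lucy); that this exact algorithm was used is an assumption. A minimal, noiseless 1-D sketch with two equal point sources separated by less than the PSF width:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=50):
    """1-D Richardson-Lucy restoration: multiplicative updates that
    preserve non-negativity while sharpening the blurred image."""
    psf_flip = psf[::-1]
    est = np.full_like(image, image.mean())
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)   # guard against divide-by-zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# Two equal point sources closer together than the Gaussian PSF width.
x = np.arange(64)
psf = np.exp(-0.5 * ((x - 32) / 4.0) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[29] = truth[35] = 100.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=200)
```

In the statistical setting of the abstract, Poisson noise would be added to `blurred` at a chosen photon count, and one would ask how many detected photons are needed before the restored peaks become separable.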
Dunes in the Solar System : New Perspectives, Analogs and Challenges
NASA Astrophysics Data System (ADS)
Lorenz, R. D.
2016-12-01
These are exciting times for planetary aeolian research. New paradigms opened up by numerical modeling backed by laboratory and field experimentation now permit a much higher-fidelity mapping of dune morphology to wind regime and sediment characteristics. The identification of the 'fingering mode' of bedform growth, and its association with limited sediment supply, now brings a systematic explanation of what was once bewildering complexity and opens the way to decoding more environmental detail from the landscape than was possible before. Much of this model work has been developed in parallel with, if not stimulated by, the discovery of vast fields of sand dunes on Titan a decade ago, and datasets of higher resolution and wider coverage on Mars and Earth. The pace of relevant discoveries has accelerated, with bedforms observed on comet 67P/Churyumov-Gerasimenko, periodic structures on Pluto's landscape, and a possibly new class of bedform discovered by the Curiosity rover's close inspection of the Bagnold dunes on Mars - all in the last two years! These features have all stimulated examination of transport physics at the particle and bedform scale, especially in rarefied conditions. At the global scale, Titan's dune patterns have been broadly explained, and hint at Croll-Milankovich climate cycles. Yet the origin of the sand remains a mystery. Much work remains to understand regional transport on all worlds, which can be addressed with mesoscale and CFD models. Observationally, the greatest opportunity for progress will come with higher resolution views of the surfaces of Venus and Titan. Venus, a world on which aeolian transport was observed in only a couple of hours of surface observation, is in particular long overdue for further exploration. In all these cases, terrestrial analogs provide valuable insights.
Coincidental match of numerical simulation and physics
NASA Astrophysics Data System (ADS)
Pierre, B.; Gudmundsson, J. S.
2010-08-01
Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Therefore, accurate predictions of rapid pressure transients in pipelines using numerical simulations are critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
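The method of characteristics referred to above advances head H and flow Q along the C+ and C- characteristics meeting at each interior grid node. The sketch below uses the classical steady-friction form only (the unsteady friction terms discussed in the abstract are omitted), with illustrative pipe parameters:

```python
import numpy as np

def moc_step(H, Q, a=1200.0, g=9.81, A=0.01, f=0.02, D=0.1, dx=10.0):
    """One interior update of the classical method of characteristics
    for water hammer, with steady friction only (a textbook sketch).

    H: piezometric head [m], Q: flow rate [m^3/s] on a uniform grid;
    a: wave speed, A: pipe area, f: Darcy friction factor, D: diameter.
    """
    B = a / (g * A)                      # characteristic impedance
    R = f * dx / (2.0 * g * D * A**2)    # steady friction coefficient
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, len(H) - 1):
        # C+ characteristic arriving from node i-1, C- from node i+1.
        Cp = H[i-1] + B * Q[i-1] - R * Q[i-1] * abs(Q[i-1])
        Cm = H[i+1] - B * Q[i+1] + R * Q[i+1] * abs(Q[i+1])
        Hn[i] = 0.5 * (Cp + Cm)
        Qn[i] = (Cp - Cm) / (2.0 * B)
    return Hn, Qn

# Uniform initial state: no transient should develop in the interior,
# only a slight frictional decay of the flow.
H0, Q0 = np.full(11, 100.0), np.full(11, 0.001)
H1, Q1 = moc_step(H0, Q0)
```

Boundary nodes (reservoir, valve) would be closed with their own C+ or C- relation; a valve closure at one end is what launches the water-hammer wave the abstract analyses.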
Cousins, Matthew M; Laeyendecker, Oliver; Beauchamp, Geetha; Brookmeyer, Ronald; Towler, William I; Hudelson, Sarah E; Khaki, Leila; Koblin, Beryl; Chesney, Margaret; Moore, Richard D; Kelen, Gabor D; Coates, Thomas; Celum, Connie; Buchbinder, Susan P; Seage, George R; Quinn, Thomas C; Donnell, Deborah; Eshleman, Susan H
2011-01-01
Cross-sectional assessment of HIV incidence relies on laboratory methods to discriminate between recent and non-recent HIV infection. Because HIV diversifies over time in infected individuals, HIV diversity may serve as a biomarker for assessing HIV incidence. We used a high resolution melting (HRM) diversity assay to compare HIV diversity in adults with different stages of HIV infection. This assay provides a single numeric HRM score that reflects the level of genetic diversity of HIV in a sample from an infected individual. HIV diversity was measured in 203 adults: 20 with acute HIV infection (RNA positive, antibody negative), 116 with recent HIV infection (tested a median of 189 days after a previous negative HIV test, range 14-540 days), and 67 with non-recent HIV infection (HIV infected >2 years). HRM scores were generated for two regions in gag, one region in pol, and three regions in env. Median HRM scores were higher in non-recent infection than in recent infection for all six regions tested. In multivariate models, higher HRM scores in three of the six regions were independently associated with non-recent HIV infection. The HRM diversity assay provides a simple, scalable method for measuring HIV diversity. HRM scores, which reflect the genetic diversity in a viral population, may be useful biomarkers for evaluation of HIV incidence, particularly if multiple regions of the HIV genome are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Bernhard W.; Mane, Anil U.; Elam, Jeffrey W.
X-ray detectors that combine two-dimensional spatial resolution with a high time resolution are needed in numerous applications of synchrotron radiation. Most detectors with this combination of capabilities are based on semiconductor technology and are therefore limited in size. Furthermore, the time resolution is often realised through rapid time-gating of the acquisition, followed by a slower readout. Here, a detector technology is realised based on relatively inexpensive microchannel plates that uses GHz waveform sampling for a millimeter-scale spatial resolution and better than 100 ps time resolution. The technology is capable of continuous streaming of time- and location-tagged events at rates greater than 10^7 events per cm^2. Time-gating can be used for improved dynamic range.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution, as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error, which result from correcting the surface moisture flux and clipping negative water concentrations, can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction.
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
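The "clipping" fixer mentioned above can be illustrated as a clip-and-rescale step: negative concentrations are zeroed, and the spuriously created mass is borrowed back proportionally from the positive layers so that the mass-weighted column total is unchanged. This is a generic sketch of a mass-conserving fixer, not the E3SM implementation; the function and variable names are hypothetical.

```python
import numpy as np

def clip_and_conserve(q, dp):
    """Clip negative tracer mixing ratios to zero, then rescale the
    remaining positive values so that the mass-weighted column total
    (sum of q * dp) is conserved. q: mixing ratio per layer, dp: layer
    pressure thickness (mass proxy). Assumes the column total is
    non-negative."""
    q = np.asarray(q, dtype=float)
    dp = np.asarray(dp, dtype=float)
    total_before = np.sum(q * dp)
    q_clipped = np.maximum(q, 0.0)
    total_after = np.sum(q_clipped * dp)
    if total_after > 0.0:
        # Borrow the created mass back proportionally from positive layers.
        q_clipped *= total_before / total_after
    return q_clipped
```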
3D numerical simulations of multiphase continental rifting
NASA Astrophysics Data System (ADS)
Naliboff, J.; Glerum, A.; Brune, S.
2017-12-01
Observations of rifted margin architecture suggest continental breakup occurs through multiple phases of extension with distinct styles of deformation. The initial rifting stages are often characterized by slow extension rates and distributed normal faulting in the upper crust decoupled from deformation in the lower crust and mantle lithosphere. Further rifting marks a transition to higher extension rates and coupling between the crust and mantle lithosphere, with deformation typically focused along large-scale detachment faults. Significantly, recent detailed reconstructions and high-resolution 2D numerical simulations suggest that rather than remaining focused on a single long-lived detachment fault, deformation in this phase may progress toward lithospheric breakup through a complex process of fault interaction and development. The numerical simulations also suggest that an initial phase of distributed normal faulting can play a key role in the development of these complex fault networks and the resulting finite deformation patterns. Motivated by these findings, we will present 3D numerical simulations of continental rifting that examine the role of temporal increases in extension velocity on rifted margin structure. The numerical simulations are developed with the massively parallel finite-element code ASPECT. While originally designed to model mantle convection using advanced solvers and adaptive mesh refinement techniques, ASPECT has been extended to model visco-plastic deformation that combines a Drucker-Prager yield criterion with non-linear dislocation and diffusion creep. To promote deformation localization, the internal friction angle and cohesion weaken as a function of accumulated plastic strain.
Rather than prescribing a single zone of weakness to initiate deformation, an initial random perturbation of the plastic strain field combined with rapid strain weakening produces distributed normal faulting at relatively slow rates of extension in both 2D and 3D simulations. Our presentation will focus on both the numerical assumptions required to produce these results and variations in 3D rifted margin architecture arising from a transition from slow to rapid rates of extension.
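The strain-weakening rule described above (friction angle and cohesion decreasing with accumulated plastic strain) is commonly implemented as a linear ramp between two strain thresholds. A minimal sketch, with illustrative values rather than those used in the simulations:

```python
def weakened_friction(strain, phi_init=30.0, phi_final=15.0,
                      strain_start=0.5, strain_end=1.5):
    """Linearly reduce the internal friction angle (degrees) as the
    accumulated plastic strain grows; a common localization device.
    All thresholds and angles here are illustrative assumptions."""
    if strain <= strain_start:
        return phi_init
    if strain >= strain_end:
        return phi_final
    frac = (strain - strain_start) / (strain_end - strain_start)
    return phi_init + frac * (phi_final - phi_init)
```

The same ramp would be applied to cohesion; a random initial perturbation of the strain field then seeds distributed faulting, as described in the abstract.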
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. 
We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
Mariappan, Leo; Hu, Gang; He, Bin
2014-01-01
Purpose: Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on the acoustic measurements of Lorentz force induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. Methods: In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Results: The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with ∼1.5 mm spatial resolution, corresponding to the imaging system's 500 kHz ultrasound frequency. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. Conclusions: The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction. PMID:24506649
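Estimating resolution from the full width at half maximum of a sampled point spread function, as done in the simulations above, can be sketched numerically with linear interpolation of the half-maximum crossings (a generic sketch, not the authors' code):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled point spread function y(x),
    using linear interpolation at the two half-maximum crossings."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the left and right crossings (fall back to the sample
    # itself if the peak touches the edge of the grid).
    x_left = (np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
              if i0 > 0 else x[i0])
    x_right = (np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
               if i1 < len(x) - 1 else x[i1])
    return x_right - x_left
```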
Numerical computation of linear instability of detonations
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry; Kasimov, Aslan
2017-11-01
We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
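The Dynamic Mode Decomposition step, which turns a sequence of perturbation snapshots into complex eigenvalues (growth rates and oscillation frequencies for the dispersion relation), can be sketched as follows. This is the standard exact-DMD algorithm, not the authors' specific implementation:

```python
import numpy as np

def dmd_eigenvalues(snapshots, dt, rank=None):
    """Exact DMD: given a matrix whose columns are successive state
    snapshots separated by dt, return continuous-time eigenvalues;
    real parts are growth rates, imaginary parts frequencies."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Project the one-step propagator onto the leading POD modes.
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.log(np.linalg.eigvals(Atilde).astype(complex)) / dt
```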
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.
2013-05-01
El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there have been 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. On the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains.
The results of the empirical approximation used for the whole country are similar to the results obtained with the high-resolution numerical modelling, making it a good and fast approximation for obtaining preliminary tsunami hazard estimations. In Acajutla and La Libertad, both important tourism centres under active development, flooding depths between 2 and 4 m are frequent, accompanied by high and very high person instability hazard. Inside the Gulf of Fonseca the impact of the waves is almost negligible.
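As a minimal illustration of the shallow-water machinery underlying such tsunami simulations, one forward-backward step of the 1-D linear shallow water equations on a staggered grid with closed boundaries might look like this. This is a sketch only; the study uses a 2-D hybrid finite-differences-finite-volumes model with non-linear terms.

```python
import numpy as np

def lsw_step(eta, u, h, dx, dt, g=9.81):
    """One forward-backward step of the 1-D linear shallow water
    equations on a staggered grid (eta at N cell centres, u at the N+1
    interfaces; the first and last interfaces are closed walls):
        d(eta)/dt = -h du/dx,    du/dt = -g d(eta)/dx."""
    eta = eta - dt / dx * h * np.diff(u)                 # continuity
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] - dt / dx * g * np.diff(eta)   # momentum (uses updated eta)
    return eta, u_new
```

The forward-backward pairing (momentum uses the updated elevation) is a standard stable choice for sub-unity Courant numbers.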
High-resolution modeling of a marine ecosystem using the FRESCO hydroecological model
NASA Astrophysics Data System (ADS)
Zalesny, V. B.; Tamsalu, R.
2009-02-01
The FRESCO (Finnish Russian Estonian Cooperation) mathematical model describing a marine hydroecosystem is presented. The methodology of the numerical solution is based on the method of multicomponent splitting into physical and biological processes, spatial coordinates, etc. The model is used for the reproduction of physical and biological processes in the Baltic Sea. Numerical experiments are performed with different spatial resolutions for four marine basins nested one inside the other: the Baltic Sea, the Gulf of Finland, the Tallinn-Helsinki water area, and Tallinn Bay. Physical processes are described by the equations of nonhydrostatic dynamics, including the k-ω parametrization of turbulence. Biological processes are described by the three-dimensional equations of an aquatic ecosystem with the use of a size-dependent parametrization of biochemical reactions. The main goal of this study is to illustrate the efficiency of the developed numerical technique and to demonstrate the importance of a high spatial resolution for water basins that have complex bottom topography, such as the Baltic Sea. Detailed information about the atmospheric forcing, bottom topography, and coastline is very important for the description of coastal dynamics and specific features of a marine ecosystem. Experiments show that the spatial inhomogeneity of hydroecosystem fields is caused by the combined effect of upwelling, turbulent mixing, surface-wave breaking, and temperature variations, which affect biochemical reactions.
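The multicomponent splitting idea, advancing the physical and biological operators sequentially within one time step, can be illustrated with first-order (Lie) splitting. This schematic uses forward-Euler sub-steps and hypothetical right-hand-side callables; it is not the FRESCO discretization itself.

```python
def split_step(state, dt, physics_rhs, biology_rhs):
    """Advance the state one time step by first-order (Lie) operator
    splitting: a forward-Euler physics sub-step followed by a
    forward-Euler biology sub-step. The right-hand-side callables are
    placeholders for the split operators."""
    state = state + dt * physics_rhs(state)  # physical processes
    state = state + dt * biology_rhs(state)  # biochemical reactions
    return state
```

Splitting lets each sub-problem be solved with a method suited to it (e.g. stiff biochemical kinetics separately from transport), at the cost of a splitting error that is first order in dt for this variant.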
Choi, WooJhon; Baumann, Bernhard; Swanson, Eric A.; Fujimoto, James G.
2012-01-01
We present a numerical approach to extract the dispersion mismatch in ultrahigh-resolution Fourier domain optical coherence tomography (OCT) imaging of the retina. The method draws upon an analogy with a Shack-Hartmann wavefront sensor. By exploiting mathematical similarities between the expressions for aberration in optical imaging and dispersion mismatch in spectral / Fourier domain OCT, Shack-Hartmann principles can be extended from the two-dimensional paraxial wavevector space (or the x-y plane in the spatial domain) to the one-dimensional wavenumber space (or the z-axis in the spatial domain). For OCT imaging of the retina, different retinal layers, such as the retinal nerve fiber layer (RNFL), the photoreceptor inner and outer segment junction (IS/OS), or all the retinal layers near the retinal pigment epithelium (RPE), can be used as point source beacons in the axial direction, analogous to the point source beacons used in conventional two-dimensional Shack-Hartmann wavefront sensors for aberration characterization. Subtleties regarding speckle phenomena in optical imaging, which affect the Shack-Hartmann wavefront sensor used in adaptive optics, also occur analogously in this application. Using this approach and carefully suppressing speckle, the dispersion mismatch in spectral / Fourier domain OCT retinal imaging can be successfully extracted numerically and used for numerical dispersion compensation to generate sharper, ultrahigh-resolution OCT images. PMID:23187353
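Once the dispersion-mismatch coefficients have been extracted, numerical compensation in spectral / Fourier domain OCT typically amounts to multiplying the complex fringe by a conjugate second- and third-order phase before the inverse FFT. A hedged sketch; the coefficient names a2, a3 and the wavenumber grid are illustrative, not the paper's notation:

```python
import numpy as np

def compensate_dispersion(spectrum, k, k0, a2, a3):
    """Apply a second/third-order phase correction to a complex spectral
    fringe prior to the inverse FFT. a2, a3 are the dispersion-mismatch
    coefficients (estimated e.g. by maximizing axial sharpness)."""
    phase = a2 * (k - k0) ** 2 + a3 * (k - k0) ** 3
    return spectrum * np.exp(-1j * phase)
```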
A High-Resolution Godunov Method for Compressible Multi-Material Flow on Overlapping Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banks, J W; Schwendeman, D W; Kapila, A K
2006-02-13
A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry, and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.
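As a scalar analogue of the high-resolution Godunov approach, a first-order Godunov step for the inviscid Burgers equation with the exact Riemann flux can be sketched as follows. The actual solver handles the multi-material Euler equations with an energy correction; this toy problem omits all of that.

```python
import numpy as np

def godunov_burgers_step(u, dx, dt):
    """One first-order Godunov step for u_t + (u^2/2)_x = 0, using the
    exact Riemann solution for the convex flux f(u) = u^2/2 at each
    cell interface. Boundary cells are held fixed (inflow/outflow)."""
    f = lambda v: 0.5 * v * v
    ul, ur = u[:-1], u[1:]
    flux = np.where(ul > ur,
                    np.maximum(f(ul), f(ur)),                  # shock
                    np.where(ul > 0.0, f(ul),
                             np.where(ur < 0.0, f(ur), 0.0)))  # rarefaction
    unew = u.copy()
    unew[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return unew
```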
Aggression, conflict resolution, popularity, and attitude to school in Russian adolescents.
Butovskaya, Marina L; Timentschik, Vera M; Burkova, Valentina N
2007-01-01
The objective of the present study was to examine the effects of aggression and conflict-managing skills on popularity and attitude to school in Russian adolescents. Three types of aggression (physical, verbal, and indirect), constructive conflict resolution, third-party intervention, withdrawal, and victimization were examined using the Peer-Estimated Conflict Behavior (PECOBE) inventory [Bjorkquist and Osterman, 1998]. Also, all respondents rated peer and self-popularity with same-sex classmates and personal attitude to school. The sample consisted of 212 Russian adolescents (101 boys, 111 girls) aged between 11 and 15 years. The findings attest to significant sex differences in aggression and conflict resolution patterns. Boys scored higher on physical and verbal aggression, and girls on indirect aggression. Girls were socially more skillful than boys in the use of peaceful means of conflict resolution (they scored higher on constructive conflict resolution and third-party intervention). The attributional discrepancy index (ADI) scores were negative for all three types of aggression in both sexes. Verbal aggression is apparently more condemned in boys than in girls. ADI scores were positive for constructive conflict resolution and third-party intervention in both genders, being higher in boys. In girls, verbal aggression was positively correlated with popularity. In both sexes, popularity showed a positive correlation with constructive conflict resolution and third-party intervention, and a negative correlation with withdrawal and victimization. Boys who liked school were popular with same-sex peers and scored higher on constructive conflict resolution. Girls who liked school were less aggressive according to peer rating. They also rated higher on conflict resolution and third-party intervention. Physical aggression was related to age. The results are discussed in a cross-cultural perspective. Copyright 2007 Wiley-Liss, Inc.
Chuang, Tzu-Chao; Huang, Hsuan-Hung; Chang, Hing-Chiu; Wu, Ming-Ting
2014-06-01
To achieve better spatial and temporal resolution in dynamic contrast-enhanced MR imaging, the concept of k-space data sharing, or view sharing, can be implemented for PROPELLER acquisition. As found in other view-sharing methods, the loss of high-resolution dynamics is possible for view-sharing PROPELLER (VS-Prop) due to the temporal smoothing effect. The degradation can be more severe when a narrow blade with fewer phase encoding steps is chosen in the acquisition for a higher frame rate. In this study, an iterative algorithm termed pixel-based optimal blade selection (POBS) is proposed to allow spatially dependent selection of the rotating blades, to generate high-resolution dynamic images with minimal reconstruction artifacts. In the reconstruction of VS-Prop, the central k-space, which dominates the image contrast, is provided only by the target blade, with the peripheral k-space contributed by a minimal number of consecutive rotating blades. To reduce the reconstruction artifacts, the set of neighboring blades exhibiting the closest image contrast to the target blade is picked by the POBS algorithm. Numerical simulations and phantom experiments were conducted in this study to investigate the dynamic response and spatial profiles of images generated using the proposed method. In addition, dynamic contrast-enhanced cardiovascular imaging of healthy subjects was performed to demonstrate the feasibility and advantages. The simulation results show that POBS VS-Prop can provide a timely dynamic response to rapid signal change, especially for a small region of interest or with the use of narrow blades. The POBS algorithm also demonstrates its capability to capture nonsimultaneous signal changes over the entire FOV. In addition, both phantom and in vivo experiments show that the temporal smoothing effect can be avoided by means of POBS, leading to a higher wash-in slope of contrast enhancement after the bolus injection.
With the satisfactory reconstruction quality provided by the POBS algorithm, VS-Prop acquisition technique may find useful clinical applications in DCE MR imaging studies where both spatial and temporal resolutions play important roles.
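The blade-selection idea can be caricatured as a search for the window of consecutive blades whose contrast best matches the target blade. The scalar contrast metric and windowing below are illustrative stand-ins for the published pixel-based algorithm, which operates per pixel on the reconstructed images.

```python
import numpy as np

def select_blades(target_contrast, blade_contrasts, n_blades):
    """Return the indices of the window of n_blades consecutive blades
    whose mean contrast is closest to the target blade's contrast.
    The contrast values are assumed to be precomputed scalars."""
    best, best_err = None, np.inf
    for start in range(len(blade_contrasts) - n_blades + 1):
        window = blade_contrasts[start:start + n_blades]
        err = abs(np.mean(window) - target_contrast)
        if err < best_err:
            best, best_err = list(range(start, start + n_blades)), err
    return best
```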
GPS Tomography: Water Vapour Monitoring for Germany
NASA Astrophysics Data System (ADS)
Bender, Michael; Dick, Galina; Wickert, Jens; Raabe, Armin
2010-05-01
Ground based GPS atmosphere sounding provides numerous atmospheric quantities with a high temporal resolution under all weather conditions. The spatial resolution of the GPS observations is mainly given by the number of GNSS satellites and GPS ground stations. The latter has increased considerably in the last few years, leading to more reliable and better resolved GPS products. New techniques such as GPS water vapour tomography gain increased significance as data from large and dense GPS networks become available. GPS tomography has the potential to operationally provide spatially resolved fields of different quantities, e.g. the humidity or wet refractivity required for meteorological applications, or the refraction index, which is important for several space based observations and for precise positioning. The number of German GPS stations operationally processed by the GFZ in Potsdam was recently enlarged to more than 300. About 28000 IWV observations and more than 1.4 million slant total delay observations are now available per day, with a temporal resolution of 15 min and 2.5 min, respectively. The extended network leads not only to a higher spatial resolution of the tomographically reconstructed 3D fields but also to a much higher stability of the inversion process and thus to an increased quality of the results. Under these improved conditions the GPS tomography can operate continuously over several days or weeks without applying overly tight constraints. Time series of tomographically reconstructed humidity fields will be shown and different initialisation strategies will be discussed: initialisation with a simple exponential profile, with a 3D humidity field extrapolated from synoptic observations, and with the result of the preceding reconstruction. The results are compared to tomographic reconstructions initialised with COSMO-DE analyses and to the corresponding model fields.
The inversion can be further stabilised by making use of independent adequately weighted observations, such as synoptic observations or IWV data. The impact of such observations on the quality of the tomographic reconstruction will be discussed together with different alternatives for weighting different types of observations.
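The tomographic inversion with an initialisation field can be sketched as Tikhonov-regularized least squares, in which the regularization pulls the solution toward the chosen first guess. This is a schematic of the inversion idea, not GFZ's operational code; the matrix and variable names are illustrative.

```python
import numpy as np

def reconstruct_refractivity(A, y, x0, alpha):
    """Tikhonov-regularized least squares for GNSS tomography:
    minimize ||A x - y||^2 + alpha ||x - x0||^2, where each row of A
    holds the path lengths of one slant ray through the voxels, y the
    slant delays, and x0 the initialisation field (e.g. an exponential
    profile)."""
    n = A.shape[1]
    lhs = A.T @ A + alpha * np.eye(n)
    rhs = A.T @ y + alpha * x0
    return np.linalg.solve(lhs, rhs)
```

A larger alpha stabilises poorly sampled voxels at the cost of biasing them toward the first guess, which is the trade-off behind the "too tight constraints" remark above.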
Can we trust climate models to realistically represent severe European windstorms?
NASA Astrophysics Data System (ADS)
Trzeciak, Tomasz M.; Knippertz, Peter; Owen, Jennifer S. R.
2014-05-01
Despite the enormous advances made in climate change research, robust projections of the position and the strength of the North Atlantic storm track are not yet possible. In particular with respect to damaging windstorms, this uncertainty poses enormous risks to European societies and the (re)insurance industry. Previous studies have addressed the problem of climate model uncertainty through statistical comparisons of simulations of the current climate with (re-)analysis data and found that there is large disagreement between different climate models, different ensemble members of the same model and observed climatologies of intense cyclones. One weakness of such statistical evaluations lies in the difficulty of separating influences of the climate model's basic state from the influence of fast processes on the development of the most intense storms. Compensating effects between the two might conceal errors and suggest higher reliability than there really is. A possible way to separate influences of fast and slow processes in climate projections is through a "seamless" approach of hindcasting historical, severe storms with climate models started from predefined initial conditions and run in a numerical weather prediction mode on the time scale of several days. Such a cost-effective case-study approach, which draws from and expands on the concepts of the Transpose-AMIP initiative, has recently been undertaken in the SEAMSEW project at the University of Leeds, funded by the AXA Research Fund. Key results from this work, focusing on 20 historical storms and using different lead times and horizontal and vertical resolutions, include: (a) Tracks are represented reasonably well by most hindcasts. (b) Sensitivity to vertical resolution is low.
(c) There is a systematic underprediction of cyclone depth at a coarse resolution of T63, but surprisingly no systematic bias is found for higher-resolution runs using T127, showing that climate models are in fact able to represent the storm dynamics well if given the correct initial conditions. Combined with a too low number of deep cyclones in many climate models, this points to an insufficient number of storm-prone initial conditions in free-running climate simulations. This question will be addressed in future work.
NASA Astrophysics Data System (ADS)
Rhodes, R. C.; Barron, C. N.; Fox, D. N.; Smedstad, L. F.
2001-12-01
A global implementation of the Navy Coastal Ocean Model (NCOM), developed by the Naval Research Laboratory (NRL) at Stennis Space Center is currently running in real-time and is planned for transition to the Naval Oceanographic Office (NAVOCEANO) in 2002. The model encompasses the open ocean to 5 m depth on a curvilinear global model grid with 1/8 degree grid spacing at 45N, extending from 80 S to a complete arctic cap with grid singularities mapped into Canada and Russia. Vertically, the model employs 41 sigma-z levels with sigma in the upper-ocean and coastal regions and z in the deeper ocean. The Navy Operational Global Atmospheric Prediction System (NOGAPS) provides 6-hourly wind stresses and heat fluxes for forcing, while the operational Modular Ocean Data Assimilation System (MODAS) provides the background climatology and tools for data pre-processing. Operationally available sea surface temperature (SST) and altimetry (SSH) data are assimilated into the NAVOCEANO global 1/8 degree MODAS 2-D analysis and the 1/16 degree Navy Layered Ocean Model (NLOM) to provide analyses and forecasts of SSH and SST. The 2-D SSH and SST nowcast fields are used as input to the MODAS synthetic climatology database to yield three-dimensional fields of synthetic temperature and salinity for assimilation into global NCOM. The synthetic profiles are weighted higher at depth in the assimilation process to allow the numerical model to properly develop the mixed-layer structure driven by the real-time atmospheric forcing. Global NCOM nowcasts and forecasts provide a valuable resource for rapid response to the varied and often unpredictable operational requests for 3-dimensional fields of ocean temperature, salinity, and currents. In some cases, the resolution of the global product is sufficient for guidance. 
In cases requiring higher resolution, the global product offers a quick overview of local circulation and provides initial and boundary conditions for higher resolution coastal models that may be more specialized for a particular task or domain. Nowcast and forecast results are presented globally and in selected areas of interest and model results are compared with historical and concurrent observations and analyses.
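The depth-dependent weighting of synthetic profiles described above can be illustrated with a simple blending function whose weight grows with depth, so the mixed layer remains governed by the model's atmospheric forcing. The weight shape and length scale are assumptions for illustration, not the MODAS/NCOM assimilation scheme.

```python
import numpy as np

def blend_profiles(model_t, synthetic_t, z, z_scale=200.0):
    """Blend a model temperature profile with a synthetic profile,
    weighting the synthetic value more at depth. z is depth in metres
    (positive down); z_scale sets how quickly the synthetic dominates."""
    w = 1.0 - np.exp(-np.asarray(z, float) / z_scale)  # ~0 at surface, ->1 at depth
    return (1.0 - w) * np.asarray(model_t, float) + w * np.asarray(synthetic_t, float)
```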
CRSP, numerical results for an electrical resistivity array to detect underground cavities
NASA Astrophysics Data System (ADS)
Amini, Amin; Ramazi, Hamidreza
2017-03-01
This paper is devoted to the application of the Combined Resistivity Sounding and Profiling (CRSP) electrode configuration to detect underground cavities. Electrical resistivity surveying is among the most popular geophysical methods across a wide range of geosciences owing to its nondestructive and economical nature. Several types of electrode arrays are applied to detect different targets. On the one hand, the electrode array plays an important role in determining the output resolution and depth of investigation in all resistivity surveys; on the other hand, each array has its own merits and demerits in terms of depth of investigation, signal strength, and sensitivity to resistivity variations. In this article several synthetic models, simulating different conditions of cavity occurrence, were used to examine the responses of some conventional electrode arrays and also of the CRSP array. The results showed that the CRSP electrode configuration can detect the desired targets with higher resolution than some other array types. A field case study is also discussed, in which an electrical resistivity survey was conducted at the Abshenasan expressway (Tehran, Iran) U-turn bridge site to detect potential cavities and/or loose filling materials. The results led to the detection of an aqueduct tunnel passing beneath the study area.
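For reference, the apparent resistivity measured by any four-electrode array follows from the geometric factor of the electrode positions; this generic half-space formula applies to CRSP and conventional arrays alike (distances in metres, AM/BM/AN/BN from current electrodes A, B to potential electrodes M, N):

```python
import numpy as np

def apparent_resistivity(v_over_i, am, bm, an, bn):
    """Apparent resistivity rho_a = K * (Delta V / I) for a general
    four-electrode array on a homogeneous half-space, with geometric
    factor K = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN)."""
    k = 2.0 * np.pi / (1.0 / am - 1.0 / bm - 1.0 / an + 1.0 / bn)
    return k * v_over_i
```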
NASA Astrophysics Data System (ADS)
Teixeira, J. C.; Carvalho, A. C.; Carvalho, M. J.; Luna, T.; Rocha, A.
2014-08-01
The advances in satellite technology in recent years have made feasible the acquisition of high-resolution information on the Earth's surface. Examples of such information include elevation and land use, which have become more detailed. Including this information in numerical atmospheric models can improve their results in simulating lower-boundary forced events by providing detailed information on their characteristics. Consequently, this work aims to study the sensitivity of the Weather Research and Forecasting (WRF) model to different topography and land-use data sets in simulations of an extreme precipitation event. The test case focused on a topographically driven precipitation event over the island of Madeira, which triggered flash floods and mudslides in the southern parts of the island. Difference fields between simulations were computed, showing that the change in the data sets produced statistically significant changes to the flow, the planetary boundary layer structure and the precipitation patterns. Moreover, model results show an improvement in model skill in the windward region for precipitation and in the leeward region for wind, despite the non-significant enhancement in the overall results with the higher-resolution data sets of topography and land use.
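Computing difference fields between simulations and flagging statistically significant grid points is commonly done with a pointwise two-sample t test. A sketch under the assumption that each experiment provides an ensemble (or time series) of fields; the threshold is an illustrative choice, not the study's exact test:

```python
import numpy as np

def significant_difference(field_a, field_b, t_crit=2.0):
    """Pointwise difference of two sets of simulated fields, shaped
    (n_samples, ny, nx), with a simple two-sample t statistic; grid
    points with |t| > t_crit are flagged significant (t_crit ~ 2 is
    roughly the 5% level for large samples)."""
    a, b = np.asarray(field_a, float), np.asarray(field_b, float)
    diff = a.mean(axis=0) - b.mean(axis=0)
    se = np.sqrt(a.var(axis=0, ddof=1) / a.shape[0]
                 + b.var(axis=0, ddof=1) / b.shape[0])
    t = np.divide(diff, se, out=np.zeros_like(diff), where=se > 0)
    return diff, np.abs(t) > t_crit
```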
Simulation of the present-day climate with the climate model INMCM5
NASA Astrophysics Data System (ADS)
Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykossov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Iakovlev, N. G.
2017-12-01
In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and twice the resolution in both horizontal directions. Analysis of the present-day climatology of INMCM5 (based on the data of a historical run for 1979-2005) shows moderate improvements in the reproduction of basic circulation characteristics with respect to the previous version. Biases in the near-surface temperature and precipitation are slightly reduced compared with INMCM4, as are biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biennial oscillation and the statistics of sudden stratospheric warmings.
Salas-Montiel, Rafael; Berthel, Martin; Beltran-Madrigal, Josslyn; Huant, Serge; Drezet, Aurélien; Blaize, Sylvain
2017-05-19
One of the most explored single quantum emitters for the development of nanoscale fluorescence lifetime imaging is the nitrogen-vacancy (NV) color center in diamond. An NV center does not experience fluorescence bleaching or blinking at room temperature. Furthermore, its optical properties are preserved when embedded into nanodiamond hosts. This paper focuses on the modeling of the local density of states (LDOS) in a plasmonic nanofocusing structure with an NV center acting as a local illumination source. Numerical calculations of the LDOS near such a nanostructure were performed with a classical electric dipole source placed inside a diamond sphere, along with near-field optical fluorescence lifetime imaging of the structure. We found that Purcell factors higher than ten can be reached with diamond nanospheres of radius less than 5 nm and at a distance of less than 20 nm from the surface of the structure. Although the spatial resolution of the experiment is limited by the size of the nanodiamond, our work supports the analysis and interpretation of a single NV color center in a nanodiamond as a probe for scanning near-field optical microscopy.
NASA Astrophysics Data System (ADS)
Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L.; Kozorovitskiy, Yevgenia
2018-05-01
Versatile, sterically accessible imaging systems capable of rapid in vivo volumetric functional and structural imaging deep in the brain remain lacking, a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access make it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.
Mesoscale influence on long-range transport — evidence from ETEX modelling and observations
NASA Astrophysics Data System (ADS)
Sørensen, Jens Havskov; Rasmussen, Alix; Ellermann, Thomas; Lyck, Erik
During the first European Tracer Experiment (ETEX), tracer gas was released from a site in Brittany, France, and subsequently observed over a range of 2000 km. Hourly measurements were taken at the National Environmental Research Institute (NERI) at Risø, Denmark, using two measurement techniques. At this location, the observed concentration time series shows a double-peak structure occurring between two and three days after the release. Simulations of the dispersion of the tracer gas have been performed with the Danish Emergency Response Model of the Atmosphere (DERMA), developed at the Danish Meteorological Institute (DMI). When DERMA uses numerical weather prediction data from the European Centre for Medium-Range Weather Forecasts (ECMWF), the arrival time of the tracer is predicted quite well, as is the duration of the plume's passage, but the double-peak structure is not reproduced. However, using higher-resolution data from the DMI version of the HIgh Resolution Limited Area Model (DMI-HIRLAM), DERMA reproduces the observed structure very well. The double-peak structure is caused by the influence of a mesoscale anticyclonic eddy on the tracer gas plume about one day earlier.
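The forward-modelling idea behind a dispersion run like the one above can be caricatured with a single Gaussian puff advected by a constant mean wind toward a receptor 2000 km downwind. This is a toy sketch only, not DERMA's actual numerics: the wind speed, source strength, and spread-growth law below are all hypothetical.

```python
import numpy as np

def puff_concentration(t_hours, u=10.0, distance_km=2000.0,
                       Q=1.0, sigma_growth=0.5):
    """Concentration time series at a fixed receptor for one Gaussian puff
    advected with a constant mean wind u (m/s). Illustrative only: real
    models such as DERMA use full 3-D NWP wind fields."""
    x_r = distance_km * 1e3
    t = np.asarray(t_hours) * 3600.0                     # seconds
    x_puff = u * t                                       # puff centre position
    sigma = sigma_growth * np.sqrt(np.maximum(u * t, 1.0))  # crude growth law
    return Q / (np.sqrt(2 * np.pi) * sigma) * np.exp(
        -0.5 * ((x_r - x_puff) / sigma) ** 2)

t = np.linspace(0, 96, 400)      # four days of hourly-ish samples
c = puff_concentration(t)
print("arrival of peak (h):", t[np.argmax(c)])
```

With these numbers the peak arrives near 2000 km / 10 m s⁻¹ ≈ 55 h; reproducing a double peak would require a spatially varying wind field, which is exactly the mesoscale effect the abstract describes.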
A combined optical, SEM and STM study of growth spirals on the polytypic cadmium iodide crystals
NASA Astrophysics Data System (ADS)
Singh, Rajendra; Samanta, S. B.; Narlikar, A. V.; Trigunayat, G. C.
2000-05-01
Some novel results of a combined sequential study of growth spirals on the basal surface of the richly polytypic CdI2 crystals by optical microscopy, scanning electron microscopy (SEM) and scanning tunneling microscopy (STM) are presented and discussed. Under the high resolution and magnification achieved in the scanning electron microscope, the growth steps of large height seen in the optical micrographs are found to have a large number of additional steps of smaller height existing between any two adjacent large-height growth steps. When further seen by a scanning tunneling microscope, which provides still higher resolution, sequences of unit substeps, each of height equal to the unit cell height of the underlying polytype, are revealed to exist on the surface. Several large steps also lie between the unit steps, with heights equal to an integral multiple of either the unit cell height of the underlying polytype or the thickness of a molecular sheet I-Cd-I. It is suggested that initially a giant screw dislocation may form by brittle fracture of the crystal platelet, which may gradually decompose into numerous unit dislocations during subsequent crystal growth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Sandilya V. B.; Ibrahim, Yehia M.; Tang, Keqi
A novel concept for ion spatial peak compression is described and discussed primarily in the context of ion mobility spectrometry (IMS). Using theoretical and numerical methods, the effects of non-constant (e.g., linearly varying) electric fields on ion distributions (e.g., an ion mobility peak) are evaluated in both the physical and temporal domains. The application of a linearly decreasing electric field in conjunction with conventional drift field arrangements is shown to lead to a reduction in IMS physical peak width. When multiple ion packets in a selected mobility window are simultaneously subjected to such fields, there is ion packet compression, i.e., a reduction in the peak widths of all species. This peak compression occurs with a modest reduction of resolution, which can be quickly recovered as ions drift in a constant field after the compression event. Compression also yields a significant increase in peak intensities. In addition, approaches for peak compression in traveling wave IMS are discussed. Ion mobility peak compression can be particularly useful for mitigating diffusion-driven peak spreading over very long path length separations (e.g., in cyclic multi-pass arrangements), and for achieving higher S/N and IMS resolution over a selected mobility range.
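The compression mechanism described above (in a linearly decreasing field, leading ions drift more slowly than trailing ones, so the packet narrows) can be illustrated with a minimal 1-D drift sketch. The mobility, field parameters, and packet shape below are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 0.1                                           # ion mobility (arb. units)
x = rng.normal(loc=10.0, scale=1.0, size=5000)    # initial ion packet positions

def drift(x, E_of_x, dt=1e-3, steps=2000):
    """Explicit Euler drift: v = K * E(x), no diffusion."""
    for _ in range(steps):
        x = x + K * E_of_x(x) * dt
    return x

# constant field: the packet translates, its width is unchanged
x_const = drift(x.copy(), lambda x: np.full_like(x, 50.0))
# linearly decreasing field: ions ahead see a weaker field -> compression
x_lin = drift(x.copy(), lambda x: 100.0 - 2.0 * x)
print("initial / constant-E / linear-E widths:",
      x.std(), x_const.std(), x_lin.std())
```

For the field E(x) = E0 - g*x the packet width decays like exp(-K*g*t), so here the spread shrinks by roughly a factor exp(-0.4) ≈ 0.67 over the run, while the constant-field case keeps its width.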
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
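The refinement-boundary reflections described above can be mimicked in one dimension: a second-order leapfrog scheme for the wave equation on a grid whose spacing halves abruptly produces a small spurious reflected wave at the resolution jump. The grid sizes, pulse width, and time step below are illustrative choices, not those of the paper's model problem.

```python
import numpy as np

# Nonuniform grid: coarse spacing on the left, 2x finer on the right of x = 1
x = np.concatenate([np.arange(0.0, 1.0, 0.02),
                    np.arange(1.0, 2.0 + 1e-9, 0.01)])
c, dt = 1.0, 0.005                      # wave speed and time step (CFL < 1)

def d2(u):
    """Second spatial derivative on a nonuniform grid (interior points)."""
    h1 = x[1:-1] - x[:-2]
    h2 = x[2:] - x[1:-1]
    out = np.zeros_like(u)
    out[1:-1] = 2.0 / (h1 + h2) * ((u[2:] - u[1:-1]) / h2
                                   - (u[1:-1] - u[:-2]) / h1)
    return out

pulse = lambda s: np.exp(-((s - 0.5) / 0.05) ** 2)
u = pulse(x)
u_old = pulse(x + c * dt)               # rightward-travelling initialization

for _ in range(180):                    # run until the pulse is well past x = 1
    u, u_old = 2 * u - u_old + (c * dt) ** 2 * d2(u), u

reflected = np.max(np.abs(u[x < 0.9]))  # residue left behind the interface
print("spurious reflected amplitude:", reflected)
```

The reflected amplitude is small but nonzero, and it shrinks as the pulse is better resolved, which is the convergence behavior the abstract analyzes.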
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding in the floods of June and July 2007. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
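In the published literature the inertial formulation benchmarked above is the local-inertial scheme of Bates et al. (2010), in which the friction term is treated semi-implicitly so the explicit update stays stable. A minimal sketch of that unit-width 1-D flux update (parameter values hypothetical, and simplified relative to the full LISFLOOD-FP implementation) is:

```python
import numpy as np

def inertial_flux(q, h, slope, n=0.03, g=9.81, dt=1.0):
    """One explicit update of the unit-width flux q (m^2/s) in a
    local-inertial (simplified momentum) formulation: slope is the water
    surface gradient d(h+z)/dx; friction uses Manning's n, semi-implicitly."""
    h = np.maximum(h, 1e-6)                              # guard against dry cells
    num = q - g * h * dt * slope                         # explicit acceleration
    den = 1.0 + g * dt * n**2 * np.abs(q) / h**(7.0 / 3.0)  # implicit friction
    return num / den

# driving a cell with constant depth and a fixed downhill surface slope
q, h, S = 0.0, 1.0, 0.001
for _ in range(1000):
    q = inertial_flux(q, h, -S)        # negative surface slope drives +x flow
print("steady flux:", q)
```

Iterated to steady state this update recovers Manning's equation, q = h^(5/3) * sqrt(S) / n, which is a convenient sanity check on the friction treatment.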
Hu, Zhen-Hua; Huang, Teng; Wang, Ying-Ping; Ding, Lei; Zheng, Hai-Yang; Fang, Li
2011-06-01
Near-infrared high-resolution absorption spectroscopy using the Sun as the radiation source is widely used in remote sensing of atmospheric parameters. Taking the retrieval of CO2 concentration as an example, this paper studies the effect of the resolution of the solar spectrum. CO2 concentrations are retrieved from high-resolution absorption spectra using the program provided by AER to calculate the top-of-atmosphere solar spectrum as the radiation source, combined with HRATS (high resolution atmospheric transmission simulation) to simulate the retrieval. Numerical simulation shows that the accuracy of the solar spectrum is important to the retrieval, especially at very high spectral resolution. The retrieval error correlates only weakly with the resolution of the observation, but lower observational resolution tends to relax the required resolution of the solar spectrum. To retrieve the atmospheric CO2 concentration, one should therefore take full advantage of the high-resolution top-of-atmosphere solar spectrum.
NASA Astrophysics Data System (ADS)
Zuschin, Martin; Nawrot, Rafal; Harzhauser, Mathias; Mandic, Oleg
2015-04-01
Among the most important questions in quantitative palaeoecology is how taxonomic and numerical resolution affect the analysis of community and metacommunity patterns. A species-abundance data set (10 localities, 213 bulk samples, 478 species, > 49,000 shells) from Burdigalian, Langhian and Serravallian benthic marine molluscan assemblages of the Central Paratethys was studied for this purpose. Assemblages are from two nearshore habitats (estuarine and marine intertidal) and three subtidal habitats (estuarine, fully marine sandy, and fully marine pelitic), which represent four biozones and four 3rd order depositional sequences over more than three million years, and are developed along the same depth-related environmental gradient. Double-standardized data subsampled to 19 samples per habitat, each with a minimum of 50 specimens, were used to calculate R²-values from PERMANOVA as a measure of differences between habitats at three taxonomic levels (species, genera and families) and at five levels of data transformation (raw abundances, percentages, square-root transformed percentages, fourth-root transformed percentages, presence-absence data). Species discriminate better between habitats than genera and families, but the differences between taxonomic levels are much stronger in the subtidal, where genera and families contain more species than in the intertidal. When all habitats are compared, percentages and square-root transformed percentages discriminate equally well and perform better than higher levels of data transformation. Among nearshore and among subtidal habitats, however, the ability to discriminate between habitats increases with the level of data transformation (i.e., it is best for fourth-root transformed percentages and presence-absence data).
The impact of decreasing taxonomic resolution is of minor importance in nearshore habitats, which are characterized by similar assemblages showing strong dominance of few widely distributed species, and many families represented by only one species (77.9%). Consequently, the differentiation between nearshore habitats is much weaker compared to subtidal assemblages. The latter are characterized by more distinct, relatively even assemblages with comparatively few families represented by only one species (64.2%) and many rare taxa, whose importance is emphasized by higher levels of data transformation.
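The transformation ladder used in the study (raw abundances, percentages, square-root and fourth-root transformed percentages, presence/absence) can be sketched with a pairwise dissimilarity measure. The example below uses Bray-Curtis dissimilarity and two invented samples, so the numbers are purely illustrative of how stronger transformations down-weight dominant species and raise the influence of rare taxa.

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors (0 to 1)."""
    return np.abs(a - b).sum() / (a + b).sum()

def transform(counts, level):
    """The transformation ladder: raw counts, percentages, square root,
    fourth root, presence/absence."""
    pct = 100.0 * counts / counts.sum()
    return {"raw": counts, "pct": pct, "sqrt": np.sqrt(pct),
            "4th-root": pct ** 0.25,
            "pa": (counts > 0).astype(float)}[level]

# two hypothetical samples: one dominated by a single species, one even
s1 = np.array([90.0, 5, 3, 1, 1, 0, 0])
s2 = np.array([10.0, 9, 8, 7, 6, 5, 5])
for lev in ["raw", "pct", "sqrt", "4th-root", "pa"]:
    print(lev, round(bray_curtis(transform(s1, lev), transform(s2, lev)), 3))
```

PERMANOVA R² values as used in the study operate on a full dissimilarity matrix of such pairwise distances, so the choice of transformation propagates directly into the habitat discrimination results.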
NASA Astrophysics Data System (ADS)
Hirt, Christian; Kuhn, Michael
2017-08-01
Theoretically, spherical harmonic (SH) series expansions of the external gravitational potential are guaranteed to converge outside the Brillouin sphere enclosing all field-generating masses. Inside that sphere, the series may be convergent or may be divergent. The series convergence behavior is a highly unstable quantity that is little studied for high-resolution mass distributions. Here we shed light on the behavior of SH series expansions of the gravitational potential of the Moon. We present a set of systematic numerical experiments where the gravity field generated by the topographic masses is forward-modeled in spherical harmonics and with numerical integration techniques at various heights and different levels of resolution, increasing from harmonic degree 90 to 2160 (~61 to 2.5 km scales). The numerical integration is free from any divergence issues and therefore suitable to reliably assess convergence versus divergence of the SH series. Our experiments provide unprecedented detailed insights into the divergence issue. We show that the SH gravity field of degree-180 topography is convergent anywhere in free space. When the resolution of the topographic mass model is increased to degree 360, divergence starts to affect very high degree gravity signals over regions deep inside the Brillouin sphere. For degree 2160 topography/gravity models, severe divergence (with amplitudes of several thousand mGal) prohibits accurate gravity modeling over most of the topography. As a key result, we formulate a new hypothesis to predict divergence: if the potential degree variances show a minimum, then the SH series expansions diverge somewhere inside the Brillouin sphere and modeling of the internal potential becomes relevant.
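The divergence behavior described above can be caricatured in one dimension: a potential-like series evaluated below its sphere of convergence grows without bound once the geometric upward-continuation factor no longer beats the decay of the coefficients. The synthetic power-law spectrum below is invented for illustration, not real lunar gravity data.

```python
import numpy as np

# Toy illustration: partial sums of a "potential" series
# sum_n s_n (R/r)^(n+1), evaluated above and below the sphere r = R.
R = 1.0
n = np.arange(2, 2161)           # harmonic degrees 2..2160, as in the study
s = n ** -1.0                    # slowly decaying synthetic degree amplitudes

def partial_sums(r):
    return np.cumsum(s * (R / r) ** (n + 1))

outside = partial_sums(1.05 * R)  # terms shrink geometrically: converges
inside = partial_sums(0.95 * R)   # terms eventually grow: no convergence
print("outside:", outside[-1], " inside:", inside[-1])
```

Outside the sphere the partial sums settle to a finite value; inside, they blow up at high degree, mirroring how divergence in the lunar case only appears once the expansion reaches sufficiently high resolution.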
The EOS CERES Global Cloud Mask
NASA Technical Reports Server (NTRS)
Berendes, T. A.; Welch, R. M.; Trepte, Q.; Schaaf, C.; Baum, B. A.
1996-01-01
To detect long-term climate trends, it is essential to produce long-term, consistent data sets from a variety of different satellite platforms. With current global cloud climatology data sets, such as the International Satellite Cloud Climatology Project (ISCCP) or CLAVR (Clouds from Advanced Very High Resolution Radiometer), one of the first processing steps is to determine whether an imager pixel is obstructed between the satellite and the surface, i.e., to determine a cloud 'mask.' A cloud mask is essential to studies monitoring changes over ocean, land, or snow-covered surfaces. As part of the Earth Observing System (EOS) program, a series of platforms will be flown beginning in 1997 with the Tropical Rainfall Measuring Mission (TRMM) and subsequently the EOS-AM and EOS-PM platforms in following years. The cloud imager on TRMM is the Visible and Infrared Scanner (VIRS), while the Moderate Resolution Imaging Spectroradiometer (MODIS) is the imager on the EOS platforms. To be useful for long-term studies, a cloud masking algorithm should produce consistent results across existing AVHRR data and future VIRS and MODIS data. The present work outlines both existing and proposed approaches to detecting clouds using multispectral narrowband radiance data. Clouds generally are characterized by higher albedos and lower temperatures than the underlying surface. However, there are numerous conditions under which this characterization is inappropriate, most notably over snow and ice. Of the cloud types, cirrus, stratocumulus and cumulus are the most difficult to detect. Other problems arise when analyzing data from sun-glint areas over oceans or lakes, over deserts, or over regions containing numerous fires and smoke. The cloud mask effort builds upon the operational experience of several groups, which will now be discussed.
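The bright-and-cold characterization above is the germ of threshold-based cloud masking. A deliberately naive two-test mask, with invented thresholds and far simpler than the operational ISCCP/CLAVR/MODIS test chains, might look like:

```python
import numpy as np

def simple_cloud_mask(albedo, t_bright, albedo_thresh=0.3, t_thresh=273.0):
    """Flag a pixel as cloudy when it is both brighter and colder than the
    thresholds. Real masks chain many spectral tests and vary thresholds
    by surface type (this is where snow/ice, sun glint, and smoke break
    the simple rule)."""
    return (albedo > albedo_thresh) & (t_bright < t_thresh)

albedo = np.array([0.1, 0.5, 0.6, 0.2])      # toy pixel albedos
t_b = np.array([290.0, 250.0, 280.0, 260.0]) # toy brightness temperatures (K)
print(simple_cloud_mask(albedo, t_b))
```

Only the pixel that passes both tests is flagged; a snow surface (bright and cold) would be a false positive here, which is exactly why the operational masks need many more tests.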
NASA Astrophysics Data System (ADS)
Fernández, V.; Dietrich, D. E.; Haney, R. L.; Tintoré, J.
In situ and satellite data obtained during the last ten years have shown that the circulation in the Mediterranean Sea is extremely complex in space, with significant features ranging from the mesoscale to sub-basin and basin scales, and highly variable in time, with mesoscale to seasonal and interannual signals. Moreover, the steep bottom topography and the atmospheric conditions that vary from one sub-basin to another make the circulation consist of numerous energetic and narrow coastal currents, density fronts and mesoscale structures that interact at sub-basin scale with the large-scale circulation. To simulate these features numerically and better understand them, an ocean model with high grid resolution, low numerical dispersion and low physical dissipation is required. We present the results of a 1/8° horizontal resolution numerical simulation of the Mediterranean Sea using the DieCAST ocean model, which meets the above requirements: it is stable with low general dissipation and uses fourth-order-accurate approximations with low numerical dispersion. The simulations are carried out with climatological surface forcing using monthly mean winds and relaxation towards climatological values of temperature and salinity. The model reproduces the main features of the large basin-scale circulation, as well as the seasonal variability of sub-basin-scale currents that are well documented by observations in straits and channels. In addition, DieCAST brings out natural fronts and eddies that usually do not appear in numerical simulations of the Mediterranean and that lead to a natural interannual variability. The role of this intrinsic variability in the general circulation will be discussed.
The X CO Conversion Factor from Galactic Multiphase ISM Simulations
NASA Astrophysics Data System (ADS)
Gong, Munan; Ostriker, Eve C.; Kim, Chang-Goo
2018-05-01
CO(J = 1-0) line emission is a widely used observational tracer of molecular gas, rendering essential the X_CO factor, which is applied to convert CO luminosity to H2 mass. We use numerical simulations to study how X_CO depends on numerical resolution, non-steady-state chemistry, physical environment, and observational beam size. Our study employs 3D magnetohydrodynamics (MHD) simulations of galactic disks with solar neighborhood conditions, where star formation and the three-phase interstellar medium (ISM) are self-consistently regulated by gravity and stellar feedback. Synthetic CO maps are obtained by postprocessing the MHD simulations with chemistry and radiation transfer. We find that CO is only an approximate tracer of H2. On parsec scales, W_CO is more fundamentally a measure of mass-weighted volume density, rather than H2 column density. Nevertheless, <X_CO> = (0.7-1.0) x 10^20 cm^-2 K^-1 km^-1 s, which is consistent with observations and insensitive to the evolutionary ISM state or radiation field strength if steady-state chemistry is assumed. Due to non-steady-state chemistry, younger molecular clouds have slightly lower <X_CO> and flatter profiles of X_CO versus extinction than older ones. The CO-dark H2 fraction is 26%-79%, anticorrelated with the average extinction. As the observational beam size increases from 1 to 100 pc, <X_CO> increases by a factor of ~2. Under solar neighborhood conditions, <X_CO> in molecular clouds is converged at a numerical resolution of 2 pc. However, the total CO abundance and luminosity are not converged even at a numerical resolution of 1 pc. Our simulations successfully reproduce the observed variations of X_CO on parsec scales, as well as the dependence of X_CO on extinction and the CO excitation temperature.
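The practical role of the conversion factor is to turn an observed integrated CO intensity map W_CO into an H2 column density via N(H2) = X_CO * W_CO, and from there into a mass. A minimal sketch with an assumed constant X_CO and a toy 2x2 intensity map (all numbers illustrative, not from the paper's simulations):

```python
import numpy as np

X_CO = 1.0e20              # cm^-2 (K km/s)^-1, assumed constant factor
W_CO = np.array([[2.0, 5.0],
                 [8.0, 1.0]])            # K km/s, toy 2x2 intensity map

N_H2 = X_CO * W_CO                        # H2 column density, cm^-2
m_H2 = 2.0 * 1.674e-24                    # grams per H2 molecule (~2 m_p)
pixel_area = (1.0 * 3.086e18) ** 2        # 1 pc square pixels, in cm^2
mass_g = (N_H2 * pixel_area * m_H2).sum() # total H2 mass in the map
print("total H2 mass [Msun]:", mass_g / 1.989e33)
```

The abstract's point is that the single scalar X_CO used here hides real variation with evolutionary state, extinction, and beam size, which is what the simulations quantify.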
The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...
2016-12-20
Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures us that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
Adolescent Pregnancy: An Interdisciplinary Problem
ERIC Educational Resources Information Center
Duxbury, Mitzi
1976-01-01
Deals with the scope of adolescent pregnancy both numerically and in human terms, pregnancy resolution, long term effects on the mother, associated medical factors, and implications for educational personnel. (Author/RK)
Non-oscillatory central differencing for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1988-01-01
Many of the recently developed high resolution schemes for hyperbolic conservation laws are based on upwind differencing. The building block for these schemes is the averaging of an appropriate Godunov solver; its time-consuming part involves the field-by-field decomposition which is required in order to identify the direction of the wind. Instead, the use of the more robust Lax-Friedrichs (LxF) solver is proposed. The main advantage is simplicity: no Riemann problems are solved and hence field-by-field decompositions are avoided. The main disadvantage is the excessive numerical viscosity typical of the LxF solver. This is compensated for by using high-resolution MUSCL-type interpolants. Numerical experiments show that the quality of results obtained by such convenient central differencing is comparable with those of the upwind schemes.
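The scheme this abstract proposes, the staggered second-order central (Nessyahu-Tadmor) scheme, can be written down compactly for a scalar conservation law: limited slopes, a midpoint predictor, and a staggered corrector, with no Riemann solver anywhere. The sketch below applies it to inviscid Burgers' equation with a minmod limiter on a periodic grid; the grid size and time step are arbitrary illustrative choices.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smaller slope elsewhere."""
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def nt_step(u, lam, f=lambda u: 0.5 * u**2):
    """One staggered step of the Nessyahu-Tadmor second-order central
    scheme for u_t + f(u)_x = 0 on a periodic grid (lam = dt/dx).
    Each step shifts the cell centres by half a cell."""
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited u-slopes
    fu = f(u)
    df = minmod(np.roll(fu, -1) - fu, fu - np.roll(fu, 1))
    u_mid = u - 0.5 * lam * df                           # midpoint predictor
    f_mid = f(u_mid)
    # staggered corrector: new cell averages on the half-shifted grid
    return (0.5 * (u + np.roll(u, -1)) + 0.125 * (du - np.roll(du, -1))
            - lam * (np.roll(f_mid, -1) - f_mid))

# smooth initial data for inviscid Burgers on [0, 2*pi); a shock forms later
x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
u = np.sin(x) + 1.5
dt, dx = 0.002, x[1] - x[0]
for _ in range(250):
    u = nt_step(u, dt / dx)
print("mass conserved:", np.isclose(u.mean(), 1.5))
```

Because the corrector is written in conservation form, the mean of u is preserved to round-off even as the solution steepens, and the minmod limiter keeps the profile free of spurious overshoots.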
Observations and Modeling of the Green Ocean Amazon 2014/15. CHUVA Field Campaign Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machado, L. A. T.
2016-03-01
The physical processes inside clouds are one of the most unknown components of weather and climate systems. A description of cloud processes through the use of standard meteorological parameters in numerical models has to be strongly improved to accurately describe the characteristics of hydrometeors, latent heating profiles, radiative balance, air entrainment, and cloud updrafts and downdrafts. Numerical models have been improved to run at higher spatial resolutions where it is necessary to explicitly describe these cloud processes. For instance, to analyze the effects of global warming in a given region it is necessary to perform simulations taking into account all of the cloud processes described above. Another important application that requires this knowledge is satellite precipitation estimation. The analysis will be performed focusing on the microphysical evolution and cloud life cycle, different precipitation estimation algorithms, the development of thunderstorms and lightning formation, processes in the boundary layer, and cloud microphysical modeling. This project intends to extend the knowledge of these cloud processes to reduce the uncertainties in precipitation estimation, mainly from warm clouds, and, consequently, improve knowledge of the water and energy budget and cloud microphysics.
Computational Simulation of Acoustic Modes in Rocket Combustors
NASA Technical Reports Server (NTRS)
Harper, Brent (Technical Monitor); Merkle, C. L.; Sankaran, V.; Ellis, M.
2004-01-01
A combination of computational fluid dynamic analysis and analytical solutions is being used to characterize the dominant modes in liquid rocket engines in conjunction with laboratory experiments. The analytical solutions are based on simplified geometries and flow conditions and are used for careful validation of the numerical formulation. The validated computational model is then extended to realistic geometries and flow conditions to test the effects of various parameters on chamber modes, to guide and interpret companion laboratory experiments in simplified combustors, and to scale the measurements to engine operating conditions. In turn, the experiments are used to validate and improve the model. The present paper gives an overview of the numerical and analytical techniques along with comparisons illustrating the accuracy of the computations as a function of grid resolution. A representative parametric study of the effect of combustor mean flow Mach number and combustor aspect ratio on the chamber modes is then presented for both transverse and longitudinal modes. The results show that higher mean flow Mach numbers drive the modes to lower frequencies. Estimates of transverse wave mechanics in a high aspect ratio combustor are then contrasted with longitudinal modes in a long and narrow combustor to provide understanding of potential experimental simulations.
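The reported downward shift of chamber modes with mean-flow Mach number matches the textbook closed-closed duct result f_n = n c (1 - M^2) / (2L). A one-line sketch; the sound speed and chamber length below are made-up values, and the paper's actual combustor geometry is more complex:

```python
def longitudinal_mode_freq(n, c, L, M):
    """Frequency of the n-th longitudinal acoustic mode of a chamber of
    length L (closed-closed duct approximation) carrying a uniform mean
    flow of Mach number M; the (1 - M^2) factor lowers the frequency."""
    return n * c * (1.0 - M ** 2) / (2.0 * L)
```

For example, with c = 1000 m/s and L = 0.5 m, the fundamental drops from 1000 Hz at M = 0 to 910 Hz at M = 0.3, illustrating the trend the parametric study reports.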
Numerical Simulation of Shock Wave Propagation in Fractured Cortical Bone
NASA Astrophysics Data System (ADS)
Padilla, Frédéric; Cleveland, Robin
2009-04-01
Shock waves (SW) are considered a promising method to treat bone non-unions, but the associated mechanisms of action are not well understood. In this study, numerical simulations are used to quantify the stresses induced by SWs in cortical bone tissue. We use a 3D FDTD code to solve the linear lossless equations that describe wave propagation in solids and fluids. A 3D model of a fractured rat femur was obtained from micro-CT data with a resolution of 32 μm. The bone was subjected to a plane SW pulse with a peak positive pressure of 40 MPa and a peak negative pressure of -8 MPa. During the simulations the principal tensile stress and maximum shear stress were tracked throughout the bone. It was found that the simulated stresses in a transverse plane relative to the bone axis may reach values higher than the tensile and shear strength of the bone tissue (around 50 MPa). These results suggest that the stresses induced by the SW may be large enough to initiate local micro-fractures, which may in turn trigger the start of bone healing in the case of a non-union.
A high resolution cavity BPM for the CLIC Test Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chritin, N.; Schmickler, H.; Soby, L.
2010-08-01
In the framework of the development of a high-resolution BPM system for the CLIC Main Linac, we present the design of a cavity BPM prototype. It consists of a waveguide-loaded dipole mode resonator and a monopole mode reference cavity, both operating at 15 GHz to be compatible with the bunch frequencies at the CLIC Test Facility. Requirements, design concept, numerical analysis, and practical considerations are discussed.
High spatial resolution passive microwave sounding systems
NASA Technical Reports Server (NTRS)
Staelin, D. H.; Rosenkranz, P. W.; Bonanni, P. G.; Gasiewski, A. W.
1986-01-01
Two extensive series of flights aboard the ER-2 aircraft were conducted with the MIT 118 GHz imaging spectrometer together with a 53.6 GHz nadir channel and a TV camera record of the mission. Other microwave sensors, including a 183 GHz imaging spectrometer were flown simultaneously by other research groups. Work also continued on evaluating the impact of high-resolution passive microwave soundings upon numerical weather prediction models.
Resolving the fine-scale structure in turbulent Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Scheel, Janet D.; Emran, Mohammad S.; Schumacher, Jörg
2013-11-01
We present high-resolution direct numerical simulation studies of turbulent Rayleigh-Bénard convection in a closed cylindrical cell with an aspect ratio of one. The focus of our analysis is on the finest scales of convective turbulence, in particular the statistics of the kinetic energy and thermal dissipation rates in the bulk and the whole cell. The fluctuations of the energy dissipation field can directly be translated into a fluctuating local dissipation scale which is found to develop ever finer fluctuations with increasing Rayleigh number. The range of these scales as well as the probability of high-amplitude dissipation events decreases with increasing Prandtl number. In addition, we examine the joint statistics of the two dissipation fields and the consequences of high-amplitude events. We have also investigated the convergence properties of our spectral element method and have found that both dissipation fields are very sensitive to insufficient resolution. We demonstrate that global transport properties, such as the Nusselt number, and the energy balances are partly insensitive to insufficient resolution and yield correct results even when the dissipation fields are under-resolved. Our present numerical framework is also compared with high-resolution simulations which use a finite difference method. For most of the compared quantities the agreement is found to be satisfactory.
A Conceptual Framework for SAHRA Integrated Multi-resolution Modeling in the Rio Grande Basin
NASA Astrophysics Data System (ADS)
Liu, Y.; Gupta, H.; Springer, E.; Wagener, T.; Brookshire, D.; Duffy, C.
2004-12-01
The sustainable management of water resources in a river basin requires an integrated analysis of the social, economic, environmental and institutional dimensions of the problem. Numerical models are commonly used for integration of these dimensions and for communication of the analysis results to stakeholders and policy makers. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated multi-resolution models to assess impacts of climate variability and land use change on water resources in the Rio Grande Basin. These models not only couple natural systems such as surface and ground waters, but will also include engineering, economic and social components that may be involved in water resources decision-making processes. This presentation will describe the conceptual framework being developed by SAHRA to guide and focus the multiple modeling efforts and to assist the modeling team in planning, data collection and interpretation, communication, evaluation, etc. One of the major components of this conceptual framework is a Conceptual Site Model (CSM), which describes the basin and its environment based on existing knowledge and identifies what additional information must be collected to develop technically sound models at various resolutions. The initial CSM is based on analyses of basin profile information that has been collected, including a physical profile (e.g., topographic and vegetative features), a man-made facility profile (e.g., dams, diversions, and pumping stations), and a land use and ecological profile (e.g., demographics, natural habitats, and endangered species). Based on the initial CSM, a Conceptual Physical Model (CPM) is developed to guide and evaluate the selection of a model code (or numerical model) for each resolution to conduct simulations and predictions. 
A CPM identifies, conceptually, all the physical processes and engineering and socio-economic activities occurring (or to occur) in the real system that the corresponding numerical models are required to address, such as riparian evapotranspiration responses to vegetation change and groundwater pumping impacts on soil moisture contents. Simulation results from different resolution models and observations of the real system will then be compared to evaluate the consistency among the CSM, the CPMs, and the numerical models, and feedbacks will be used to update the models. In a broad sense, the evaluation of the models (conceptual or numerical), as well as the linkages between them, can be viewed as a part of the overall conceptual framework. As new data are generated and understanding improves, the models will evolve and the overall conceptual framework will be refined; the development of the conceptual framework thus becomes an ongoing process. We will describe the current state of this framework and the open questions that have to be addressed in the future.
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid the singular numerical integration, as well as the mesh generation, required by the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS: its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce the concept of origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory
2010-01-01
The slide presentation discusses the integration of 1-kilometer spatial resolution land temperature data from the Moderate Resolution Imaging Spectroradiometer (MODIS), with 8-day temporal resolution, into the NASA Monsoon-Asia Integrated Regional Study (MAIRS) Data Center. The data will be available for analysis and visualization in the Giovanni data system. The presentation discusses the NASA MAIRS Data Center, introduces the data access tools and the products available from the service, discusses the higher-resolution Land Surface Temperature (LST) data, and presents preliminary results of LST trends over China.
Medical imaging feasibility in body fluids using Markov chains
NASA Astrophysics Data System (ADS)
Kavehrad, M.; Armstrong, A. D.
2017-02-01
A relatively wide field-of-view and high-resolution imaging is necessary for navigating the scope within the body, inspecting tissue, diagnosing disease, and guiding surgical interventions. As the large number of modes available in multimode fibers (MMF) provides higher resolution, MMFs could replace the millimeters-thick bundles of fibers and lenses currently used in endoscopes. However, attributes of body fluids and obscurants such as blood impose perennial limitations on the resolution and reliability of optical imaging inside the human body. To design and evaluate optimum imaging techniques that operate under realistic body fluid conditions, a good understanding of the channel (medium) behavior is necessary. In most prior works, the Monte-Carlo Ray Tracing (MCRT) algorithm has been used to analyze the channel behavior. This task is quite numerically intensive. The focus of this paper is on investigating the possibility of simplifying this task by a direct extraction of the state transition matrices associated with standard Markov modeling from the MCRT computer simulation programs. We show that by tracing a photon's trajectory in the body fluids via a Markov chain model, the angular distribution can be calculated by simple matrix multiplications. We also demonstrate that the new approach produces results that are close to those obtained by MCRT and other known methods. Furthermore, considering the fact that angular, spatial, and temporal distributions of energy are inter-related, the mixing time of the Monte-Carlo Markov Chain (MCMC) for different liquid concentrations is calculated based on Eigen-analysis of the state transition matrix, and the possibility of imaging in scattering media is investigated. To this end, we have started to characterize the body fluids that reduce the resolution of imaging [1].
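The matrix-multiplication view of photon scattering described above can be illustrated with a toy transition matrix over angular bins. The bin count, `stay` probability, and the relaxation-time proxy 1/(1-|λ₂|) below are illustrative assumptions, not the matrices the authors extract from their MCRT runs:

```python
import numpy as np

def toy_scattering_matrix(n_bins=8, stay=0.8):
    """Hypothetical row-stochastic transition matrix over angular bins:
    a photon keeps its bin with probability `stay`, otherwise scatters
    uniformly into the other bins."""
    P = np.full((n_bins, n_bins), (1.0 - stay) / (n_bins - 1))
    np.fill_diagonal(P, stay)
    return P

def angular_distribution(p0, P, k):
    """Angular distribution after k scattering events: p0 P^k,
    i.e. the 'simple matrix multiplications' of the abstract."""
    return p0 @ np.linalg.matrix_power(P, k)

def mixing_proxy(P):
    """Relaxation-time proxy 1/(1 - |lambda_2|) from the second-largest
    eigenvalue modulus, in the spirit of the Eigen-analysis described."""
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 / (1.0 - mags[1])
```

A sharply peaked initial distribution relaxes toward uniform over the bins at a rate set by the spectral gap, which is the quantity that would distinguish one liquid concentration from another in this framework.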
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.
2014-12-01
Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps in capturing the higher-order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
NASA Technical Reports Server (NTRS)
Black, Carrie; Germaschewski, Kai; Bhattacharjee, Amitava; Ng, C. S.
2013-01-01
It has been demonstrated that in the presence of weak collisions, described by the Lenard-Bernstein collision operator, the Landau-damped solutions become true eigenmodes of the system and constitute a complete set. We present numerical results from an Eulerian Vlasov code that incorporates the Lenard-Bernstein collision operator. The effect of the collisions on the numerical recursion phenomenon seen in Vlasov codes is discussed. The code is benchmarked against exact linear eigenmode solutions in the presence of weak collisions, and a spectrum of Landau-damped solutions is determined within the limits of numerical resolution. Tests of the orthogonality and the completeness relation are presented.
Time multiplexing based extended depth of focus imaging.
Ilovitsh, Asaf; Zalevsky, Zeev
2016-01-01
We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectral domain through the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequencies, and thereby the positions of the duplications, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations, and validated by a laboratory experiment.
NASA Astrophysics Data System (ADS)
Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo
2017-07-01
A tsunami model using high-resolution geometric data is indispensable for tsunami mitigation efforts, especially in tsunami-prone areas, as such data are one of the factors that determine the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses the validation and error estimation of a tsunami model created using high-resolution geometric data of Sadeng Port. The model validation uses the tsunami wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge. The model will then be used for numerical tsunami simulations driven by earthquake-tsunami parameters derived from the seismic gap. The validation results using a Student t-test show that the modeled and observed tsunami heights at the Sadeng tide gauge are statistically equal at the 95% confidence level; the RMSE and NRMSE are 0.428 m and 22.12%, respectively, while the difference in tsunami wave travel time is 12 minutes.
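The validation statistics quoted above (RMSE, NRMSE, and a Student t-test at the 95% level) can be sketched as follows. The synthetic series and the use of `scipy.stats.ttest_ind` are assumptions for illustration, not the paper's actual tide-gauge data:

```python
import numpy as np
from scipy import stats

def rmse(model, obs):
    """Root-mean-square error between modeled and observed series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def nrmse(model, obs):
    """RMSE normalized by the observed range (one common convention;
    the paper's exact normalization is not stated)."""
    obs = np.asarray(obs, float)
    return rmse(model, obs) / float(obs.max() - obs.min())

def means_equal(model, obs, alpha=0.05):
    """Two-sample Student t-test: True when the modeled and observed
    means cannot be distinguished at the (1 - alpha) confidence level."""
    _, p = stats.ttest_ind(model, obs)
    return bool(p > alpha)
```

With these three numbers in hand, the comparison in the abstract reduces to reporting `rmse`, `nrmse`, and the outcome of `means_equal` for the modeled versus observed wave-height series.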
NASA Astrophysics Data System (ADS)
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-10-01
An integrated method of advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the advantage that the mesh can be adapted according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.
Numerical Simulation and Scaling Analysis of Cell Printing
NASA Astrophysics Data System (ADS)
Qiao, Rui; He, Ping
2011-11-01
Cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use an inkjet printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquid. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of this stress is very short, which helps explain why many cells survive the cell printing process. They also reveal that cell membranes can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.
Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.
Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner
2011-09-26
Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been recognized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of the numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes, applying constraints in both planes. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to equal the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
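For context, the standard iterative baseline that a Fourier-domain constraint of this kind modifies is the error-reduction loop: enforce the measured Fourier modulus, then a real-space support constraint. A minimal sketch, in which the support, positivity choice, and iteration count are illustrative and the paper's holography-based constraint itself is not reproduced:

```python
import numpy as np

def error_reduction(measured_amp, support, n_iter=200, seed=0):
    """Classic error-reduction phase retrieval.

    measured_amp : measured Fourier amplitude (sqrt of intensity)
    support      : boolean mask of where the object may be nonzero
    Alternates between the Fourier modulus constraint and a real-space
    support + positivity constraint, starting from a random phase guess.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, measured_amp.shape)
    G = measured_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        g = np.fft.ifft2(G)
        # real-space constraint: zero outside support, non-negative inside
        g = np.where(support, g.real.clip(min=0), 0.0)
        G = np.fft.fft2(g)
        # Fourier-domain constraint: keep phase, impose measured modulus
        G = measured_amp * np.exp(1j * np.angle(G))
    return g
```

Each iteration projects onto the two constraint sets in turn; the abstract's contribution is a different, holography-inspired choice for the Fourier-domain projection step.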
NASA Astrophysics Data System (ADS)
Mizyuk, Artem; Senderov, Maxim; Korotaev, Gennady
2016-04-01
A large number of numerical ocean models have been implemented for the Black Sea basin during the last two decades, and they reproduce a rather similar structure of the synoptic variability of the circulation. Since the 2000s, numerical studies of the mesoscale structure have been carried out using high-performance computing (HPC). With the growing capacity of computing resources, it is now possible to reconstruct the Black Sea currents with a spatial resolution of several hundred metres. However, how realistic can these results be? In the proposed study an attempt is made to understand which spatial scales are reproduced by an ocean model of the Black Sea. Simulations are made using the parallel version of NEMO (Nucleus for European Modelling of the Ocean). Two regional configurations with spatial resolutions of 5 km and 2.5 km are described. Comparison of the SST from the two simulations shows a rather qualitative difference in the spatial structures. Results of the high-resolution simulation are also compared with satellite observations and observation-based products from Copernicus using spatial correlation and spectral analysis. The spatial scales of the correlation functions for simulated and observed SST are rather close to each other and differ considerably from those of the satellite SST reanalysis. The evolution of the spectral density for modelled SST and the reanalysis shows agreement in the time periods of small-scale intensification. Applying spectral analysis to the satellite measurements is complicated by gaps in the data. The research leading to these results has received funding from the Russian Science Foundation (project № 15-17-20020)
Haldar, Justin P; Leahy, Richard M
2013-05-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.
All-optical optoacoustic microscopy system based on probe beam deflection technique
NASA Astrophysics Data System (ADS)
Maswadi, Saher M.; Tsyboulskic, Dmitri; Roth, Caleb C.; Glickman, Randolph D.; Beier, Hope T.; Oraevsky, Alexander A.; Ibey, Bennett L.
2016-03-01
It is difficult to achieve sub-micron resolution in backward-mode OA microscopy using conventional piezoelectric detectors because of wavefront distortions caused by components placed in the optical path between the sample and the objective lens, which are required to separate the acoustic wave from the optical beam. As an alternative approach, an optoacoustic microscope (OAM) was constructed using the probe beam deflection technique (PBDT) to detect laser-induced acoustic signals. The all-optical OAM detects laser-generated pressure waves using a probe beam passing through a coupling medium, such as water, filling the space between the microscope objective lens and the sample. The acoustic waves generated in the sample propagate through the coupling medium, causing transient changes in the refractive index that deflect the probe beam. These deflections are measured with a high-speed, balanced photodiode position detector. The deflection amplitude is directly proportional to the magnitude of the acoustic pressure wave and provides the data required for image reconstruction. The sensitivity of the PBDT detector, expressed as noise-equivalent pressure, was 12 Pa, comparable to that of existing high-performance ultrasound detectors. Because of the unimpeded working distance, a high numerical aperture objective lens (NA = 1) was employed in the OAM to achieve a near diffraction-limited lateral resolution of 0.5 μm at 532 nm. The all-optical OAM provides several benefits over current piezoelectric detector-based systems, such as increased lateral and axial resolution, higher sensitivity, robustness, and potentially greater compatibility with multimodal instruments.
What is the diffraction limit? From Airy to Abbe using direct numerical integration
NASA Astrophysics Data System (ADS)
Calm, Y. M.; Merlo, J. M.; Burns, M. J.; Kempa, K.; Naughton, M. J.
The resolution of a conventional optical microscope is sometimes taken from Airy's point spread function (PSF), 0.61 λ/NA, and sometimes from Abbe, λ/2NA, where NA is the numerical aperture; however, modern fluorescence and near-field optical microscopies achieve spatial resolution far better than either of these limits. There is a new category of 2D metamaterials called planar optical elements (POEs), which have a microscopic thickness (< λ), macroscopic transverse dimensions (> 100 λ), and are composed of an array of nanostructured light scatterers. POEs are found in a range of micro- and nano-photonic technologies and will influence future optical nanoscopy. In this context, we shed some light on the 'diffraction limit' by numerically evaluating Kirchhoff's scalar formulae (in their exact form) and identifying the features of highly non-paraxial, 3D PSFs. We show that the Airy and Abbe criteria are connected, and we comment on the design rules for a particular type of POE: the flat lens. This work is supported by the W. M. Keck Foundation.
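The Airy side of the comparison is easy to reproduce numerically: evaluating the Airy point spread function and locating its first dark ring recovers the 0.61 λ/NA criterion. A sketch in the scalar, paraxial limit (unlike the paper's exact non-paraxial Kirchhoff evaluation); the grid density is an arbitrary choice:

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def airy_first_zero(wavelength, NA, n=200_000):
    """Radius of the first dark ring of the paraxial Airy PSF,
    I(r) = [2 J1(v)/v]^2 with v = 2*pi*NA*r/lambda, found by locating
    the first local minimum of the sampled intensity profile."""
    r = np.linspace(1e-9, wavelength / NA, n)   # avoid v = 0 exactly
    v = 2.0 * np.pi * NA * r / wavelength
    I = (2.0 * j1(v) / v) ** 2
    # first index that is a local minimum of I
    is_min = (I[1:-1] < I[:-2]) & (I[1:-1] <= I[2:])
    return r[int(np.argmax(is_min)) + 1]
```

The first zero of J1 sits at v ≈ 3.8317, so the recovered radius is 3.8317 λ/(2π NA) ≈ 0.61 λ/NA, Airy's criterion.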
High-NA EUV lithography enabling Moore's law in the next decade
NASA Astrophysics Data System (ADS)
van Schoot, Jan; Troost, Kars; Bornebroek, Frank; van Ballegoij, Rob; Lok, Sjoerd; Krabbendam, Peter; Stoeldraijer, Judon; Loopstra, Erik; Benschop, Jos P.; Finders, Jo; Meiling, Hans; van Setten, Eelco; Kneer, Bernhard; Kuerz, Peter; Kaiser, Winfried; Heil, Tilmann; Migura, Sascha; Neumann, Jens Timo
2017-10-01
While EUV systems equipped with 0.33 Numerical Aperture lenses are readying to start volume manufacturing, ASML and Zeiss are ramping up their activities on an EUV exposure tool with a Numerical Aperture of 0.55. The purpose of this scanner, targeting an ultimate resolution of 8 nm, is to extend Moore's law throughout the next decade. A novel, anamorphic lens design capable of providing the required Numerical Aperture has been investigated; this lens will be paired with new, faster stages and more accurate sensors, meeting Moore's law economical requirements as well as the tight focus and overlay control needed for future process nodes. The tighter focus and overlay control budgets, as well as the anamorphic optics, will drive innovations in imaging and OPC modelling. Furthermore, advances in resist and mask technology will be required to image lithography features with less than 10 nm resolution. This paper presents an overview of the target specifications, key technology innovations, and imaging simulations demonstrating the advantages as compared to 0.33 NA and showing the capabilities of the next generation of EUV systems.
Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer
2016-12-01
Flooding produced by high-intensity local rainfall and exceedance of drainage system capacity can have severe impacts in cities. In order to prepare cities for these types of flood events, especially in the future climate, it is valuable to be able to simulate them numerically, both historically and in real time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar observations with different spatial and temporal resolutions, radar nowcasts with 0-2 h lead time, and numerical weather models with lead times of up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative difference between the inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real time with high-resolution radar rainfall data, but the forecast performance in predicting floods with lead times of more than half an hour is rather limited.
NASA Astrophysics Data System (ADS)
Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck
2018-02-01
The numerical modelling of the behaviour of materials at the microstructural scale has developed greatly over the last two decades. Unfortunately, conventional resolution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they lose quantitative accuracy because of the plastic behaviour. This work presents a numerical solver for finite element models of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts: the first maintains a constant stiffness matrix, and the second uses a time/space model reduction method. In order to analyse the applicability and performance of a space-time separated representation, simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading, for different numbers of elements per grain and two time increments per cycle. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes faster than the standard solver.
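The first concept, keeping the stiffness matrix constant so that it is factorized once and reused across all load increments, can be sketched as follows. This is a minimal illustration, not the authors' solver: the matrix, load history, and sizes are made up, and the space-time reduction step is omitted.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Stand-in linear system K u = f(t): K plays the role of a stiffness matrix
# that is held constant over the whole cyclic loading history.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)            # symmetric positive definite stand-in

lu, piv = lu_factor(K)                 # one O(n^3) factorization, done once

n_cycles, steps_per_cycle = 10, 2      # two time increments per cycle
u = np.zeros(n)
for cycle in range(n_cycles):
    for step in range(steps_per_cycle):
        t = cycle + step / steps_per_cycle
        f = np.sin(2.0 * np.pi * t) * np.ones(n)   # illustrative cyclic load
        u = lu_solve((lu, piv), f)     # each solve reuses the factorization
```

Each increment then costs only a back-substitution, which is what makes thousands of cycles affordable.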
Laskar, Junaid M; Shravan Kumar, P; Herminghaus, Stephan; Daniels, Karen E; Schröter, Matthias
2016-04-20
Optically transparent immersion liquids with refractive index (n∼1.77) to match the sapphire-based aplanatic numerical aperture increasing lens (aNAIL) are necessary for achieving deep 3D imaging with high spatial resolution. We report that antimony tribromide (SbBr3) salt dissolved in liquid diiodomethane (CH2I2) provides a new high refractive index immersion liquid for optics applications. The refractive index is tunable from n=1.74 (pure) to n=1.873 (saturated), by adjusting either salt concentration or temperature; this allows it to match (or even exceed) the refractive index of sapphire. Importantly, the solution gives excellent light transmittance in the ultraviolet to near-infrared range, an improvement over commercially available immersion liquids. This refractive-index-matched immersion liquid formulation has enabled us to develop a sapphire-based aNAIL objective that has both high numerical aperture (NA=1.17) and long working distance (WD=12 mm). This opens up new possibilities for deep 3D imaging with high spatial resolution.
NASA Astrophysics Data System (ADS)
Verscharen, D.; Klein, K. G.; Chandran, B. D. G.; Stevens, M. L.; Salem, C. S.; Bale, S. D.
2017-12-01
The Arbitrary Linear Plasma Solver (ALPS) is a parallelized numerical code that solves the dispersion relation in a hot (even relativistic) magnetized plasma with an arbitrary number of particle species with arbitrary gyrotropic equilibrium distribution functions for any direction of wave propagation with respect to the background field. In this way, ALPS retains generality and overcomes the shortcomings of previous (bi-)Maxwellian solvers for the plasma dispersion relations. The unprecedented high-resolution particle and field data products from Parker Solar Probe (PSP) and Solar Orbiter (SO) will require novel theoretical tools. ALPS is one such tool, and its use will make possible new investigations into the role of non-Maxwellian distributions in the near-Sun solar wind. It can be applied to numerous high-velocity-resolution systems, ranging from current space missions to numerical simulations. We will briefly discuss the ALPS algorithm and demonstrate its functionality based on previous solar-wind measurements. We will then highlight our plans for future applications of ALPS to PSP and SO observations.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjogreen, B.; Sandham, N. D.; Hadjadj, A.; Kwak, Dochan (Technical Monitor)
2000-01-01
In a series of papers (Olsson 1994, 1995; Olsson & Oliger 1994; Strand 1994; Gerritsen & Olsson 1996; Yee et al. 1999a,b, 2000; Sandham & Yee 2000), the nonlinear stability of the compressible Euler and Navier-Stokes equations, including physical boundaries, was studied, and discrete analogues in the form of nonlinearly stable high-order schemes, including boundary schemes, were developed, extended and evaluated for various fluid flows. High order here refers to spatial schemes that are essentially fourth-order or higher away from shock and shear regions. The objective of this paper is to give an overview of the progress of the low-dissipative high-order shock-capturing schemes proposed by Yee et al. (1999a,b, 2000). This class of schemes consists of simple non-dissipative high-order compact or non-compact central spatial differencing combined with adaptive nonlinear numerical dissipation operators that minimize the use of numerical dissipation. The amount of numerical dissipation is further minimized by applying the scheme to the entropy splitting form of the inviscid flux derivatives, and by rewriting the viscous terms to minimize odd-even decoupling before the application of the central scheme (Sandham & Yee). The efficiency and accuracy of these schemes are compared with spectral, TVD and fifth-order WENO schemes. A new approach of Sjogreen & Yee (2000), utilizing non-orthogonal multi-resolution wavelet basis functions as sensors to dynamically determine the appropriate amount of numerical dissipation to be added to the non-dissipative high-order spatial scheme at each grid point, will also be discussed.
Numerical experiments of long time integration of smooth flows, shock-turbulence interactions, direct numerical simulations of a 3-D compressible turbulent plane channel flow, and various mixing layer problems indicate that these schemes are especially suitable for practical complex problems in nonlinear aeroacoustics, rotorcraft dynamics, direct numerical simulation or large eddy simulation of compressible turbulent flows at various speeds including high-speed shock-turbulence interactions, and general long time wave propagation problems. These schemes, including entropy splitting, have also been extended to freestream preserving schemes on curvilinear moving grids for a thermally perfect gas (Vinokur & Yee 2000).
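The core idea, a non-dissipative central difference plus a separately controlled, sensor-scaled dissipation term, can be illustrated on 1-D linear advection. This is only a toy sketch of that split (with a simple Jameson-type second-difference sensor), not the actual Yee et al. scheme or its wavelet sensor; all coefficients are illustrative.

```python
import numpy as np

# u_t + a u_x = 0 on a periodic grid: fourth-order central differencing
# (non-dissipative) plus second-difference dissipation switched on by a
# normalized second-difference sensor.
a, N, cfl, kappa = 1.0, 200, 0.4, 0.5
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
u0 = np.exp(-200.0 * (x - 0.5) ** 2)        # smooth initial pulse

def rhs(u):
    # non-dissipative fourth-order central approximation of -a * u_x
    dudx = (np.roll(u, 2) - 8.0 * np.roll(u, 1)
            + 8.0 * np.roll(u, -1) - np.roll(u, -2)) / (12.0 * dx)
    # sensor: large normalized second differences flag poorly resolved regions
    d2 = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    sensor = np.abs(d2) / (np.abs(np.roll(u, -1)) + 2.0 * np.abs(u)
                           + np.abs(np.roll(u, 1)) + 1e-12)
    eps = kappa * a / dx * sensor
    eph = 0.5 * (eps + np.roll(eps, -1))    # eps at the i+1/2 interfaces
    diss = eph * (np.roll(u, -1) - u) - np.roll(eph, 1) * (u - np.roll(u, 1))
    return -a * dudx + diss

u = u0.copy()
for _ in range(int(round(1.0 / (a * dt)))):  # one full advection period
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Because both the central term and the dissipation are in conservation form, the total mass is preserved to roundoff, while the sensor keeps the added dissipation negligible wherever the solution is smooth.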
Discrete bisoliton fiber laser
Liu, X. M.; Han, X. X.; Yao, X. K.
2016-01-01
Dissipative solitons, which result from the intricate balance between dispersion and nonlinearity as well as gain and loss, are of fundamental scientific interest and have numerous important applications. Here, we report a fiber laser that generates a bisoliton: two consecutive dissipative solitons that preserve a fixed separation between them. Deviations from this separation result in its restoration. We also find that these bisolitons have multiple discrete equilibrium distances with quantized separations, as confirmed by theoretical analysis and experimental observations. The main feature of our laser is an anomalous dispersion that is an order of magnitude larger than in previous studies, so that the spectral filtering effect plays a significant role in pulse shaping. The proposed laser has potential applications in optical communications and high-resolution optics, for coding and transmission of information in higher-level modulation formats. PMID:27767075
Ellipsoidal and parabolic glass capillaries as condensers for x-ray microscopes.
Zeng, Xianghui; Duewer, Fred; Feser, Michael; Huang, Carson; Lyon, Alan; Tkachuk, Andrei; Yun, Wenbing
2008-05-01
Single-bounce ellipsoidal and paraboloidal glass capillary focusing optics have been fabricated for use as condenser lenses for both synchrotron and tabletop x-ray microscopes in the x-ray energy range of 2.5-18 keV. The condenser numerical apertures (NAs) of these devices are designed to match the NA of x-ray zone plate objectives, which gives them a great advantage over zone plate condensers in laboratory microscopes. The fabricated condensers have slope errors as low as 20 µrad rms. These capillaries provide uniform hollow-cone illumination with almost full focusing efficiency, much higher than is available with zone plate condensers. Sub-50 nm resolution at 8 keV x-ray energy was achieved by utilizing this high-efficiency condenser in a laboratory microscope based on a rotating-anode generator.
NASA Astrophysics Data System (ADS)
Naughten, Kaitlin A.; Meissner, Katrin J.; Galton-Fenzi, Benjamin K.; England, Matthew H.; Timmermann, Ralph; Hellmer, Hartmut H.; Hattermann, Tore; Debernard, Jens B.
2018-04-01
An increasing number of Southern Ocean models now include Antarctic ice-shelf cavities, and simulate thermodynamics at the ice-shelf/ocean interface. This adds another level of complexity to Southern Ocean simulations, as ice shelves interact directly with the ocean and indirectly with sea ice. Here, we present the first model intercomparison and evaluation of present-day ocean/sea-ice/ice-shelf interactions, as simulated by two models: a circumpolar Antarctic configuration of MetROMS (ROMS: Regional Ocean Modelling System coupled to CICE: Community Ice CodE) and the global model FESOM (Finite Element Sea-ice Ocean Model), where the latter is run at two different levels of horizontal resolution. From a circumpolar Antarctic perspective, we compare and evaluate simulated ice-shelf basal melting and sub-ice-shelf circulation, as well as sea-ice properties and Southern Ocean water mass characteristics as they influence the sub-ice-shelf processes. Despite their differing numerical methods, the two models produce broadly similar results and share similar biases in many cases. Both models reproduce many key features of observations but struggle to reproduce others, such as the high melt rates observed in the small warm-cavity ice shelves of the Amundsen and Bellingshausen seas. Several differences in model design show a particular influence on the simulations. For example, FESOM's greater topographic smoothing can alter the geometry of some ice-shelf cavities enough to affect their melt rates; this improves at higher resolution, since less smoothing is required. In the interior Southern Ocean, the vertical coordinate system affects the degree of water mass erosion due to spurious diapycnal mixing, with MetROMS' terrain-following coordinate leading to more erosion than FESOM's z coordinate. 
Finally, increased horizontal resolution in FESOM leads to higher basal melt rates for small ice shelves, through a combination of stronger circulation and small-scale intrusions of warm water from offshore.
Contrasting model complexity under a changing climate in a headwaters catchment.
NASA Astrophysics Data System (ADS)
Foster, L.; Williams, K. H.; Maxwell, R. M.
2017-12-01
Alpine, snowmelt-dominated catchments are the source of water for more than one sixth of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions, where feedbacks can either exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high-performance computing and an integrated model of a representative headwater catchment, we determined the hydrologic response to 30 projected changes in precipitation, temperature and vegetation for the Rocky Mountains. Simulations were run at 100 m and 1 km resolution, with and without lateral subsurface flow, in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower-resolution models, suggesting that reductions in snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher-energy environment.
These numerical experiments help to (1) bracket the range of uncertainty in published literature of climate change impacts on headwater hydrology; (2) characterize the role of precipitation and temperature changes on water supply for snowmelt-dominated downstream basins; and (3) identify which climate impacts depend on the scale of simulation.
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well data calculated from a layered-earth model might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and the a priori information applied to the problem), not of the data themselves. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities therefore offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted, and we find advantages in incorporating higher modes in inversion. The analysis with the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. It also suggests that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel of the inversion system. We use synthetic and real-world examples to demonstrate that data selected with the data-resolution matrix provide better inversion results, and to explain why incorporating higher-mode data in inversion improves them. We also calculate model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data.
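For a linear(ized) inverse problem d = G m, the data-resolution matrix is N = G G⁺, with G⁺ a generalized inverse; diagonal entries near 1 mark data the inversion system can predict well. A minimal NumPy sketch, with a made-up sensitivity matrix standing in for the Rayleigh-wave data kernel:

```python
import numpy as np

# Made-up sensitivity matrix G: each "phase velocity" datum senses shallow
# layers more strongly at high frequency. A small random component keeps G
# well conditioned; none of this is the paper's actual kernel.
rng = np.random.default_rng(1)
n_data, n_model = 30, 8
freqs = np.linspace(2.0, 30.0, n_data)       # frequencies (Hz), illustrative
depths = np.linspace(1.0, 30.0, n_model)     # layer depths (m), illustrative
G = (np.exp(-np.outer(freqs, depths) / 100.0)
     + 0.01 * rng.standard_normal((n_data, n_model)))

# Data-resolution matrix: N = G G^+, the orthogonal projector onto range(G).
N = G @ np.linalg.pinv(G)

# diag(N) near 1: datum well predicted by the inversion system;
# near 0: poorly predicted, a candidate to down-weight or discard.
importance = np.diag(N)
```

Since N is an orthogonal projector, it is symmetric and idempotent, and its trace equals the number of independently resolvable data (the rank of G); that is what makes diag(N) a survey-design diagnostic.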
Achieving superresolution with illumination-enhanced sparsity.
Yu, Jiun-Yann; Becker, Stephen R; Folberth, James; Wallin, Bruce F; Chen, Simeng; Cogswell, Carol J
2018-04-16
Recent advances in superresolution fluorescence microscopy have been limited by a belief that surpassing a two-fold enhancement of the Rayleigh resolution limit requires stimulated emission or fluorophore state transitions. Here we demonstrate a new superresolution method that requires only image acquisitions with a focused illumination spot and computational post-processing. The method uses the focused illumination spot to effectively reduce the object size and enhance the object sparsity, and consequently increases resolution and accuracy through nonlinear image post-processing. It clearly resolves 70 nm resolution test objects emitting ~530 nm light with a 1.4 numerical aperture (NA) objective and, when imaging through a 0.5 NA objective, exhibits high spatial frequencies comparable to a 1.4 NA widefield image, both demonstrating a resolution enhancement of more than two-fold beyond the Rayleigh limit. More importantly, we examine how the resolution increases with photon number, and show that the more-than-two-fold enhancement is achievable with realistic photon budgets.
Minimal modeling of the extratropical general circulation
NASA Technical Reports Server (NTRS)
O'Brien, Enda; Branscome, Lee E.
1989-01-01
The ability of low-order, two-layer models to reproduce basic features of the mid-latitude general circulation is investigated. Changes in model behavior with increased spectral resolution are examined in detail. Qualitatively correct time-mean heat and momentum balances are achieved in a beta-plane channel model which includes the first and third meridional modes. This minimal resolution also reproduces qualitatively realistic surface and upper-level winds and mean meridional circulations. Higher meridional resolution does not result in substantial changes in the latitudinal structure of the circulation. A qualitatively correct kinetic energy spectrum is produced when the resolution is high enough to include several linearly stable modes. A model with three zonal waves and the first three meridional modes has a reasonable energy spectrum and energy conversion cycle, while also satisfying heat and momentum budget requirements. This truncation reproduces the basic mechanisms and zonal circulation features that are obtained at higher resolution. The model performance improves gradually with higher resolution and is smoothly dependent on changes in external parameters.
2008-01-24
This image demonstrates the first detection of Pluto using the high-resolution mode on the NASA New Horizons Long-Range Reconnaissance Imager. The mode provides a clear separation between Pluto and numerous nearby background stars.
Hardware problems encountered in solar heating and cooling systems
NASA Technical Reports Server (NTRS)
Cash, M.
1978-01-01
Numerous problems in the design, production, installation, and operation of solar energy systems are discussed. Described are hardware problems, which range from simple to obscure and complex, and their resolution.
NASA Astrophysics Data System (ADS)
Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.
2012-04-01
Physics parameterizations and the model vertical and horizontal resolutions can contribute significantly to uncertainty in numerical weather predictions, especially in regions with complex topography. The objective of this study is to assess the influence of the precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model. The model was run in hydrostatic mode at 3- and 5-km grid sizes, with the vertical resolution set to 50 layers and time steps of 6 and 10 s, respectively. The initial and boundary conditions were taken from the ERA-Interim reanalysis; over the sea, the 0.25-deg sea surface temperature from NOAA was used. The model was set up to run at each resolution over Angra dos Reis, in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 01 January 2010, with a simulation range of 48 hours. In one set of runs the cumulus parameterization was switched off, so that precipitation was produced entirely by the cloud microphysics scheme; in the other set the model was run with weak cumulus convection. The results show that as the horizontal resolution increases from 5 to 3 km, the spatial pattern of precipitation hardly changes, although the maximum precipitation core increases in magnitude. Data from automatic stations were used to evaluate the runs, and show that the diurnal cycles of temperature and precipitation were better simulated at 3 km. Without cumulus convection, the precipitating area contracts slightly and the simulated maximum values increase. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher at weak and moderate precipitation rates.
Molloy, Erin K; Meyerand, Mary E; Birn, Rasmus M
2014-02-01
Functional MRI blood oxygen level-dependent (BOLD) signal changes can be subtle, motivating the use of imaging parameters and processing strategies that maximize the temporal signal-to-noise ratio (tSNR) and thus the detection power of neuronal activity-induced fluctuations. Previous studies have shown that acquiring data at higher spatial resolutions results in greater percent BOLD signal changes, and furthermore that spatially smoothing higher-resolution fMRI data improves tSNR beyond that of data originally acquired at a lower resolution. However, higher-resolution images come at the cost of increased acquisition time, and the number of image volumes also influences detectability. The goal of our study is to determine how the detection power of neuronally induced BOLD fluctuations acquired at higher spatial resolutions and then spatially smoothed compares to data acquired at lower resolutions with the same imaging duration. The number of time points acquired during a given amount of imaging time is a practical consideration, given the limited ability of certain populations to lie still in the MRI scanner. We compare acquisitions at three different in-plane spatial resolutions (3.50×3.50 mm², 2.33×2.33 mm², 1.75×1.75 mm²) in terms of their tSNR, contrast-to-noise ratio, and power to detect both task-related activation and resting-state functional connectivity. The impact of SENSE acceleration, which shortens acquisition time and thus increases the number of images collected, is also evaluated. Our results show that after spatially smoothing the data to the same intrinsic resolution, lower-resolution acquisitions have a slightly higher detection power for task activation in some, but not all, brain areas. There were no significant differences in functional connectivity as a function of resolution after smoothing.
Similarly, the reduced tSNR of fMRI data acquired with a SENSE factor of 2 is offset by the greater number of images acquired, resulting in few significant differences in detection power of either functional activation or connectivity after spatial smoothing.
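The central quantity here, tSNR = temporal mean / temporal standard deviation, and the effect of spatial smoothing on it, are easy to sketch on synthetic data. The array sizes, noise level, and smoothing kernel below are arbitrary stand-ins, not the study's acquisition parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic 4-D dataset (x, y, z, time): constant "signal" plus spatially
# uncorrelated Gaussian noise. Smoothing in space only (sigma = 0 on the
# time axis) averages the noise down and raises tSNR.
rng = np.random.default_rng(2)
data = 1000.0 + 20.0 * rng.standard_normal((16, 16, 8, 120))

def tsnr(d):
    """Mean over voxels of (temporal mean / temporal standard deviation)."""
    return (d.mean(axis=-1) / d.std(axis=-1)).mean()

smoothed = gaussian_filter(data, sigma=(1.5, 1.5, 1.5, 0.0))

print(f"tSNR raw:      {tsnr(data):.1f}")
print(f"tSNR smoothed: {tsnr(smoothed):.1f}")
```

Real fMRI noise is spatially correlated, so the gain from smoothing is smaller in practice than in this white-noise toy, which is part of what the paper's empirical comparison quantifies.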
Zhang, Peng; Liu, Ru-Xun; Wong, S C
2005-05-01
This paper develops macroscopic traffic flow models for a highway section with variable numbers of lanes and free-flow velocities, which involve spatially varying flux functions. To address this complex physical property, we develop a Riemann solver that derives the exact flux values at the interface of the Riemann problem. Based on this solver, we formulate Godunov-type numerical schemes to solve the traffic flow models. Numerical examples that simulate the traffic flow around a bottleneck arising from a drop in traffic capacity on the highway section are given to illustrate the efficiency of these schemes.
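A Godunov-type scheme of this kind can be sketched for the LWR model ρ_t + f(ρ, x)_x = 0 with a Greenshields flux f = vρ(1 − ρ/ρ_max(x)) and a mid-section capacity drop. The interface flux below uses the standard demand/supply construction, which reproduces the exact Riemann flux for this concave flux; it is a generic sketch of the paper's setting on a ring road, not the authors' exact solver, and all parameter values are illustrative.

```python
import numpy as np

# LWR traffic model on a periodic road of length L with a capacity drop
# (smaller rho_max, i.e. fewer lanes) in the middle section.
N, L, T, v = 200, 10.0, 2.0, 1.0
dx = L / N
x = (np.arange(N) + 0.5) * dx
rho_max = np.where((x > 4.0) & (x < 6.0), 1.0, 2.0)   # bottleneck in (4, 6)
rho_c = 0.5 * rho_max                                  # critical density

def flux(rho, rmax):
    return v * rho * (1.0 - rho / rmax)                # Greenshields flux

def demand(rho, rmax, rc):                             # max flow a cell can send
    return flux(np.minimum(rho, rc), rmax)

def supply(rho, rmax, rc):                             # max flow a cell can take
    return flux(np.maximum(rho, rc), rmax)

rho = np.full(N, 0.8)              # initial density above bottleneck capacity
dt = 0.4 * dx / v                  # CFL-safe step (|f'| <= v)
for _ in range(int(T / dt)):
    D = demand(rho, rho_max, rho_c)
    S = supply(rho, rho_max, rho_c)
    F = np.minimum(D, np.roll(S, -1))                  # flux at face i+1/2
    rho = rho - dt / dx * (F - np.roll(F, 1))          # conservative update
```

A queue (density well above the initial 0.8) builds upstream of x = 4 while traffic downstream of x = 6 thins out, the classic bottleneck pattern, and the conservative form keeps the total number of vehicles exact.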
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Paul F, Jr; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
Concentrating small particles in protoplanetary disks through the streaming instability
NASA Astrophysics Data System (ADS)
Yang, C.-C.; Johansen, A.; Carrera, D.
2017-10-01
Laboratory experiments indicate that direct growth of silicate grains via mutual collisions can only produce particles up to roughly millimeters in size. On the other hand, recent simulations of the streaming instability have shown that mm/cm-sized particles require an excessively high metallicity for dense filaments to emerge. Using a numerical algorithm for stiff mutual drag forces, we perform simulations of small particles with significantly higher resolutions and longer simulation times than in previous investigations. We find that particles of dimensionless stopping time τ_s = 10⁻² and 10⁻³, representing cm- and mm-sized particles interior to the water ice line, concentrate themselves via the streaming instability at a solid abundance of a few percent. We thus revise a previously published critical solid abundance curve for the regime τ_s ≪ 1. The solid density in the concentrated regions reaches values higher than the Roche density, indicating that direct collapse into planetesimals is possible for particles down to mm sizes. Our results hence bridge the gap in particle size between direct dust growth limited by bouncing and the streaming instability.
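Why the drag is "stiff" for τ_s = 10⁻³ is easy to see: the stopping time is then a thousand times shorter than a dynamical time, so an explicit update would need absurdly small steps. One common remedy, shown here as a generic sketch and not necessarily the algorithm used in the paper, is to update the gas-particle relative velocity analytically, which is stable for any step size and conserves momentum exactly:

```python
import numpy as np

def drag_update(vp, vg, eps, tau, dt):
    """Advance particle (vp) and gas (vg) velocities under mutual drag.

    eps is the particle-to-gas mass loading and tau the stopping time.
    The relative velocity decays analytically by exp(-dt (1 + eps) / tau),
    so the update is stable even when dt >> tau (the stiff regime).
    """
    vcom = (eps * vp + vg) / (1.0 + eps)        # conserved center of momentum
    dv = (vp - vg) * np.exp(-dt * (1.0 + eps) / tau)
    return vcom + dv / (1.0 + eps), vcom - eps * dv / (1.0 + eps)

# stiff case: step 1000x longer than the stopping time
vp1, vg1 = drag_update(1.0, 0.0, 0.01, 1e-3, 1.0)
```

In the stiff limit both phases relax to the common center-of-momentum velocity in a single step, exactly the behavior an explicit integrator would need thousands of substeps to reproduce.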
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.; Oldenburg, C.; Moridis, G.
1997-12-31
This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.
Goodman, Thomas C.; Hardies, Stephen C.; Cortez, Carlos; Hillen, Wolfgang
1981-01-01
Computer programs are described that direct the collection, processing, and graphical display of numerical data obtained from high resolution thermal denaturation (1-3) and circular dichroism (4) studies. Besides these specific applications, the programs may also be useful, either directly or as programming models, in other types of spectrophotometric studies employing computers, programming languages, or instruments similar to those described here (see Materials and Methods). PMID:7335498
A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Kim, Hyun Dae; Liu, Nan-Suey
1992-01-01
A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
Recovery of Sparse Positive Signals on the Sphere from Low Resolution Measurements
NASA Astrophysics Data System (ADS)
Bendory, Tamir; Eldar, Yonina C.
2015-12-01
This letter considers the problem of recovering a positive stream of Diracs on a sphere from its projection onto the space of low-degree spherical harmonics, namely, from its low-resolution version. We suggest recovering the Diracs via a tractable convex optimization problem. The resulting recovery error is proportional to the noise level and depends on the density of the Diracs. We validate the theory by numerical experiments.
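As a toy illustration of why positivity alone makes this tractable, consider the 1-D circle analog: a positive combination of spikes on a grid observed only through its trigonometric moments up to degree K. Nonnegative least squares, a simpler convex program than the letter's sphere formulation, with grid locations, amplitudes, and K all made up here, recovers the spikes exactly from the noiseless low-resolution data:

```python
import numpy as np
from scipy.optimize import nnls

# Grid on the circle and measurement matrix of trigonometric moments
# (degrees 0..K), i.e. a low-pass "low-resolution" observation operator.
n_grid, K = 100, 15
theta = 2.0 * np.pi * np.arange(n_grid) / n_grid
rows = [np.ones(n_grid)]
for k in range(1, K + 1):
    rows.append(np.cos(k * theta))
    rows.append(np.sin(k * theta))
A = np.array(rows)                      # (2K+1) x n_grid, here 31 x 100

x_true = np.zeros(n_grid)
x_true[[10, 47, 81]] = [1.0, 0.5, 2.0]  # three positive spikes (illustrative)
b = A @ x_true                          # noiseless low-resolution measurements

x_rec, rnorm = nnls(A, b)               # nonnegativity is the only prior used
err = np.abs(x_rec - x_true).max()
```

With only three well-separated positive spikes and 2K+1 = 31 moments, the nonnegative solution of the moment equations is unique, which is why the underdetermined system is recovered exactly.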
2008-07-01
The homogenization procedure through successive multi-resolution projections is presented, followed by a numerical example. The treatment is intended to be essentially self-contained, with background drawn from the mathematical (Greenberg 1978; Gilbert 2006) and signal-processing (Strang and Nguyen 1995) literature listed in the references. The ideas behind multi-resolution analysis unfold from the theory of linear operators in Hilbert spaces (Davis 1975).
Signal Characteristics of Super-Resolution Near-Field Structure Disks with 100 GB Capacity
NASA Astrophysics Data System (ADS)
Kim, Jooho; Hwang, Inoh; Kim, Hyunki; Park, Insik; Tominaga, Junji
2005-05-01
We report the basic characteristics of super-resolution near-field structure (Super-RENS) media in a blue laser optical system (laser wavelength 405 nm, numerical aperture 0.85). Using a novel write-once read-many (WORM) structure for a blue laser system, we obtained a carrier-to-noise ratio (CNR) above 33 dB from the signal of a 37.5 nm mark length, which is equivalent to a 100 GB capacity with a 0.32 µm track pitch, and an eye pattern for 50 GB (2T: 75 nm) capacity using a patterned signal. Using a novel super-resolution material (tellurium, Te) with low super-resolution readout power, we also improved the read stability.
Soft x-ray holographic tomography for biological specimens
NASA Astrophysics Data System (ADS)
Gao, Hongyi; Chen, Jianwen; Xie, Honglan; Li, Ruxin; Xu, Zhizhan; Jiang, Shiping; Zhang, Yuxuan
2003-10-01
In this paper, we present some experimental results on X-ray holography and holographic tomography, and propose a new method called pre-amplified holographic tomography. Due to their shorter wavelength and larger penetration depths, X-rays offer the potential of higher resolution in imaging techniques, and the ability to image intact, living, hydrated cells without slicing, dehydration, chemical fixation or staining. Recently, using the X-ray source of the National Synchrotron Radiation Laboratory in Hefei, we successfully performed soft X-ray holography experiments on a biological specimen, garlic clove epidermis: we recorded its X-ray hologram and reconstructed it with computer programs, and the cell walls, the nuclei and some cytoplasm were clearly resolved. However, there still exist some problems in realizing practical 3D microscopic imaging, due to the near-unity refractive index of matter. There is no X-ray optic with a sufficiently high numerical aperture to achieve a depth resolution comparable to the transverse resolution. On the other hand, computed tomography needs hundreds of views of the test object at different angles for high resolution, because the number of views required for a densely packed object is equal to the object radius divided by the desired depth resolution. Clearly, this is impractical for a radiation-sensitive biological specimen. Moreover, X-ray diffraction blurs the projection data, which badly degrades the resolution of the reconstructed image. In order to observe the 3D structure of biological specimens, McNulty proposed a method for 3D imaging called holographic tomography (HT), in which several holograms of the specimen are recorded from various illumination directions and combined in the reconstruction step.
This permits the specimen to be sampled over a wide range of spatial frequencies, improving the depth resolution. At NSRL, we performed soft X-ray holographic tomography experiments using spider filaments as the specimen and PMMA as the recording medium. By 3D CT reconstruction of the projection data, the three-dimensional density distribution of the specimen was obtained. We also developed a new X-ray holographic tomography method, called pre-amplified holographic tomography, which permits digital real-time 3D reconstruction at high resolution with a simple and compact experimental setup.
Birefringence of single and bundled microtubules.
Oldenbourg, R; Salmon, E D; Tran, P T
1998-01-01
We have measured the birefringence of microtubules (MTs) and of MT-based macromolecular assemblies in vitro and in living cells by using the new Pol-Scope. A single microtubule in aqueous suspension and imaged with a numerical aperture of 1.4 had a peak retardance of 0.07 nm. The peak retardance of a small bundle increased linearly with the number of MTs in the bundle. Axonemes (prepared from sea urchin sperm) had a peak retardance 20 times higher than that of single MTs, in accordance with the nine doublets and two singlets arrangement of parallel MTs in the axoneme. Measured filament retardance decreased when the filament was defocused or the numerical aperture of the imaging system was decreased. However, the retardance "area," which we defined as the image retardance integrated along a line perpendicular to the filament axis, proved to be independent of focus and of numerical aperture. These results are in good agreement with a theory that we developed for measuring retardances with imaging optics. Our theoretical concept is based on Wiener's theory of mixed dielectrics, which is well established for nonimaging applications. We extend its use to imaging systems by considering the coherence region defined by the optical set-up. Light scattered from within that region interferes coherently in the image point. The presence of a filament in the coherence region leads to a polarization dependent scattering cross section and to a finite retardance measured in the image point. Similar to resolution measurements, the linear dimension of the coherence region for retardance measurements is on the order of lambda/(2 NA), where lambda is the wavelength of light and NA is the numerical aperture of the illumination and imaging lenses.
Evaluation of the UnTRIM model for 3-D tidal circulation
Cheng, R.T.; Casulli, V.; ,
2001-01-01
A family of numerical models, known as the TRIM models, shares the same modeling philosophy for solving the shallow water equations. A characteristic analysis of the shallow water equations points out that the numerical stability is controlled by the gravity wave terms in the momentum equations and by the transport terms in the continuity equation. A semi-implicit finite-difference scheme has been formulated so that these terms and the vertical diffusion terms are treated implicitly, and the remaining terms explicitly, to control the numerical stability; the computations are carried out over a uniform finite-difference mesh without invoking horizontal or vertical coordinate transformations. An unstructured grid version of the TRIM model, UnTRIM (pronounced "you trim"), is introduced; it preserves these basic numerical properties and the modeling philosophy, except that the computations are carried out over an unstructured orthogonal grid. The unstructured grid offers flexibility in representing complex study areas, so that fine grid resolution can be placed in regions of interest while coarse grids cover the remaining domain. Thus the computational effort is concentrated in areas of importance, and an overall computational saving can be achieved because the total number of grid points is dramatically reduced. To use this modeling approach, an unstructured mesh must be generated that properly reflects the properties of the domain under investigation. The new flexibility in grid structure is accompanied by new challenges associated with grid generation. To take full advantage of this flexibility, model grid generation should be guided by insight into the physics of the problem, and the insight needed may require a higher degree of modeling skill.
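The semi-implicit idea described above can be sketched in a few lines: treating the gravity-wave terms implicitly removes the sqrt(g*H) time-step restriction, at the cost of a tridiagonal solve per step. The following is a minimal 1-D sketch on a staggered grid with closed boundaries; it is not the TRIM/UnTRIM code, and all parameter values are illustrative.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(eta, u, g, H, dx, dt):
    """One semi-implicit (backward Euler) step of the linearized 1-D shallow
    water equations on a staggered grid: eta at the n cell centers, u at the
    n+1 faces, with u = 0 at the two walls. Eliminating u^{n+1} gives a
    tridiagonal Helmholtz problem for eta^{n+1}."""
    n = len(eta)
    r = g * H * (dt / dx) ** 2
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    b[0] = b[-1] = 1.0 + r                      # closed (u = 0) boundaries
    d = [eta[j] - H * dt * (u[j + 1] - u[j]) / dx for j in range(n)]
    eta_new = thomas(a, b, c, d)
    u_new = [0.0] * (n + 1)
    for j in range(1, n):                       # momentum update with new eta
        u_new[j] = u[j] - g * dt * (eta_new[j] - eta_new[j - 1]) / dx
    return eta_new, u_new

# demo: Gaussian free-surface bump; dt is ~6x the explicit gravity-wave limit
n, dx, dt, g, H = 100, 1000.0, 200.0, 9.81, 100.0
eta = [math.exp(-(((j - n / 2) * dx) / 5000.0) ** 2) for j in range(n)]
u = [0.0] * (n + 1)
mass0 = sum(eta)
for _ in range(100):
    eta, u = step(eta, u, g, H, dx, dt)
```

With these numbers an explicit scheme would be violently unstable (the gravity-wave CFL limit is dx/sqrt(g*H) ≈ 32 s); the implicit treatment stays bounded and, with the closed boundaries, conserves mass to rounding error.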
NASA Astrophysics Data System (ADS)
Grose, C. J.
2008-05-01
Numerical geodynamics models of heat transfer are typically thought of as specialized research topics requiring knowledge of specialized modelling software, Linux platforms, and state-of-the-art finite-element codes. I have implemented analytical and numerical finite-difference techniques in Microsoft Excel 2007 spreadsheets to solve complex solid-earth heat transfer problems, for use by students, teachers, and practicing scientists without specialty in geodynamics modelling techniques and applications. While implementation of equations in Excel spreadsheets is occasionally cumbersome, once the boundary structure and node equations for a case are developed, spreadsheet manipulation becomes routine. Experimenting with a model by modifying parameter values, geometry, and grid resolution makes Excel a useful tool in the classroom at the undergraduate or graduate level and for more engaging student projects. Furthermore, the ability to incorporate complex geometries and heat-transfer characteristics makes it suitable for first-order, and occasionally higher-order, geodynamics simulations to better understand and constrain the results of professional field research, in a setting that does not require state-of-the-art modelling codes. The straightforward expression and manipulation of model equations in Excel can also serve as a medium for understanding the sometimes confusing notation of advanced mathematical problems. To illustrate the power and robustness of computation and visualization in spreadsheet models, I focus primarily on one-dimensional analytical and two-dimensional numerical solutions to two case problems: (i) the cooling of oceanic lithosphere and (ii) temperatures within subducting slabs. Excel source documents will be made available.
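The first case problem above, cooling of oceanic lithosphere, has a closed-form half-space solution against which a cell-by-cell explicit finite-difference scheme (the direct analogue of a spreadsheet of node equations) can be checked. This Python sketch uses illustrative parameter values, not those of the author's Excel documents.

```python
import math

KAPPA = 1e-6                       # thermal diffusivity, m^2/s (typical value)
T_SURF, T_MANTLE = 0.0, 1300.0     # boundary temperatures, deg C

def halfspace_analytic(z, t):
    """Half-space cooling: T(z,t) = Ts + (Tm - Ts) * erf(z / (2*sqrt(kappa*t)))."""
    return T_SURF + (T_MANTLE - T_SURF) * math.erf(z / (2.0 * math.sqrt(KAPPA * t)))

def halfspace_ftcs(depth, nz, t_end, dt):
    """Explicit (FTCS) finite-difference solution, the update a spreadsheet
    would implement node by node; stable for dt <= dz^2 / (2*kappa)."""
    dz = depth / (nz - 1)
    assert dt <= dz * dz / (2.0 * KAPPA), "FTCS stability limit violated"
    T = [T_MANTLE] * nz
    T[0] = T_SURF                              # cold seafloor on top
    s = KAPPA * dt / dz ** 2
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, nz - 1):
            Tn[i] = T[i] + s * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        T = Tn
    return T, dz

# demo: 50 Myr-old lithosphere, 100 km column, 1 km node spacing
MYR = 3.1557e13                                # seconds per million years
T_num, dz = halfspace_ftcs(depth=100e3, nz=101, t_end=50 * MYR, dt=1e11)
T_ana = halfspace_analytic(20e3, 50 * MYR)     # temperature at 20 km depth
```

At 50 Myr the numerical profile agrees with the error-function solution to within a few degrees at mid-column depths; the small residual comes from the fixed-temperature bottom boundary (effectively a plate model) and truncation error.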
NASA Astrophysics Data System (ADS)
Keitel, David; Forteza, Xisco Jiménez; Husa, Sascha; London, Lionel; Bernuzzi, Sebastiano; Harms, Enno; Nagar, Alessandro; Hannam, Mark; Khan, Sebastian; Pürrer, Michael; Pratten, Geraint; Chaurasia, Vivek
2017-07-01
For a brief moment, a binary black hole (BBH) merger can be the most powerful astrophysical event in the visible Universe. Here we present a model fit for this gravitational-wave peak luminosity of nonprecessing quasicircular BBH systems as a function of the masses and spins of the component black holes, based on numerical relativity (NR) simulations and the hierarchical fitting approach introduced by X. Jiménez-Forteza et al. [Phys. Rev. D 95, 064024 (2017)., 10.1103/PhysRevD.95.064024]. This fit improves over previous results in accuracy and parameter-space coverage and can be used to infer posterior distributions for the peak luminosity of future astrophysical signals like GW150914 and GW151226. The model is calibrated to the ℓ≤6 modes of 378 nonprecessing NR simulations up to mass ratios of 18 and dimensionless spin magnitudes up to 0.995, and includes unequal-spin effects. We also constrain the fit to perturbative numerical results for large mass ratios. Studies of key contributions to the uncertainty in NR peak luminosities, such as (i) mode selection, (ii) finite resolution, (iii) finite extraction radius, and (iv) different methods for converting NR waveforms to luminosity, allow us to use NR simulations from four different codes as a homogeneous calibration set. This study of systematic fits to combined NR and large-mass-ratio data, including higher modes, also paves the way for improved inspiral-merger-ringdown waveform models.
A Study of the Extratropical Tropopause from Observations and Models
NASA Astrophysics Data System (ADS)
Wang, Shu Meir
The extratropical tropopause is a familiar feature in meteorology; however, the mechanisms for its existence, formation, maintenance, and sharpness remain an active area of research. Son and Polvani (2007) used a simple general circulation model to produce the TIL (Tropopause Inversion Layer), and found that the extratropical tropopause is more sensitive to changes in horizontal resolution than to changes in vertical resolution: at higher horizontal resolution the extratropical tropopause is sharper and lower. They also successfully mimicked the seasonal variation of the extratropical tropopause by changing the Equator-to-Pole temperature difference. They identified these features of the extratropical tropopause, but did not explain why they appeared in their simplified model. In this research, we try to explain why these features are seen in both observations and models. I showed in my MS thesis that, in observations, the extratropical tropopause is more closely associated with the distance from the jet than with the upper-tropospheric relative vorticity (Wirth, 2001). In this research, that work is reproduced with both idealized and full model runs, and the results are similar to those from the observations, showing that even on synoptic time scales the distance from the jet is more important in determining the extratropical tropopause height than the upper-tropospheric relative vorticity. This also explains the seasonal variation of the extratropical tropopause: since the jet is more poleward in summer than in winter (the Equator-to-Pole temperature difference being smaller in summer), there is a larger area south of the jet, and the extratropical tropopause at midlatitudes is therefore sharper and higher in summer than in winter.
We believe that baroclinic mixing of PV is the key factor that sharpens the extratropical tropopause, and that adequate horizontal resolution is needed to resolve the baroclinic mixing and the small-scale filamentary structures. We used several methods in this study to show that more baroclinic activity is seen at higher horizontal resolution. We also compared the correlations of the tropopause height with three quantities (PV fluxes, the upper-tropospheric vorticity, and heat fluxes), and found that the correlation with PV fluxes is the highest of the three. We therefore conclude that baroclinic mixing is the most important factor controlling the sharpness of the extratropical tropopause. This also explains why the extratropical tropopause is sharper at midlatitudes when higher horizontal resolution is used (see figure 2.4 in the thesis and figure 2 in Son and Polvani (2007)), since there is more baroclinic activity in the higher-resolution models. With more baroclinic activity at higher horizontal resolution, the baroclinic eddy drag is larger, which intensifies the thermally direct cell. The stronger thermally direct cell has greater downward motion at higher latitudes and thus lowers the extratropical tropopause further, which explains why the extratropical tropopause is lower in higher-resolution than in lower-resolution models, as in Son and Polvani (2007).
Braun, Katharina; Böhnke, Frank; Stark, Thomas
2012-06-01
We present a complete geometric model of the human cochlea, including the segmentation and reconstruction of the fluid-filled chambers scala tympani and scala vestibuli, the lamina spiralis ossea, and the vibrating structure (the cochlear partition). Future fluid-structure coupled simulations require a reliable geometric model of the cochlea. The aim of this study was to present an anatomical model of the human cochlea that can be used for further numerical calculations. Using high resolution micro-computed tomography (µCT), we obtained images of a cut human temporal bone with a spatial resolution of 5.9 µm. Images were manually segmented to obtain the three-dimensional reconstruction of the cochlea. Due to the high resolution of the µCT data, a detailed examination of the geometry of the twisted cochlear partition near the oval and round windows, as well as a precise illustration of the helicotrema, was possible. After reconstruction of the lamina spiralis ossea, the cochlear partition and the curved geometry of the scala vestibuli and the scala tympani were presented. The obtained data sets were exported as stereolithography (STL) files. These files represent a complete framework for future numerical simulations of mechanical (acoustic) wave propagation on the cochlear partition in the form of mathematical mechanical cochlea models. Additional quantitative information concerning the heights, lengths, and volumes of the scalae was obtained and compared with previous results.
NASA Technical Reports Server (NTRS)
Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.
2014-01-01
A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify (a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and (b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based virtual-testing framework for graphite/epoxy composites at the constituent level.
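The tracking idea above, detections in successive image slices associated to growing tracks and smoothed by a per-fiber state estimator, can be sketched as follows. This is a simplified stand-in: greedy nearest-neighbour assignment instead of the paper's full Global Nearest Neighbor architecture, and an alpha-beta-style scalar filter per axis instead of a full Kalman formulation; the synthetic "fibers" and all tuning constants are illustrative.

```python
import math

class FiberTrack:
    """Constant-velocity filter for one fiber centroid, run independently on
    the x and y coordinates as slices are traversed in z."""
    def __init__(self, x, y, q=0.01, r=0.25):
        self.pos = [x, y]
        self.vel = [0.0, 0.0]
        self.var = [1.0, 1.0]          # position variance per axis
        self.q, self.r = q, r          # process / measurement noise
        self.history = [(x, y)]

    def predict(self):
        self.var = [v + self.q for v in self.var]
        return [self.pos[k] + self.vel[k] for k in range(2)]

    def update(self, meas):
        for k in range(2):
            gain = self.var[k] / (self.var[k] + self.r)
            pred = self.pos[k] + self.vel[k]
            innov = meas[k] - pred
            self.vel[k] += 0.3 * gain * innov   # slowly learn the fiber slope
            self.pos[k] = pred + gain * innov
            self.var[k] *= (1.0 - gain)
        self.history.append(tuple(self.pos))

def track_slices(slices, gate=3.0):
    """Greedy nearest-neighbour association, slice by slice; detections
    farther than `gate` from every prediction are left unassigned."""
    tracks = [FiberTrack(x, y) for (x, y) in slices[0]]
    for dets in slices[1:]:
        used = set()
        for t in tracks:
            pred = t.predict()
            best, best_d = None, gate
            for j, (mx, my) in enumerate(dets):
                d = math.hypot(pred[0] - mx, pred[1] - my)
                if j not in used and d < best_d:
                    best, best_d = j, d
            if best is not None:
                used.add(best)
                t.update(dets[best])
    return tracks

# demo: two synthetic straight fibers drifting across 20 image slices
slices = [[(10.0 + 0.1 * z, 5.0), (30.0 - 0.05 * z, 8.0)] for z in range(20)]
tracks = track_slices(slices)
```

After 20 slices each track has locked onto its fiber's slope, so the filtered centroids sit within a fraction of a pixel of the true positions; a full implementation would also spawn tracks for unmatched detections and stitch broken segments.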
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than either image alone. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral imagery and coregistered higher-resolution SPOT panchromatic imagery. The objective was to obtain increased spatial resolution, false-color composite products to support the interpretation of land cover types, wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a previously reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher-resolution reference. Simulated imagery was made by blurring higher-resolution color-infrared photography with the TM sensor's point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between fused images and the full-resolution reference. Image examples with TM and 10-m SPOT panchromatic data illustrate the reduction in artifacts due to SIDWT-based fusion.
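The point-wise maximum selection rule is easy to demonstrate with a single-level 1-D Haar transform: detail (edge) coefficients are taken from whichever input has the larger magnitude, and approximation coefficients are averaged. This is only a toy illustration of the selection rule; the paper's method uses a 2-D shift-invariant transform and an HSV color transform, neither of which is reproduced here.

```python
def haar_dwt(x):
    """One-level Haar transform: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt (perfect reconstruction)."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse(x, y):
    """Wavelet fusion with the point-wise maximum selection rule:
    keep the detail coefficient with larger magnitude (the sharper edge),
    average the approximation coefficients."""
    ax, dx = haar_dwt(x)
    ay, dy = haar_dwt(y)
    a = [(u + v) / 2.0 for u, v in zip(ax, ay)]
    d = [u if abs(u) >= abs(v) else v for u, v in zip(dx, dy)]
    return haar_idwt(a, d)

# demo: the sharp edge in x_sharp survives fusion with a smoothed version
x_sharp = [1, 1, 1, 5, 5, 5, 1, 1]
x_blur = [1, 1, 2, 4, 5, 5, 1, 1]
fused = fuse(x_blur, x_sharp)
```

Because the edge produces the largest-magnitude detail coefficients, the fused signal reproduces the sharp transition rather than the blurred one, which is exactly the behavior the maximum rule is chosen for.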
Sea breeze: Induced mesoscale systems and severe weather
NASA Technical Reports Server (NTRS)
Nicholls, M. E.; Pielke, R. A.; Cotton, W. R.
1990-01-01
Sea-breeze-deep convective interactions over the Florida peninsula were investigated using a cloud/mesoscale numerical model. The objective was to gain a better understanding of sea-breeze and deep convective interactions over the Florida peninsula using a high resolution convectively explicit model and to use these results to evaluate convective parameterization schemes. A 3-D numerical investigation of Florida convection was completed. The Kuo and Fritsch-Chappell parameterization schemes are summarized and evaluated.
Evans, Alistair R.; McHenry, Colin R.
2015-01-01
The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of the input geometry, and in turn influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity analyses of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution on results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower resolution surfaces in a comparative FEA study. As surface mesh resolution links the input geometry with the resulting solid mesh, the implication of these results is that low resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620
Quantifying errors in trace species transport modeling.
Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M
2008-12-16
One expectation when computationally solving an Earth system model is that a correct answer exists: with adequate physical approximations and numerical methods, our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using two models known for accurate transport of trace species. The resulting differences were unexpectedly large, indicating that in some cases scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence toward the same answer. Under realistic conditions, we then identify a practical approach for finding the correct answer and thus quantifying the advection error.
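The resolution-doubling test described above can be illustrated with a deliberately diffusive scheme: first-order upwind advection of a tracer around a periodic domain. Halving the grid spacing roughly halves the transport error, which is how convergence, and hence the size of the numerical error, can be diagnosed. A hedged sketch, not the transport models used in the study:

```python
import math

def upwind_error(n):
    """Advect a Gaussian once around a periodic unit domain (u = 1) with
    first-order upwind at Courant number 0.5; return the max-norm error
    against the exact solution (the initial profile, unchanged)."""
    dx = 1.0 / n
    cfl = 0.5
    q = [math.exp(-((i + 0.5) * dx - 0.5) ** 2 / 0.01) for i in range(n)]
    exact = q[:]
    for _ in range(int(round(n / cfl))):        # one full revolution
        # periodic upwind update: q[i-1] wraps via Python's negative index
        q = [q[i] - cfl * (q[i] - q[i - 1]) for i in range(n)]
    return max(abs(a - b) for a, b in zip(q, exact))

# demo: doubling the resolution reduces the numerical transport error
err_coarse = upwind_error(100)
err_fine = upwind_error(200)
```

For a first-order scheme the asymptotic error ratio under doubling is 2; at these still-coarse resolutions the measured ratio is somewhat below that, which is itself a useful warning that the coarse run is far from converged.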
DOE Office of Scientific and Technical Information (OSTI.GOV)
Touma, Rony; Zeidan, Dia
In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non-oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid, using dual cells intermediately while updating the numerical solution to avoid solving the Riemann problems arising at the cell interfaces. We then apply the numerical scheme to a classical drift-flux problem. The obtained results are in good agreement with corresponding ones in the recent literature, confirming the potential of the proposed scheme.
Zradziński, Patryk
2013-06-01
According to international guidelines, the assessment of biophysical effects of exposure to electromagnetic fields (EMF) generated by hand-operated sources requires the evaluation of the induced electric field (E(in)) or the specific energy absorption rate (SAR) caused by EMF inside a worker's body, usually carried out by numerical simulations with different protocols for these two exposure cases. The crucial element of these simulations is the numerical phantom of the human body. Procedures for E(in) and SAR evaluation for compliance analysis with exposure limits have been defined in Institute of Electrical and Electronics Engineers standards and International Commission on Non-Ionizing Radiation Protection guidelines, but a detailed specification of human body phantoms has not been given. An analysis was performed of the properties of over 30 numerical human body phantoms used in recently published investigations related to the assessment of EMF exposure from various sources. The differences in the applicability of these phantoms to the evaluation of E(in) and SAR while operating industrial devices, and of SAR while using mobile communication handsets, are discussed. The dimensions, posture, spatial resolution, and electric contact with the ground of a whole-body numerical phantom are the key parameters in modeling exposure from industrial devices, whereas modeling exposure from mobile communication handsets, which needs to represent only the part of the body nearest the handset, depends mainly on the spatial resolution of the phantom. The specification and standardization of these parameters of numerical human body phantoms are key requirements for achieving comparable and reliable results from numerical simulations carried out for compliance analysis against exposure limits, or within exposure assessment in EMF-related epidemiological studies.
Numerical solution of the exterior oblique derivative BVP using the direct BEM formulation
NASA Astrophysics Data System (ADS)
Čunderlík, Róbert; Špir, Róbert; Mikula, Karol
2016-04-01
The fixed gravimetric boundary value problem (FGBVP) represents an exterior oblique derivative problem for the Laplace equation. A direct formulation of the boundary element method (BEM) for the Laplace equation leads to a boundary integral equation (BIE) in which a harmonic function is represented as a superposition of the single-layer and double-layer potentials. Such a potential representation is applied to obtain a numerical solution of the FGBVP. The oblique derivative problem is treated by decomposing the gradient of the unknown disturbing potential into its normal and tangential components. Our numerical scheme uses collocation with linear basis functions. It involves a triangulated discretization of the Earth's surface as the computational domain, taking its complicated topography into account. To achieve high-resolution numerical solutions, parallel implementations using MPI subroutines, as well as an iterative elimination of far-zone contributions, are performed. Numerical experiments present a reconstruction of a harmonic function above the Earth's topography given by the spherical harmonic approach, namely by the EGM2008 geopotential model up to degree 2160. The SRTM30 global topography model is used to approximate the Earth's surface by the triangulated discretization. The obtained BEM solution with a resolution of 0.05 deg (12,960,002 nodes) is compared with EGM2008. The standard deviation of the residuals, 5.6 cm, indicates good agreement. The largest residuals are, as expected, in high mountainous regions; they are negative, reaching up to -0.7 m in the Himalayas and about -0.3 m in the Andes and Rocky Mountains. A local refinement over Slovakia confirms an improvement of the numerical solution in this mountainous region despite the fact that the Earth's topography is considered there in more detail.
Psychosocial Maturity and Conflict Resolution Management of Higher Secondary School Students
ERIC Educational Resources Information Center
Jaseena M.P.M., Fathima; P., Divya
2014-01-01
The aim of the study is to find out the extent of, and the difference in, the mean scores of Psychosocial Maturity and Conflict Resolution Management of higher secondary school students of Kerala. A survey technique was used for the study. The sample consists of 685 higher secondary students, selected with due representation given to other criteria. Findings revealed that…
Estimating the numerical diapycnal mixing in an eddy-permitting ocean model
NASA Astrophysics Data System (ADS)
Megann, Alex
2018-01-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
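The idea of measuring an effective diffusivity produced purely by the advection scheme can be shown in one dimension: advect a tracer with a first-order upwind scheme, whose modified equation carries an implicit diffusivity of roughly u·Δx·(1−c)/2, and recover that diffusivity from the growth of the tracer's second moment. This moment-based diagnosis is in the same spirit as, though far simpler than, the isopycnal watermass analysis used in the paper.

```python
import math

def moments(q):
    """Mass-weighted mean index-position and variance of a 1-D tracer field
    (coordinates in physical units are applied by the caller via dx)."""
    mass = sum(q)
    mean = sum(w * i for i, w in enumerate(q)) / mass
    var = sum(w * (i - mean) ** 2 for i, w in enumerate(q)) / mass
    return mean, var

def numerical_diffusivity(n=200, cfl=0.5, dist=0.3):
    """Advect a Gaussian tracer (u = 1) with first-order upwind on a periodic
    unit domain and infer the effective diffusivity from second-moment growth:
    kappa_eff = d(variance)/dt / 2."""
    dx = 1.0 / n
    steps = int(round(dist / (cfl * dx)))
    q = [math.exp(-((i + 0.5) * dx - 0.3) ** 2 / 0.001) for i in range(n)]
    _, var0 = moments(q)
    for _ in range(steps):
        q = [(1.0 - cfl) * q[i] + cfl * q[i - 1] for i in range(n)]
    _, var1 = moments(q)
    t = steps * cfl * dx                           # elapsed time (u = 1)
    kappa_eff = (var1 - var0) * dx ** 2 / (2.0 * t)
    kappa_theory = 0.5 * dx * (1.0 - cfl)          # modified-equation estimate
    return kappa_eff, kappa_theory

k_eff, k_th = numerical_diffusivity()
```

Because each upwind step is a one-cell Bernoulli random walk of the tracer mass, the measured diffusivity matches the modified-equation value essentially exactly; in a full ocean model the analogous numerical diffusivity hides inside the 3-D advection scheme and must be diagnosed rather than derived.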
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Kumar, Sujay V.; Srikishen, Jayanthi; Jedlovec, Gary J.
2010-01-01
One of the most challenging weather forecast problems in the southeastern U.S. is daily summertime pulse-type convection. During the summer, atmospheric flow and forcing are generally weak in this region; thus, convection typically initiates in response to local forcing along sea/lake breezes, and other discontinuities often related to horizontal gradients in surface heating rates. Numerical simulations of pulse convection usually have low skill, even in local predictions at high resolution, due to the inherent chaotic nature of these precipitation systems. Forecast errors can arise from assumptions within parameterization schemes, model resolution limitations, and uncertainties in both the initial state of the atmosphere and land surface variables such as soil moisture and temperature. For this study, it is hypothesized that high-resolution, consistent representations of surface properties such as soil moisture, soil temperature, and sea surface temperature (SST) are necessary to better simulate the interactions between the surface and atmosphere, and ultimately improve predictions of summertime pulse convection. This paper describes a sensitivity experiment using the Weather Research and Forecasting (WRF) model. Interpolated land and ocean surface fields from a large-scale model are replaced with high-resolution datasets provided by unique NASA assets in an experimental simulation: the Land Information System (LIS) and Moderate Resolution Imaging Spectroradiometer (MODIS) SSTs. The LIS is run in an offline mode for several years at the same grid resolution as the WRF model to provide compatible land surface initial conditions in an equilibrium state. The MODIS SSTs provide detailed analyses of SSTs over the oceans and large lakes compared to current operational products. 
The WRF model runs initialized with the LIS+MODIS datasets result in a reduction in the overprediction of rainfall areas; however, the skill is almost equally low in both experiments using traditional verification methodologies. Output from object-based verification within NCAR's Meteorological Evaluation Tools reveals that the WRF runs initialized with LIS+MODIS data consistently generated precipitation objects that better matched observed precipitation objects, especially at higher precipitation intensities. The LIS+MODIS runs produced on average a 4% increase in matched precipitation areas and a simultaneous 4% decrease in unmatched areas during three months of daily simulations.
NASA Astrophysics Data System (ADS)
Bracco, Annalisa; Choi, Jun; Joshi, Keshav; Luo, Hao; McWilliams, James C.
2016-05-01
This study examines the mesoscale and submesoscale circulations along the continental slope in the northern Gulf of Mexico at depths greater than 1000 m. The investigation is performed using a regional model run at two horizontal grid resolutions, 5 km and 1.6 km, over a 3 year period, from January 2010 to December 2012. Ageostrophic submesoscale eddies and vorticity filaments populate the continental slope, and they are stronger and more abundant in the higher-resolution simulation, as is to be expected. They are formed from horizontal shear layers at the edges of highly intermittent, bottom-intensified, along-slope boundary currents, and in the cores of those currents where they are confined to steep slopes. Two different flow regimes are identified. The first applies to the De Soto Canyon, which is characterized by weak mean currents and, in the high-resolution run, by few but intense submesoscale eddies that form preferentially along the Florida continental slope. The second is found in the remainder of the domain, where the mean currents are stronger, the circulation is highly variable in both space and time, and the vorticity field is populated, in the high-resolution case, by numerous vorticity filaments and short-lived eddies. Lagrangian tracers are deployed at different times along the continental slope below 1000 m depth to quantify the impact of the submesoscale currents on transport and mixing. The modeled absolute dispersion is, on average, independent of horizontal resolution, while mixing, quantified by finite-size Lyapunov exponents and vertical relative dispersion, increases when submesoscale processes are present. Dispersion in the De Soto Canyon is smaller than in the rest of the model domain and less affected by resolution.
This is further confirmed comparing the evolution of passive dye fields deployed in De Soto Canyon near the Macondo Prospect, where the Deepwater Horizon rig exploded in 2010, and at the largest known natural hydrocarbon seep in the northern Gulf, known as GC600, located a few hundred kilometers to the west of the rig wellhead.
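The two dispersion diagnostics used here have simple definitions that can be computed directly from tracer trajectories. The sketch below is illustrative only (synthetic trajectory arrays, unit time step), not the study's analysis code: absolute dispersion is the mean squared displacement from the release points, and the finite-size Lyapunov exponent (FSLE) is the inverse average time for pair separations to grow by a fixed factor r.

```python
import numpy as np

def absolute_dispersion(x, y):
    """Mean squared displacement from the release points.
    x, y: trajectory arrays of shape (n_times, n_particles)."""
    return np.mean((x - x[0])**2 + (y - y[0])**2, axis=1)

def fsle(x, y, pairs, delta0=1.0, r=1.4, n_scales=8):
    """Finite-size Lyapunov exponent lambda(d) = ln(r) / <tau(d)>, where
    tau(d) is the first time a pair's separation grows from d to r*d.
    pairs: (i, j) particle index pairs; time step assumed to be 1."""
    deltas = delta0 * r**np.arange(n_scales)
    times = {k: [] for k in range(n_scales)}
    for i, j in pairs:
        sep = np.hypot(x[:, i] - x[:, j], y[:, i] - y[:, j])
        for k, d in enumerate(deltas):
            reached = np.flatnonzero(sep >= d)      # first time at scale d
            grown = np.flatnonzero(sep >= r * d)    # first time at scale r*d
            if reached.size and grown.size:
                later = grown[grown > reached[0]]
                if later.size:
                    times[k].append(later[0] - reached[0])
    return {float(deltas[k]): np.log(r) / np.mean(t)
            for k, t in times.items() if t}
```

For exponentially separating pairs the FSLE recovers (up to time discretization) the exponential growth rate, independent of the scale d; for turbulence-like flows it instead varies with d, which is why it is useful for diagnosing submesoscale mixing.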
Bermingham, Douglas; Hill, Robert D.; Woltz, Dan; Gardner, Michael K.
2013-01-01
The goals of this study were to assess the primary effects of cognitive strategy use and a combined measure of numeric ability on recall of everyday numeric information (i.e., prices). Additionally, numeric ability was assessed as a moderator of the relationship between strategy use and memory for prices. One hundred participants memorized twelve prices that varied from 1 to 6 digits; they recalled these immediately and after 7 days. The use of strategies, assessed through self-report, was associated with better overall recall, but not with forgetting. Numeric ability was associated with neither better overall recall nor forgetting. A small moderating interaction was found, in which higher levels of numeric ability enhanced the beneficial effects of strategy use on overall recall. Exploratory analyses found two further small moderating interactions: simple strategy use enhanced overall recall at higher levels of numeric ability, compared to complex strategy use; and complex strategy use was associated with lower levels of forgetting, but only at higher levels of numeric ability, compared to simple strategy use. These results provide support for an objective measure of numeric ability, as well as adding to the literature on memory and the benefits of cognitive strategy use. PMID:23483964
The precipitation forecast sensitivity to data assimilation on a very high resolution domain
NASA Astrophysics Data System (ADS)
Palamarchuk, Iuliia; Ivanov, Sergiy; Ruban, Igor
2016-04-01
Recent developments in computing technology allow the implementation of very high resolution in numerical weather prediction models. As a result, simulation and quantitative analysis of mesoscale processes with horizontal scales of a few kilometers have become feasible. This is crucially important for studies of precipitation, including its life cycle. However, these new opportunities require revisiting existing knowledge, both in meteorology and in numerics. The latter concerns, in particular, the formulation of initial conditions through data assimilation: the precipitation prediction turns out to be quite sensitive to the techniques applied, the observational data types, and the spatial resolution. The impact of data assimilation on the resulting fields is presented using the Harmonie-38h1.2 model with the AROME physics package. The numerical experiments were performed on a Finland domain with a horizontal grid spacing of 2.5 km and 65 vertical levels, for an August 2010 period covering the BaltRad experiment. The initial conditions were formulated either by downscaling from the MARS archive or by assimilating observations through 3DVAR. Both conventional and radar observations were treated in the numerical experiments; the former included the SYNOP, SHIP, PILOT, TEMP, AIREP and DRIBU types. The background error covariances required for the variational assimilation had already been computed by the HIRLAM community from an ensemble of perturbed analyses with a purely statistical balance. Deviations among the model runs started from MARS, conventional, and radar data assimilation were complex; the focus is therefore on how the model system reacts to the introduction of observations. The contribution from observed variables included in the control vector, such as humidity and temperature, was expected to be largest. Nevertheless, revealing such an impact is not a straightforward task.
Major changes occur within the lowest 3 km of the atmosphere for all predicted variables. However, those changes were not directly associated with observation locations, as is often seen in single-observation experiments. Moreover, the model response to observations at longer lead times produces weak mesoscale spots of opposite signs. Special attention is paid to the precipitation, cloud and rain water, and vertical velocity fields. A complex chain of interactions among radiation, temperature, humidity, stratification, and other atmospheric characteristics results in changes to local updrafts and downdrafts and to the subsequent cloud formation and precipitation release. One can assume that these features arise from both atmospheric physics and numerical effects; the latter become more evident in simulations on very high resolution domains.
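The 3DVAR analysis described above minimizes the cost function J(x) = ½(x − x_b)ᵀB⁻¹(x − x_b) + ½(Hx − y)ᵀR⁻¹(Hx − y); for a linear observation operator the minimizer has a closed form. A toy sketch with made-up covariances (nothing here reflects HARMONIE's actual B matrix or control vector):

```python
import numpy as np

def analysis(xb, B, H, R, y):
    """Closed-form minimizer of the 3DVAR cost function
    J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (Hx-y)^T R^-1 (Hx-y)
    for a linear observation operator H:
    xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y - H @ xb)

# toy setup: 3 grid points, one observation of the middle point
xb = np.array([280.0, 281.0, 282.0])  # background temperatures (K)
B = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])       # background error covariance
H = np.array([[0.0, 1.0, 0.0]])      # observation operator
R = np.array([[0.5]])                # observation error variance
y = np.array([283.0])                # observed value
xa = analysis(xb, B, H, R, y)
```

The observation pulls the middle point toward 283 K, and the off-diagonal terms of B spread the increment to the neighbouring points, which is exactly why analysis increments are not confined to observation locations.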
Numerical simulations of significant orographic precipitation in Madeira island
NASA Astrophysics Data System (ADS)
Couto, Flavio Tiago; Ducrocq, Véronique; Salgado, Rui; Costa, Maria João
2016-03-01
High-resolution simulations of heavy precipitation events with the MESO-NH model are presented and used to verify that increasing horizontal resolution over zones of complex orography, such as Madeira island, improves the simulation of the spatial distribution and total amount of precipitation. The simulations succeeded in reproducing the general structure of the cloud systems over the ocean in the four periods of significant accumulated precipitation considered, which occurred under four distinct synoptic situations. The accumulated precipitation over Madeira was better represented at 0.5 km horizontal resolution, and different spatial patterns of the rainfall distribution over the island have been identified.
Final Report: Closeout of the Award NO. DE-FG02-98ER62618 (M.S. Fox-Rabinovitz, P.I.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox-Rabinovitz, M. S.
The final report describes a study aimed at exploring the variable-resolution stretched-grid (SG) approach to decadal regional climate modeling using advanced numerical techniques. The results obtained have shown that variable-resolution SG-GCMs, using stretched grids with fine resolution over the area(s) of interest, are a viable and established approach to regional climate modeling. The developed SG-GCMs have been used extensively for regional climate experimentation. The SG-GCM simulations are aimed at studying U.S. regional climate variability, with an emphasis on anomalous summer climate events such as U.S. droughts and floods.
Toroidal sensor arrays for real-time photoacoustic imaging
NASA Astrophysics Data System (ADS)
Bychkov, Anton S.; Cherepetskaya, Elena B.; Karabutov, Alexander A.; Makarov, Vladimir A.
2017-07-01
This article addresses theoretical and numerical investigation of image formation in photoacoustic (PA) imaging with complex-shaped concave sensor arrays. The spatial resolution and the size of sensitivity region of PA and laser ultrasonic (LU) imaging systems are assessed using sensitivity maps and spatial resolution maps in the image plane. This paper also discusses the relationship between the size of high-sensitivity regions and the spatial resolution of real-time imaging systems utilizing toroidal arrays. It is shown that the use of arrays with toroidal geometry significantly improves the diagnostic capabilities of PA and LU imaging to investigate biological objects, rocks, and composite materials.
Three-dimensional wide-field pump-probe structured illumination microscopy
Kim, Yang-Hyo; So, Peter T.C.
2017-01-01
We propose a new structured illumination scheme for achieving depth-resolved wide-field pump-probe microscopy with sub-diffraction-limit resolution. By acquiring coherent pump-probe images using a set of 3D structured light illumination patterns, a 3D super-resolution pump-probe image can be reconstructed. We derive the theoretical framework describing coherent image formation and the reconstruction scheme for this structured illumination pump-probe imaging system, and carry out numerical simulations to investigate its imaging performance. The results demonstrate a lateral resolution improvement by a factor of three and 0.5 µm-level axial optical sectioning. PMID:28380860
Differential absorption lidars for remote sensing of atmospheric pressure and temperature profiles
NASA Technical Reports Server (NTRS)
Korb, C. Laurence; Schwemmer, Geary K.; Famiglietti, Joseph; Walden, Harvey; Prasad, Coorg
1995-01-01
A near-infrared differential absorption lidar technique is developed that uses atmospheric oxygen as a tracer to obtain high-resolution vertical profiles of pressure and temperature with high accuracy. Solid-state tunable lasers and high-resolution spectrum analyzers are developed to carry out ground-based and airborne measurement demonstrations, and results of the measurements are presented. A numerical error analysis of high-altitude airborne and spaceborne experiments is carried out, and system concepts are developed for their implementation.
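The underlying differential-absorption principle can be sketched with the standard two-wavelength DIAL equation, in which the ratio of on-line and off-line returns in adjacent range bins yields the absorber number density. All values below are synthetic and illustrative, not the instrument's parameters:

```python
import numpy as np

def dial_number_density(P_on, P_off, dsigma, dr):
    """Standard two-wavelength DIAL retrieval of absorber number density:
    n(r) = ln[(P_off(r+dr) P_on(r)) / (P_off(r) P_on(r+dr))] / (2 dsigma dr).
    P_on, P_off: range-binned lidar returns; dsigma: differential
    absorption cross-section (cm^2); dr: range-bin length (cm)."""
    ratio = (P_off[1:] * P_on[:-1]) / (P_off[:-1] * P_on[1:])
    return np.log(ratio) / (2.0 * dsigma * dr)

# synthetic check: a uniform absorber should be recovered exactly
n_true = 5.0e18                    # cm^-3 (illustrative O2-like density)
dsigma, dr = 1.0e-23, 1.0e4       # cm^2, 100 m bins (made-up values)
k = np.arange(20)
P_on = np.exp(-2.0 * n_true * dsigma * dr * k)  # two-way on-line absorption
P_off = np.ones_like(P_on)                      # off-line: no absorption
n_retrieved = dial_number_density(P_on, P_off, dsigma, dr)
```

In the oxygen-based technique, the retrieved O₂ density (a fixed mixing ratio of air) is what ties the optical depth to pressure, and the temperature dependence of the line strength provides the temperature channel.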
Towards an Optimal Noise Versus Resolution Trade-Off in Wind Scatterometry
NASA Technical Reports Server (NTRS)
Williams, Brent A.
2011-01-01
This paper approaches the noise-versus-resolution trade-off in wind scatterometry from a field-wise retrieval perspective. Theoretical considerations are discussed, and a practical implementation using a MAP estimator is applied to the SeaWinds scatterometer. The approach is compared to conventional approaches as well as to numerical weather predictions. The new approach incorporates knowledge of the wind spectrum to reduce the impact of components of the wind signal that are expected to be noisy.
NASA Astrophysics Data System (ADS)
O'Neill, A.
2015-12-01
The Coastal Storm Modeling System (CoSMoS) is a numerical modeling scheme used to predict coastal flooding due to sea level rise and storms influenced by climate change; it is currently in use in central California and in development for Southern California (Pt. Conception to the Mexican border). Using a framework of circulation, wave, analytical, and Bayesian models at different geographic scales, CoSMoS translates high-resolution results into relevant hazard projections at the local scale, including flooding, wave heights, coastal erosion, shoreline change, and cliff failures. Ready access to accurate, high-resolution coastal flooding data is critical for further validation and refinement of CoSMoS and for improved coastal hazard projections. High-resolution Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) provides an exceptional data source, as appropriately timed flights during extreme tides or storms offer a geographically extensive method for determining areas of inundation and flooding extent along expanses of complex and varying coastline. Landward flood extents are identified numerically via edge detection in imagery from single flights, and can also be ascertained via change detection using additional flights and imagery collected during average wave and tide conditions. The extracted flooding positions are compared against CoSMoS results for similar tide, water level, and storm-intensity conditions, allowing for robust testing and validation of CoSMoS and providing essential feedback for regional and local model improvement.
Cardiac fibrillation risk of Taser weapons.
Leitgeb, Norbert
2014-06-01
The debate on potential health hazards associated with delivering electric discharges to incapacitated subjects, in particular on whether electric discharge weapons are lethal, less lethal or non-lethal, is still controversial. The cardiac fibrillation risks of the Taser weapons X26 and X3 have been investigated by measuring the delivered high-tension pulses as a function of load impedance. Excitation thresholds and sinus-to-Taser conversion factors have been determined by numerical modeling of endocardial, myocardial, and epicardial cells. A detailed quantitative assessment of cardiac electric exposure has been performed by numerical simulation with the normal-weight anatomical model NORMAN. The impact of anatomical variation has been quantified with an overweight model (Visible Man), both at a spatial resolution of 2 × 2 × 2 mm voxels. Spacing and location of dart electrodes were systematically varied and the worst-case position determined. Based on volume-weighted cardiac exposure assessment, the fibrillation probability of a worst-case hit was determined to be 30% (Taser X26) and 9% (Taser X3). The overall risk of Taser application, accounting for realistic spatial hit distributions, was derived from training sessions of police officers under realistic scenarios and by accounting for the influence of body (over-)weight as well as gender. The analysis showed that the overall fibrillation risk of Taser use is not negligible. It is higher for the Taser X26 than for the Taser X3 and amounts to about 1% for Europeans, with an about 20% higher risk for Asians. The results demonstrate that both enhancement and further reduction of fibrillation risk depend on the responsible use or abuse of Taser weapons.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
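The two skill scores used here, percent correct (PC) and the Hanssen-Kuipers discriminant (HKD, also called the Peirce skill score), follow directly from a 2 × 2 contingency table of forecast versus observed contrail occurrence once the probabilistic output is thresholded. A minimal sketch with hypothetical counts:

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Percent correct and Hanssen-Kuipers discriminant from a 2x2
    contingency table of dichotomous forecasts vs. observations."""
    n = hits + misses + false_alarms + correct_negatives
    pc = (hits + correct_negatives) / n
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
    return pc, pod - pofd                                     # HKD = POD - POFD

# hypothetical counts for illustration (not the study's data)
pc, hkd = verification_scores(hits=40, misses=10,
                              false_alarms=20, correct_negatives=130)
```

Because HKD rewards detection and penalizes false alarms symmetrically, a low threshold (e.g. the climatological frequency) tends to raise it, while PC favors the 0.5 threshold, consistent with the behavior reported above.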
NASA Astrophysics Data System (ADS)
Palma, J. L.; Rodrigues, C. V.; Lopes, A. S.; Carneiro, A. M. C.; Coelho, R. P. C.; Gomes, V. C.
2017-12-01
With the ever-increasing accuracy required of numerical weather forecasts, there is pressure to increase the resolution and fidelity of computational micro-scale flow models. However, numerical studies of complex terrain flows are fundamentally bound by the digital representation of the terrain and land cover. This work assesses the impact of the surface description on micro-scale simulation results at a highly complex site in Perdigão, Portugal, characterized by a twin parallel ridge topography, densely forested areas and an operating wind turbine. Although Coriolis and stratification effects cannot be ignored, the study is done under a neutrally stratified atmosphere and static inflow conditions. The understanding gained here will later carry over to WRF-coupled simulations, where those conditions do not apply and the flow physics is more accurately modelled. With access to very fine digital mappings (<1 m horizontal resolution) of both topography and land cover (roughness and canopy cover, both obtained through aerial LIDAR scanning of the surface), the impact of each element of the surface description on simulation results can be individualized, in order to estimate the resolution required to satisfactorily resolve them. Starting from the bare topographic description in its coarsest form, these elements include: a) the surface roughness mapping, b) the operating wind turbine, c) the canopy cover, as either body forces or added surface roughness (akin to meso-scale modelling), and d) high-resolution topography and surface cover mapping. Each of these will individually have an impact near the surface, including over the rotor-swept area of modern wind turbines. Combined, they will considerably change the flow up to boundary-layer heights. Sensitivity to these elements cannot be generalized and should be assessed case by case.
This type of in-depth study, unfeasible using WRF-coupled simulations, should provide considerable insight when spatially allocating mesh resolution for accurate resolution of complex flows.
Supernova feedback in numerical simulations of galaxy formation: separating physics from numerics
NASA Astrophysics Data System (ADS)
Smith, Matthew C.; Sijacki, Debora; Shen, Sijing
2018-07-01
While feedback from massive stars exploding as supernovae (SNe) is thought to be one of the key ingredients regulating galaxy formation, theoretically it is still unclear how the available energy couples to the interstellar medium and how galactic scale outflows are launched. We present a novel implementation of six sub-grid SN feedback schemes in the moving-mesh code AREPO, including injections of thermal and/or kinetic energy, two parametrizations of delayed cooling feedback and a `mechanical' feedback scheme that injects the correct amount of momentum depending on the relevant scale of the SN remnant resolved. All schemes make use of individually time-resolved SN events. Adopting isolated disc galaxy set-ups at different resolutions, with the highest resolution runs reasonably resolving the Sedov-Taylor phase of the SN, we aim to find a physically motivated scheme with as few tunable parameters as possible. As expected, simple injections of energy overcool at all but the highest resolution. Our delayed cooling schemes result in overstrong feedback, destroying the disc. The mechanical feedback scheme is efficient at suppressing star formation, agrees well with the Kennicutt-Schmidt relation, and leads to converged star formation rates and galaxy morphologies with increasing resolution without fine-tuning any parameters. However, we find it difficult to produce outflows with high enough mass loading factors at all but the highest resolution, indicating that we have oversimplified the evolution of unresolved SN remnants, that other stellar feedback processes need to be included, that a better star formation prescription is required, or, most likely, some combination of these.
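The core idea of a mechanical feedback scheme can be sketched as follows: inject the momentum corresponding to the available SN energy while the energy-conserving (Sedov-Taylor) phase is resolved, and cap it at the terminal momentum of the snowplough phase otherwise. The scalings are standard SN-remnant results, but the constants and the simple cap below are illustrative, not AREPO's actual implementation:

```python
import numpy as np

MSUN_G = 1.989e33          # solar mass in grams
MSUN_KMS = MSUN_G * 1.0e5  # 1 Msun km/s expressed in g cm/s

def injected_momentum(m_swept, n_H, e51=1.0):
    """Momentum (Msun km/s) to inject for one SN of energy e51 x 1e51 erg,
    given the coupled gas mass m_swept (Msun) and ambient hydrogen
    number density n_H (cm^-3).
    - Small m_swept (remnant resolved): energy-conserving momentum
      p = sqrt(2 E m_swept).
    - Large m_swept (remnant unresolved): cap at the terminal momentum
      of the momentum-conserving snowplough phase,
      p_t ~ 3e5 e51^(13/14) n_H^(-1/7) Msun km/s."""
    p_terminal = 3.0e5 * e51**(13.0 / 14.0) * n_H**(-1.0 / 7.0)
    p_energy = np.sqrt(2.0 * e51 * 1.0e51 * m_swept * MSUN_G) / MSUN_KMS
    return min(p_energy, p_terminal)
```

The cap is what makes the scheme resolution-aware: at high resolution little mass is coupled and the full energy-derived momentum is injected, while at coarse resolution the injection saturates at the physically motivated terminal value instead of overcooling.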
NASA Astrophysics Data System (ADS)
George, D. L.; Iverson, R. M.
2012-12-01
Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris-flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, and the Coulomb friction coefficient). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes.
The AMR algorithms utilize multiple levels of grid resolution, so that computationally inexpensive coarse grids can be used where the flow is absent, while much higher resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparing numerical predictions with empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the ≈50 million m³ debris flow that originated on Mt. Meager, British Columbia, in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris-flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times, and deposition. Our simulations reproduce many of the complex patterns of the event, such as the run-out geometry and extent; the large scale of the flow and the complex topographic features demonstrate the utility of AMR in flow simulations.
Mesoscale eddies in a high-resolution OGCM and a coupled ocean-atmosphere GCM
NASA Astrophysics Data System (ADS)
Yu, Y.; Liu, H.; Lin, P.
2017-12-01
The present study describes high-resolution climate modeling efforts, including oceanic, atmospheric and coupled general circulation models (GCMs), at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics (IAP). The high-resolution OGCM is based on the latest version of the LASG/IAP Climate system Ocean Model (LICOM2.1), with horizontal and vertical resolutions increased to 1/10° and 55 layers, respectively. Forced by surface fluxes from reanalysis and observed data, the model has been integrated for more than 80 model years. Compared with the simulation of the coarse-resolution OGCM, the eddy-resolving OGCM not only better simulates the spatio-temporal features of mesoscale eddies and the paths and positions of the western boundary currents but also reproduces the large meander of the Kuroshio Current and its interannual variability. In addition, the complex structure of the equatorial Pacific currents and of the currents in the coastal ocean of China is better captured thanks to the increased horizontal and vertical resolution. The high-resolution OGCM was then coupled to NCAR CAM4 at 25 km resolution, in which configuration the mesoscale air-sea interaction processes are better captured.
Evaluation of the spline reconstruction technique for PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra; Gaitanis, Anastasios
2014-04-15
Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an ¹⁸F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an ¹⁸F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated.
Results: The results indicate an improvement in FWHM and FWTM in both the simulated and real point-source studies. In all simulated phantoms, the SRT exhibits higher contrast and lower bias than FBP at all noise levels, at the cost of increased COV in the reconstructed images. In the real studies, whereas the contrast of the cold chambers is similar for both algorithms, the SRT reconstructions of the NEMA phantom exhibit slightly higher COV values than those of FBP. In the Derenzo phantom, SRT resolves the 2-mm separated holes slightly better than FBP. The small-animal and human reconstructions via SRT exhibit slightly higher resolution and contrast than the FBP reconstructions. Conclusions: The SRT provides images of higher resolution, higher contrast, and lower bias than FBP, while slightly increasing the noise in the reconstructed images. Furthermore, it eliminates streak artifacts outside the object boundary. Unlike other analytic algorithms, the reconstruction time of SRT is comparable with that of FBP. The source code for SRT will become available in a future release of STIR.
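The COV and contrast figures of merit used in this evaluation have standard definitions that can be sketched in a few lines; the ROIs and intensities below are synthetic, not the study's data:

```python
import numpy as np

def cov(roi):
    """Coefficient of variation of a region of interest: std / mean."""
    return np.std(roi) / np.mean(roi)

def contrast(hot_roi, background_roi):
    """Hot-region contrast relative to the background mean."""
    return (np.mean(hot_roi) - np.mean(background_roi)) / np.mean(background_roi)

# synthetic ROIs standing in for phantom regions
rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(32, 32))  # mean 100, noise std 5
hot = rng.normal(150.0, 5.0, size=(8, 8))           # 50% hotter region
```

With these synthetic regions, the contrast comes out near 0.5 and the background COV near 0.05, which is the kind of trade-off being reported: an algorithm can raise contrast while also raising COV (noise).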
Health and Performance of Antarctic Winter-Over Personnel: A Follow-Up Study.
1985-06-01
Palinkas
Antarctic winter-over personnel experience the austral winter in confinement. The prolonged isolation during this period is associated with numerous social and psychological stressors, in addition to physiological changes. Polarization of subgroups of civilian …
Ultra-high spatial resolution multi-energy CT using photon counting detector technology
NASA Astrophysics Data System (ADS)
Leng, S.; Gutjahr, R.; Ferrero, A.; Kappler, S.; Henning, A.; Halaweish, A.; Zhou, W.; Montoya, J.; McCollough, C.
2017-03-01
Two ultra-high-resolution (UHR) imaging modes, each with two energy thresholds, were implemented on a research, whole-body photon-counting-detector (PCD) CT scanner, referred to as sharp and UHR, respectively. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the achieved higher spatial resolution, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used. This was due to excessive noise in the higher resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode with regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose, or application of noise reduction techniques, is needed.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1998-01-01
A new high-resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods, and was built from the ground up with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. A much simpler and more robust method can therefore be developed by not using these derived properties explicitly.
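The first belief, working from the integral form, underlies all finite-volume-style bookkeeping: cell averages change only through fluxes across cell faces, so conservation holds by construction even across shocks. A toy illustration for 1D linear advection on a periodic grid (this is not the authors' method, just the integral-form accounting it builds on):

```python
def upwind_step(u, a, dt, dx):
    """Advance cell averages of u_t + a u_x = 0 (a > 0) one step using
    the integral form: each cell loses flux a*u through its right face
    and gains the flux leaving its left neighbour. Periodic boundaries
    via Python's negative indexing."""
    n = len(u)
    flux = [a * u[i] for i in range(n)]  # upwind face flux F_{i+1/2}
    return [u[i] - (dt / dx) * (flux[i] - flux[i - 1]) for i in range(n)]

u = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
u_new = upwind_step(u, a=1.0, dt=0.5, dx=1.0)
# total "mass" sum(u) is conserved exactly: face fluxes cancel in pairs
```

Because every interior flux appears once with each sign, the discrete total is conserved to machine precision regardless of how rough the solution is, which is exactly the property the differential form cannot guarantee near a shock.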
Impact of tropical cyclones on modeled extreme wind-wave climate
NASA Astrophysics Data System (ADS)
Timmermans, Ben; Stone, Dáithí; Wehner, Michael; Krishnan, Harinarayan
2017-02-01
The effect of forcing wind resolution on the extremes of global wind-wave climate is investigated in numerical simulations. Forcing winds from the Community Atmosphere Model at horizontal resolutions of ˜1.0° and ˜0.25° are used to drive Wavewatch III. Differences in extreme wave height are found to manifest most strongly in tropical cyclone (TC) regions, emphasizing the need for high-resolution forcing in those areas. Comparison with observations typically shows improvement in performance with increased forcing resolution, with a strong influence in the tail of the distribution, although simulated extremes can exceed observations. A simulation for the end of the 21st century under an RCP 8.5-type emission scenario suggests further increases in extreme wave height in TC regions.
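Statements about the tail of the wave-height distribution are usually phrased as return levels of block maxima. A toy empirical version, assuming a hypothetical list of annual maximum significant wave heights (a GEV fit would be the standard, more careful route):

```python
def return_level(annual_maxima, period_years):
    """Empirical T-year return level: the sample quantile 1 - 1/T of
    the annual maxima, located via Weibull plotting positions
    p_k = k / (n + 1). A crude sketch, not a GEV fit."""
    xs = sorted(annual_maxima)
    n = len(xs)
    target = 1.0 - 1.0 / period_years
    for k, x in enumerate(xs, start=1):
        if k / (n + 1) >= target:
            return x
    return xs[-1]  # requested period longer than the record supports

# hypothetical record: 19 years of annual maxima (in metres)
lvl = return_level(list(range(1, 20)), period_years=10)
```

With only a few decades of simulation, the empirical estimate is noisy far into the tail, which is one reason resolution-dependent differences show up most clearly in the extremes.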
Zonal wavefront sensing with enhanced spatial resolution.
Pathak, Biswajit; Boruah, Bosanta R
2016-12-01
In this Letter, we introduce a scheme to enhance the spatial resolution of a zonal wavefront sensor. The zonal wavefront sensor comprises an array of binary gratings implemented by a ferroelectric spatial light modulator (FLCSLM) followed by a lens, in lieu of the array of lenses in the Shack-Hartmann wavefront sensor. We show that the fast response of the FLCSLM device facilitates the quick display of several laterally shifted binary grating patterns, and the programmability of the device enables synchronized capturing of each focal spot array. This leads to wavefront estimation with enhanced spatial resolution without much sacrifice of the sensor frame rate, making the scheme suitable for high spatial resolution measurement of transient wavefronts. We present experimental and numerical simulation results to demonstrate the importance of the proposed wavefront sensing scheme.
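The resolution gain comes from interleaving measurements taken with laterally shifted sampling grids. A schematic one-dimensional sketch with hypothetical slope data: two slope samplings offset by half a subaperture are interleaved and then integrated into wavefront heights (a real zonal sensor would use a 2D least-squares reconstruction instead of this rectangle-rule integration):

```python
def interleave(slopes_a, slopes_b):
    """Merge two slope samplings offset by half a subaperture,
    doubling the effective sampling density."""
    merged = []
    for a, b in zip(slopes_a, slopes_b):
        merged += [a, b]
    return merged

def integrate_slopes(slopes, dx, w0=0.0):
    """Reconstruct wavefront heights from local slopes (rectangle rule)."""
    w = [w0]
    for s in slopes:
        w.append(w[-1] + s * dx)
    return w

# a tilted wavefront W(x) = x has unit slope everywhere;
# after interleaving, the sample spacing is dx = 0.5 instead of 1.0
dense = interleave([1.0, 1.0], [1.0, 1.0])
heights = integrate_slopes(dense, dx=0.5)
```

Each shifted grating pattern contributes one coarse slope map; the speed of the FLCSLM is what makes acquiring several of them per wavefront estimate practical.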
Low-Order Aberrations in Band-limited Lyot Coronagraphs
NASA Astrophysics Data System (ADS)
Sivaramakrishnan, Anand; Soummer, Rémi; Sivaramakrishnan, Allic V.; Lloyd, James P.; Oppenheimer, Ben R.; Makidon, Russell B.
2005-12-01
We study the way Lyot coronagraphs with unapodized entrance pupils respond to small, low-order phase aberrations. This study is applicable to ground-based adaptive optics coronagraphs operating at 90% and higher Strehl ratios, as well as to some space-based coronagraphs with intrinsically higher Strehl ratio imaging. We utilize a second-order expansion of the monochromatic point-spread function (written as a power spectrum of a power series in the phase aberration over a clear aperture) to derive analytical expressions for the response of a "band-limited" Lyot coronagraph (BLC) to small, low-order phase aberrations. The BLC possesses a focal plane mask with an occulting spot whose opacity profile is a spatially band-limited function rather than a hard-edged, opaque disk. The BLC is, to first order, insensitive to tilt and astigmatism. Undersizing the stop in the reimaged pupil plane (the Lyot plane) following the focal plane mask can alleviate second-order effects of astigmatism, at the expense of system throughput and angular resolution. The optimal degree of such undersizing depends on individual instrument designs and goals. Our analytical work engenders physical insight and complements existing numerical work on this subject. Our methods can be extended to treat the passage of higher-order aberrations through band-limited Lyot coronagraphs by using our polynomial decomposition or an analogous Fourier approach.
Prospective treatment planning to improve locoregional hyperthermia for oesophageal cancer.
Kok, H P; van Haaren, P M A; van de Kamer, J B; Zum Vörde Sive Vörding, P J; Wiersma, J; Hulshof, M C C M; Geijsen, E D; van Lanschot, J J B; Crezee, J
2006-08-01
In the Academic Medical Center (AMC) Amsterdam, locoregional hyperthermia for oesophageal tumours is applied using the 70 MHz AMC-4 phased array system. Due to the occurrence of treatment-limiting hot spots in normal tissue and systemic stress at high power, the thermal dose achieved in the tumour can be sub-optimal. The large number of degrees of freedom of the heating device, i.e. the amplitudes and phases of the antennae, makes it difficult to avoid treatment-limiting hot spots by intuitive amplitude/phase steering. Prospective hyperthermia treatment planning combined with high-resolution temperature-based optimization was applied to improve hyperthermia treatment of patients with oesophageal cancer. All hyperthermia treatments were performed with 'standard' clinical settings. Temperatures were measured systemically, at the location of the tumour and near the spinal cord, which is an organ at risk. For 16 patients, numerically optimized settings were obtained from treatment planning with temperature-based optimization. Steady-state tumour temperatures were maximized, subject to constraints on normal tissue temperatures. At the start of 48 hyperthermia treatments in these 16 patients, temperature rise (ΔT) measurements were performed by applying a short power pulse with the numerically optimized amplitude/phase settings, with the clinical settings, and with mixed settings, i.e. numerically optimized amplitudes combined with clinical phases. The heating efficiency of the three settings was determined by the measured ΔT values and the ΔT-ratio between the ΔT in the tumour (ΔT_oes) and near the spinal cord (ΔT_cord). For a single patient, the steady-state temperature distribution was computed retrospectively for all three settings, since the temperature distributions may be quite different.
To illustrate that the choice of optimization strategy is decisive for the obtained settings, a numerical optimization on the ΔT-ratio was performed for this patient and the steady-state temperature distribution for the obtained settings was computed. A higher ΔT_oes was measured with the mixed settings compared to the calculated and clinical settings; ΔT_cord was higher with the mixed settings compared to the clinical settings. The ΔT-ratio was approximately 1.5 for all three settings. These results indicate that the most effective tumour heating can be achieved with the mixed settings. ΔT is proportional to the specific absorption rate (SAR), and a higher SAR results in a higher steady-state temperature, which implies that mixed settings are likely to provide the most effective heating at steady state as well. The steady-state temperature distributions for the clinical and mixed settings, computed for the single patient, showed some locations where temperatures exceeded the normal tissue constraints used in the optimization. This demonstrates that the numerical optimization did not prescribe the mixed settings because it had to comply with the constraints on normal tissue temperatures. However, the predicted hot spots are not necessarily clinically relevant. Numerical optimization on the ΔT-ratio for this patient yielded a very high ΔT-ratio (approximately 380), albeit at the cost of excessive heating of normal tissue and lower steady-state tumour temperatures compared to the conventional optimization. Treatment planning can be valuable to improve hyperthermia treatments. A thorough discussion on clinically relevant objectives and constraints is essential.
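The steering problem described above, choosing antenna amplitudes and phases to favour tumour heating over an organ at risk, can be caricatured in a few lines. The per-antenna complex field contributions below are invented placeholders; a real planning system computes them from patient anatomy with an electromagnetic solver, and the optimization is done on temperature rather than this crude SAR ratio:

```python
import cmath
import itertools

def sar(settings, efields):
    """SAR at a point is proportional to |sum_k a_k e^{i*phi_k} E_k|^2,
    the squared magnitude of the superposed antenna fields."""
    total = sum(a * cmath.exp(1j * phi) * e
                for (a, phi), e in zip(settings, efields))
    return abs(total) ** 2

# hypothetical per-antenna field contributions at two points of interest
E_tumour = [1.0 + 0.2j, 0.8 - 0.1j, 0.9 + 0.0j, 0.7 + 0.3j]
E_cord   = [0.5 + 0.4j, 0.6 - 0.2j, 0.4 + 0.1j, 0.5 + 0.0j]

def best_phases(amps, candidates):
    """Coarse exhaustive search over phase combinations, maximizing the
    tumour-to-cord SAR ratio (a stand-in for the measured dT-ratio)."""
    best, best_ratio = None, -1.0
    for phis in itertools.product(candidates, repeat=len(amps)):
        settings = list(zip(amps, phis))
        ratio = sar(settings, E_tumour) / max(sar(settings, E_cord), 1e-9)
        if ratio > best_ratio:
            best, best_ratio = phis, ratio
    return best, best_ratio

phases = [0.0, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]
phis, ratio = best_phases([1.0] * 4, phases)
```

Even this toy search shows why intuitive steering is hard: the SAR ratio depends on all phases jointly through complex interference, and maximizing the ratio alone (as the abstract's ΔT-ratio optimization illustrates) can drive excessive heating elsewhere unless explicit normal-tissue constraints are imposed.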
Numerical methods for systems of conservation laws of mixed type using flux splitting
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1990-01-01
The essentially non-oscillatory (ENO) finite difference scheme is applied to systems of conservation laws of mixed hyperbolic-elliptic type. A flux splitting, with the corresponding Jacobian matrices having real and positive/negative eigenvalues, is used. The hyperbolic ENO operator is applied to each split flux separately. The scheme is numerically tested on the van der Waals equation in fluid dynamics. Convergence with good resolution to weak solutions was observed for various Riemann problems, which were then numerically checked to be admissible as viscosity-capillarity limits. The interesting phenomenon of the shrinking of elliptic regions, when present in the initial conditions, was also observed.
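The flux-splitting idea can be illustrated on Burgers' equation with the global Lax-Friedrichs splitting f±(u) = (f(u) ± αu)/2, α ≥ max|f'(u)|, which makes f+ carry nonnegative and f- nonpositive wave speeds so that each part can be differenced in its own upwind direction. This first-order upwind sketch stands in for the higher-order ENO operator used in the paper:

```python
def lf_split_step(u, dt, dx, alpha):
    """One step of first-order upwind applied to the Lax-Friedrichs
    splitting f+(u) = (f(u) + alpha*u)/2, f-(u) = (f(u) - alpha*u)/2
    for Burgers' flux f(u) = u^2/2 on a periodic grid. Choose
    alpha >= max|u| so the split fluxes have definite wave speeds."""
    n = len(u)
    f = [0.5 * v * v for v in u]
    fp = [0.5 * (f[i] + alpha * u[i]) for i in range(n)]  # f+, upwind left
    fm = [0.5 * (f[i] - alpha * u[i]) for i in range(n)]  # f-, upwind right
    return [u[i] - (dt / dx) * ((fp[i] - fp[i - 1]) + (fm[(i + 1) % n] - fm[i]))
            for i in range(n)]

u0 = [0.0, 1.0, 2.0, 1.0]
u1 = lf_split_step(u0, dt=0.1, dx=1.0, alpha=2.0)
# on a periodic grid the flux differences telescope, so sum(u) is conserved
```

The same split-then-upwind structure carries over to the mixed-type case in the abstract, where the splitting is chosen so each Jacobian has real eigenvalues of a single sign even inside the elliptic region.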