On controlling nonlinear dissipation in high order filter methods for ideal and non-ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjogreen, B.
2004-01-01
The newly developed adaptive numerical dissipation control in spatially high order filter schemes for the compressible Euler and Navier-Stokes equations has recently been extended to the ideal and non-ideal magnetohydrodynamics (MHD) equations. These filter schemes are applicable to complex unsteady MHD high-speed shock/shear/turbulence problems. They also provide a natural and efficient way to minimize the Div(B) numerical error. The adaptive numerical dissipation mechanism consists of automatic detection of different flow features, using distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed, while leaving the rest of the region free from numerical dissipation contamination. The numerical dissipation considered consists of high order linear dissipation for the suppression of high frequency oscillations and the nonlinear dissipative portion of high-resolution shock-capturing methods for discontinuity capturing. The applicable nonlinear dissipative portion of high-resolution shock-capturing methods is very general. The objective of this paper is to investigate the performance of three commonly used types of nonlinear numerical dissipation for both ideal and non-ideal MHD.
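To make the sensor-controlled mechanism concrete, here is a minimal sketch for a 1D periodic grid; the curvature sensor and the coefficients are our own simplification for illustration, not the authors' exact detectors:

```python
import numpy as np

def adaptive_filter(u, kappa_lin=0.01, shock_thresh=0.1):
    """One filter pass: weak high-order linear dissipation in smooth regions,
    stronger nonlinear (second-difference) dissipation where a curvature
    sensor flags a discontinuity. Parameter values are illustrative."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    d2 = up - 2.0*u + um                             # second difference
    d4 = np.roll(d2, -1) - 2.0*d2 + np.roll(d2, 1)   # fourth difference
    # Jameson-type normalized curvature sensor, roughly in [0, 1]
    sensor = np.abs(d2) / (np.abs(up) + 2.0*np.abs(u) + np.abs(um) + 1e-12)
    shock = sensor > shock_thresh
    # shock cells: nonlinear dissipation scaled by the sensor;
    # smooth cells: weak linear fourth-difference damping
    return u + np.where(shock, 0.5*sensor*d2, -kappa_lin*d4)
```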
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-04-01
The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested with a few two-phase flow problems with phase appearance/disappearance phenomena considered, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and the second-order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
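A minimal sketch of the Jacobian-free Newton-Krylov idea described above, using SciPy's GMRES; the finite-difference Jacobian-vector product is the standard trick, and the residual function F and tolerances are placeholders:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, tol=1e-8, max_newton=30):
    """Newton iteration whose linear solves use only J(u) v products,
    approximated by finite differences of the residual F -- no analytical
    Jacobian matrix is ever derived or stored."""
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))      # FD perturbation size
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps*v) - r) / eps)
        du, _ = gmres(J, -r)   # default tolerances suffice for a sketch
        u = u + du
    return u
```

For instance, one BDF1 step of u_t = N(u) becomes the residual F(u) = u - u_old - dt*N(u), which can be handed to jfnk directly.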
Guide-star-based computational adaptive optics for broadband interferometric tomography
Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Carney, P. Scott; Boppart, Stephen A.
2012-01-01
We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
A Class of High-Resolution Explicit and Implicit Shock-Capturing Methods
NASA Technical Reports Server (NTRS)
Yee, H. C.
1994-01-01
The development of shock-capturing finite difference methods for hyperbolic conservation laws has been a rapidly growing area for the last decade. Many of the fundamental concepts, state-of-the-art developments and applications to fluid dynamics problems can only be found in meeting proceedings, scientific journals and internal reports. This paper attempts to give a unified and generalized formulation of a class of high-resolution, explicit and implicit shock-capturing methods, and to illustrate their versatility in various steady and unsteady complex shock wave, perfect gas, equilibrium real gas and nonequilibrium flow computations. These numerical methods are formulated for ease of efficient implementation into a practical computer code. The various constructions of high-resolution shock-capturing methods fall nicely into the present framework, and a computer code can be implemented with the various methods as separate modules. Included is a systematic overview of the basic design principles of the various related numerical methods. Special emphasis will be on the construction of the basic nonlinear, spatially second- and third-order schemes for nonlinear scalar hyperbolic conservation laws and the methods of extending these nonlinear scalar schemes to nonlinear systems via approximate Riemann solvers and flux-vector splitting approaches. Generalization of these methods to efficiently include real gases and large systems of nonequilibrium flows will be discussed. Some issues concerning the applicability of these methods, which were designed for homogeneous hyperbolic conservation laws, to problems containing stiff source terms and shock waves are also included. The performance of some of these schemes is illustrated by numerical examples for one-, two- and three-dimensional gas-dynamics problems. The use of the Lax-Friedrichs numerical flux to obtain high-resolution shock-capturing schemes is generalized. This method can be extended to nonlinear systems of equations without the use of Riemann solvers or flux-vector splitting approaches and thus provides large savings for multidimensional, equilibrium real gas and nonequilibrium flow computations.
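As a concrete instance of the class of schemes surveyed, the sketch below applies a second-order MUSCL reconstruction with a minmod limiter and the local Lax-Friedrichs (Rusanov) flux, which, as noted above, requires no Riemann solver, to the inviscid Burgers equation on a periodic grid. The details are our illustration, not the paper's exact formulation:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller slope when signs agree, else zero."""
    return np.where(a*b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def burgers_step(u, dx, dt):
    """One MUSCL step for u_t + (u^2/2)_x = 0 with minmod-limited slopes
    and the local Lax-Friedrichs flux, on a periodic grid."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    uL = u + 0.5*s                        # left state at interface i+1/2
    uR = np.roll(u - 0.5*s, -1)           # right state at interface i+1/2
    f = lambda q: 0.5*q*q
    a = np.maximum(np.abs(uL), np.abs(uR))              # local wave speed
    flux = 0.5*(f(uL) + f(uR)) - 0.5*a*(uR - uL)        # LLF flux
    return u - dt/dx * (flux - np.roll(flux, 1))
```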
Numerical Hydrodynamics in Special Relativity.
Martí, J M; Müller, E
1999-01-01
This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results obtained with different numerical SRHD methods are compared, and two astrophysical applications of SRHD flows are discussed. An evaluation of the various numerical methods is given and future developments are analyzed. Supplementary material is available for this article at 10.12942/lrr-1999-3.
Thermodynamical effects and high resolution methods for compressible fluid flows
NASA Astrophysics Data System (ADS)
Li, Jiequan; Wang, Yue
2017-08-01
One of the fundamental differences of compressible fluid flows from incompressible fluid flows is the involvement of thermodynamics. This difference should be manifested in the design of numerical schemes. Unfortunately, the role of entropy, expressing irreversibility, is often neglected even though the entropy inequality, as a conceptual derivative, is verified for some first order schemes. In this paper, we refine the GRP solver to illustrate how the thermodynamical variation is integrated into the design of high resolution methods for compressible fluid flows and demonstrate numerically the importance of thermodynamic effects in the resolution of strong waves. As a by-product, we show that the GRP solver works for generic equations of state, and is independent of technical arguments.
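For reference, the entropy inequality mentioned above is the standard statement that, for a hyperbolic system U_t + F(U)_x = 0 with a convex entropy pair (eta, q) satisfying q' = eta' F', admissible weak solutions obey

```latex
\partial_t \,\eta(U) + \partial_x \, q(U) \le 0,
```

with equality in smooth regions; for gas dynamics the physical choice is \eta = -\rho S, with S the specific entropy, so that the inequality expresses the irreversibility across shocks that the paper argues should inform scheme design.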
Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.
2015-12-01
Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability.

Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
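For piecewise-constant control-volume fields in 1D, the conservative Galerkin projection used to transfer data between meshes reduces to an overlap-weighted average; the sketch below shows that lowest-order case (the paper's framework extends this to higher-order and discontinuous elements):

```python
import numpy as np

def galerkin_remap_p0(x_old, u_old, x_new):
    """Conservative L2 (Galerkin) projection of a piecewise-constant field
    between 1D meshes given by node arrays x_old, x_new: each new cell
    average is the overlap-weighted average of the old cells it intersects,
    so the integral of the field is preserved."""
    u_new = np.zeros(len(x_new) - 1)
    for j in range(len(x_new) - 1):
        a, b = x_new[j], x_new[j + 1]
        total = 0.0
        for i in range(len(x_old) - 1):
            lo, hi = max(a, x_old[i]), min(b, x_old[i + 1])
            if hi > lo:                      # cells overlap
                total += u_old[i] * (hi - lo)
        u_new[j] = total / (b - a)
    return u_new
```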
NASA Technical Reports Server (NTRS)
Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)
1985-01-01
Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.
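Since the divergence-free condition is singled out above as a stumbling block, a typical diagnostic is simply to monitor the discrete divergence of B during a run; a minimal 2D periodic-grid version, with our choice of central-difference stencil, is:

```python
import numpy as np

def max_divB(Bx, By, dx, dy):
    """Monitor the discrete divergence-free error of the magnetic field on
    a periodic 2D grid (axis 0 = x, axis 1 = y) with central differences.
    This is a diagnostic only; the schemes discussed above are designed to
    keep this error small."""
    dBx = (np.roll(Bx, -1, axis=0) - np.roll(Bx, 1, axis=0)) / (2.0*dx)
    dBy = (np.roll(By, -1, axis=1) - np.roll(By, 1, axis=1)) / (2.0*dy)
    return np.max(np.abs(dBx + dBy))
```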
A developed nearly analytic discrete method for forward modeling in the frequency domain
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai
2018-02-01
High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves
NASA Technical Reports Server (NTRS)
Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.
2012-01-01
In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.
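A standard model problem for this issue is the one studied by LeVeque and Yee,

```latex
u_t + u_x = -\mu\, u\,(u-1)\left(u-\tfrac{1}{2}\right),
```

where, for large stiffness mu, the few cells over which a captured discontinuity is numerically smeared are driven to the wrong equilibrium branch, so the discontinuity can propagate at a spurious, grid-dependent speed. Subcell resolution methods of the kind developed here are designed to avoid evaluating the stiff source term on such smeared values.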
A flux splitting scheme with high-resolution and robustness for discontinuities
NASA Technical Reports Server (NTRS)
Wada, Yasuhiro; Liou, Meng-Sing
1994-01-01
A flux splitting scheme is proposed for the general nonequilibrium flow equations with the aim of removing the numerical dissipation of Van Leer-type flux-vector splittings on a contact discontinuity. The scheme obtained is also recognized as an improved Advection Upstream Splitting Method (AUSM) in which a slight numerical overshoot immediately behind the shock is eliminated. The proposed scheme has favorable properties: high resolution for contact discontinuities; conservation of enthalpy for steady flows; numerical efficiency; and applicability to chemically reacting flows. In fact, for a single contact discontinuity, even if it is moving, this scheme gives the numerical flux of the exact solution of the Riemann problem. Various numerical experiments, including one with a thermo-chemical nonequilibrium flow, were performed and indicate that the scheme is free of oscillations and robust for shock/expansion waves. A cure for the carbuncle phenomenon is discussed as well.
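For orientation, the baseline AUSM flux that the paper improves upon splits the interface Mach number and the pressure separately; a sketch for the 1D Euler equations with scalar inputs and the standard Liou-Steffen splittings (the improved scheme of this paper adds further fixes) is:

```python
import numpy as np

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    """Baseline AUSM interface flux for the 1D Euler equations: convective
    quantities are advected by a split interface Mach number, and the
    pressure is split separately."""
    aL, aR = np.sqrt(gamma*pL/rhoL), np.sqrt(gamma*pR/rhoR)
    ML, MR = uL/aL, uR/aR

    # Van Leer-type Mach and pressure splittings
    def M_plus(M):  return 0.25*(M + 1)**2 if abs(M) <= 1 else max(M, 0.0)
    def M_minus(M): return -0.25*(M - 1)**2 if abs(M) <= 1 else min(M, 0.0)
    def p_plus(M, p):
        return 0.25*p*(M + 1)**2*(2 - M) if abs(M) <= 1 else (p if M > 0 else 0.0)
    def p_minus(M, p):
        return 0.25*p*(M - 1)**2*(2 + M) if abs(M) <= 1 else (p if M < 0 else 0.0)

    m = M_plus(ML) + M_minus(MR)               # interface Mach number
    HL = aL**2/(gamma - 1) + 0.5*uL**2         # total specific enthalpy
    HR = aR**2/(gamma - 1) + 0.5*uR**2
    phi = (np.array([rhoL*aL, rhoL*aL*uL, rhoL*aL*HL]) if m >= 0 else
           np.array([rhoR*aR, rhoR*aR*uR, rhoR*aR*HR]))   # upwinded
    return m*phi + np.array([0.0, p_plus(ML, pL) + p_minus(MR, pR), 0.0])
```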
Resolving the fine-scale structure in turbulent Rayleigh-Benard convection
NASA Astrophysics Data System (ADS)
Scheel, Janet; Emran, Mohammad; Schumacher, Joerg
2013-11-01
Results from high-resolution direct numerical simulations of turbulent Rayleigh-Benard convection in a cylindrical cell with an aspect ratio of one will be presented. We focus on the finest scales of convective turbulence, in particular the statistics of the kinetic energy and thermal dissipation rates in the bulk and the whole cell. These dissipation rates as well as the local dissipation scales are compared for different Rayleigh and Prandtl numbers. We also have investigated the convergence properties of our spectral element method and have found that both dissipation fields are very sensitive to insufficient resolution. We also demonstrate that global transport properties, such as the Nusselt number and the energy balances, are partly insensitive to insufficient resolution and yield consistent results even when the dissipation fields are under-resolved. Our present numerical framework is also compared with high-resolution simulations which use a finite difference method. For most of the compared quantities the agreement is found to be satisfactory.
High-resolution digital holography with the aid of coherent diffraction imaging.
Jiang, Zhilong; Veetil, Suhas P; Cheng, Jun; Liu, Cheng; Wang, Ling; Zhu, Jianqiang
2015-08-10
The image reconstructed in ordinary digital holography cannot deliver the desired resolution in comparison to photographic materials, making it less preferable for many interesting applications. A method is proposed to enhance the resolution of digital holography in all directions by placing a random phase plate between the specimen and the electronic camera and then using an iterative approach for the reconstruction. With this method, the resolution is improved remarkably in comparison to ordinary digital holography. Theoretical analysis is supported by numerical simulation, and the feasibility of the method is also studied experimentally.
Microsphere-assisted super-resolution imaging with enlarged numerical aperture by semi-immersion
NASA Astrophysics Data System (ADS)
Wang, Fengge; Yang, Songlin; Ma, Huifeng; Shen, Ping; Wei, Nan; Wang, Meng; Xia, Yang; Deng, Yun; Ye, Yong-Hong
2018-01-01
Microsphere-assisted imaging is an extraordinarily simple technology that can obtain optical super-resolution under white-light illumination. Here, we introduce a method to improve the resolution of a microsphere lens by increasing its numerical aperture. In our proposed structure, BaTiO3 glass (BTG) microsphere lenses are semi-immersed in a S1805 layer with a refractive index of 1.65, and the semi-immersed microspheres are then fully embedded in an elastomer with an index of 1.4. We experimentally demonstrate that this structure, in combination with a conventional optical microscope, can clearly resolve a two-dimensional 200-nm-diameter hexagonally close-packed (hcp) silica microsphere array. In contrast, the widely used structure in which BTG microsphere lenses are fully immersed in a liquid or elastomer cannot even resolve a 250-nm-diameter hcp silica microsphere array. The improvement in resolution with the proposed structure is due to an increase in the effective numerical aperture from semi-immersing the BTG microsphere lenses in a high-refractive-index S1805 layer. Our results will inform the design of microsphere-based high-resolution imaging systems.
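The resolution gain claimed here follows directly from the diffraction limit,

```latex
\mathrm{NA}_{\mathrm{eff}} = n\,\sin\theta, \qquad
d \approx \frac{\lambda}{2\,\mathrm{NA}_{\mathrm{eff}}},
```

so, with illustrative numbers (sin(theta) taken near 1), raising the immersion index from n = 1.4 to n = 1.65 improves the attainable resolution d by the factor 1.65/1.4, roughly 1.18, consistent with the semi-immersed structure resolving a finer array than the fully immersed one.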
Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging
NASA Astrophysics Data System (ADS)
Micó, Vicente; Zalevsky, Zeev
2010-07-01
Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolutions in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM, based on time and angular multiplexing of the sample's spatial frequency information, yielding the generation of a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolutions in both transversal and axial directions. The proposed approach can be applied when imaging essentially transparent (low-concentration dilutions) and static (slow dynamics) samples. Validation of the method for both a synthetic object (U.S. Air Force resolution test) to quantify the resolution improvement and a biological specimen (sperm cells biosample) is reported, showing the generation of high synthetic numerical aperture values working without lenses.
NASA Astrophysics Data System (ADS)
Liu, Hailiang; Wang, Zhongming
2017-01-01
We design an arbitrary-order free energy satisfying discontinuous Galerkin (DG) method for solving time-dependent Poisson-Nernst-Planck systems. Both the semi-discrete and fully discrete DG methods are shown to satisfy the corresponding discrete free energy dissipation law for positive numerical solutions. Positivity of numerical solutions is enforced by an accuracy-preserving limiter in reference to positive cell averages. Numerical examples are presented to demonstrate the high resolution of the numerical algorithm and to illustrate the proven properties of mass conservation, free energy dissipation, as well as the preservation of steady states.
Multi-slice ptychography with large numerical aperture multilayer Laue lenses
Ozturk, Hande; Yan, Hanfei; He, Yan; ...
2018-05-09
Here, the highly convergent x-ray beam focused by multilayer Laue lenses with large numerical apertures is used as a three-dimensional (3D) probe to image layered structures with an axial separation larger than the depth of focus. Instead of collecting weakly scattered high-spatial-frequency signals, the depth-resolving power is provided purely by the intense central cone diverged from the focused beam. Using the multi-slice ptychography method combined with the on-the-fly scan scheme, two layers of nanoparticles separated by 10 μm are successfully reconstructed with 8.1 nm lateral resolution and with a dwell time as low as 0.05 s per scan point. This approach obtains high-resolution images with extended depth of field, which paves the way for multi-slice ptychography as a high throughput technique for high-resolution 3D imaging of thick samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
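The discrete Hodge projection referred to above removes the gradient part of the velocity field after the advective update; a compact spectral stand-in for the paper's staggered finite difference version, assuming a periodic 2D grid, is:

```python
import numpy as np

def hodge_project(u, v, dx):
    """Discrete Hodge (Leray) projection of a periodic 2D velocity field
    onto its divergence-free part, done in Fourier space by subtracting
    the gradient component k (k . u_hat) / |k|^2."""
    n = u.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                              # avoid 0/0 for the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = kx*uh + ky*vh                         # the i factors cancel below
    uh -= kx*div/k2
    vh -= ky*div/k2
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real
```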
Speeding up N-body simulations of modified gravity: chameleon screening models
NASA Astrophysics Data System (ADS)
Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo
2017-02-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
High resolution surface plasmon microscopy for cell imaging
NASA Astrophysics Data System (ADS)
Argoul, F.; Monier, K.; Roland, T.; Elezgaray, J.; Berguiga, L.
2010-04-01
We introduce a new label-free high resolution microscopy method for cellular imaging. This method, called SSPM (scanning surface plasmon microscopy), pushes the resolution limit of surface plasmon resonance imaging (SPRi) down to sub-micron scales. High resolution SPRi is obtained by launching the surface plasmon with a high numerical aperture objective lens. The advantages of SSPM compared to other high resolution SPRi methods rely on three aspects: (i) the interferometric detection of the back-reflected light after plasmon excitation, (ii) the two-dimensional scanning of the sample for image reconstruction, and (iii) the radial polarization of light, enhancing both resolution and sensitivity. This microscope can afford a lateral resolution of ~150 nm in a liquid environment and ~200 nm in air. We present in this paper images of IMR90 fibroblasts obtained with SSPM in a dried environment. Internal compartments such as the nucleus, nucleolus, mitochondria, and cellular and nuclear membranes can be recognized without labelling. We propose an interpretation of the ability of SSPM to reveal high index contrast zones by a local decomposition of the V(Z) function describing the response of the SSPM.
Coincidental match of numerical simulation and physics
NASA Astrophysics Data System (ADS)
Pierre, B.; Gudmundsson, J. S.
2010-08-01
Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Therefore, accurate predictions of rapid pressure transients in pipelines using numerical simulations are critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
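A minimal sketch of the method-of-characteristics update that such water hammer codes are built on, for frictionless interior nodes at Courant number one (dx = a dt); the wave speed value is illustrative and friction terms are omitted for brevity:

```python
import numpy as np

def moc_interior(H, V, a=1200.0, g=9.81):
    """One frictionless method-of-characteristics step for interior nodes of
    a pipeline: along the C+/C- characteristics the invariants H +/- (a/g) V
    are carried from the neighbouring nodes, giving the new head H and
    velocity V at each interior point."""
    B = a/g
    Hn, Vn = H.copy(), V.copy()
    Hn[1:-1] = 0.5*((H[:-2] + H[2:]) + B*(V[:-2] - V[2:]))
    Vn[1:-1] = 0.5*((V[:-2] + V[2:]) + (H[:-2] - H[2:])/B)
    return Hn, Vn
```

Boundary nodes (valve, reservoir) each replace one of the two characteristic relations with a boundary condition.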
A class of high resolution explicit and implicit shock-capturing methods
NASA Technical Reports Server (NTRS)
Yee, H. C.
1989-01-01
An attempt is made to give a unified and generalized formulation of a class of high resolution, explicit and implicit shock-capturing methods, and to illustrate their versatility in various steady and unsteady complex shock wave computations. Included is a systematic review of the basic design principles of the various related numerical methods. Special emphasis is on the construction of the basic nonlinear, spatially second- and third-order schemes for nonlinear scalar hyperbolic conservation laws and the methods of extending these nonlinear scalar schemes to nonlinear systems via approximate Riemann solvers and flux-vector splitting approaches. Generalization of these methods to efficiently include equilibrium real gases and large systems of nonequilibrium flows is discussed. Some issues concerning the applicability of these methods, which were designed for homogeneous hyperbolic conservation laws, to problems containing stiff source terms and shock waves are also included. The performance of some of these schemes is illustrated by numerical examples for 1-, 2- and 3-dimensional gas dynamics problems.
Obtaining high-resolution velocity spectra using weighted semblance
NASA Astrophysics Data System (ADS)
Ebrahimi, Saleh; Kahoo, Amin Roshandel; Porsani, Milton J.; Kalateh, Ali Nejati
2017-02-01
Velocity analysis employs coherency measurement along a hyperbolic or non-hyperbolic trajectory time window to build velocity spectra. Accuracy and resolution are strictly related to the method of coherency measurement. Semblance, the most common coherence measure, has poor velocity resolution, which affects one's ability to distinguish and pick distinct peaks. Increasing the resolution of the semblance velocity spectra improves the accuracy of the velocities estimated for normal moveout correction and stacking. The low resolution of semblance spectra stems from its low sensitivity to velocity changes. In this paper, we present a new weighted semblance method that ensures high-resolution velocity spectra. To increase the resolution of the semblance spectra, we introduce into the semblance equation two weighting functions, based on the ratio of the first to second singular values of the time window and on the position of the seismic wavelet in the time window. We test the method on both synthetic and real field data to compare the resolution of the weighted and conventional semblance methods. Numerical examples with synthetic and real seismic data indicate that the proposed weighted semblance method provides higher resolution than conventional semblance and can separate reflectors which are mixed in the semblance spectrum.
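The conventional semblance that the paper's weighting functions modify is the ratio of stacked energy to total energy within a sliding time window; a direct implementation over an NMO-corrected gather (samples x traces) is sketched below, the weighted variant then multiplying in the two proposed weights:

```python
import numpy as np

def semblance(gather, window=11):
    """Conventional semblance of an NMO-corrected CMP gather, array of shape
    (n_samples, n_traces): stacked energy over total energy, smoothed over a
    sliding time window."""
    num = gather.sum(axis=1)**2                      # energy of the stack
    den = gather.shape[1] * (gather**2).sum(axis=1)  # total trace energy
    ker = np.ones(window)
    return (np.convolve(num, ker, 'same') /
            (np.convolve(den, ker, 'same') + 1e-12))
```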
A high-resolution Godunov method for compressible multi-material flow on overlapping grids
NASA Astrophysics Data System (ADS)
Banks, J. W.; Schwendeman, D. W.; Kapila, A. K.; Henshaw, W. D.
2007-04-01
A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform-pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on the Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.
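For reference, the JWL form used for each component relates the pressure to the relative volume v and the internal energy; a direct transcription, with commonly quoted TNT-product constants included purely for illustration, is:

```python
import math

def jwl_pressure(v, e, A=371.2, B=3.231, R1=4.15, R2=0.95, omega=0.30):
    """Jones-Wilkins-Lee equation of state: pressure (GPa) from relative
    volume v and internal energy e per unit reference volume (GPa).
    Constants are commonly quoted TNT-product values, for illustration."""
    return (A*(1.0 - omega/(R1*v))*math.exp(-R1*v)
            + B*(1.0 - omega/(R2*v))*math.exp(-R2*v)
            + omega*e/v)
```

An ideal gas is recovered in the limit A = B = 0, where p = omega*e/v with omega = gamma - 1, consistent with the abstract's remark that the mixture EOS includes ideal gases as a special case.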
NASA Astrophysics Data System (ADS)
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory (WENO) scheme, and the monotone upstream-centered scheme for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match well with the Buckley-Leverett (BL) analytical solution without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique, and the iterative implicit pressure, explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative study with numerical examples of flow transport through the proposed method, TBM, and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. The finite number of lines used in the TBM resulted in a visual banding artifact, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in developing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1998-01-01
A new high resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods, and was built from ground zero with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.
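The integral form that the method models directly can be written, for each conserved quantity u_m with flux f_m, as a flux balance over the closed boundary of any space-time region V:

```latex
\oint_{S(V)} \mathbf{h}_m \cdot d\mathbf{s} = 0,
\qquad \mathbf{h}_m = \big(f_m(u),\, u_m\big),
```

where h_m is the space-time flux vector; the differential form, the divergence statement \partial_t u_m + \partial_x f_m = 0, follows only where the solution is smooth, which is exactly the distinction the first belief above rests on.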
Resolving the fine-scale structure in turbulent Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Scheel, Janet D.; Emran, Mohammad S.; Schumacher, Jörg
2013-11-01
We present high-resolution direct numerical simulation studies of turbulent Rayleigh-Bénard convection in a closed cylindrical cell with an aspect ratio of one. The focus of our analysis is on the finest scales of convective turbulence, in particular the statistics of the kinetic energy and thermal dissipation rates in the bulk and the whole cell. The fluctuations of the energy dissipation field can directly be translated into a fluctuating local dissipation scale which is found to develop ever finer fluctuations with increasing Rayleigh number. The range of these scales as well as the probability of high-amplitude dissipation events decreases with increasing Prandtl number. In addition, we examine the joint statistics of the two dissipation fields and the consequences of high-amplitude events. We have also investigated the convergence properties of our spectral element method and have found that both dissipation fields are very sensitive to insufficient resolution. We demonstrate that global transport properties, such as the Nusselt number, and the energy balances are partly insensitive to insufficient resolution and yield correct results even when the dissipation fields are under-resolved. Our present numerical framework is also compared with high-resolution simulations which use a finite difference method. For most of the compared quantities the agreement is found to be satisfactory.
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid the singular numerical integration as well as the mesh generation of the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS, namely its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce a concept of origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
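To fix ideas, classical MFS for the 3D Laplace problem expands the potential in fundamental solutions 1/(4 pi r) and collocates the boundary data; a minimal dense-matrix sketch is below. In SBM the source points would coincide with the collocation nodes, so the singular diagonal entries of G are replaced by the origin intensity factors discussed above:

```python
import numpy as np

def mfs_laplace(boundary_pts, boundary_vals, source_pts):
    """Method of fundamental solutions for the Laplace equation: fit point
    sources 1/(4 pi r) at source_pts (placed off the boundary in classical
    MFS) so the potential matches boundary_vals at boundary_pts. Arrays are
    (N, 3); source and collocation counts are taken equal for a square solve."""
    diff = boundary_pts[:, None, :] - source_pts[None, :, :]
    G = 1.0/(4.0*np.pi*np.linalg.norm(diff, axis=2))   # fundamental solution
    coef = np.linalg.solve(G, boundary_vals)           # collocation system
    def potential(x):
        r = np.linalg.norm(x[None, :] - source_pts, axis=1)
        return (coef/(4.0*np.pi*r)).sum()
    return potential
```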
Dual-TRACER: High resolution fMRI with constrained evolution reconstruction.
Li, Xuesong; Ma, Xiaodong; Li, Lyu; Zhang, Zhe; Zhang, Xue; Tong, Yan; Wang, Lihong; Sen Song; Guo, Hua
2018-01-01
fMRI with high spatial resolution is beneficial for studies in psychology and neuroscience, but is limited by various factors such as prolonged imaging time, low signal-to-noise ratio and scarcity of advanced facilities. Compressed Sensing (CS) based methods for accelerating fMRI data acquisition are promising, and other advanced algorithms like k-t FOCUSS or PICCS have been developed to improve performance. This study aims to investigate a new method, Dual-TRACER, based on Temporal Resolution Acceleration with Constrained Evolution Reconstruction (TRACER), for accelerating fMRI acquisitions using a golden-angle variable density spiral. Both numerical simulations and in vivo experiments at 3T were conducted to evaluate and characterize this method. Results show that Dual-TRACER can provide functional images with a high spatial resolution (1×1 mm²) under an acceleration factor of 20 while maintaining hemodynamic signals well. Compared with other investigated methods, Dual-TRACER provides better signal recovery, higher fMRI sensitivity and more reliable activation detection.
NASA Astrophysics Data System (ADS)
Boxi, Lin; Chao, Yan; Shusheng, Chen
2017-10-01
This work focuses on the numerical dissipation features of the high-order flux reconstruction (FR) method combined with different numerical fluxes in turbulent flows. The well-known Roe and AUSM+ numerical fluxes, together with their corresponding low-dissipation enhanced versions (LMRoe, SLAU2) and higher-resolution variants (HR-LMRoe, HR-SLAU2), are incorporated into the FR framework, and the dissipation interplay of these combinations is investigated in implicit large eddy simulation. The numerical dissipation stemming from these convective numerical fluxes is quantified by simulating the inviscid Gresho vortex, the transitional Taylor-Green vortex and homogeneous decaying isotropic turbulence. The results suggest that the low-dissipation enhanced versions are preferable to their original forms in both high-order and low-order cases, while the use of HR-SLAU2 brings only marginal improvements and HR-LMRoe leads to degraded solutions at high order. At high order the effects of the numerical fluxes are reduced, and their viscosity may not be dissipative enough to provide physically consistent turbulence when under-resolved.
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
NASA Astrophysics Data System (ADS)
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in locations of fine-scale development and in locations of smooth solution behavior, respectively. The algorithm is based on mathematically well established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of the atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude fewer grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
Numerical computation of linear instability of detonations
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry; Kasimov, Aslan
2017-11-01
We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
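The Dynamic Mode Decomposition step that turns the computed time series into dispersion relations can be sketched in a few lines (exact DMD with rank truncation; the snapshot matrix holds successive states in its columns):

```python
import numpy as np

def dmd_eigs(X, r=10):
    """Exact dynamic mode decomposition: from a snapshot matrix X of shape
    (n_states, n_times), estimate the eigenvalues of the linear operator
    mapping each snapshot to the next via a rank-r SVD projection."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atil = U.conj().T @ X2 @ Vh.conj().T / s    # projected operator
    return np.linalg.eigvals(Atil)
```

Growth rates and frequencies then follow as log(lambda)/dt for each eigenvalue lambda, which is what gets assembled into the dispersion relation.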
Evaluation of coarse scale land surface remote sensing albedo product over rugged terrain
NASA Astrophysics Data System (ADS)
Wen, J.; Xinwen, L.; You, D.; Dou, B.
2017-12-01
Satellite-derived land surface albedo is an essential climate variable which controls the Earth's energy budget, and it is used in applications such as climate change, hydrology, and numerical weather prediction. The accuracy and uncertainty of surface albedo products should be evaluated against reliable reference truth data prior to such applications. Most of the literature has investigated validation methods for albedo over flat or homogeneous surfaces; the performance of albedo products over rugged terrain, however, is still unknown because validation methods there are limited. A multi-validation strategy is implemented to give a comprehensive albedo evaluation, involving high-resolution albedo processing, validation of the high-resolution albedo against in situ albedo, and a method to upscale the high-resolution albedo to the coarse scale. Among these, the high-resolution albedo generation and the upscaling method are the core steps for coarse-scale albedo validation. In this paper, the high-resolution albedo is generated by the Angular Bin algorithm, and an albedo upscaling method over rugged terrain is developed to obtain the coarse-scale albedo truth. In situ albedo measurements from 40 sites in mountainous areas are selected globally to validate the high-resolution albedo, which is then upscaled to the coarse scale by the upscaling method. Taking the MODIS and GLASS albedo products as examples, the preliminary results show RMSEs over rugged terrain of 0.047 and 0.057, respectively, compared with an RMSE of 0.036 for the high-resolution albedo.
NASA Astrophysics Data System (ADS)
Font, J. A.; Ibanez, J. M.; Marti, J. M.
1993-04-01
Some numerical solutions describing multidimensional flows have been obtained via a local characteristic approach. These solutions have been used as tests of a two-dimensional code which extends some high-resolution shock-capturing methods designed recently to solve nonlinear hyperbolic systems of conservation laws. Key words: HYDRODYNAMICS - BLACK HOLE - RELATIVITY - SHOCK WAVES
Goodman, Thomas C.; Hardies, Stephen C.; Cortez, Carlos; Hillen, Wolfgang
1981-01-01
Computer programs are described that direct the collection, processing, and graphical display of numerical data obtained from high resolution thermal denaturation (1-3) and circular dichroism (4) studies. Besides these specific applications, the programs may also be useful, either directly or as programming models, in other types of spectrophotometric studies employing computers, programming languages, or instruments similar to those described here (see Materials and Methods). PMID:7335498
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen
2016-11-01
In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
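The communication advantage of a Chebyshev-type iteration is that, given eigenvalue bounds for the (preconditioned) operator, it needs no inner products and hence no global reductions; a minimal unpreconditioned sketch for a symmetric positive definite system is:

```python
import numpy as np

def chebyshev(A_mul, b, lmin, lmax, iters=100):
    """Chebyshev iteration for an SPD system: only matrix-vector products
    and vector updates, no dot products (so no global communication beyond
    halo exchanges on a parallel machine). Requires eigenvalue bounds
    [lmin, lmax]; A_mul applies the operator."""
    d, c = 0.5*(lmax + lmin), 0.5*(lmax - lmin)
    x = np.zeros_like(b)
    r = b - A_mul(x)
    p = np.zeros_like(b)
    alpha = 0.0
    for i in range(iters):
        if i == 0:
            p, alpha = r.copy(), 1.0/d
        else:
            beta = (0.5*c*alpha)**2          # uses the previous alpha
            alpha = 1.0/(d - beta/alpha)
            p = r + beta*p
        x = x + alpha*p
        r = r - alpha*A_mul(p)
    return x
```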
NASA Technical Reports Server (NTRS)
Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)
1993-01-01
The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.
High-speed X-ray microscopy by use of high-resolution zone plates and synchrotron radiation.
Hou, Qiyue; Wang, Zhili; Gao, Kun; Pan, Zhiyun; Wang, Dajiang; Ge, Xin; Zhang, Kai; Hong, Youli; Zhu, Peiping; Wu, Ziyu
2012-09-01
X-ray microscopy based on synchrotron radiation has become a fundamental tool in biology and life sciences to visualize the morphology of a specimen. These studies have particular requirements in terms of radiation damage and the image exposure time, which directly determines the total acquisition speed. To monitor and improve these key parameters, we present a novel X-ray microscopy method using a high-resolution zone plate as the objective and the matching condenser. Numerical simulations based on scalar wave field theory validate the feasibility of the method and also indicate that the performance of X-ray microscopy is best optimized with sub-10-nm-resolution zone plates. The proposed method is compatible with conventional X-ray microscopy techniques, such as computed tomography, and will find wide applications in time-resolved and/or dose-sensitive studies such as living cell imaging.
Hu, Zhen-Hua; Huang, Teng; Wang, Ying-Ping; Ding, Lei; Zheng, Hai-Yang; Fang, Li
2011-06-01
Using the sun as the radiation source, near-infrared high-resolution absorption spectroscopy is widely used in remote sensing of atmospheric parameters. The present paper takes retrieval of the CO2 concentration as an example and studies the effect of solar spectral resolution. CO2 concentrations are retrieved from high-resolution absorption spectra by a method that uses the program provided by AER to calculate the solar spectrum at the top of the atmosphere as the radiation source, combined with HRATS (high resolution atmospheric transmission simulation) to simulate the retrieval. Numerical simulation shows that the accuracy of the solar spectrum is important to the retrieval, especially in hyper-resolution spectral retrieval, and that the error of the retrieved concentration correlates only weakly and linearly with the resolution of the observation; there is, however, a tendency that lower observational resolution requires lower resolution of the solar spectrum. To retrieve the atmospheric CO2 concentration, full advantage should be taken of the high-resolution solar spectrum at the top of the atmosphere.
NASA Astrophysics Data System (ADS)
de Smet, Jeroen H.; van den Berg, Arie P.; Vlaar, Nico J.; Yuen, David A.
2000-03-01
Purely advective transport of composition is of major importance in the Geosciences, and efficient and accurate solution methods are needed. A characteristics-based method is used to solve the transport equation. We employ a new hybrid interpolation scheme, which allows for the tuning of stability and accuracy through a threshold parameter ɛth. Stability is established by bilinear interpolations, and bicubic splines are used to maintain accuracy. With this scheme, numerical instabilities can be suppressed by allowing numerical diffusion to work in time and locally in space. The scheme can be applied efficiently for preliminary modelling purposes. This can be followed by detailed high-resolution experiments. First, the principal effects of this hybrid interpolation method are illustrated and some tests are presented for numerical solutions of the transport equation. Second, we illustrate that this approach works successfully for a previously developed continental evolution model for the convecting upper mantle. In this model the transport equation contains a source term, which describes the melt production in pressure-released partial melting. In this model, a characteristic phenomenon of small-scale melting diapirs is observed (De Smet et al. 1998; De Smet et al. 1999). High-resolution experiments with grid cells down to 700 m horizontally and 515 m vertically result in highly detailed observations of the diapiric melting phenomenon.
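As a rough illustration of such a stability/accuracy switch, the one-dimensional sketch below interpolates with a cubic spline where the data are smooth and falls back to linear interpolation wherever a local oscillation measure exceeds a threshold eps_th. The smoothness sensor (a normalized second difference) and the switching rule are illustrative assumptions, not the authors' exact criterion.

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

def hybrid_interp(x_nodes, f_nodes, x_query, eps_th=0.5):
    """Hybrid interpolation: accurate cubic spline where the data are smooth,
    diffusive linear interpolation where a local roughness measure exceeds eps_th."""
    cubic = CubicSpline(x_nodes, f_nodes)
    linear = interp1d(x_nodes, f_nodes)
    # Normalized second difference as a crude local smoothness sensor.
    rough = np.zeros_like(f_nodes)
    rough[1:-1] = np.abs(np.diff(f_nodes, 2)) / (np.max(np.abs(f_nodes)) + 1e-30)
    xq = np.asarray(x_query, dtype=float)
    out = np.empty_like(xq)
    for i, x in enumerate(xq):
        j = np.clip(np.searchsorted(x_nodes, x) - 1, 1, len(x_nodes) - 3)
        local = max(rough[j], rough[j + 1])
        out[i] = linear(x) if local > eps_th else cubic(x)
    return out

# Usage: smooth region interpolated cubically, discontinuous region linearly.
x = np.linspace(0.0, 1.0, 41)
f = np.where(x < 0.5, np.sin(2 * np.pi * x), 2.0)   # smooth, then a jump
fi = hybrid_interp(x, f, np.linspace(0.05, 0.95, 300), eps_th=0.1)
```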
NASA Technical Reports Server (NTRS)
Mankbadi, M. R.; Georgiadis, N. J.; DeBonis, J. R.
2015-01-01
The objective of this work is to compare a high-order solver with a low-order solver for performing large-eddy simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.
Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.
Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner
2011-09-26
Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been realized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of the numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes. Constraints in both the object and the detector plane are applied. While the constraint in the detector plane employed in most phase retrieval methods requires the amplitude of the complex wave to be equal to the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
Correction of eddy current distortions in high angular resolution diffusion imaging.
Zhuang, Jiancheng; Lu, Zhong-Lin; Vidal, Christine Bouteiller; Damasio, Hanna
2013-06-01
To correct distortions caused by eddy currents induced by large diffusion gradients during high angular resolution diffusion imaging without any auxiliary reference scans. Image distortion parameters were obtained by image coregistration, performed only between diffusion-weighted images with close diffusion gradient orientations. A linear model that describes distortion parameters (translation, scale, and shear) as a function of diffusion gradient directions was numerically computed to allow individualized distortion correction for every diffusion-weighted image. The assumptions of the algorithm were successfully verified in a series of experiments on phantom and human scans. Application of the proposed algorithm in high angular resolution diffusion images markedly reduced eddy current distortions when compared to results obtained with previously published methods. The method can correct eddy current artifacts in the high angular resolution diffusion images, and it avoids the problematic procedure of cross-correlating images with significantly different contrasts resulting from very different gradient orientations or strengths. Copyright © 2012 Wiley Periodicals, Inc.
Improving PET spatial resolution and detectability for prostate cancer imaging
NASA Astrophysics Data System (ADS)
Bal, H.; Guerin, L.; Casey, M. E.; Conti, M.; Eriksson, L.; Michel, C.; Fanti, S.; Pettinato, C.; Adler, S.; Choyke, P.
2014-08-01
Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity is discussed, together with the latest developments in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions, inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state-of-the-art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%.
Computer synthesis of high resolution electron micrographs
NASA Technical Reports Server (NTRS)
Nathan, R.
1976-01-01
Specimen damage, spherical aberration, low contrast and noisy sensors combine to prevent direct atomic viewing in a conventional electron microscope. The paper describes two methods for obtaining ultra-high resolution in biological specimens under the electron microscope. The first method assumes the physical limits of the electron objective lens and uses a series of dark field images of biological crystals to obtain direct information on the phases of the Fourier diffraction maxima; this information is used in an appropriate computer to synthesize a large aperture lens for 1-Å resolution. The second method assumes there is sufficient amplitude scatter from images recorded in focus which can be utilized with a sensitive densitometer and computer contrast stretching to yield fine structure image details. Cancer virus characterization is discussed as an illustrative example. Numerous photographs supplement the text.
High-order centered difference methods with sharp shock resolution
NASA Technical Reports Server (NTRS)
Gustafsson, Bertil; Olsson, Pelle
1994-01-01
In this paper we consider high-order centered finite difference approximations of hyperbolic conservation laws. We propose different ways of adding artificial viscosity to obtain sharp shock resolution. For the Riemann problem we give simple explicit formulas for obtaining stationary one- and two-point shocks. This can be done for any order of accuracy. It is shown that the addition of artificial viscosity is equivalent to ensuring the Lax k-shock condition. We also show numerical experiments that verify the theoretical results.
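A minimal version of the idea for linear advection: a fourth-order centered difference for the convective term plus an even-order artificial-viscosity difference that damps the highest-frequency modes. The particular stencil, coefficient, and RK4 time stepping below are illustrative choices, not the specific formulas derived in the paper.

```python
import numpy as np

def rhs(u, a, dx, eps):
    """Fourth-order centered advection term plus fourth-difference
    artificial viscosity (periodic grid; eps > 0 is dissipative)."""
    ux = (np.roll(u, 2) - 8*np.roll(u, 1) + 8*np.roll(u, -1) - np.roll(u, -2)) / (12*dx)
    d4 = np.roll(u, 2) - 4*np.roll(u, 1) + 6*u - 4*np.roll(u, -1) + np.roll(u, -2)
    return -a*ux - (eps/dx)*d4      # -(eps/dx)*delta^4 u damps sawtooth modes

def rk4_step(u, dt, a, dx, eps):
    k1 = rhs(u, a, dx, eps)
    k2 = rhs(u + 0.5*dt*k1, a, dx, eps)
    k3 = rhs(u + 0.5*dt*k2, a, dx, eps)
    k4 = rhs(u + dt*k3, a, dx, eps)
    return u + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Usage: advect a step profile; eps controls the sharpness/oscillation trade-off.
n = 400
dx, a = 1.0/n, 1.0
u = np.where(np.arange(n)*dx < 0.5, 1.0, 0.0)
dt = 0.4*dx/a
for _ in range(200):
    u = rk4_step(u, dt, a, dx, eps=0.005)
```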
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from measurements of neutron-induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed in order both to facilitate the memory management of the resolution function matrix and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to a system of size 10^5 × 10^5. However, the amplification of the uncertainties during the direct inversion procedure limits the applicability of the method to high-precision measurements of neutron-induced reactions.
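In outline, unfolding solves R y = m for the true yield y given the measured spectrum m and the resolution-function matrix R. The sketch below is a hedged illustration: it forms the normal equations and applies the "smallest but necessary modification" as a small diagonal shift so that the Cholesky factorization is guaranteed to exist. The shift size and the dense storage are our assumptions; the paper uses a special sparse storage scheme.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def unfold(R, m, shift=1e-10):
    """Direct unfolding of m = R @ y via normal equations and Cholesky.

    A tiny diagonal shift (scaled by the largest diagonal entry) makes the
    normal matrix strictly positive definite so the decomposition cannot fail.
    """
    N = R.T @ R
    N += shift * np.max(np.diag(N)) * np.eye(N.shape[0])
    c, low = cho_factor(N)
    return cho_solve((c, low), R.T @ m)

# Usage with a toy Gaussian-blur resolution matrix.
n = 300
i = np.arange(n)
R = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
R /= R.sum(axis=1, keepdims=True)
y_true = np.sin(i / 15.0) ** 2
y_rec = unfold(R, R @ y_true)
```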
High-order ENO schemes applied to two- and three-dimensional compressible flow
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley
1991-01-01
High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity resolutions, acquired from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, considering a fixed moving-horizon window as input to the wavelet filter. Because it uses wavelet filters, it can be implemented as a parallel procedure. By this method the velocity is estimated numerically without the high noise of differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems by Direct Velocity Feedback (DVF) methods. The method also allows building velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by long-time integration of the estimated velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
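A rough sketch of the complementary fusion with PyWavelets: the approximation (low-resolution) coefficients come from differentiated position, which is drift-free, and the detail (high-resolution) coefficients come from integrated acceleration, which is free of differentiation noise. The wavelet, decomposition level, and plain cumulative-sum integrator are illustrative assumptions, not the paper's exact filter bank.

```python
import numpy as np
import pywt

def wce_velocity(pos, acc, dt, wavelet="db4", level=4):
    """Wavelet complementary velocity estimate over one moving-horizon window.

    Differentiated position is trusted at low frequency (no drift),
    integrated acceleration at high frequency (no differentiation noise).
    """
    v_diff = np.gradient(pos, dt)          # noisy at high frequency
    v_int = np.cumsum(acc) * dt            # drifts at low frequency
    v_int += v_diff[0] - v_int[0]          # crude initial-condition alignment
    cd = pywt.wavedec(v_diff, wavelet, level=level)
    ci = pywt.wavedec(v_int, wavelet, level=level)
    fused = [cd[0]] + ci[1:]               # approximation from position, details from acceleration
    return pywt.waverec(fused, wavelet)[: len(pos)]

# Usage on synthetic sinusoidal motion with sensor imperfections.
t = np.arange(0, 2.0, 1e-3)
pos = np.sin(2 * np.pi * 3 * t) + 1e-3 * np.random.randn(t.size)
acc = -(2 * np.pi * 3) ** 2 * np.sin(2 * np.pi * 3 * t) + 0.05   # constant bias
v = wce_velocity(pos, acc, 1e-3)
```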
Achieving superresolution with illumination-enhanced sparsity.
Yu, Jiun-Yann; Becker, Stephen R; Folberth, James; Wallin, Bruce F; Chen, Simeng; Cogswell, Carol J
2018-04-16
Recent advances in superresolution fluorescence microscopy have been limited by a belief that surpassing two-fold resolution enhancement of the Rayleigh resolution limit requires stimulated emission or the fluorophore to undergo state transitions. Here we demonstrate a new superresolution method that requires only image acquisitions with a focused illumination spot and computational post-processing. The proposed method utilizes the focused illumination spot to effectively reduce the object size and enhance the object sparsity, and consequently increases the resolution and accuracy through nonlinear image post-processing. This method clearly resolves 70 nm resolution test objects emitting ~530 nm light with a 1.4 numerical aperture (NA) objective, and, when imaging through a 0.5 NA objective, exhibits high spatial frequencies comparable to a 1.4 NA widefield image, both demonstrating a resolution enhancement above two-fold of the Rayleigh resolution limit. More importantly, we examine how the resolution increases with photon numbers, and show that the more-than-two-fold enhancement is achievable with realistic photon budgets.
Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; ...
2016-01-01
Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives and a fourth-order Runge-Kutta method are denoted.
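To illustrate the filter step in isolation, the sketch below applies an eighth-order linear dissipation (in the spirit of AD8) weighted by a simple normalized sensor after a base-scheme step. The second-difference sensor stands in for the ACM/wavelet detectors, and the linear dissipation stands in for the nonlinear dissipative portion of a shock-capturing scheme; both substitutions are ours, for brevity.

```python
import numpy as np

def filter_step(u, eps=0.002):
    """Post-step adaptive filter: eighth-difference linear dissipation
    scaled by a local flow-feature sensor in [0, 1] (periodic grid)."""
    # Eighth difference, stencil [1, -8, 28, -56, 70, -56, 28, -8, 1].
    d8 = (np.roll(u, 4) - 8*np.roll(u, 3) + 28*np.roll(u, 2) - 56*np.roll(u, 1)
          + 70*u
          - 56*np.roll(u, -1) + 28*np.roll(u, -2) - 8*np.roll(u, -3) + np.roll(u, -4))
    # Crude sensor: normalized second difference, large near shocks/shears.
    d2 = np.abs(np.roll(u, 1) - 2*u + np.roll(u, -1))
    sensor = d2 / (np.max(d2) + 1e-30)
    return u - eps * sensor * d8   # dissipative: damps the highest-frequency modes

# Usage: u = filter_step(u) after each full time step of the base scheme.
```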
Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri
2018-07-01
This paper deals with the implementation of high order semi-Lagrangian particle methods to handle high dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods and the paper focuses on specific algorithmic features to handle large dimensions. The methods are tested with uniform particle distributions in particular against a recent multi-resolution wavelet based method on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspective for accurate simulations of high dimensional kinetic equations by particle methods.
High-resolution method for evolving complex interface networks
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.
NASA Astrophysics Data System (ADS)
Renko, Tanja; Ivušić, Sarah; Telišman Prtenjak, Maja; Šoljan, Vinko; Horvat, Igor
2018-03-01
In this study, a synoptic and mesoscale analysis was performed and Szilagyi's waterspout forecasting method was tested on ten waterspout events in the period of 2013-2016. Data regarding waterspout occurrences were collected from weather stations, an online survey at the official website of the National Meteorological and Hydrological Service of Croatia and eyewitness reports from newspapers and the internet. Synoptic weather conditions were analyzed using surface pressure fields, 500 hPa level synoptic charts, SYNOP reports and atmospheric soundings. For all observed waterspout events, a synoptic type was determined using the 500 hPa geopotential height chart. The occurrence of lightning activity was determined from the LINET lightning database, and waterspouts were divided into thunderstorm-related and "fair weather" ones. Mesoscale characteristics (with a focus on thermodynamic instability indices) were determined using the high-resolution (500 m grid length) mesoscale numerical weather model and model results were compared with the available observations. Because thermodynamic instability indices are usually insufficient for forecasting waterspout activity, the performance of the Szilagyi Waterspout Index (SWI) was tested using vertical atmospheric profiles provided by the mesoscale numerical model. The SWI successfully forecasted all waterspout events, even the winter events. This indicates that the Szilagyi's waterspout prognostic method could be used as a valid prognostic tool for the eastern Adriatic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassab, A.J.; Pollard, J.E.
An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
Mariappan, Leo; Hu, Gang; He, Bin
2014-01-01
Purpose: Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. Methods: In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Results: The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experiment results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. Conclusions: The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction. PMID:24506649
Developing Local Scale, High Resolution, Data to Interface with Numerical Storm Models
NASA Astrophysics Data System (ADS)
Witkop, R.; Becker, A.; Stempel, P.
2017-12-01
High-resolution, physical storm models that can rapidly predict storm surge, inundation, rainfall, wind velocity and wave height at the intra-facility scale for any storm affecting Rhode Island have been developed by researchers at the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) (Ginis et al., 2017). At the same time, URI's Marine Affairs Department has developed methods that embed individual geographic points into GSO's models and enable the models to accurately incorporate local-scale, high-resolution data (Stempel et al., 2017). This combination allows URI's storm models to predict any storm's impacts on individual Rhode Island facilities in near real time. The research presented here determines how a coastal Rhode Island town's critical facility managers (FMs) perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale, and explores methods to elicit this information from FMs in a format usable for incorporation into URI's storm models.
Wide-aperture aspherical lens for high-resolution terahertz imaging
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Frolov, Maxim E.; Lebedev, Sergey P.; Reshetov, Igor V.; Spektor, Igor E.; Tolstoguzov, Viktor L.; Karasik, Valeriy E.; Khorokhorov, Alexei M.; Koshelev, Kirill I.; Schadko, Aleksander O.; Yurchenko, Stanislav O.; Zaytsev, Kirill I.
2017-01-01
In this paper, we introduce a wide-aperture aspherical lens for high-resolution terahertz (THz) imaging. The lens has been designed and analyzed by numerical methods of geometrical optics and electrodynamics. It was made of high-density polyethylene by shaping on a computer-controlled lathe and characterized using a continuous-wave THz imaging setup based on a backward-wave oscillator and a Golay detector. The concept of image contrast has been implemented to estimate image quality. According to the experimental data, the lens allows resolving two points spaced 0.95λ apart with a contrast of 15%. To highlight the high resolution of the THz images, the wide-aperture lens has been employed to study a printed electronic circuit board containing sub-wavelength-scale elements. The observed results justify the high efficiency of the proposed lens design.
Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
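For reference, the interface reconstruction at the heart of such schemes: the classical fifth-order Jiang-Shu WENO combination of three third-order stencils, whose adaptive weights are the source of the scheme's built-in (nonlinear) numerical viscosity. This is the textbook formulation, not code from the paper.

```python
import numpy as np

def weno5_face(u):
    """Fifth-order WENO (Jiang-Shu) reconstruction of u at interfaces i+1/2,
    upwind-biased to the left, on a periodic grid."""
    um2, um1, u0 = np.roll(u, 2), np.roll(u, 1), u
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    # Three third-order candidate stencils.
    p0 = (2*um2 - 7*um1 + 11*u0) / 6.0
    p1 = (-um1 + 5*u0 + 2*up1) / 6.0
    p2 = (2*u0 + 5*up1 - up2) / 6.0
    # Smoothness indicators: large on stencils crossing a discontinuity.
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 0.25*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 0.25*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 0.25*(3*u0 - 4*up1 + up2)**2
    eps = 1e-6
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
```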
Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion
NASA Astrophysics Data System (ADS)
Hesser, T.; Farthing, M. W.; Brodie, K.
2016-02-01
The bathymetry from the surfzone to the shoreline incurs frequent, active movement due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar, to inversion techniques applied to standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can vary in temporal and spatial resolution, may not match the desired model parameters, and therefore could reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys as well as alternative direct in-situ measurements using sonic altimeters.
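The analysis step of an ensemble Kalman filter inversion of this kind can be summarized compactly: each bathymetry ensemble member is corrected by the mismatch between observed and modeled wave parameters. The sketch below is a generic stochastic (perturbed-observation) EnKF update with an assumed linear observation operator H; it is not the specific implementation used in this study.

```python
import numpy as np

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step.

    X: (n_state, n_ens) forecast bathymetry ensemble
    y: (n_obs,) observed wave parameters
    H: (n_obs, n_state) observation operator
    R: (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + np.linalg.cholesky(R) @ np.random.randn(len(y), n_ens)
    return X + K @ (Y - H @ X)                     # perturbed-observation update

# Usage: 50-member ensemble, 5 observations of a 200-point depth profile.
rng = np.random.default_rng(0)
X = 2.0 + 0.3 * rng.standard_normal((200, 50))
H = np.zeros((5, 200))
H[np.arange(5), np.arange(5) * 40] = 1.0
Xa = enkf_update(X, y=np.full(5, 2.5), H=H, R=0.01 * np.eye(5))
```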
Gerbich, Therese M.; Rana, Kishan; Suzuki, Aussie; Schaefer, Kristina N.; Heppert, Jennifer K.; Boothby, Thomas C.; Allbritton, Nancy L.; Gladfelter, Amy S.; Maddox, Amy S.
2018-01-01
Fluorescence microscopy is a powerful approach for studying subcellular dynamics at high spatiotemporal resolution; however, conventional fluorescence microscopy techniques are light-intensive and introduce unnecessary photodamage. Light-sheet fluorescence microscopy (LSFM) mitigates these problems by selectively illuminating the focal plane of the detection objective by using orthogonal excitation. Orthogonal excitation requires geometries that physically limit the detection objective numerical aperture (NA), thereby limiting both light-gathering efficiency (brightness) and native spatial resolution. We present a novel live-cell LSFM method, lateral interference tilted excitation (LITE), in which a tilted light sheet illuminates the detection objective focal plane without a sterically limiting illumination scheme. LITE is thus compatible with any detection objective, including oil immersion, without an upper NA limit. LITE combines the low photodamage of LSFM with high resolution, high brightness, and coverslip-based objectives. We demonstrate the utility of LITE for imaging animal, fungal, and plant model organisms over many hours at high spatiotemporal resolution. PMID:29490939
Characterization of the geometry and topology of DNA pictured as a discrete collection of atoms
Olson, Wilma K.
2014-01-01
The structural and physical properties of DNA are closely related to its geometry and topology. The classical mathematical treatment of DNA geometry and topology in terms of ideal smooth space curves was not designed to characterize the spatial arrangements of atoms found in high-resolution and simulated double-helical structures. We present here new and rigorous numerical methods for the rapid and accurate assessment of the geometry and topology of double-helical DNA structures in terms of the constituent atoms. These methods are well designed for large DNA datasets obtained in detailed numerical simulations or determined experimentally at high-resolution. We illustrate the usefulness of our methodology by applying it to the analysis of three canonical double-helical DNA chains, a 65-bp minicircle obtained in recent molecular dynamics simulations, and a crystallographic array of protein-bound DNA duplexes. Although we focus on fully base-paired DNA structures, our methods can be extended to treat the geometry and topology of melted DNA structures as well as to characterize the folding of arbitrary molecules such as RNA and cyclic peptides. PMID:24791158
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guba, O.; Taylor, M. A.; Ullrich, P. A.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance, implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods
NASA Astrophysics Data System (ADS)
Lemoine, Grady
Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry
NASA Astrophysics Data System (ADS)
Al-Marouf, M.; Samtaney, R.
2017-05-01
We present an embedded ghost fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE-based multidimensional extrapolation approach is used to reconstruct the solution in the ghost fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.
VizieR Online Data Catalog: Spectroscopic analysis of 348 red giants (Zielinski+, 2012)
NASA Astrophysics Data System (ADS)
Zielinski, P.; Niedzielski, A.; Wolszczan, A.; Adamow, M.; Nowak, G.
2012-10-01
The atmospheric parameters were derived using a strictly spectroscopic method based on the LTE analysis of equivalent widths of FeI and FeII lines. With existing photometric data and the Hipparcos parallaxes, we estimated stellar masses and ages via evolutionary tracks fitting. The stellar radii were calculated from either estimated masses and the spectroscopic logg or from the spectroscopic Teff and estimated luminosities. The absolute radial velocities were obtained by cross-correlating spectra with a numerical template. Our high-quality, high-resolution optical spectra have been collected since 2004 with the Hobby-Eberly Telescope (HET), located in the McDonald Observatory. The telescope was equipped with the High Resolution Spectrograph (HRS; R~60000 resolution). (2 data files).
Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.
Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P
2015-02-20
Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
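The resolution-scaling role of zero-padding follows from the single-FFT Fresnel transform, whose reconstruction pixel pitch is λz/(NΔx): padding the hologram to a larger N shrinks the reconstruction pixel accordingly. The sketch below is a textbook single-FFT Fresnel reconstruction with a pad factor (constant phase prefactors dropped), not the authors' code.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, z, dx, pad=2):
    """Single-FFT Fresnel reconstruction with centered zero-padding.

    Padding the n x n hologram to N = pad*n scales the reconstruction
    pixel pitch d_xi = wavelength*z/(N*dx) down by the factor 'pad'.
    """
    n = hologram.shape[0]
    N = pad * n
    h = np.zeros((N, N), dtype=complex)
    s = (N - n) // 2
    h[s:s+n, s:s+n] = hologram                      # centered zero-padding
    x = (np.arange(N) - N / 2) * dx
    X, Y = np.meshgrid(x, x)
    chirp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(h * chirp)))
    d_xi = wavelength * z / (N * dx)                # reconstruction pixel pitch
    return field, d_xi
```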
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method for a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high resolution calculations for initial sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. Thus, we display features of the conformal factor for different masses, spins and linear momenta.
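The nonlinear solve described here follows a standard pattern, sketched generically below with SciPy's LU routines; `residual` and `jacobian` are placeholders standing in for the spectral-mode equations of the Hamiltonian constraint, not the paper's actual system.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_lu(residual, jacobian, c0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration for spectral coefficients c:
    solve J(c) dc = -F(c) by LU decomposition, then update c <- c + dc."""
    c = c0.copy()
    for _ in range(max_iter):
        F = residual(c)
        if np.linalg.norm(F) < tol:
            break
        lu, piv = lu_factor(jacobian(c))
        c += lu_solve((lu, piv), -F)
    return c

# Toy usage: two coupled nonlinear equations with solution (1, 2).
res = lambda c: np.array([c[0]**2 + c[1] - 3.0, c[0] + c[1]**2 - 5.0])
jac = lambda c: np.array([[2*c[0], 1.0], [1.0, 2*c[1]]])
c = newton_lu(res, jac, np.array([1.0, 1.0]))
```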
Joint denoising, demosaicing, and chromatic aberration correction for UHD video
NASA Astrophysics Data System (ADS)
Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank
2017-09-01
High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices. Achieving further resolution increases presents numerous challenges. Due to the reduced pixel size, the amount of collected light also decreases, leading to increased noise levels. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high quality lenses are used, some chromatic aberration artefacts remain. Noise levels additionally increase due to the higher frame rates. To reduce the complexity and the price of the camera, one sensor captures all three colors by relying on Color Filter Arrays. In order to obtain a full resolution color image, missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. In order to reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.
ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1990-01-01
For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
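In normalized-variable form the strategy is compact: compute a high-order face value, then clip it to nonoscillatory bounds. The sketch below applies such bounds to the third-order QUICK face value; the steepness cap beta is an illustrative stand-in for the precise universal-limiter constraints of the paper.

```python
def limited_face_value(phi_u, phi_c, phi_d, beta=4.0):
    """High-order (QUICK) face value clipped by universal-limiter-style
    bounds in normalized variables. phi_u/phi_c/phi_d are the upstream,
    central and downstream node values along the local flow direction."""
    den = phi_d - phi_u
    if abs(den) < 1e-30:
        return phi_c                       # degenerate interval: upwind value
    ct = (phi_c - phi_u) / den             # normalized central value
    if ct <= 0.0 or ct >= 1.0:
        return phi_c                       # non-monotonic data: first-order upwind
    ft = 0.75 * ct + 0.375                 # normalized QUICK face value
    ft = min(max(ft, ct), min(beta * ct, 1.0))  # monotonic + steepness bounds
    return phi_u + ft * den

# Usage: near a step the unlimited QUICK value would overshoot; here it is clipped.
val = limited_face_value(0.0, 0.9, 1.0)    # returns 1.0, not 1.05
```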
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
NASA Astrophysics Data System (ADS)
Alexandrov, S. V.; Vaganov, A. V.; Shalaev, V. I.
2016-10-01
Processes of vortex structure formation and their interactions with the boundary layer in hypersonic flow over a delta wing with blunted leading edges are analyzed on the basis of experimental investigations and numerical solutions of the Navier-Stokes equations. Physical mechanisms of longitudinal vortex formation, the appearance of abnormal zones with high heat fluxes, and early laminar-turbulent transition are studied. These phenomena were observed in many high-speed wind tunnel experiments; however, they were understood only through detailed analysis of high-resolution numerical modeling results. The presented results explain the experimental phenomena. The ANSYS CFX code (under the DAFE MIPT license) on a grid with 50 million nodes was used for the numerical modeling. The numerical method was verified by comparing calculated heat flux distributions on the wing surface with experimental data.
Gaussian representation of high-intensity focused ultrasound beams.
Soneson, Joshua E; Myers, Matthew R
2007-11-01
A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
Application of multi-grid method on the simulation of incremental forging processes
NASA Astrophysics Data System (ADS)
Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel
2016-10-01
Numerical simulation has become essential in manufacturing large parts by incremental forging processes. It is a powerful tool for revealing physical phenomena, but behind the scenes an expensive bill must be paid: computational time. That is why many techniques have been developed to decrease the computational time of numerical simulation. The multi-grid method is a numerical procedure that reduces the computational time by performing the resolution of the system of equations on several meshes of decreasing size, which allows both the low and the high frequencies of the solution to be smoothed faster. In this paper a multi-grid method is applied to a cogging process in the software Forge 3. The study is carried out using an increasing number of degrees of freedom. The results show that the calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with a multi-mesh method.
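A minimal two-grid correction cycle for a one-dimensional Poisson problem illustrates the principle: smooth the high-frequency error on the fine mesh, then eliminate the low-frequency error on a coarser one. This is a generic sketch, not the Forge 3 implementation.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0/3.0):
    """Weighted Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def two_grid(u, f, h, pre=3, post=3):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, pre)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)   # residual of -u'' = f
    rc = r[::2].copy()                        # restriction by injection
    n_c = rc.size
    # Exact coarse solve of -e'' = rc (dense, for brevity of the sketch).
    A = (2*np.eye(n_c-2) - np.eye(n_c-2, k=1) - np.eye(n_c-2, k=-1)) / (2*h)**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size) / 2.0, np.arange(n_c), ec)  # linear prolongation
    return jacobi(u + e, f, h, post)

# Usage: n must be even so the coarse grid nests in the fine one.
n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution is sin(pi*x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
```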
NASA Technical Reports Server (NTRS)
Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.
1975-01-01
The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.
Multibeam interferometric illumination as the primary source of resolution in optical microscopy
NASA Astrophysics Data System (ADS)
Ryu, J.; Hong, S. S.; Horn, B. K. P.; Freeman, D. M.; Mermelstein, M. S.
2006-04-01
High-resolution images of a fluorescent target were obtained using a low-resolution optical detector by illuminating the target with interference patterns produced with 31 coherent beams. The beams were arranged in a cone with 78° half angle to produce illumination patterns consistent with a numerical aperture of 0.98. High-resolution images were constructed from low-resolution images taken with 930 different illumination patterns. Results for optical detectors with numerical apertures of 0.1 and 0.2 were similar, demonstrating that the resolution is primarily determined by the illuminator and not by the low-resolution detector. Furthermore, the long working distance, large depth of field, and large field of view of the low-resolution detector are preserved.
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
NASA Technical Reports Server (NTRS)
Wang, P.; Li, P.
1998-01-01
A high-resolution numerical study on parallel systems is reported for three-dimensional, time-dependent, thermal convective flows. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.
Mariappan, Leo; Hu, Gang; He, Bin
2014-02-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction.
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
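A minimal sketch of the sinc-kernel family, assuming compact support on 0 ≤ q ≤ 2 and a numerically computed 3-D normalization (both common conventions, not necessarily the authors' exact definition); the exponent n is the single parameter that tunes interpolation quality:

    import numpy as np
    from scipy.integrate import quad

    def sinc_kernel(q, n):
        """Un-normalized sinc kernel sinc(q/2)**n on 0 <= q <= 2, where
        numpy's sinc(x) = sin(pi x)/(pi x); larger n gives a sharper kernel."""
        q = np.asarray(q, dtype=float)
        return np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)

    def normalization_3d(n):
        """B_n such that 4*pi * integral_0^2 B_n sinc_kernel(q, n) q^2 dq = 1."""
        integral, _ = quad(lambda q: float(sinc_kernel(q, n)) * q * q, 0.0, 2.0)
        return 1.0 / (4.0 * np.pi * integral)

    for n in (3, 5, 7):            # raising n sharpens the kernel profile
        print(n, normalization_3d(n))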
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Compartmentalized Low-Rank Recovery for High-Resolution Lipid Unsuppressed MRSI
Bhattacharya, Ipshita; Jacob, Mathews
2017-01-01
Purpose To introduce a novel algorithm for the recovery of high-resolution magnetic resonance spectroscopic imaging (MRSI) data with minimal lipid leakage artifacts, from dual-density spiral acquisition. Methods The reconstruction of MRSI data from dual-density spiral data is formulated as a compartmental low-rank recovery problem. The MRSI dataset is modeled as the sum of metabolite and lipid signals, each of which is support limited to the brain and extracranial regions, respectively, in addition to being orthogonal to each other. The reconstruction problem is formulated as an optimization problem, which is solved using iterative reweighted nuclear norm minimization. Results The comparisons of the scheme against dual-resolution reconstruction algorithm on numerical phantom and in vivo datasets demonstrate the ability of the scheme to provide higher spatial resolution and lower lipid leakage artifacts. The experiments demonstrate the ability of the scheme to recover the metabolite maps, from lipid unsuppressed datasets with echo time (TE)=55 ms. Conclusion The proposed reconstruction method and data acquisition strategy provide an efficient way to achieve high-resolution metabolite maps without lipid suppression. This algorithm would be beneficial for fast metabolic mapping and extension to multislice acquisitions. PMID:27851875
NASA Astrophysics Data System (ADS)
Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric
2017-08-01
The modeling of the polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios, where high-current and high-electron-density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem, in the framework of nonlinear free-boundary equilibrium coupled to the Stokes model equation for polarimetry, is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2¹⁰ times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
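The published ilFT algorithm is more elaborate, but the core idea of iteratively localizing the Fourier analysis can be sketched as follows: re-evaluate a discrete Fourier transform on a frequency band that shrinks around the current spectral peak. The band-shrinking rule and function names below are illustrative assumptions, not the published procedure.

    import numpy as np

    def local_dft_zoom(x, f_lo, f_hi, iters=6, bins=1024):
        """Refine the dominant frequency of x (in cycles/sample) by evaluating
        the DFT on a band that shrinks around the current spectral peak; the
        first grid should be at least as fine as the standard FFT grid."""
        n = np.arange(len(x))
        for _ in range(iters):
            freqs = np.linspace(f_lo, f_hi, bins)
            mags = np.abs(np.exp(-2j * np.pi * np.outer(freqs, n)) @ x)
            k = int(np.argmax(mags))
            margin = 2.0 * (f_hi - f_lo) / bins       # two-bin safety band
            f_lo, f_hi = freqs[k] - margin, freqs[k] + margin
        return 0.5 * (f_lo + f_hi)

    t = np.arange(1024)
    print(local_dft_zoom(np.cos(2 * np.pi * 0.1234567 * t), 0.0, 0.5))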
Numerical Methods Using B-Splines
NASA Technical Reports Server (NTRS)
Shariff, Karim; Merriam, Marshal (Technical Monitor)
1997-01-01
The seminar will discuss: (1) the current range of applications for which B-spline schemes may be appropriate; (2) the property of high resolution and the relationship between B-spline and compact schemes; (3) a comparison between finite-element, Hermite finite-element, and B-spline schemes; (4) mesh embedding using B-splines; and (5) a method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
Macro-actor execution on multilevel data-driven architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Najjar, W.
1988-12-31
The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution scheme and analyze the requirements for its actual hardware and software integration.
NASA Astrophysics Data System (ADS)
O'Neill, A.
2015-12-01
The Coastal Storm Modeling System (CoSMoS) is a numerical modeling scheme used to predict coastal flooding due to sea level rise and storms influenced by climate change, currently in use in central California and in development for Southern California (Pt. Conception to the Mexican border). Using a framework of circulation, wave, analytical, and Bayesian models at different geographic scales, high-resolution results are translated into relevant hazard projections at the local scale that include flooding, wave heights, coastal erosion, shoreline change, and cliff failures. Ready access to accurate, high-resolution coastal flooding data is critical for further validation and refinement of CoSMoS and improved coastal hazard projections. High-resolution Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) provides an exceptional data source, as appropriately timed flights during extreme tides or storms provide a geographically extensive method for determining areas of inundation and flooding extent along expanses of complex and varying coastline. Landward flood extents are numerically identified via edge detection in imagery from single flights, and can also be ascertained via change detection using additional flights and imagery collected during average wave/tide conditions. The extracted flooding positions are compared against CoSMoS results for similar tide, water level, and storm-intensity conditions, allowing for robust testing and validation of CoSMoS and providing essential feedback for supporting regional and local model improvement.
NASA Technical Reports Server (NTRS)
Yee, H. C.
1995-01-01
Two classes of explicit compact high-resolution shock-capturing methods for the multidimensional compressible Euler equations for fluid dynamics are constructed. Some of these schemes can be fourth-order accurate away from discontinuities. For the semi-discrete case their shock-capturing properties are of the total variation diminishing (TVD), total variation bounded (TVB), total variation diminishing in the mean (TVDM), essentially nonoscillatory (ENO), or positive type of scheme for 1-D scalar hyperbolic conservation laws and are positive schemes in more than one dimension. These fourth-order schemes require the same grid stencil as their second-order non-compact cousins. One class does not require the standard matrix inversion or a special numerical boundary condition treatment associated with typical compact schemes. Due to the construction, these schemes can be viewed as approximations to genuinely multidimensional schemes in the sense that they might produce less distortion in spherical type shocks and are more accurate in vortex type flows than schemes based purely on one-dimensional extensions. However, one class has a more desirable high-resolution shock-capturing property and a smaller operation count in 3-D than the other class. The extension of these schemes to coupled nonlinear systems can be accomplished using the Roe approximate Riemann solver, the generalized Steger and Warming flux-vector splitting or the van Leer type flux-vector splitting. Modification to existing high-resolution second- or third-order non-compact shock-capturing computer codes is minimal. High-resolution shock-capturing properties can also be achieved via a variant of the second-order Lax-Friedrichs numerical flux without the use of Riemann solvers for coupled nonlinear systems with comparable operations count to their classical shock-capturing counterparts. The simplest extension to viscous flows can be achieved by using the standard fourth-order compact or non-compact formula for the viscous terms.
Safrani, Avner; Abdulhalim, Ibrahim
2011-06-20
Longitudinal spatial coherence (LSC) is determined by the spatial frequency content of an optical beam. The use of lenses with a high numerical aperture (NA) in full-field optical coherence tomography together with a narrowband light source makes the LSC length much shorter than the temporal coherence length, suggesting that high-resolution 3D images of biological and multilayered samples can be obtained based on the low LSC. A simplified model, supported by experimental results, is derived that describes the expected interference output signal of multilayered samples when high-NA lenses are used together with a narrowband light source. An expression for the correction factor for layer thickness determination is found to be valid for high-NA objectives. Additionally, the method was applied to a strongly scattering layer, demonstrating its potential for high-resolution imaging of scattering media.
The effect of numerical methods on the simulation of mid-ocean ridge hydrothermal models
NASA Astrophysics Data System (ADS)
Carpio, J.; Braack, M.
2012-01-01
This work considers the effect of the numerical method on the simulation of a 2D model of hydrothermal systems located in the high-permeability axial plane of mid-ocean ridges. The behavior of hot plumes, formed in a porous medium between volcanic lava and the ocean floor, is very irregular due to convective instabilities. Therefore, we discuss and compare two different numerical methods for solving the mathematical model of this system. Specifically, we consider two ways to treat the temperature equation of the model: a semi-Lagrangian formulation of the advective terms in combination with a Galerkin finite element method for the parabolic part of the equations, and a stabilized finite element scheme. Both methods are very robust and accurate. However, due to physical instabilities in the system at high Rayleigh number, the effect of the numerical method is significant with regard to the temperature distribution at a given time instant. The good news is that relevant statistical quantities remain relatively stable and coincide for the two numerical schemes. The agreement is larger in the case of a mathematical model with constant water properties. In the case of a model with nonlinear dependence of the water properties on temperature and pressure, the agreement in the statistics is clearly less pronounced. Hence, the presented work accentuates the need for a strengthened validation of the compatibility between the numerical scheme (accuracy/resolution) and complex (realistic/nonlinear) models.
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
Numerical Hydrodynamics in Special Relativity.
Martí, José Maria; Müller, Ewald
2003-01-01
This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results of a set of demanding test bench simulations obtained with different numerical SRHD methods are compared. Three applications (astrophysical jets, gamma-ray bursts and heavy ion collisions) of relativistic flows are discussed. An evaluation of various SRHD methods is presented, and future developments in SRHD are analyzed involving extension to general relativistic hydrodynamics and relativistic magneto-hydrodynamics. The review further provides FORTRAN programs to compute the exact solution of a 1D relativistic Riemann problem with zero and nonzero tangential velocities, and to simulate 1D relativistic flows in Cartesian Eulerian coordinates using the exact SRHD Riemann solver and PPM reconstruction. Supplementary material is available for this article at 10.12942/lrr-2003-7 and is accessible for authorized users.
The proliferation of non-indigenous species is a world-wide issue. Environmental managers need improved methods of detecting and monitoring the distribution of such invaders over large areas. In recent decades, numerous estuaries of the Pacific Northwest USA have experienced th...
Final Report for "Numerical Methods and Studies of High-Speed Reactive and Non-Reactive Flows"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwendeman, D W
2002-11-20
The work carried out under this subcontract involved the development and use of an adaptive numerical method for the accurate calculation of high-speed reactive flows on overlapping grids. The flow is modeled by the reactive Euler equations with an assumed equation of state and with various reaction rate models. A numerical method has been developed to solve the nonlinear hyperbolic partial differential equations in the model. The method uses an unsplit, shock-capturing scheme with a Godunov-type flux computation and a Runge-Kutta error control scheme for the source term modeling the chemical reactions. An adaptive mesh refinement (AMR) scheme has been implemented in order to locally increase grid resolution. The numerical method uses composite overlapping grids to handle complex flow geometries. The code is part of the "Overture-OverBlown" framework of object-oriented codes [1, 2], and the development has occurred in close collaboration with Bill Henshaw and David Brown, and other members of the Overture team within CASC. During the period of this subcontract, a number of tasks were accomplished, including: (1) an extension of the numerical method to handle "ignition and growth" reaction models and a JWL equation of state; (2) an improvement in the efficiency of the AMR scheme and the error estimator; (3) the addition of a numerical dissipation scheme designed to suppress numerical oscillations/instabilities near expanding detonations and along grid overlaps; and (4) an exploration of the evolution to detonation in an annulus and of detonation failure in an expanding channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although it has not been recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
Numerical solution of special ultra-relativistic Euler equations using central upwind scheme
NASA Astrophysics Data System (ADS)
Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul
2018-06-01
This article is concerned with the numerical approximation of the one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations. These equations describe perfect fluid flow in terms of the particle density, the four-velocity, and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the considered scheme exploits specific information about local propagation speeds. By using a Runge-Kutta time stepping method and MUSCL-type initial reconstruction, we obtain second-order accuracy for the proposed scheme. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. For all the numerical test cases, our proposed scheme demonstrates very good agreement with the results obtained by well-established algorithms, even in the case of highly relativistic 2D test problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The robustness and efficiency of the central upwind scheme are demonstrated by the numerical results.
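As a hedged, scalar stand-in for the system treated in the paper, the sketch below combines a MUSCL/minmod reconstruction with a local-speed central-upwind (Rusanov-form) flux for Burgers' equation; the ultra-relativistic Euler system would replace the flux function and the local speed estimate.

    import numpy as np

    def minmod(a, b):
        """Slope limiter: zero at extrema, the smaller-magnitude slope otherwise."""
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def central_upwind_step(u, dx, dt):
        """One forward-Euler step for u_t + f(u)_x = 0 with f(u) = u^2/2
        (Burgers), MUSCL/minmod reconstruction, periodic boundaries, and a
        local-speed central-upwind (Rusanov-form) numerical flux."""
        flux = lambda v: 0.5 * v * v
        s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
        uL = u + 0.5 * s                                    # left state at i+1/2
        uR = np.roll(u - 0.5 * s, -1)                       # right state at i+1/2
        a = np.maximum(np.abs(uL), np.abs(uR))              # local speeds |f'(u)|
        F = 0.5 * (flux(uL) + flux(uR)) - 0.5 * a * (uR - uL)
        return u - dt / dx * (F - np.roll(F, 1))

    x = np.linspace(0.0, 1.0, 400, endpoint=False)
    u = np.where(x < 0.5, 1.0, 0.0)                         # Riemann initial data
    for _ in range(200):
        u = central_upwind_step(u, dx=x[1] - x[0], dt=0.001)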
NASA Astrophysics Data System (ADS)
Lu, Tong; Wang, Yihan; Gao, Feng; Zhao, Huijuan; Ntziachristos, Vasilis; Li, Jiao
2018-02-01
Photoacoustic mesoscopy (PAMe), offering high-resolution (sub-100-μm) and high optical contrast imaging at depths of 1-10 mm, generally collects massive data sets using a high-frequency focused ultrasonic transducer. The spatial impulse response (SIR) of this focused transducer distorts the measured signals in both duration and amplitude. Thus, a reconstruction method that accounts for the SIR needs to be investigated in a computationally economical way for PAMe. Here, we present a modified back-projection algorithm that introduces an SIR-dependent calibration process using a non-stationary convolution method. The proposed method is tested on numerical simulations and phantom experiments with microspheres of 50 μm and 100 μm diameter, and the improvement in image fidelity is shown to be evident in quantitative metrics. The results demonstrate that images reconstructed with the SIR of the transducer accounted for have a higher contrast-to-noise ratio and more reasonable spatial resolution than those from the common back-projection algorithm.
Satellite observed thermodynamics during FGGE
NASA Technical Reports Server (NTRS)
Smith, W. L.
1985-01-01
During the First Global Atmospheric Research Program (GARP) Global Experiment (FGGE), determinations of temperature and moisture were made from TIROS-N and NOAA-6 satellite infrared and microwave sounding radiance measurements. The data were processed by two methods differing principally in their horizontal resolution. At the National Earth Satellite Service (NESS) in Washington, D.C., the data were produced operationally with a horizontal resolution of 250 km for inclusion in the FGGE Level IIb data sets for application to large-scale numerical analysis and prediction models. High horizontal resolution (75 km) sounding data sets were produced using man-machine interactive methods for the special observing periods of FGGE at the NASA/Goddard Space Flight Center and archived as supplementary Level IIb. The procedures used for sounding retrieval and the characteristics and quality of these thermodynamic observations are given.
An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with the advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or higher for global models and 1 km or higher for regional models, and with ~60 vertical levels or higher). The need for running NWP models at high horizontal and vertical resolutions requires the implementation of non-hydrostatic dynamic core with a choice of horizontal grid configurations and vertical coordinates that are appropriate for high resolutions. Development of advanced numerics will also be needed for high resolution global and regional models, in particular, when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenges of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or higher, the use of much more detailed microphysics parameterizations than those currently used in NWP model will become important. Another example is that regional NWP models at ~1 km or higher only partially resolve convective energy containing eddies in the lower troposphere. Parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations for air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane predictions models. In this review presentation, the above issues will be elaborated on and the approaches to address them will be discussed.
NREL: International Activities - Pakistan Resource Maps
The high-resolution (1-km) annual wind power maps were developed using a numerical modeling approach along with NREL's empirical validation methodology. High-resolution (10-km) annual and seasonal maps and 40-km resolution annual (direct) maps are also available for download.
Gridless, pattern-driven point cloud completion and extension
NASA Astrophysics Data System (ADS)
Gravey, Mathieu; Mariethoz, Gregoire
2016-04-01
While satellites offer Earth observation with wide coverage, other remote sensing techniques such as terrestrial LiDAR can acquire very high-resolution data over an area that is limited in extent and often discontinuous due to shadow effects. Here we propose a numerical approach to merge these two types of information, thereby reconstructing high-resolution data over a continuous large area. It is based on a pattern matching process that completes the areas where only low-resolution data are available, using bootstrapped high-resolution patterns. Currently, the most common approach to pattern matching is to interpolate the point data on a grid. While this approach is computationally efficient, it presents major drawbacks for point cloud processing because a significant part of the information is lost in the point-to-grid resampling, and a prohibitive amount of memory is needed to store large grids. To address these issues, we propose a gridless method that compares point cloud subsets without the need for a grid. On-the-fly interpolation involves a heavy computational load, which is met by using a highly optimized GPU implementation and a hierarchical pattern searching strategy. The method is illustrated using data from the Val d'Arolla, Swiss Alps, where high-resolution terrestrial LiDAR data are fused with lower-resolution Landsat and WorldView-3 acquisitions, such that the density of points is homogenized (data completion) and extended to a larger area (data extension).
Ultra-sensitive magnetic microscopy with an optically pumped magnetometer
Kim, Young Jin; Savukov, Igor Mykhaylovich
2016-04-22
Optically pumped magnetometers (OPMs) based on lasers and alkali-metal vapor cells are currently the most sensitive non-cryogenic magnetic field sensors. Many applications in neuroscience and other fields require high-resolution, high-sensitivity magnetic microscopic measurements. In order to meet this demand we combined a cm-size spin-exchange relaxation-free (SERF) OPM and flux guides (FGs) to realize an ultra-sensitive FG-OPM magnetic microscope. The FGs serve to transmit the target magnetic flux to the OPM, thus improving both the resolution and the sensitivity to small magnetic objects. We investigated the performance of the FG-OPM device using experimental and numerical methods, and demonstrated that an optimized device can achieve a unique combination of high resolution (80 μm) and high sensitivity (8.1 pT/√Hz). Additionally, we performed numerical calculations of the magnetic field distribution in the FGs to estimate the magnetic noise originating from the domain fluctuations in the material of the FGs. We anticipate many applications of the FG-OPM device, such as the detection of micro-biological magnetic fields, the detection of magnetic nanoparticles, and non-destructive testing. From our theoretical estimate, an FG-OPM could detect the magnetic field of a single neuron, which would be an important milestone in neuroscience.
RIO: a new computational framework for accurate initial data of binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-06-01
We present a computational framework (Rio) in the ADM 3+1 approach to numerical relativity. This work enables us to carry out high-resolution calculations of initial data for two arbitrary black holes. We use the transverse conformal treatment and the Bowen-York and puncture methods. For the numerical solution of the Hamiltonian constraint we use domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition, and Gaussian quadratures. We show the convergence of the Rio code, which allows for easy deployment of large calculations. We show how the spin of one of the black holes manifests itself in the conformal factor.
NASA Astrophysics Data System (ADS)
Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-12-01
The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulence, combustion, and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high-resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would be activated at each stage of the Runge-Kutta time stepping. One could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensor. Here, the method is briefly described with selected numerical experiments.
Assessing resolution in live cell structured illumination microscopy
NASA Astrophysics Data System (ADS)
Pospíšil, Jakub; Fliegel, Karel; Klíma, Miloš
2017-12-01
Structured Illumination Microscopy (SIM) is a powerful super-resolution technique, which is able to enhance the resolution of optical microscope beyond the Abbe diffraction limit. In the last decade, numerous SIM methods that achieve the resolution of 100 nm in the lateral dimension have been developed. The SIM setups with new high-speed cameras and illumination pattern generators allow rapid acquisition of the live specimen. Therefore, SIM is widely used for investigation of the live structures in molecular and live cell biology. Quantitative evaluation of resolution enhancement in a real sample is essential to describe the efficiency of super-resolution microscopy technique. However, measuring the resolution of a live cell sample is a challenging task. Based on our experimental findings, the widely used Fourier ring correlation (FRC) method does not seem to be well suited for measuring the resolution of SIM live cell video sequences. Therefore, the resolution assessing methods based on Fourier spectrum analysis are often used. We introduce a measure based on circular average power spectral density (PSDca) estimated from a single SIM image (one video frame). PSDca describes the distribution of the power of a signal with respect to its spatial frequency. Spatial resolution corresponds to the cut-off frequency in Fourier space. In order to estimate the cut-off frequency from a noisy signal, we use a spectral subtraction method for noise suppression. In the future, this resolution assessment approach might prove useful also for single-molecule localization microscopy (SMLM) live cell imaging.
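A possible NumPy realization of the PSDca measure, assuming a square single-channel frame; the cut-off rule below uses a crude noise-floor threshold rather than the authors' spectral subtraction, so both the helper names and the estimator are illustrative.

    import numpy as np

    def psd_circular_average(frame):
        """Circular average power spectral density of one square SIM frame;
        returns radial spatial frequency (cycles/pixel) and the averaged PSD."""
        n = frame.shape[0]
        psd2d = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
        y, x = np.indices(frame.shape)
        r = np.hypot(x - n // 2, y - n // 2).astype(int).ravel()
        radial = np.bincount(r, weights=psd2d.ravel()) / np.bincount(r)
        return np.arange(radial.size) / n, radial

    def cutoff_frequency(freqs, psd, tail=0.1):
        """Crude cut-off estimate: the highest frequency whose PSD exceeds a
        noise floor taken from the outermost `tail` fraction of the spectrum."""
        floor = np.median(psd[int((1.0 - tail) * psd.size):])
        above = np.nonzero(psd > 2.0 * floor)[0]
        return freqs[above[-1]] if above.size else 0.0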
A Numerical Model for Trickle Bed Reactors
NASA Astrophysics Data System (ADS)
Propp, Richard M.; Colella, Phillip; Crutchfield, William Y.; Day, Marcus S.
2000-12-01
Trickle bed reactors are governed by equations of flow in porous media such as Darcy's law and the conservation of mass. Our numerical method for solving these equations is based on a total-velocity splitting, sequential formulation which leads to an implicit pressure equation and a semi-implicit mass conservation equation. We use high-resolution finite-difference methods to discretize these equations. Our solution scheme extends previous work in modeling porous media flows in two ways. First, we incorporate physical effects due to capillary pressure, a nonlinear inlet boundary condition, spatial porosity variations, and inertial effects on phase mobilities. In particular, capillary forces introduce a parabolic component into the recast evolution equation, and the inertial effects give rise to hyperbolic nonconvexity. Second, we introduce a modification of the slope-limiting algorithm to prevent our numerical method from producing spurious shocks. We present a numerical algorithm for accommodating these difficulties, show the algorithm is second-order accurate, and demonstrate its performance on a number of simplified problems relevant to trickle bed reactor modeling.
A quasi-spectral method for the Cauchy problem of the 2-D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as floating-point numbers with a finite number of hexadecimal digits. Accordingly, numerical analysis often suffers from rounding errors. Rounding errors particularly deteriorate the precision of numerical solutions in inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effects of rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we show the effectiveness of multi-precision arithmetic through two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well for the resolution of those numerical solutions when combined with the high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
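A small demonstration of the benefit claimed here, using the mpmath library (not Dr Fujiwara's system) and a Hilbert matrix as a stand-in ill-conditioned problem, contrasting double precision with 50-digit arithmetic:

    import numpy as np
    from mpmath import mp, matrix, hilbert, lu_solve

    # Hilbert systems are notoriously ill-conditioned: with the exact solution
    # x = (1, ..., 1), double precision loses most digits at n = 12, while
    # 50-digit arithmetic recovers the solution to high accuracy.
    n = 12
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_double = np.linalg.solve(H, H @ np.ones(n))
    print("double precision, max error:", np.max(np.abs(x_double - 1.0)))

    mp.dps = 50                                # 50 significant decimal digits
    b = hilbert(n) * matrix([1] * n)
    x_mp = lu_solve(hilbert(n), b)
    print("50-digit arithmetic, max error:", max(abs(xi - 1) for xi in x_mp))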
Numerical Simulation and Mechanical Design for TPS Electron Beam Position Monitors
NASA Astrophysics Data System (ADS)
Hsueh, H. P.; Kuan, C. K.; Ueng, T. S.; Hsiung, G. Y.; Chen, J. R.
2007-01-01
A comprehensive study of the mechanical design and numerical simulation of high-resolution electron beam position monitors is a key step in building the newly proposed third-generation synchrotron radiation research facility, the Taiwan Photon Source (TPS). With an advanced electromagnetic simulation tool such as MAFIA, tailored specifically for particle accelerators, the design of the high-resolution electron beam position monitors can be tested in such an environment before experimental testing. The design goal of our high-resolution electron beam position monitors is to obtain the best resolution through sensitivity and signal optimization. The definitions of, and differences between, the resolution and sensitivity of electron beam position monitors are explained, as are the design considerations. A prototype design has been carried out and the related simulations were performed with MAFIA. The results are presented here. A sensitivity as high as 200 in the x direction has been achieved at 500 MHz.
Jain, Kartik; Jiang, Jingfeng; Strother, Charles; Mardal, Kent-André
2016-11-01
Blood flow in intracranial aneurysms has, until recently, been considered to be disturbed but still laminar. Recent high-resolution computational studies have demonstrated, however, that in some situations the flow may exhibit high-frequency fluctuations that resemble weakly turbulent or transitional flow. Due to the numerous simplifying assumptions required in computational fluid dynamics (CFD) studies, the occurrence of these events in vivo remains unsettled. The detection of these fluctuations in aneurysmal blood flow, i.e., hemodynamics, by CFD poses additional challenges, as such phenomena cannot be captured in clinical data acquisition with magnetic resonance (MR) due to inadequate temporal and spatial resolutions. The authors' purpose was to address this issue by comparing results from highly resolved simulations, conventional-resolution laminar simulations, and MR measurements, identifying the differences and their causes. Two aneurysms in the basilar artery, one with disturbed yet laminar flow and the other with transitional flow, were chosen. One set of highly resolved direct numerical simulations using the lattice Boltzmann method (LBM) and another with adequate resolution under the laminar flow assumption were conducted using the commercially available ANSYS Fluent solver. The velocity fields obtained from the simulations were qualitatively and statistically compared against each other and against the MR acquisition. Results from LBM, ANSYS Fluent, and MR agree well qualitatively and quantitatively for the aneurysm with laminar flow, in which fluctuations were <80 Hz. The comparisons for the second aneurysm, with high fluctuations of >∼600 Hz, showed vivid differences between LBM, ANSYS Fluent, and magnetic resonance imaging. After ensemble averaging and down-sampling to coarser space and time scales, these differences became minimal. A combination of MR-derived data and CFD can be helpful in estimating the hemodynamic environment of intracranial aneurysms. Adequately resolved CFD would suffice for a gross assessment of hemodynamics, potentially in a clinical setting, and highly resolved CFD could be helpful for a detailed and retrospective understanding of the physiological mechanisms.
Latychevskaia, T; Chushkin, Y; Fink, H-W
2016-10-01
In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could be in principle other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary and that of phase objects together with a discussion of the optimal extrapolation procedure.
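As a hedged sketch of the kind of iterative phase retrieval the study employs, the loop below extrapolates a diffraction amplitude by enforcing the measured modulus only where it is known and a support constraint in object space; the mask-based formulation, function name, and iteration count are illustrative, not the authors' exact procedure.

    import numpy as np

    def extrapolate_pattern(measured_amp, known_mask, support, iters=500):
        """Error-reduction-style extrapolation: keep the measured modulus
        where it is known, keep the iterate's own values elsewhere, and
        impose a support constraint in object space each pass."""
        rng = np.random.default_rng(0)
        field = measured_amp * np.exp(2j * np.pi * rng.random(measured_amp.shape))
        for _ in range(iters):
            obj = np.fft.ifft2(field) * support          # object-domain constraint
            field = np.fft.fft2(obj)
            phase = np.exp(1j * np.angle(field))
            field = np.where(known_mask, measured_amp * phase, field)
        return np.abs(field)                             # extrapolated amplitude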
Spread spectrum phase modulation for coherent X-ray diffraction imaging.
Zhang, Xuesong; Jiang, Jing; Xiangli, Bin; Arce, Gonzalo R
2015-09-21
High dynamic range, phase ambiguity, and radiation-limited resolution are three challenging issues in coherent X-ray diffraction imaging (CXDI), which limit the achievable imaging resolution. This paper proposes a spread spectrum phase modulation (SSPM) method to address the aforementioned problems in a single strobe. The requirements on phase modulator parameters are presented, and a practical implementation of SSPM is discussed via ray-optics analysis. Numerical experiments demonstrate the performance of SSPM under the constraint of available X-ray optics fabrication accuracy, showing its potential for real CXDI applications.
Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G
2008-11-24
An eigenvector-analysis-based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10⁻⁸ refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method results in a much higher signal-to-noise ratio than the standard 1-D weighted centroid dip-finding algorithm.
High-resolution modeling of a marine ecosystem using the FRESCO hydroecological model
NASA Astrophysics Data System (ADS)
Zalesny, V. B.; Tamsalu, R.
2009-02-01
The FRESCO (Finnish Russian Estonian Cooperation) mathematical model describing a marine hydroecosystem is presented. The methodology of the numerical solution is based on the method of multicomponent splitting into physical and biological processes, spatial coordinates, etc. The model is used for the reproduction of physical and biological processes taking place in the Baltic Sea. Numerical experiments are performed with different spatial resolutions for four nested marine basins: the Baltic Sea, the Gulf of Finland, the Tallinn-Helsinki water area, and Tallinn Bay. Physical processes are described by the equations of nonhydrostatic dynamics, including the k-ω parametrization of turbulence. Biological processes are described by the three-dimensional equations of an aquatic ecosystem with the use of a size-dependent parametrization of biochemical reactions. The main goal of this study is to illustrate the efficiency of the developed numerical technique and to demonstrate the importance of a high spatial resolution for water basins that have complex bottom topography, such as the Baltic Sea. Detailed information about the atmospheric forcing, bottom topography, and coastline is very important for the description of coastal dynamics and specific features of a marine ecosystem. Experiments show that the spatial inhomogeneity of hydroecosystem fields is caused by the combined effect of upwelling, turbulent mixing, surface-wave breaking, and temperature variations, which affect biochemical reactions.
NASA Astrophysics Data System (ADS)
Zhang, Jiaying; Gang, Tie; Ye, Chaofeng; Cong, Sen
2018-04-01
Linear-chirp-Golay (LCG)-coded excitation combined with pulse compression is proposed in this paper to improve time resolution and suppress sidelobes in ultrasonic testing. The LCG-coded excitation is a binary complementary Golay pair with a linear-chirp signal applied to every sub-pulse. Compared with conventional excitation, a common ultrasonic testing method that uses a brief narrow pulse as the exciting signal, the performance of LCG-coded excitation, in terms of time-resolution improvement and sidelobe suppression, is studied via numerical and experimental investigations. The numerical simulations are implemented using the MATLAB k-Wave toolbox. The simulation results show that the time resolution of LCG excitation is 35.5% higher and the peak sidelobe level (PSL) is 57.6 dB lower than those of linear-chirp excitation with 2.4 MHz chirp bandwidth and 3 μs time duration. In the B-scan experiment, the time resolution of LCG excitation is higher and the PSL lower than for conventional brief-pulse excitation and chirp excitation. In terms of time resolution, the LCG-coded signal performs better than the chirp signal. Moreover, the impact of chirp bandwidth on the LCG-coded signal is smaller than that on the chirp signal. In addition, the sidelobe of the LCG-coded signal after pulse compression is lower than that of the chirp signal.
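A small Python/NumPy sketch of the idea: generate a binary Golay complementary pair, place a linear chirp on every chip, and pulse-compress by summing the two matched-filter outputs so that the complementary code sidelobes cancel. The sampling rate and center frequency are assumptions; the 2.4 MHz / 3 μs chirp parameters simply mirror the abstract.

```python
import numpy as np

def golay_pair(n):
    """Binary Golay complementary pair of length 2**n (standard recursion)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def coded_waveform(code, fs=50e6, f0=2e6, bw=2.4e6, tau=3e-6):
    """Each code chip carries a linear chirp of bandwidth bw and duration tau."""
    t = np.arange(0.0, tau, 1.0 / fs)
    chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (bw / tau) * t**2))
    return np.concatenate([c * chirp for c in code])

a, b = golay_pair(3)                        # 8-chip complementary pair
wa, wb = coded_waveform(a), coded_waveform(b)
# Two transmissions, two matched filters; summing cancels the code sidelobes
compressed = np.correlate(wa, wa, "full") + np.correlate(wb, wb, "full")
```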
Riemann Solvers in Relativistic Hydrodynamics: Basics and Astrophysical Applications
NASA Astrophysics Data System (ADS)
Ibanez, Jose M.
2001-12-01
My contribution to these proceedings summarizes a general overview of High-Resolution Shock-Capturing (HRSC) methods in the field of relativistic hydrodynamics, with special emphasis on Riemann solvers. HRSC techniques achieve highly accurate numerical approximations (formally second order or better) in smooth regions of the flow, and capture the motion of unresolved steep gradients without creating spurious oscillations. In the first part I will show how these techniques have been extended to relativistic hydrodynamics, making it possible to explore some challenging astrophysical scenarios. I will review recent literature concerning the main properties of different special relativistic Riemann solvers, and discuss several 1D and 2D test problems which are commonly used to evaluate the performance of numerical methods in relativistic hydrodynamics. In the second part I will illustrate the use of HRSC methods in several astrophysical applications where special and general relativistic hydrodynamical processes play a crucial role.
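As a concrete example of the HRSC building block discussed here, the following Python function evaluates the classic HLL (Harten-Lax-van Leer) two-wave flux for a generic conservation law. In relativistic hydrodynamics the signal-speed bounds smin/smax would come from the relativistic characteristic speeds; this sketch is not tied to any particular solver in the review.

```python
def hll_flux(uL, uR, flux, smin, smax):
    """HLL approximate Riemann flux for left/right states uL, uR.

    flux       -- physical flux function F(u)
    smin, smax -- lower/upper bounds on the signal speeds of the Riemann fan
    """
    if smin >= 0.0:                 # fan entirely to the right: upwind left
        return flux(uL)
    if smax <= 0.0:                 # fan entirely to the left: upwind right
        return flux(uR)
    fL, fR = flux(uL), flux(uR)     # subsonic case: single averaged state
    return (smax * fL - smin * fR + smin * smax * (uR - uL)) / (smax - smin)
```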
The spectral cell method in nonlinear earthquake modeling
NASA Astrophysics Data System (ADS)
Giraldo, Daniel; Restrepo, Doriam
2017-12-01
This study examines the applicability of the spectral cell method (SCM) to compute the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yielding criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked against results from overkill solutions and from MIDAS GTS NX, a finite element software package for geotechnical applications. Our findings show good agreement between the two sets of results. Traditional spectral element implementations allow points per wavelength as low as PPW = 4.5 for high-order polynomials. We find that in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions of PPW ≥ 10 to keep displacement errors below 10%.
2015-09-01
A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel-engine-type conditions has been performed. A full understanding of the primary atomization process in diesel fuel ... for diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence, large density ratios ...
NASA Astrophysics Data System (ADS)
Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.
2012-12-01
The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient field-scale experiments (3DTHT) which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software developed uses as input observations of temporal drawdown behavior from each of numerous zones isolated in numerous observation wells during a series of pumping tests conducted from numerous isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. We solve for K at >100,000 sub-m³ (1 m × 1 m × 0.6 m) locations in a 60 m × 60 m × 18 m modeled volume of the BHRS, with the primary investigated volume approximately 12 m × 8 m × 16 m. Computing times are reasonable on high-end desktop computers or small clusters; we are investigating additional efficiency improvements with massive parallelization. Results from complete coverage (1 m-length zones) in one pumping well and five observation wells provide a basis for evaluating method resolution capabilities by comparing K statistics from solutions with all tests and observations against partial test and observation coverage, and against independent K measurements at wells with multi-level slug tests. From these analyses we show that 3DTHT compares well with slug test results, and high-resolution information on heterogeneity is lost rapidly with reduction in test or observation coverage.
Planet-disc interactions with Discontinuous Galerkin Methods using GPUs
NASA Astrophysics Data System (ADS)
Velasco Romero, David A.; Veiga, Maria Han; Teyssier, Romain; Masset, Frédéric S.
2018-05-01
We present a two-dimensional Cartesian code based on high order discontinuous Galerkin methods, implemented to run in parallel over multiple GPUs. A simple planet-disc setup is used to compare the behaviour of our code against the behaviour found using the FARGO3D code with a polar mesh. We make use of the time dependence of the torque exerted by the disc on the planet as a means to quantify the numerical viscosity of the code. We find that the numerical viscosity of the Keplerian flow can be as low as a few 10⁻⁸ r²Ω, where r and Ω are, respectively, the local orbital radius and frequency, for fifth-order schemes and a resolution of ~10⁻² r. Although for a single disc problem a solution of low numerical viscosity can be obtained at lower computational cost with FARGO3D (which is nearly an order of magnitude faster than a fifth-order method), discontinuous Galerkin methods appear promising for obtaining solutions of low numerical viscosity in more complex situations where the flow cannot be captured on a polar or spherical mesh concentric with the disc.
Numerical solutions of the semiclassical Boltzmann ellipsoidal-statistical kinetic model equation
Yang, Jaw-Yen; Yan, Chin-Yuan; Huang, Juan-Chen; Li, Zhihui
2014-01-01
Computations of rarefied gas dynamical flows governed by the semiclassical Boltzmann ellipsoidal-statistical (ES) kinetic model equation using an accurate numerical method are presented. The semiclassical ES model was derived through the maximum entropy principle and conserves not only the mass, momentum and energy, but also contains additional higher order moments that differ from the standard quantum distributions. A different decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. The numerical method in phase space combines the discrete-ordinate method in momentum space and the high-resolution shock-capturing method in physical space. Numerical solutions of two-dimensional Riemann problems for two configurations covering various degrees of rarefaction are presented and various contours of the quantities unique to this new model are illustrated. When the relaxation time becomes very small, the main flow features display behavior similar to that of ideal quantum gas dynamics, and the present solutions are found to be consistent with existing calculations for a classical gas. The effect of a parameter that permits an adjustable Prandtl number in the flow is also studied. PMID:25104904
NASA Astrophysics Data System (ADS)
Li, Hao; Liu, Wenzhong; Zhang, Hao F.
2015-10-01
Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating the pathogenesis and therapeutic strategies. However, due to severe aberrations, the retinal image quality in rodents can be much worse than that in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that rat retinal image quality decreased with increasing illumination bandwidth. We achieved a retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus cameras for rodents. Our method is also beneficial for dispersion compensation in multiwavelength retinal imaging applications.
A high-order staggered finite-element vertical discretization for non-hydrostatic atmospheric models
Guerra, Jorge E.; Ullrich, Paul A.
2016-06-01
Atmospheric modeling systems require economical methods to solve the non-hydrostatic Euler equations. Two major differences between hydrostatic models and a full non-hydrostatic description lie in the vertical velocity tendency and the numerical stiffness associated with sound waves. In this work we introduce a new arbitrary-order vertical discretization entitled the staggered nodal finite-element method (SNFEM). Our method uses a generalized discrete derivative that consistently combines the discontinuous Galerkin and spectral element methods on a staggered grid. Our combined method leverages the accurate wave propagation and conservation properties of spectral elements with staggered methods that eliminate stationary (2Δx) modes. Furthermore, high-order accuracy also eliminates the need for a reference state to maintain hydrostatic balance. In this work we demonstrate the use of high vertical order as a means of improving simulation quality at relatively coarse resolution. We choose a test case suite that spans the range of atmospheric flows from predominantly hydrostatic to nonlinear in the large-eddy regime. Lastly, our results show that there is a distinct benefit in using the high-order vertical coordinate at low resolutions with the same robust properties as the low-order alternative.
2013-01-01
Background: High resolution melting analysis (HRM) is a rapid and cost-effective technique for the characterisation of PCR amplicons. Because the reverse genetics of segmented influenza A viruses allows the generation of numerous influenza A virus reassortants within a short time, methods for the rapid selection of the correct recombinants are very useful. Methods: PCR primer pairs covering the single nucleotide polymorphism (SNP) positions of two different influenza A H5N1 strains were designed. Reassortants of the two different H5N1 isolates were used as a model to prove the suitability of HRM for the selection of the correct recombinants. Furthermore, two different cycler instruments were compared. Results: Both cycler instruments generated comparable average melting peaks, which allowed the easy identification and selection of the correct cloned segments or reassorted viruses. Conclusions: HRM is a highly suitable method for the rapid and precise characterisation of cloned influenza A genomes. PMID:24028349
Wedi, Nils P
2014-06-28
The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular, when convective-scale motions start to be resolved. Blunt increases in the model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to accordingly adjust proven numerical techniques. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty--for each part of the model--and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in the forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in the spectral and physical space. One path is to reduce the formal accuracy by which the spectral transforms are computed. The other pathway explores the importance of the ratio used for the horizontal resolution in gridpoint space versus wavenumbers in spectral space. This is relevant for both high-resolution simulations as well as ensemble-based uncertainty estimation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
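Back-of-the-envelope arithmetic for the gridpoint-versus-wavenumber ratio discussed above, using the standard anti-aliasing rules for "linear", "quadratic", and "cubic" grids; the truncation value and the rules as stated here are textbook illustrations, not ECMWF's actual operational configuration.

```python
# Minimum number of longitudes for triangular truncation T under common rules:
# linear (2T+1) suffices to represent the fields, quadratic (3T+1) avoids
# aliasing of quadratic products, cubic (4T+1) of triple products.
def min_longitudes(T, rule):
    factor = {"linear": 2, "quadratic": 3, "cubic": 4}[rule]
    return factor * T + 1

EQUATOR_KM = 40075.0
for rule in ("linear", "quadratic", "cubic"):
    n = min_longitudes(1279, rule)              # e.g. a T1279 truncation
    print(f"{rule:9s}: >= {n} longitudes, dx ~ {EQUATOR_KM / n:.1f} km")
```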
Protoplanetary Disks and Planet Formation a Computational Perspective
NASA Astrophysics Data System (ADS)
Backus, Isaac
In this thesis I present my research on the early stages of planet formation. Using advanced computational modeling techniques, I study global gas and gravitational dynamics in protoplanetary disks (PPDs) on length scales from the radius of Jupiter to the size of the solar system. In that environment, I investigate the formation of gas giants and the migration, enhancement, and distribution of small solids--the precursors to planetesimals and gas giant cores. I examine numerical techniques used in planet formation and PPD modeling, especially methods for generating initial conditions (ICs) in these unstable, chaotic systems. Disk simulation outcomes may depend strongly on ICs, which may explain results in the literature. I present the largest suite of high resolution PPD simulations to date and argue that direct fragmentation of PPDs around M dwarfs is a plausible path to rapidly forming gas giants. I implement dust physics to track the migration of centimeter and smaller dust grains in very high resolution PPD simulations. While current dust methods are slow, with strict resolution and/or time-stepping requirements, and have some serious numerical issues, we can still demonstrate that dust does not concentrate at the pressure maxima of spiral arms, an indication that spiral features observed in the dust component are at least as well resolved in the gas. Additionally, coherent spiral arms do not limit dust settling. We suggest a novel mechanism for disk fragmentation at large radii driven by dust accretion from the surrounding nebula. We also investigate self-induced dust traps, a mechanism which may help explain the growth of solids beyond meter sizes. We argue that current apparent demonstrations of this mechanism may be due to numerical artifacts and require further investigation.
USDA-ARS?s Scientific Manuscript database
Many societal applications of soil moisture data products require high spatial resolution and numerical accuracy. Current thermal geostationary satellite sensors (GOES Imager and GOES-R ABI) could produce 2-16 km resolution soil moisture proxy data. Passive microwave satellite radiometers (e.g. AMSR...
B. W. Butler; N. S. Wagenbrenner; J. M. Forthofer; B. K. Lamb; K. S. Shannon; D. Finn; R. M. Eckman; K. Clawson; L. Bradshaw; P. Sopko; S. Beard; D. Jimenez; C. Wold; M. Vosburgh
2015-01-01
A number of numerical wind flow models have been developed for simulating wind flow at relatively fine spatial resolutions (e.g., 100 m); however, there are very limited observational data available for evaluating these high-resolution models. This study presents high-resolution surface wind data sets collected from an isolated mountain and a steep river canyon. The...
NASA Astrophysics Data System (ADS)
Pathak, Harshavardhana S.; Shukla, Ratnesh K.
2016-08-01
A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method, including the moving mesh equations and the compressible flow solver, is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce the discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients, thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth and especially the ninth-order WENO reconstruction allows remarkably sharp capture of discontinuous propagating shocks with simultaneous resolution of smooth yet complex small scale unsteady flow features in exceptional detail.
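The smooth-region workhorse mentioned above is WENO reconstruction; here is the standard fifth-order Jiang-Shu variant in Python, reconstructing the interface value u_{i+1/2} from five cell averages. This is the textbook scheme only, not the paper's full solver with moving meshes and metric terms.

```python
def weno5_reconstruct(v0, v1, v2, v3, v4, eps=1e-6):
    """Fifth-order WENO value at x_{i+1/2} from cell averages v_{i-2..i+2}."""
    # Smoothness indicators of the three candidate stencils (Jiang-Shu)
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
    # Nonlinear weights biased toward the smoothest stencils
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    # Third-order candidate reconstructions on each stencil
    p0 = (2*v0 - 7*v1 + 11*v2) / 6
    p1 = (-v1 + 5*v2 + 2*v3) / 6
    p2 = (2*v2 + 5*v3 - v4) / 6
    return (a0*p0 + a1*p1 + a2*p2) / s
```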
The sensitivity of precipitation simulations to the soot aerosol presence
NASA Astrophysics Data System (ADS)
Palamarchuk, Iuliia; Ivanov, Sergiy; Mahura, Alexander; Ruban, Igor
2016-04-01
The role of aerosols in nonlinear feedbacks on atmospheric processes is the focus of much research. In particular, the importance of black carbon particles for the evolution of physical weather, including precipitation formation and release, is investigated by numerical modelling as well as observation networks. However, certain discrepancies between results obtained by different methods remain. The increasing complexity of numerical weather modelling systems enlarges the volume of output data and promises to reveal new aspects of the complexity of interactions and feedbacks. The Harmonie-38h1.2 model with the AROME physical package is used to study changes in the precipitation life cycle under black-carbon-polluted conditions. The model configuration includes a radar data assimilation procedure on a high-resolution domain covering the Scandinavia region. Model results show that precipitation rate and distribution, as well as other variables of atmospheric dynamics and physics over the domain, are sensitive to aerosol concentrations. Attention should also be paid to numerical aspects, such as the list of observation types involved in assimilation. The use of high-resolution radar information allows mesoscale features to be included in the initial conditions and decreases the growth rate of the model error with lead time.
NASA Astrophysics Data System (ADS)
Lefebvre, Joël.; Castonguay, Alexandre; Lesage, Frédéric
2018-02-01
High resolution imaging of whole rodent brains using serial OCT scanners is a promising method to investigate microstructural changes in tissue related to the evolution of neuropathologies. Although micron to sub-micron sampling resolution can be obtained by using high numerical aperture objectives and dynamic focusing, such an imaging system is not adapted to whole brain imaging. This is due to the large amount of data it generates and the significant computational resources required for reconstructing such volumes. To address this limitation, a dual-resolution serial OCT scanner was developed. The optical setup consists of a swept-source OCT with two sample and reference arms, each arm coupled with a different microscope objective (3X / 40X). Motorized flip mirrors were used to switch between the OCT arms, thus allowing low and high resolution acquisitions within the same sample. The low resolution OCT volumes acquired with the 3X arm were stitched together, providing a 3D map of the whole mouse brain. This brain can be registered to an OCT brain template to enable localization of neurological structures. The high resolution volumes acquired with the 40X arm were also stitched together to create local high resolution 3D maps of the tissue microstructure. The 40X data can be acquired at any arbitrary location in the sample, thus limiting storage-heavy high resolution data to applications restricted to specific regions of interest. By providing dual-resolution OCT data, this setup can be used to validate diffusion MRI with tissue-microstructure-derived metrics measured at any location in ex vivo brains.
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
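A minimal sketch of the decomposition idea: each scenario component is precomputed once as an energy-binned response (Green's function) matrix, and a full detector signal is then assembled by matrix products instead of rerunning transport. The staging and binning here are hypothetical, not the paper's actual treatment.

```python
import numpy as np

def detector_signal(stages, source_spectrum):
    """Fold a source spectrum through independently simulated components.

    stages -- list of (n_bins x n_bins) response matrices, e.g.
              [cargo_response, shielding_response, detector_response],
              each mapping an incident spectrum to an emerging one.
    """
    out = np.asarray(source_spectrum, dtype=float)
    for R in stages:                 # order: source -> ... -> detector
        out = R @ out
    return out
```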
Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution
Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.
2016-01-01
We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939
Prieto, Claudia; Uribe, Sergio; Razavi, Reza; Atkinson, David; Schaeffter, Tobias
2010-08-01
One of the current limitations of dynamic contrast-enhanced MR angiography is the requirement of both high spatial and high temporal resolution. Several undersampling techniques have been proposed to overcome this problem. However, in most of these methods the tradeoff between spatial and temporal resolution is constant for all the time frames and needs to be specified prior to data collection. This is not optimal for dynamic contrast-enhanced MR angiography where the dynamics of the process are difficult to predict and the image quality requirements are changing during the bolus passage. Here, we propose a new highly undersampled approach that allows the retrospective adaptation of the spatial and temporal resolution. The method combines a three-dimensional radial phase encoding trajectory with the golden angle profile order and non-Cartesian Sensitivity Encoding (SENSE) reconstruction. Different regularization images, obtained from the same acquired data, are used to stabilize the non-Cartesian SENSE reconstruction for the different phases of the bolus passage. The feasibility of the proposed method was demonstrated on a numerical phantom and in three-dimensional intracranial dynamic contrast-enhanced MR angiography of healthy volunteers. The acquired data were reconstructed retrospectively with temporal resolutions from 1.2 sec to 8.1 sec, providing a good depiction of small vessels, as well as distinction of different temporal phases.
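A short sketch of the golden-angle profile ordering that makes the retrospective trade-off possible: successive profiles are rotated by ~111.25°, so any contiguous window of profiles gives near-uniform angular coverage and can be binned into frames of arbitrary length after the scan. The binning helper is illustrative only, not the paper's reconstruction.

```python
import numpy as np

GOLDEN_ANGLE = np.pi / ((1 + np.sqrt(5)) / 2)   # ~111.25 degrees, in radians

def profile_angles(n_profiles):
    """Golden-angle ordering: profile k is acquired at angle k * GOLDEN_ANGLE."""
    return np.mod(np.arange(n_profiles) * GOLDEN_ANGLE, np.pi)

def retrospective_frames(n_profiles, frame_len):
    """Regroup the same acquired profiles into frames of any chosen length."""
    return [np.arange(s, min(s + frame_len, n_profiles))
            for s in range(0, n_profiles, frame_len)]
```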
High numerical aperture projection system for extreme ultraviolet projection lithography
Hudyma, Russell M.
2000-01-01
An optical system is described that is compatible with extreme ultraviolet radiation and comprises five reflective elements for projecting a mask image onto a substrate. The five optical elements are characterized in order from object to image as concave, convex, concave, convex, and concave mirrors. The optical system is particularly suited for ring field, step and scan lithography methods. The invention uses aspheric mirrors to minimize static distortion and balance the static distortion across the ring field width which effectively minimizes dynamic distortion. The present invention allows for higher device density because the optical system has improved resolution that results from the high numerical aperture, which is at least 0.14.
NASA Astrophysics Data System (ADS)
Weng, Jiawen; Clark, David C.; Kim, Myung K.
2016-05-01
A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built up based on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to the ASM, reconstruction by CS can improve the axial resolution of SIDH and achieve sectional imaging. The proposed method may be useful for 3D analysis of dynamic systems.
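The paper's recovery algorithm is not specified here, so as a stand-in the following sketch shows the generic iterative soft-thresholding (ISTA) recipe often used for CS reconstructions; A and At are assumed function handles for the SIDH sensing operator and its adjoint, and L is a Lipschitz bound on the data-term gradient.

```python
import numpy as np

def ista(A, At, y, lam, L, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1."""
    x = At(y)                                     # adjoint back-projection start
    for _ in range(n_iter):
        g = x - (1.0 / L) * At(A(x) - y)          # gradient step on data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```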
A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes
With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model's horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model's terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
Numerical Study of Boundary Layer Interaction with Shocks: Method Improvement and Test Computation
NASA Technical Reports Server (NTRS)
Adams, N. A.
1995-01-01
The objective is the development of a high-order and high-resolution method for the direct numerical simulation of shock turbulent-boundary-layer interaction. Details concerning the spatial discretization of the convective terms can be found in Adams and Shariff (1995). The computer code based on this method as introduced in Adams (1994) was formulated in Cartesian coordinates and thus has been limited to simple rectangular domains. For more general two-dimensional geometries, as a compression corner, an extension to generalized coordinates is necessary. To keep the requirements or limitations for grid generation low, the extended formulation should allow for non-orthogonal grids. Still, for simplicity and cost efficiency, periodicity can be assumed in one cross-flow direction. For easy vectorization, the compact-ENO coupling algorithm as used in Adams (1994) treated whole planes normal to the derivative direction with the ENO scheme whenever at least one point of this plane satisfied the detection criterion. This is apparently too restrictive for more general geometries and more complex shock patterns. Here we introduce a localized compact-ENO coupling algorithm, which is efficient as long as the overall number of grid points treated by the ENO scheme is small compared to the total number of grid points. Validation and test computations with the final code are performed to assess the efficiency and suitability of the computer code for the problems of interest. We define a set of parameters where a direct numerical simulation of a turbulent boundary layer along a compression corner with reasonably fine resolution is affordable.
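A generic sketch of such a localized detection criterion: flag only the points where a normalized second difference exceeds a threshold, and keep the compact scheme everywhere else. The actual criterion in Adams (1994) differs; this sensor and its threshold are purely illustrative.

```python
import numpy as np

def eno_mask(u, threshold=0.3):
    """Boolean mask of points to be treated by the (dissipative) ENO scheme."""
    d2 = np.abs(u[2:] - 2 * u[1:-1] + u[:-2])            # curvature estimate
    scale = np.abs(u[2:]) + 2 * np.abs(u[1:-1]) + np.abs(u[:-2]) + 1e-30
    mask = np.zeros(u.shape, dtype=bool)
    mask[1:-1] = d2 / scale > threshold                  # flag steep gradients only
    return mask
```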
Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.
Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji
2015-12-01
A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden so that the acceleration is nearly linear in the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards is about 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
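The COCG iteration itself is short; a NumPy reference version is sketched below. The only change from classical CG is the unconjugated bilinear form r^T r, which is what lets the complex symmetry (A = A^T) of the admittance matrix be exploited. The multi-GPU partitioning of the matrix-vector product is omitted.

```python
import numpy as np

def cocg(matvec, b, tol=1e-8, maxiter=1000):
    """Conjugate orthogonal CG for complex *symmetric* linear systems A x = b."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rho = r @ r                  # unconjugated: sum(r*r), not sum(conj(r)*r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho, rho_old = r @ r, rho
        p = r + (rho / rho_old) * p
    return x
```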
Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations
NASA Astrophysics Data System (ADS)
Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa
2017-05-01
We present theoretical models for protostellar binary and multiple systems based on the high-resolution numerical simulation with an adaptive mesh refinement (AMR) code, SFUMATO. The recent ALMA observations have revealed early phases of the binary and multiple star formation with high spatial resolutions. These observations should be compared with theoretical models with high spatial resolutions. We present two theoretical models for (1) a high density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the model for MC27, we performed numerical simulations for gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the model for L1551 NE, we performed numerical simulations of gas accretion onto protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has been also observed by ALMA in the circumbinary disk of L1551 NE.
NASA Astrophysics Data System (ADS)
Hilburn, Guy Louis
Results from several studies are presented which detail explorations of the physical and spectral properties of low-luminosity active galactic nuclei. An initial Sagittarius A* general relativistic magnetohydrodynamic simulation and Monte Carlo radiation transport model suggests accretion rate changes as the dominant flaring mechanism. A similar study on M87 introduces new methods to the Monte Carlo model for increased consistency in highly energetic sources. Again, accretion rate variation seems most appropriate to explain spectral transients. To more closely resolve the mechanisms of particle energization in active galactic nuclei accretion disks, a series of localized shearing box simulations explores the effect of numerical resolution on the development of current sheets. A particular focus on numerically describing converged current sheet formation will provide new methods for the consideration of turbulence in accretion disks.
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists of numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with large time steps and a fixed-position control rod model, while the fine propagator is assumed to be a high order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner benchmark.
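The predictor-corrector structure described above fits in a few lines; here is a generic parareal sketch in Python with the coarse and fine propagators passed in as functions. In a real run the fine sweeps over the subintervals execute in parallel; everything here is an illustrative skeleton, not the paper's neutronics solver.

```python
def parareal(coarse, fine, u0, t_grid, n_corr=5):
    """Parareal iteration for u' = f(u) on the time grid t_grid.

    coarse(u, t0, t1) -- cheap propagator (large steps, simplified model)
    fine(u, t0, t1)   -- accurate propagator (parallel across subintervals)
    """
    N = len(t_grid) - 1
    U = [u0]
    for n in range(N):                       # serial coarse prediction
        U.append(coarse(U[n], t_grid[n], t_grid[n + 1]))
    for _ in range(n_corr):
        F = [fine(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
        Unew = [u0]
        for n in range(N):                   # serial correction sweep
            Unew.append(coarse(Unew[n], t_grid[n], t_grid[n + 1])
                        + F[n] - coarse(U[n], t_grid[n], t_grid[n + 1]))
        U = Unew
    return U
```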
Changing the scale of hydrogeophysical aquifer heterogeneity characterization
NASA Astrophysics Data System (ADS)
Paradis, Daniel; Tremblay, Laurie; Ruggeri, Paolo; Brunet, Patrick; Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Holliger, Klaus; Irving, James; Molson, John; Lefebvre, Rene
2015-04-01
Contaminant remediation and management require the quantitative predictive capabilities of groundwater flow and mass transport numerical models. Such models have to encompass source zones and receptors, and thus typically cover several square kilometers. To predict the path and fate of contaminant plumes, these models have to represent the heterogeneous distribution of hydraulic conductivity (K). However, hydrogeophysics has generally been used to image relatively restricted areas of the subsurface (small fractions of a km²), so there is a need for approaches defining heterogeneity at larger scales and providing data to constrain conceptual and numerical models of aquifer systems. This communication describes a workflow defining aquifer heterogeneity that was applied over a 12 km² sub-watershed surrounding a decommissioned landfill emitting landfill leachate. The aquifer is a shallow, 10 to 20 m thick, highly heterogeneous and anisotropic assemblage of littoral sand and silt. Field work involved the acquisition of a broad range of data: geological, hydraulic, geophysical, and geochemical. The emphasis was put on high resolution and continuous hydrogeophysical data, the use of direct-push fully-screened wells and the acquisition of targeted high-resolution hydraulic data covering the range of observed aquifer materials. The main methods were: 1) surface geophysics (ground-penetrating radar and electrical resistivity); 2) direct-push operations with a geotechnical drilling rig (cone penetration tests with soil moisture resistivity CPT/SMR; full-screen well installation); and 3) borehole operations, including high-resolution hydraulic tests and geochemical sampling. New methods were developed to acquire high vertical resolution hydraulic data in direct-push wells, including both vertical and horizontal K (Kv and Kh). Various data integration approaches were used to represent aquifer properties in 1D, 2D and 3D. Using relevance vector machines (RVM), the mechanical and geophysical CPT/SMR measurements were used to recognize hydrofacies (HF) and obtain high-resolution 1D vertical profiles of hydraulic properties. Bayesian sequential simulation of the low-resolution surface-based geoelectrical measurements as well as high-resolution direct-push measurements of the electrical and hydraulic conductivities provided realistic estimates of the spatial distribution of K on a 250-m-long 2D survey line. Following a similar approach, all 1D vertical profiles of K derived from CPT/SMR soundings were integrated with available 2D geoelectrical profiles to obtain the 3D distribution of K over the study area. Numerical models were developed to understand flow and mass transport and assess how indicators could constrain model results and their K distributions. A 2D vertical section model was first developed based on a conceptual representation of heterogeneity, which showed a significant effect of layering on flow and transport. The model demonstrated that solute and age tracers provide key model constraints. Additional 2D vertical section models with synthetic representations of low and high K hydrofacies were also developed on the basis of CPT/SMR soundings. These models showed that high-resolution profiles of hydraulic head could help constrain the spatial distribution and continuity of hydrofacies.
History matching approaches are still required to simulate geostatistical models of K using hydrogeophysical data, while considering their impact on flow and transport with constraints provided by tracers of solutes and groundwater age.
NASA Astrophysics Data System (ADS)
Liebel, L.; Körner, M.
2016-06-01
In optical remote sensing, spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations—e.g., segmentation or feature extraction—can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques, such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable using conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset, in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10m, and a high radiometric resolution and thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior compared to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as conventional interpolation methods.
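For a sense of scale, a three-layer network in the spirit of the classic SRCNN is only a few lines of PyTorch; the 9-1-5 kernel sizes and 64/32 channel counts follow the original SRCNN paper, and the 13-band input is only an assumption matching Sentinel-2. This is a representative sketch, not the authors' trained architecture.

```python
import torch.nn as nn

class SRCNN(nn.Module):
    """Minimal SRCNN-style single-image super-resolution network."""
    def __init__(self, channels=13):               # e.g. 13 Sentinel-2 bands
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):            # x: bicubically upsampled low-res image
        return self.net(x)
```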
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R
2017-11-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral-domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high-lateral-resolution, high-quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density, and background noise problems together. In experiments, an approximately threefold improvement in lateral resolution, reaching 7.81 µm and 2.19 µm with sample-arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, was confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprints, and retinal layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random, minor, unintended body motion. Further processing of these images generated high-lateral-resolution 3D images as well as high-quality B-scan images of these in vivo tissues.
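For intuition, the core of multi-frame superresolution can be sketched as naive shift-and-add onto a finer grid; the published method additionally includes registration and deconvolution, so this NumPy toy, with its assumed known shifts, is only the skeleton of the idea.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Place LR frames on a grid `scale` times finer according to their
    sub-pixel shifts (dy, dx, in LR pixels), then average the overlaps."""
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        yy = (np.arange(H)[:, None] * scale + int(round(dy * scale))) % (H * scale)
        xx = (np.arange(W)[None, :] * scale + int(round(dx * scale))) % (W * scale)
        acc[yy, xx] += img
        cnt[yy, xx] += 1
    return acc / np.maximum(cnt, 1)
```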
Multi-shot PROPELLER for high-field preclinical MRI
Pandit, Prachi; Qi, Yi; Story, Jennifer; King, Kevin F.; Johnson, G. Allan
2012-01-01
With the development of numerous mouse models of cancer, there is a tremendous need for an appropriate imaging technique to study the disease evolution. High-field T2-weighted imaging using PROPELLER MRI meets this need. The 2-shot PROPELLER technique presented here, provides (a) high spatial resolution, (b) high contrast resolution, and (c) rapid and non-invasive imaging, which enables high-throughput, longitudinal studies in free-breathing mice. Unique data collection and reconstruction makes this method robust against motion artifacts. The 2-shot modification introduced here, retains more high-frequency information and provides higher SNR than conventional single-shot PROPELLER, making this sequence feasible at high-fields, where signal loss is rapid. Results are shown in a liver metastases model to demonstrate the utility of this technique in one of the more challenging regions of the mouse, which is the abdomen. PMID:20572138
A Runge-Kutta discontinuous finite element method for high speed flows
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. T.
1991-01-01
A Runge-Kutta discontinuous finite element method is developed for hyperbolic systems of conservation laws in two space variables. The discontinuous Galerkin spatial approximation to the conservation laws results in a system of ordinary differential equations which are marched in time using Runge-Kutta methods. Numerical results for the two-dimensional Burgers equation show that the method is (p+1)-order accurate in time and space, where p is the degree of the polynomial approximation of the solution within an element, and is capable of capturing shocks over a single element without oscillations. Results for this problem also show that the accuracy of the solution in smooth regions is unaffected by the local projection and that the accuracy in smooth regions increases as p increases. Numerical results for the Euler equations show that the method captures shocks without oscillations and with higher resolution than a first-order scheme.
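The time-marching half of such methods is typically a strong-stability-preserving Runge-Kutta scheme; the widely used third-order Shu-Osher version is shown below for a DG semi-discretization u' = L(u). The paper's exact Runge-Kutta variant is not specified here, so this is a representative choice, not necessarily the one used.

```python
def ssp_rk3(u, L, dt):
    """One third-order SSP Runge-Kutta step for the ODE system u' = L(u)."""
    u1 = u + dt * L(u)                            # first Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))      # convex combination, stage 2
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))
```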
Jang, J; Seo, J K
2015-06-01
This paper describes a multiple background subtraction method in frequency-difference electrical impedance tomography (fdEIT) to detect an admittivity anomaly against a high-contrast background conductivity distribution. The proposed method expands the use of the conventional weighted frequency-difference EIT method, which has so far been limited to detecting admittivity anomalies in a roughly homogeneous background. The proposed method can be viewed as multiple weighted difference imaging in fdEIT. Although the spatial resolution of the output images of fdEIT is very low due to the inherent ill-posedness, numerical simulations and phantom experiments demonstrate the feasibility of the proposed method for detecting anomalies. It has potential application in stroke detection in a head model, which is highly heterogeneous due to the skull.
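The weighted difference at the heart of fdEIT is a one-line projection; the sketch below uses the unconjugated form on complex voltage vectors, which is one common convention and may differ in detail from the paper's definition.

```python
import numpy as np

def weighted_frequency_difference(v_low, v_high):
    """Subtract the component of the high-frequency data parallel to the
    low-frequency data so a (roughly) homogeneous background cancels."""
    alpha = (v_high @ v_low) / (v_low @ v_low)   # projection weight
    return v_high - alpha * v_low
```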
NASA Astrophysics Data System (ADS)
Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John
2016-07-01
In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Chow, Chuen-Yen; Chang, Sin-Chung
1998-01-01
Without resorting to special treatment for each individual test case, the 1D and 2D CE/SE shock-capturing schemes described previously (in Part I) are used to simulate flows involving phenomena such as shock waves, contact discontinuities, expansion waves and their interactions. Five 1D and six 2D problems are considered to examine the capability and robustness of these schemes. Despite their simple logical structures and low computational cost (for the 2D CE/SE shock-capturing scheme, the CPU time is about 2 μs per mesh point per marching step on a Cray C90 machine), the numerical results, when compared with experimental data, exact solutions or numerical solutions by other methods, indicate that these schemes can accurately resolve shock and contact discontinuities consistently.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm which uses an HR color image as a guide and an LR depth image as input. We use a fusion filter combining a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality in HR depth images both numerically and visually.
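One common ingredient of such color-guided upsampling is joint bilateral upsampling, sketched below as a direct (slow) reference loop; the paper's fusion filter combines a guided filter with an edge-based joint bilateral filter, so this is a related building block rather than the proposed method itself, and the parameter values are assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample depth_lr to the size of color_hr, using the HR color image
    as the range-kernel guide (reference implementation, O(N * radius^2))."""
    H, W = color_hr.shape[:2]
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = int(round(y / scale)), int(round(x / scale))
            wsum = vsum = 0.0
            for j in range(-radius, radius + 1):
                for i in range(-radius, radius + 1):
                    ly, lx = cy + j, cx + i            # LR neighbor
                    if not (0 <= ly < depth_lr.shape[0] and 0 <= lx < depth_lr.shape[1]):
                        continue
                    hy, hx = min(ly * scale, H - 1), min(lx * scale, W - 1)
                    w_s = np.exp(-(j * j + i * i) / (2 * sigma_s**2))      # spatial
                    dc = color_hr[y, x] - color_hr[hy, hx]
                    w_r = np.exp(-float(np.dot(dc, dc)) / (2 * sigma_r**2))  # range
                    wsum += w_s * w_r
                    vsum += w_s * w_r * depth_lr[ly, lx]
            out[y, x] = vsum / max(wsum, 1e-12)
    return out
```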
Subwavelength resolution from multilayered structure (Conference Presentation)
NASA Astrophysics Data System (ADS)
Cheng, Bo Han; Jen, Yi-Jun; Liu, Wei-Chih; Lin, Shan-wen; Lan, Yung-Chiang; Tsai, Din Ping
2016-10-01
Breaking the optical diffraction limit is one of the most important issues to overcome to meet the demand for high-density optoelectronic components. Here, a multilayered structure consisting of alternating semiconductor and dielectric layers for breaking the optical diffraction limit in the THz frequency region is proposed and analyzed. We numerically demonstrate that such a multilayered structure can act not only as a hyperbolic metamaterial but also as a birefringent material via control of the external temperature (or magnetic field). A practical approach is provided to steer all the diffraction signals toward a specific direction using the transfer matrix method and effective medium theory. Numerical calculations and computer simulations (based on the finite element method, FEM) are carried out and agree well with each other. The temperature (or magnetic field) parameter can be tuned to create an effective material with a nearly flat isofrequency contour that transfers (projects) all the k-space signals excited by the object to be resolved to the image plane. Furthermore, this multilayered structure can resolve subwavelength structures at various incident THz light sources simultaneously. In addition, the resolution power for a fixed operating frequency can be tuned by changing only the magnitude of the external magnetic field. Such a device provides a practical route toward multi-functional materials, photolithography, and real-time super-resolution imaging.
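The effective-medium step mentioned above is compact enough to show directly: for a deep-subwavelength bilayer stack, the in-plane and out-of-plane permittivities follow the usual mixing formulas, and the stack is hyperbolic when their real parts have opposite signs. The example permittivities below are placeholders, not the paper's material data.

```python
import numpy as np

def multilayer_effective_permittivity(eps1, eps2, f):
    """Effective permittivity tensor components of a two-material stack with
    fill fraction f of material 1 (layers much thinner than the wavelength)."""
    eps_par = f * eps1 + (1 - f) * eps2            # parallel to the layers
    eps_perp = 1.0 / (f / eps1 + (1 - f) / eps2)   # normal to the layers
    return eps_par, eps_perp

# Placeholder: a semiconductor below its plasma frequency (Re eps < 0, tunable
# with temperature or magnetic field) paired with an ordinary dielectric.
eps_par, eps_perp = multilayer_effective_permittivity(-10 + 1j, 11.7, 0.5)
print(eps_par.real * eps_perp.real < 0)            # True -> hyperbolic regime
```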
Adaptive optics with pupil tracking for high resolution retinal imaging
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-01-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577
Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L
2008-09-01
The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard-definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high-definition cameras in the operating room can be challenging because new recording equipment must be purchased and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, (high-order) fully implicit time integration schemes, and the Jacobian-free Newton-Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
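To make the JFNK-plus-implicit-time-integration idea concrete, here is a toy Python sketch: a BDF1 (backward Euler) step on a small stiff ODE system, solved matrix-free with SciPy's Newton-Krylov solver, which approximates Jacobian-vector products by finite differences exactly as JFNK prescribes. The ODE and step size are stand-ins; the two-fluid six-equation model is far beyond this sketch.

```python
import numpy as np
from scipy.optimize import newton_krylov

def f(u):
    # Stand-in stiff right-hand side u' = f(u).
    return np.array([-1000.0 * u[0] + u[1], u[0] - u[1]])

def bdf1_step(u_old, dt):
    # BDF1 residual: R(u) = u - u_old - dt * f(u) = 0.
    residual = lambda u: u - u_old - dt * f(u)
    # newton_krylov never forms the Jacobian; it probes J*v with finite
    # differences inside a Krylov linear solver -- the essence of JFNK.
    return newton_krylov(residual, u_old, f_tol=1e-10)

u = np.array([1.0, 0.0])
for _ in range(10):
    u = bdf1_step(u, dt=0.01)
print(u)
```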
Ti, Chaoyang; Ho-Thanh, Minh-Tri; Wen, Qi; Liu, Yuxiang
2017-10-13
Position detection with high accuracy is crucial for force calibration of optical trapping systems. Most existing position detection methods require high-numerical-aperture objective lenses, which are bulky, expensive, and difficult to miniaturize. Here, we report an affordable objective-lens-free, fiber-based position detection scheme with 2 nm spatial resolution and 150 MHz bandwidth. This fiber-based detection mechanism enables simultaneous trapping and force measurements in a compact fiber optical tweezers system. In addition, we achieved more reliable signal acquisition with less distortion compared with objective-based position detection methods, thanks to the light guiding in optical fibers and the small distance between the fiber tips and the trapped particle. As a demonstration of the fiber-based detection, we used the fiber optical tweezers to apply a force on a cell membrane and simultaneously measure the cellular response.
Contaminant behavior in fractured sedimentary rocks: Seeing the fractures that matter
NASA Astrophysics Data System (ADS)
Parker, B. L.
2017-12-01
High resolution spatial sampling of continuous cores from sites contaminated with chlorinated solvents over many decades was used as a strategy to quantify mass stored in low permeability blocks of rock between hydraulically active fractures. Given that core and geophysical logging methods cannot distinguish between hydraulically active fractures and those that do not transmit water, these samples were informed by careful logging of visible fracture features in the core with sample spacing determined by modelled diffusion transport distances given rock matrix properties and expected ages of contamination. These high resolution contaminant concentration profiles from long term contaminated sites in sedimentary rock showed evidence of many more hydraulically active fractures than indicated by the most sophisticated open-hole logging methods. Fracture density is an important attribute affecting fracture connectivity and influencing contaminant plume evolution in fractured porous sedimentary rock. These contaminant profile findings were motivation to find new borehole methods to directly measure hydraulically active fracture occurrence and flux to corroborate the long term "DNAPL tracer experiment" results. Improved sensitivity is obtained when boreholes are sealed using flexible fabric liners (FLUTe™ technology) and various sensor options are deployed in the static water columns used to inflate these liners or in contact with the borehole wall behind the liners. Several methods rely on high resolution temperature measurements of ambient or induced temperature variability such as temperature vector probes (TVP), fiber optic cables for distributed temperature sensing (DTS), both using active heat; packer testing, point dilution testing and groundwater flux measurements between multiple straddle packers to account for leakage. In all cases, numerous hydraulically active fractures are identified over 100 to 300 meters depth, with a large range in transmissivities and hydraulic apertures to inform discrete fracture flow and transport models. 3-D field mapping of decades-old contaminant plumes in sedimentary aquifers shows that numerous hydraulically active fractures are needed to reproduce observed plume concentration distributions and allow targeted monitoring and remediation.
Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin
2014-01-08
The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (2012, Proc. R. Soc. A 468, 1799-1823, doi:10.1098/rspa.2011.0673) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves mass, momentum and energy, but differs from the standard Fermi-Dirac or Bose-Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas.
Resolution and contrast in Kelvin probe force microscopy
NASA Astrophysics Data System (ADS)
Jacobs, H. O.; Leuchtmann, P.; Homan, O. J.; Stemmer, A.
1998-08-01
The combination of atomic force microscopy and Kelvin probe technology is a powerful tool to obtain high-resolution maps of the surface potential distribution on conducting and nonconducting samples. However, the resolution and contrast transfer of this method have not yet been fully understood. To obtain a better quantitative understanding, we introduce a model which correlates the measured potential with the actual surface potential distribution, and we compare numerical simulations of the three-dimensional tip-specimen model with experimental data from test structures. The observed potential is a locally weighted average over all potentials present on the sample surface. The model allows us to calculate these weighting factors and, furthermore, leads to the conclusion that good resolution in potential maps is obtained by long and slender but slightly blunt tips on cantilevers of minimal width and surface area.
Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution
Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; ...
2016-01-01
Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction-limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background, resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.
NASA Technical Reports Server (NTRS)
Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco
2010-01-01
An important limitation on the precision of results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delay caused by the neutral atmosphere, see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions which are used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial resolution (0.2 deg. x 0.2 deg.) and high temporal resolution (three hours in prediction mode).
Numerical experiments with a symmetric high-resolution shock-capturing scheme
NASA Technical Reports Server (NTRS)
Yee, H. C.
1986-01-01
Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method. It was found that the latter is just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.
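The defining ingredient of any TVD scheme is a slope limiter that prevents the creation of new extrema. The Python sketch below shows a generic minmod-limited upwind step for linear advection, a member of the same family rather than the symmetric scheme of the abstract, with an illustrative CFL number.

```python
import numpy as np

def minmod(a, b):
    # Pick the smaller slope when signs agree, zero otherwise.
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One explicit limited-upwind step for u_t + a u_x = 0 (a > 0),
    periodic grid, CFL number c in (0, 1]."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    # Second-order flux at the left face of each cell, limited to stay TVD.
    flux = np.roll(u, 1) + 0.5 * (1 - c) * np.roll(slope, 1)
    return u - c * (np.roll(flux, -1) - flux)

x = np.linspace(0, 1, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square wave stays oscillation-free
for _ in range(100):
    u = tvd_step(u, c=0.5)
```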
Numerical Weather Predictions Evaluation Using Spatial Verification Methods
NASA Astrophysics Data System (ADS)
Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.
2014-12-01
During recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid-spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
A comparative verification of high resolution precipitation forecasts using model output statistics
NASA Astrophysics Data System (ADS)
van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees
2017-04-01
Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both the double penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, as a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the three post-processed models, but larger differences for individual lead times. In addition, the Fractions Skill Score is computed using the three deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they perform only similarly to or somewhat better than precipitation forecasts from the two lower resolution models, at least in the Netherlands.
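The distinguishing trick of extended logistic regression is that the precipitation threshold enters the regression as a predictor, so a single fitted model yields probabilities for any threshold. A hedged Python sketch with synthetic stand-in data (the square-root transform is a common choice, not necessarily the one used in the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fcst = rng.gamma(2.0, 2.0, 500)             # NWP precipitation predictor (mm)
obs = fcst * rng.lognormal(0.0, 0.5, 500)   # synthetic "observed" precipitation

thresholds = np.array([0.1, 1.0, 5.0, 10.0])
# One training row per (case, threshold); target = 1 if obs <= threshold.
X = np.column_stack([
    np.repeat(np.sqrt(fcst), thresholds.size),
    np.tile(np.sqrt(thresholds), fcst.size),
])
y = (np.repeat(obs, thresholds.size) <= np.tile(thresholds, fcst.size)).astype(int)

elr = LogisticRegression(max_iter=1000).fit(X, y)
# P(obs <= 5 mm) given a 3 mm forecast:
print(elr.predict_proba([[np.sqrt(3.0), np.sqrt(5.0)]])[0, 1])
```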
Jung, Hyukjin; Jeong, Ki-Hun
2009-08-17
A microfabricated compound eye, comparable to a natural compound eye, shows a spherical arrangement of integrated optical units called artificial ommatidia, each consisting of a self-aligned microlens and waveguide. Increasing the waveguide length is imperative to obtain high resolution images through an artificial compound eye for wide field-of-view imaging as well as fast motion detection. This work presents an effective method for increasing the waveguide length of an artificial ommatidium using a laser induced self-writing process in a photosensitive polymer resin. The numerical and experimental results show the uniform formation of waveguides and an increase of waveguide length to over 850 µm. © 2009 Optical Society of America
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements are presented for a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to partition the field into an orderly arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
On the Measurements of Numerical Viscosity and Resistivity in Eulerian MHD Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rembiasz, Tomasz; Obergaulinger, Martin; Cerdá-Durán, Pablo
2017-06-01
We propose a simple ansatz for estimating the value of the numerical resistivity and the numerical viscosity of any Eulerian MHD code. We test this ansatz with the help of simulations of the propagation of (magneto)sonic waves, Alfvén waves, and the tearing mode (TM) instability using the MHD code Aenus. By comparing the simulation results with analytical solutions of the resistive-viscous MHD equations and an empirical ansatz for the growth rate of TMs, we measure the numerical viscosity and resistivity of Aenus. The comparison shows that the fast magnetosonic speed and wavelength are the characteristic velocity and length, respectively, of the aforementioned (relatively simple) systems. We also determine the dependence of the numerical viscosity and resistivity on the time integration method, the spatial reconstruction scheme and (to a lesser extent) the Riemann solver employed in the simulations. From the measured results, we infer the numerical resolution (as a function of the spatial reconstruction method) required to properly resolve the growth and saturation level of the magnetic field amplified by the magnetorotational instability in the post-collapsed core of massive stars. Our results show that it is most advantageous to resort to ultra-high-order methods (e.g., the ninth-order monotonicity-preserving method) to tackle this problem properly, in particular, in three-dimensional simulations.
A. M. S. Smith; N. A. Drake; M. J. Wooster; A. T. Hudak; Z. A. Holden; C. J. Gibbons
2007-01-01
Accurate production of regional burned area maps is necessary to reduce uncertainty in emission estimates from African savannah fires. Numerous methods have been developed that map burned and unburned surfaces. These methods are typically applied to coarse spatial resolution (1 km) data to produce regional estimates of the area burned, while higher spatial resolution...
On the use of kinetic energy preserving DG-schemes for large eddy simulation
NASA Astrophysics Data System (ADS)
Flad, David; Gassner, Gregor
2017-12-01
Recently, element based high order methods such as Discontinuous Galerkin (DG) methods and the closely related flux reconstruction (FR) schemes have become popular for compressible large eddy simulation (LES). Element based high order methods with Riemann solver based interface numerical flux functions offer an interesting dispersion-dissipation behavior for multi-scale problems: dispersion errors are very low for a broad range of scales, while dissipation errors are very low for well resolved scales and very high for scales close to the Nyquist cutoff. In some sense, the inherent numerical dissipation caused by the interface Riemann solver acts as a filter of high frequency solution components. This observation motivates the trend that element based high order methods with Riemann solvers are used without an explicit LES model added. Only the high frequency type inherent dissipation caused by the Riemann solver at the element interfaces is used to account for the missing sub-grid scale dissipation. Due to under-resolution of vortical dominated structures typical for LES type setups, element based high order methods suffer from stability issues caused by aliasing errors of the non-linear flux terms. A very common strategy to fight these aliasing issues (and instabilities) is so-called polynomial de-aliasing, where interpolation is exchanged with projection based on an increased number of quadrature points. In this paper, we start with this common no-model or implicit LES (iLES) DG approach with polynomial de-aliasing and Riemann solver dissipation and review its capabilities and limitations. We find that the strategy gives excellent results, but only when the resolution is such that about 40% of the dissipation is resolved. For more realistic, coarser resolutions used in classical LES, e.g. of industrial applications, the iLES DG strategy becomes quite inaccurate. We show that there is no obvious fix to this strategy, as adding, for instance, a sub-grid-scale model on top doesn't change much or in the worst case decreases the fidelity even further. Finally, the core of this work is a novel LES strategy based on split form DG methods that are kinetic energy preserving. The scheme offers excellent stability with full control over the amount and shape of the added artificial dissipation. This premise is the main idea of the work, and we will assess the LES capabilities of the novel split form DG approach when applied to shock-free, moderate Mach number turbulence. We will demonstrate that the novel DG LES strategy offers similar accuracy as the iLES methodology for well resolved cases, but strongly increases fidelity in case of more realistic coarse resolutions.
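For readers new to split forms, the canonical one-dimensional example conveys the mechanism: the flux derivative of Burgers' equation can be written as a skew-symmetric average of its divergence and advective forms, and discretizing that average (rather than the plain divergence form) is what makes the discrete kinetic energy non-increasing. This is the standard textbook identity, not the paper's full DG formulation:

```latex
% Burgers' equation: u_t + (u^2/2)_x = 0. The kinetic-energy-stable split form
\[
  \frac{\partial}{\partial x}\!\left(\frac{u^{2}}{2}\right)
  \;=\;
  \frac{1}{3}\,\frac{\partial\,(u^{2})}{\partial x}
  \;+\;
  \frac{1}{3}\,u\,\frac{\partial u}{\partial x}
\]
% holds exactly in the continuum; discretizing the right-hand side makes
% d/dt (1/2) \sum_i u_i^2 vanish for periodic summation-by-parts operators,
% removing the aliasing-driven energy growth discussed above.
```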
Truong, Trong-Kha; Guidon, Arnaud
2014-01-01
Purpose: To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods: The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results: The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion: The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457
Effects of sounding temperature assimilation on weather forecasting - Model dependence studies
NASA Technical Reports Server (NTRS)
Ghil, M.; Halem, M.; Atlas, R.
1979-01-01
In comparing various methods for the assimilation of remote sounding information into numerical weather prediction (NWP) models, the problem of model dependence of the different results obtained becomes important. The paper investigates two aspects of the model dependence question: (1) the effect of increasing horizontal resolution within a given model on the assimilation of sounding data, and (2) the effect of using two entirely different models with the same assimilation method and sounding data. Tentative conclusions reached are: first, that model improvement, as exemplified by increased resolution, can act in the same direction as judicious 4-D assimilation of remote sounding information to improve 2-3 day numerical weather forecasts; second, that the time-continuous 4-D methods developed at GLAS have similar beneficial effects when used in the assimilation of remote sounding information into NWP models with very different numerical and physical characteristics.
Automated aberration correction of arbitrary laser modes in high numerical aperture systems.
Hering, Julian; Waller, Erik H; Von Freymann, Georg
2016-12-12
Controlling the point-spread function in three-dimensional laser lithography is crucial for fabricating structures with the highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing to create well defined doughnut modes, bottle beams or multi-foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for amplitude and phase of the pupil function, our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography, applications such as optical tweezers and microscopy might also benefit from the presented method.
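The core Gerchberg-Saxton iteration that the abstract's modified algorithm builds on is compact enough to sketch: alternate between the SLM (pupil) plane and the focal plane, each time keeping the computed phase and re-imposing the known amplitude. The FFT-based propagation model and parameters are illustrative; the paper's aberration-compensation modifications are not reproduced.

```python
import numpy as np

def gerchberg_saxton(target_amp, source_amp, iters=50):
    """Classic GS phase retrieval between two planes linked by a Fourier
    transform; returns the pupil-plane phase that approximates target_amp."""
    phase = np.random.default_rng(1).uniform(0, 2 * np.pi, source_amp.shape)
    for _ in range(iters):
        focal = np.fft.fft2(source_amp * np.exp(1j * phase))
        focal = target_amp * np.exp(1j * np.angle(focal))  # keep phase, fix amplitude
        pupil = np.fft.ifft2(focal)
        phase = np.angle(pupil)                            # keep phase, fix amplitude
    return phase

# Example: shape a flat-top square spot from uniform illumination.
n = 128
source = np.ones((n, n))
target = np.zeros((n, n)); target[48:80, 48:80] = 1.0
slm_phase = gerchberg_saxton(np.fft.ifftshift(target), source)
```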
NASA Astrophysics Data System (ADS)
Drabik, Timothy J.; Lee, Sing H.
1986-11-01
The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec is made for SLMs with 1000 x 1000 resolution operating at 1-MHz frame rates.
NASA Astrophysics Data System (ADS)
Marras, Simone; Giraldo, Frank
2015-04-01
The prediction of extreme weather sufficiently ahead of its occurrence impacts society as a whole and coastal communities specifically (e.g. Hurricane Sandy, which impacted the eastern seaboard of the U.S. in the fall of 2012). With the final goal of resolving hurricanes at very high resolution and numerical accuracy, we have been developing the Non-hydrostatic Unified Model of the Atmosphere (NUMA) to solve the Euler and Navier-Stokes equations by arbitrary high-order element-based Galerkin methods on massively parallel computers. NUMA is a unified model with respect to the following criteria: (a) it is based on unified numerics, in that element-based Galerkin methods allow the user to choose between continuous (spectral elements, CG) or discontinuous Galerkin (DG) methods and from a large spectrum of time integrators; (b) it is unified across scales, in that it can solve flow in limited-area mode (flow in a box) or in global mode (flow on the sphere). NUMA is the dynamical core that powers the U.S. Naval Research Laboratory's next-generation global weather prediction system NEPTUNE (Navy's Environmental Prediction sysTem Utilizing the NUMA corE). Because the solution of the Euler equations by high order methods is prone to instabilities that must be damped in some way, we approach the problem of stabilization via an adaptive Large Eddy Simulation (LES) scheme meant to treat such instabilities by modeling the sub-grid scale features of the flow. The novelty of our effort lies in the extension to high order spectral elements for low Mach number stratified flows of a method that was originally designed for low order, adaptive finite elements in the high Mach number regime [1]. The Euler equations are regularized by means of a dynamically adaptive stress tensor that is proportional to the residual of the unperturbed equations. Its effect is close to none where the solution is sufficiently smooth, whereas it increases elsewhere, with a direct contribution to the stabilization of the otherwise oscillatory solution. As a first step toward the Large Eddy Simulation of a hurricane, we verify the model via a high-order and high resolution idealized simulation of deep convection on the sphere. References: [1] M. Nazarov and J. Hoffman (2013), "Residual-based artificial viscosity for simulation of turbulent compressible flow using adaptive finite element methods", Int. J. Numer. Methods Fluids, 71:339-357.
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
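The similarity score named above rests on the longest common subsequence, which has a short dynamic-programming form. A sketch under the assumption that both the trajectory segment and the candidate path are given as sequences of road-link IDs (the paper's exact scoring details are not reproduced):

```python
def lcs_length(a, b):
    """Longest-common-subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def similarity(traj_links, path_links):
    """Normalized LCS score in [0, 1] between matched link sequences."""
    return lcs_length(traj_links, path_links) / max(len(traj_links), len(path_links))

print(similarity(["e1", "e2", "e5", "e7"], ["e1", "e2", "e6", "e7"]))  # 0.75
```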
A Review of Element-Based Galerkin Methods for Numerical Weather Prediction
2015-04-01
...with body forces to model the effects of gravity and the Earth's rotation (i.e., the Coriolis force). Although the gravitational force varies with both... more phenomena (e.g., resolving non-hydrostatic effects, incorporating more complex moisture parameterizations), their appetite for High Performance... operation effectively). For instance, the ST-based model NOGAPS, used by the U.S. Navy, could not scale beyond 150 processes at typical resolutions [119]
Cavitation erosion prediction based on analysis of flow dynamics and impact load spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihatsch, Michael S., E-mail: michael.mihatsch@aer.mw.tum.de; Schmidt, Steffen J.; Adams, Nikolaus A.
2015-10-15
Cavitation erosion is the consequence of repeated collapse-induced high pressure-loads on a material surface. The present paper assesses the prediction of impact load spectra of cavitating flows, i.e., the rate and intensity distribution of collapse events based on a detailed analysis of flow dynamics. Data are obtained from a numerical simulation which employs a density-based finite volume method, taking into account the compressibility of both phases, and resolves collapse-induced pressure waves. To determine the spectrum of collapse events in the fluid domain, we detect and quantify the collapse of isolated vapor structures. As reference configuration we consider the expansion of a liquid into a radially divergent gap which exhibits unsteady sheet and cloud cavitation. Analysis of simulation data shows that global cavitation dynamics and dominant flow events are well resolved, even though the spatial resolution is too coarse to resolve individual vapor bubbles. The inviscid flow model recovers increasingly fine-scale vapor structures and collapses with increasing resolution. We demonstrate that frequency and intensity of these collapse events scale with grid resolution. Scaling laws based on two reference lengths are introduced for this purpose. We show that upon applying these laws impact load spectra recorded on experimental and numerical pressure sensors agree with each other. Furthermore, correlation between experimental pitting rates and collapse-event rates is found. Locations of high maximum wall pressures and high densities of collapse events near walls obtained numerically agree well with areas of erosion damage in the experiment. The investigation shows that impact load spectra of cavitating flows can be inferred from flow data that captures the main vapor structures and wave dynamics without the need for resolving all flow scales.
The technology and biology of single-cell RNA sequencing.
Kolodziejczyk, Aleksandra A; Kim, Jong Kyoung; Svensson, Valentine; Marioni, John C; Teichmann, Sarah A
2015-05-21
The differences between individual cells can have profound functional consequences, in both unicellular and multicellular organisms. Recently developed single-cell mRNA-sequencing methods enable unbiased, high-throughput, and high-resolution transcriptomic analysis of individual cells. This provides an additional dimension to transcriptomic information relative to traditional methods that profile bulk populations of cells. Already, single-cell RNA-sequencing methods have revealed new biology in terms of the composition of tissues, the dynamics of transcription, and the regulatory relationships between genes. Rapid technological developments at the level of cell capture, phenotyping, molecular biology, and bioinformatics promise an exciting future with numerous biological and medical applications. Copyright © 2015 Elsevier Inc. All rights reserved.
An Investigation into Solution Verification for CFD-DEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fullmer, William D.; Musser, Jordan
This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically the National Energy Technology Laboratory's (NETL) open source MFiX code (MFiX-DEM) with a diffusion-based particle-to-continuum filtering scheme. In particular, this study focused on determining whether the numerical method has a solution in the high-resolution limit, where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered as the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate "grid-free" solution in the limit of infinite resolution. The results show that the diffusion-based scheme does yield a converging solution. However, the convergence is more complicated than encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on top of globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty. By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be on the same order of magnitude as ensemble or time averaging uncertainties. By testing different drag laws, almost all cases studied show that the model form uncertainty in this one very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid of roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly set at a constant of six particle diameters. A few exploratory tests were performed to show that similar convergence behavior was observed for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail because determining an appropriate filter size is really a validation question which must be determined by comparison to experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
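The over-determined extrapolation step lends itself to a compact sketch: fit phi(h) = phi0 + C·h^p to the system response at several grid spacings and read off phi0 as the grid-free estimate. The numbers below are made up for illustration; only the regression pattern follows the report.

```python
import numpy as np
from scipy.optimize import least_squares

h = np.array([4.0, 2.0, 1.0, 0.5])             # grid spacing (particle diameters)
phi = np.array([102.1, 100.6, 100.2, 100.05])  # illustrative pressure drops

def residual(params):
    phi0, C, p = params
    return phi0 + C * h ** p - phi             # model: phi(h) = phi0 + C*h^p

fit = least_squares(residual, x0=[phi[-1], 1.0, 2.0])
phi0, C, p = fit.x
print(f"grid-free estimate {phi0:.3f}, observed order {p:.2f}")
```

With four or more resolutions the three-parameter fit is over-determined, which is why the report recommends at least four grids rather than the classical three-point Richardson formula.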
Unlocking the spatial inversion of large scanning magnetic microscopy datasets
NASA Astrophysics Data System (ADS)
Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.
2013-12-01
Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
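The non-negativity-constrained fit at the heart of the spatial-domain inversion can be illustrated with SciPy's basic NNLS solver; the TNT algorithm itself is not in SciPy, and the matrix and data below are synthetic stand-ins for the dipole sensitivity matrix and measured field map.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
G = np.abs(rng.normal(size=(200, 50)))         # stand-in sensitivity matrix
m_true = np.zeros(50)
m_true[[5, 17, 33]] = [2.0, 1.0, 3.0]          # three magnetized sources
b = G @ m_true + 1e-3 * rng.normal(size=200)   # noisy measured field

m_est, rnorm = nnls(G, b)                      # solve min ||G m - b|| s.t. m >= 0
print(np.flatnonzero(m_est > 0.1))             # indices of recovered moments
```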
Weighted small subdomain filtering technology
NASA Astrophysics Data System (ADS)
Tai, Zhenhua; Zhang, Fengxu; Zhang, Fengqin; Zhang, Xingzhou; Hao, Mengcheng
2017-09-01
A high-resolution method to define the horizontal edges of gravity sources is presented by improving the three-directional small subdomain filtering (TDSSF). The proposed method is the weighted small subdomain filtering (WSSF). The WSSF uses a numerical difference instead of the phase conversion in the TDSSF to reduce the computational complexity. To make the WSSF less sensitive to noise, the numerical difference is combined with an averaging algorithm. Unlike the TDSSF, the WSSF uses a weighted sum to integrate the numerical difference results along four directions into one contour, making its interpretation more convenient and accurate. The locations of tightened gradient belts are used to define the edges of sources in the WSSF result. The proposed method is tested on synthetic data. The test results show that the WSSF delineates the horizontal edges of sources more clearly and correctly, even when the sources interfere with one another and the data are corrupted by random noise. Finally, the WSSF and two other known methods are each applied to a real data set. The edges detected by the WSSF are sharper and more accurate.
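A rough Python sketch of the WSSF pipeline as described: numerical differences along four directions, a small local average to suppress noise, and a weighted sum into a single edge contour. The difference stencils, window size, and unit weights are assumptions for illustration, not the published parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wssf_edges(g, weights=(1.0, 1.0, 1.0, 1.0), win=3):
    """Weighted sum of locally averaged directional differences of a gridded
    gravity field g; edges show up as tightened gradient belts."""
    d0   = np.abs(np.roll(g, -1, axis=1) - np.roll(g, 1, axis=1))   # E-W
    d90  = np.abs(np.roll(g, -1, axis=0) - np.roll(g, 1, axis=0))   # N-S
    d45  = np.abs(np.roll(np.roll(g, -1, 0), -1, 1) - np.roll(np.roll(g, 1, 0), 1, 1))
    d135 = np.abs(np.roll(np.roll(g, -1, 0), 1, 1) - np.roll(np.roll(g, 1, 0), -1, 1))
    smoothed = (uniform_filter(d, size=win) for d in (d0, d45, d90, d135))
    return sum(w * s for w, s in zip(weights, smoothed))
```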
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
NASA Astrophysics Data System (ADS)
Taneja, Ankur; Higdon, Jonathan
2018-01-01
A high-order spectral element discontinuous Galerkin method is presented for simulating immiscible two-phase flow in petroleum reservoirs. The governing equations involve a coupled system of strongly nonlinear partial differential equations for the pressure and fluid saturation in the reservoir. A fully implicit method is used, with high-order accurate time integration using an implicit Rosenbrock method. Numerical tests give the first demonstration of high order hp spatial convergence results for multiphase flow in petroleum reservoirs with industry-standard relative permeability models. High order convergence is shown formally for spectral elements with up to 8th order polynomials for both homogeneous and heterogeneous permeability fields. Numerical results are presented for multiphase fluid flow in heterogeneous reservoirs with complex geometric or geologic features using up to 11th order polynomials. Robust, stable simulations are presented for heterogeneous geologic features, including globally heterogeneous permeability fields, anisotropic permeability tensors, broad regions of low permeability, high-permeability channels, thin shale barriers and thin high-permeability fractures. A major result of this paper is the demonstration that the resolution of the high order spectral element method may be exploited to achieve accurate results utilizing a simple Cartesian mesh for non-conforming geological features. Eliminating the need to mesh to the boundaries of geological features greatly simplifies the workflow for petroleum engineers testing multiple scenarios in the face of uncertainty in the subsurface geology.
NASA Astrophysics Data System (ADS)
Brasseur, P.; Verron, J. A.; Djath, B.; Duran, M.; Gaultier, L.; Gourdeau, L.; Melet, A.; Molines, J. M.; Ubelmann, C.
2014-12-01
The upcoming high-resolution SWOT altimetry satellite will provide an unprecedented description of the ocean dynamic topography for studying sub- and meso-scale processes in the ocean. But there is still much uncertainty about the signal that will be observed. Many scientific questions remain unresolved about the observability of altimetry at very high resolution and about the dynamical role of the ocean meso- and submesoscales. In addition, SWOT data will raise specific problems due to the size of the data flows. These issues will probably impact the data assimilation approaches for future scientific or operational oceanography applications. In this work, we propose to use a high-resolution numerical model of the Western Pacific Solomon Sea as a regional laboratory to explore such observability and dynamical issues, as well as new data assimilation challenges raised by SWOT. The Solomon Sea connects subtropical water masses to the equatorial ones through the low latitude western boundary currents and could potentially modulate the tropical Pacific climate. In the South Western Pacific, the Solomon Sea exhibits very intense eddy kinetic energy levels, while relatively little is known about the mesoscale and submesoscale activities in this region. The complex bathymetry of the region, complicated by the presence of narrow straits and numerous islands, raises specific challenges. So far, a Solomon Sea model configuration has been set up at 1/36° resolution. Numerical simulations have been performed to explore the meso- and submesoscale dynamics. The numerical solutions, which have been validated against available in situ data, show the development of small scale features, eddies, fronts and filaments. Spectral analysis reveals a behavior that is consistent with the SQG theory. There is clear evidence of an energy cascade from the small scales including the submesoscales, although those submesoscales are only partially resolved by the model. In parallel, investigations have been conducted using image assimilation approaches in order to explore the richness of high-resolution altimetry missions. These investigations illustrate the potential benefit of combining tracer fields (SST, SSS and spiciness) with high-resolution SWOT data to estimate the fine-scale circulation.
SIL-STED microscopy technique enhancing super-resolution of fluorescence microscopy
NASA Astrophysics Data System (ADS)
Park, No-Cheol; Lim, Geon; Lee, Won-sup; Moon, Hyungbae; Choi, Guk-Jong; Park, Young-Pil
2017-08-01
We have characterized a new type of STED microscope which combines a high numerical aperture (NA) optical head with a solid immersion lens (SIL); we call it a SIL-STED microscope. The advantage of the SIL-STED microscope is that the high NA of the SIL makes it superior to a general STED microscope in lateral resolution, thus overcoming the optical diffraction limit at the macromolecular level and enabling advanced super-resolution imaging of cell surface or cell membrane structure and function. This study presents the first implementation of higher-NA illumination in a STED microscope, achieving a fluorescence lateral resolution of about 40 nm. The refractive index of the SIL, which is made of KTaO3, is about 2.23 and 2.20 at wavelengths of 633 nm and 780 nm, which are used for excitation and depletion in STED imaging, respectively. Based on vector diffraction theory, the electric field focused by the SIL-STED microscope is numerically calculated so that the point spread function of the microscope and the expected resolution can be analyzed. For further investigation, fluorescence imaging of nanoscale fluorescent beads is performed to show the improved performance of the technique.
NASA Astrophysics Data System (ADS)
Lucas-Serrano, A.; Font, J. A.; Ibáñez, J. M.; Martí, J. M.
2004-12-01
We assess the suitability of a recent high-resolution central scheme developed by Kurganov and Tadmor for the solution of the relativistic hydrodynamic equations. The novelty of this approach lies in the absence of Riemann solvers in the solution procedure. The computations we present are performed in one and two spatial dimensions in Minkowski spacetime. Standard numerical experiments such as shock tubes and the relativistic flat-faced step test are performed. As an astrophysical application the article includes two-dimensional simulations of the propagation of relativistic jets using both Cartesian and cylindrical coordinates. The simulations reported clearly show the capability of the numerical scheme to yield satisfactory results, with an accuracy comparable to that obtained by the so-called high-resolution shock-capturing schemes based upon Riemann solvers (Godunov-type schemes), even well inside the ultrarelativistic regime. Such a central scheme can be straightforwardly applied to hyperbolic systems of conservation laws for which the characteristic structure is not explicitly known, or in cases where a numerical computation of the exact solution of the Riemann problem is prohibitively expensive. Finally, we present comparisons with results obtained using various Godunov-type schemes as well as with those obtained using other high-resolution central schemes which have recently been reported in the literature.
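The Riemann-solver-free flavor of such central schemes is visible even in their first-order form, where the numerical flux needs only a local maximal wave speed. A Python sketch for the nonrelativistic Burgers equation (the cited scheme adds piecewise-linear reconstruction and applies to the relativistic system, neither of which is reproduced here):

```python
import numpy as np

def central_step(u, dx, dt):
    """First-order central (local Lax-Friedrichs type) step for Burgers'
    equation u_t + (u^2/2)_x = 0 on a periodic grid: no Riemann solver,
    only the local wave-speed bound a at each cell face."""
    u_r = np.roll(u, -1)                               # right neighbor
    a = np.maximum(np.abs(u), np.abs(u_r))             # local speed estimate
    flux = 0.25 * (u ** 2 + u_r ** 2) - 0.5 * a * (u_r - u)
    return u - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0, 1, 400, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.5
dx = x[1] - x[0]
for _ in range(200):
    u = central_step(u, dx, dt=0.5 * dx / np.max(np.abs(u)))  # CFL 0.5
```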
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.
2017-01-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a higher resolution and quality lateral image. Layer by layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density and background noise problems together. In experiments, a roughly threefold improvement in lateral resolution, reaching 7.81 µm and 2.19 µm using sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint and retina layer, we used the multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues. PMID:29188089
Numerical solution of the exterior oblique derivative BVP using the direct BEM formulation
NASA Astrophysics Data System (ADS)
Čunderlík, Róbert; Špir, Róbert; Mikula, Karol
2016-04-01
The fixed gravimetric boundary value problem (FGBVP) represents an exterior oblique derivative problem for the Laplace equation. A direct formulation of the boundary element method (BEM) for the Laplace equation leads to a boundary integral equation (BIE) where a harmonic function is represented as a superposition of the single-layer and double-layer potentials. Such a potential representation is applied to obtain a numerical solution of the FGBVP. The oblique derivative problem is treated by a decomposition of the gradient of the unknown disturbing potential into its normal and tangential components. Our numerical scheme uses collocation with linear basis functions. It involves a triangulated discretization of the Earth's surface as our computational domain, considering its complicated topography. To achieve high-resolution numerical solutions, parallel implementations using MPI subroutines as well as an iterative elimination of far zones' contributions are performed. Numerical experiments present a reconstruction of a harmonic function above the Earth's topography given by the spherical harmonic approach, namely by the EGM2008 geopotential model up to degree 2160. The SRTM30 global topography model is used to approximate the Earth's surface by the triangulated discretization. The obtained BEM solution with a resolution of 0.05 deg (12,960,002 nodes) is compared with EGM2008. The standard deviation of the residuals, 5.6 cm, indicates a good agreement. The largest residuals are, as expected, in high mountainous regions. They are negative, reaching up to -0.7 m in the Himalayas and about -0.3 m in the Andes and the Rocky Mountains. A local refinement in the area of Slovakia confirms an improvement of the numerical solution in this mountainous region despite the fact that the Earth's topography is here considered in more detail.
MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models
NASA Astrophysics Data System (ADS)
Ullrich, P.; Jablonowski, C.
2011-12-01
The desire for increasingly accurate predictions of the atmosphere has driven numerical models to smaller and smaller resolutions, while simultaneously exponentially driving up the cost of existing numerical models. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while only requiring a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume based approach in developing an atmospheric dynamical core on the cubed-sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse at future work with our model.
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2015-12-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods at very high spatial resolutions. The atmospheric fluid equations are discretized by continuous/discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At global horizontal resolutions below 10 km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of meso-scale test cases to validate the performance of the SNFEM applied in the vertical. Internal gravity wave, mountain wave, convective, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
Dual-axis confocal microscope for high-resolution in vivo imaging
Wang, Thomas D.; Mandella, Michael J.; Contag, Christopher H.; Kino, Gordon S.
2007-01-01
We describe a novel confocal microscope that uses separate low-numerical-aperture objectives, with the illumination and collection axes crossed at an angle θ from the midline. This architecture collects images in scattering media with high transverse and axial resolution, long working distance, large field of view, and reduced noise from scattered light. We measured transverse and axial (FWHM) resolutions of 1.3 and 2.1 μm, respectively, in free space, and confirmed subcellular resolution in excised esophageal mucosa. The optics may be scaled to millimeter dimensions and fiber-coupled for collection of high-resolution images in vivo. PMID:12659264
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method achieves high accuracy while requiring little computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages: Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic-interpolated image, while Stage 2 further refines the Stage 1 result. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method realizes computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.
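The core idea, a forest that partitions patch space plus one linear regressor per leaf, can be sketched as follows. This is a toy stand-in under stated assumptions, not the FIRF implementation: the patch data are synthetic, patch extraction and the two-stage refinement are omitted, and the forest is fit on one target component purely to obtain a partition.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Toy stand-in for the FIRF idea: a random forest partitions the patch
# space, and a separate linear map is learned inside each leaf.
# X holds low-resolution patch features, Y the corresponding HR patches.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 25))                 # e.g. 5x5 LR patches (toy data)
Y = X @ rng.normal(size=(25, 16)) + 0.1 * np.tanh(X[:, :16])  # toy HR targets

forest = RandomForestRegressor(n_estimators=8, max_leaf_nodes=64, random_state=0)
forest.fit(X, Y[:, 0])     # the forest is only used to partition patch space

# Fit one ridge regressor per (tree, leaf) cell of the partition.
leaves = forest.apply(X)                        # (n_samples, n_trees) leaf ids
models = {}
for t in range(leaves.shape[1]):
    for leaf in np.unique(leaves[:, t]):
        idx = leaves[:, t] == leaf
        models[(t, leaf)] = Ridge(alpha=1e-2).fit(X[idx], Y[idx])

def predict(x):
    """Average the per-leaf linear predictions over all trees."""
    ids = forest.apply(x.reshape(1, -1))[0]
    preds = [models[(t, leaf)].predict(x.reshape(1, -1))[0]
             for t, leaf in enumerate(ids)]
    return np.mean(preds, axis=0)

print(predict(X[0]).shape)                      # (16,) -> a 4x4 HR patch
```

Averaging the per-leaf linear predictions over the trees is what makes the accuracy/cost trade-off tunable: fewer trees and stages give the fast FIRF(1, 1) end of the spectrum, more give FIRF(3, 2).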
Nonlinear ultrasonic imaging with X wave
NASA Astrophysics Data System (ADS)
Du, Hongwei; Lu, Wei; Feng, Huanqing
2009-10-01
X waves have a large depth of field and may have important applications in high-frame-rate (HFR) ultrasonic imaging. However, HFR systems suffer from low spatial resolution. In this paper, a study of nonlinear imaging with X waves is presented to improve the resolution. A theoretical description of a realizable nonlinear X wave is reported. The nonlinear field is simulated by solving the KZK nonlinear wave equation with a time-domain difference method. The results show that the second harmonic field of the X wave has a narrower mainlobe and lower sidelobes than the fundamental field. In order to evaluate the imaging performance, an imaging model involving numerical calculation of the KZK equation, the Rayleigh-Sommerfeld integral, band-pass filtering, and envelope detection is constructed to obtain 2D fundamental and second harmonic images of scatterers in a tissue-like medium. The results indicate that if the X wave is used, the harmonic image has higher spatial resolution throughout the entire imaging region than the fundamental image, but higher sidelobes occur compared to conventional focused imaging. An HFR imaging method with higher spatial resolution is thus feasible, provided an apodization method is used to suppress sidelobes.
3D radar wavefield tomography of comet interiors
NASA Astrophysics Data System (ADS)
Sava, Paul; Asphaug, Erik
2018-04-01
Answering fundamental questions about the origin and evolution of small planetary bodies hinges on our ability to image their surface and interior structure in detail and at high resolution. The interior structure is not easily accessible without systematic imaging using, e.g., radar transmission and reflection data from multiple viewpoints, as in medical tomography. Radar tomography can be performed using methodology adapted from terrestrial exploration seismology. Our feasibility study primarily focuses on full wavefield methods that facilitate high-quality imaging of small-body interiors. We consider the case of a monostatic system (co-located transmitters and receivers) operated in various frequency bands between 5 and 15 MHz, from a spacecraft in slow polar orbit around a spinning comet nucleus. Using realistic numerical experiments, we demonstrate that wavefield techniques can generate high-resolution tomograms of comet nuclei with arbitrary shape and complex interior properties.
A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers
NASA Astrophysics Data System (ADS)
Bassett, Gene Marcel
1993-01-01
Instabilities affecting the propagation of supersonic gaseous jets have been studied using high-resolution computer simulations with the Piecewise-Parabolic Method (PPM). These results are discussed in relation to jets from galactic nuclei. The studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations at different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers, and densities, perturbed by incident sound waves, imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is tau_g = (546 +/- 24) (M/4)^1.7 (A_pert/0.02)^-0.4 delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_pert is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, v_mode = c_ext v_relative/(c_jet + c_ext). For high-resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even perturbation is initially higher in amplitude than the odd perturbation. For low-resolution simulations, the odd mode dominates even for a stronger even-mode perturbation. In high-resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet. In low-resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
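The quoted fit is easy to evaluate directly; the sample parameter values below are purely illustrative and not taken from the thesis.

```python
# Growth time of the kink modes from the quoted fit:
# tau_g = (546 +/- 24) * (M/4)**1.7 * (A_pert/0.02)**-0.4 * delta/c.
# The sample values of M, A_pert and delta/c are illustrative only.
def tau_g(M, A_pert, delta_over_c, C=546.0):
    return C * (M / 4.0) ** 1.7 * (A_pert / 0.02) ** -0.4 * delta_over_c

print(tau_g(4.0, 0.02, 1.0))   # 546.0 at the reference point
print(tau_g(8.0, 0.02, 1.0))   # ~1774: higher Mach number, slower disruption
```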
Filter and Grid Resolution in DG-LES
NASA Astrophysics Data System (ADS)
Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman
2017-11-01
The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is set proportional to the grid spacing. In this work, the DG method is combined with a subgrid scale (SGS) closure equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of the spectral approximation increases. Several cases of LES of a three-dimensional temporally developing mixing layer are appraised, and a systematic parametric study is conducted to investigate the effects of the grid resolution, the filter width, and the order of the spectral discretization. Comparative assessments are also made via the use of high-resolution direct numerical simulation (DNS) data.
NASA Astrophysics Data System (ADS)
Lukowski, Mateusz; Usowicz, Boguslaw; Sagan, Joanna; Szlazak, Radoslaw; Gluba, Lukasz; Rojek, Edyta
2017-04-01
Soil moisture is an important parameter in many environmental studies, as it influences the exchange of water and energy at the interface between the land surface and the atmosphere. Accurate assessment of the spatial and temporal variations of soil moisture is crucial for numerous studies, from the small scale of a single field, through catchments, mesoscale basins, and oceans, up to the global water cycle. Despite numerous advantages, such as fine accuracy (undisturbed by clouds or daytime conditions) and good temporal resolution, passive microwave remote sensing of soil moisture, e.g. by SMOS and SMAP, is not applicable at a small scale, simply because of its too coarse spatial resolution. On the contrary, thermal infrared-based methods of soil moisture retrieval have good spatial resolution, but are often disturbed by cloud and vegetation interference or night effects. Methods based on point measurements, collected in situ by monitoring stations or during field campaigns, are sometimes called "ground truth" and may serve as a reference for remote sensing, of course after some up-scaling and approximation procedures that are, unfortunately, a potential source of error. The presented research concerns a synergistic approach that joins two remote sensing methods, passive microwave and thermal infrared, supported by in situ measurements. The microwave brightness temperature of the soil was measured by ELBARA, a 1.4 GHz radiometer installed on a 6-m-high tower at the Bubnow test site in Poland. Thermal inertia around the tower was modelled using a statistical-physical model whose inputs were soil physical properties, water content, albedo, and surface temperatures measured by an infrared pyrometer directed at the same footprint as ELBARA. The results of this method were compared to in situ data obtained during several field campaigns and from stationary agrometeorological stations. The approach seems reasonable, as both variables, brightness temperature and thermal inertia, strongly depend on soil moisture. Although the presented research focused on modelling at a small, 4 ha test site, the method is promising for larger scales as well, due to the similarities between ELBARA and SMOS and between the pyrometer and satellite imaging spectrometers (Landsat, Sentinel, etc.). The approach merges the high accuracy of passive microwave sensing with the good spatial resolution of thermal infrared methods. The work was partially funded under two ESA projects: 1) "ELBARA_PD (Penetration Depth)" No. 4000107897/13/NL/KML, funded by the Government of Poland through an ESA-PECS contract (Plan for European Cooperating States); 2) "Technical Support for the fabrication and deployment of the radiometer ELBARA-III in Bubnow, Poland" No. 4000113360/15/NL/FF/gp.
NASA Astrophysics Data System (ADS)
Liu, Chao; Wang, Famei; Zheng, Shijie; Sun, Tao; Lv, Jingwei; Liu, Qiang; Yang, Lin; Mu, Haiwei; Chu, Paul K.
2016-07-01
A highly birefringent photonic crystal fibre surface plasmon resonance sensor is proposed and characterized. The birefringence of the sensor is numerically analyzed by the finite-element method. In the numerical simulation, the resonance wavelength can be positioned directly at the point where the birefringence changes abruptly, and the depth of this abrupt change reflects the intensity of the excited surface plasmon. Consequently, the approach can accurately locate the resonance peak of the system without analyzing the loss spectrum. The simulated average sensitivity is as high as 1131 nm/RIU, corresponding to a resolution of 1 × 10-4 RIU for this sensor. Results obtained via the approach not only show polarization independence and reduced noble metal consumption, but also reveal better performance in terms of accuracy and computational efficiency.
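The quoted resolution follows from the standard conversion between spectral sensitivity and refractive-index resolution; the 0.1 nm minimum detectable wavelength shift used below is the conventional assumption in SPR sensor papers, not a value stated in this abstract.

```python
# Refractive-index resolution of a spectral SPR sensor:
# R = d_lambda_min / S, with S the spectral sensitivity.
S = 1131.0            # nm / RIU, from the reported simulation
d_lambda_min = 0.1    # nm, conventional detector-limited shift (assumed)
print(d_lambda_min / S)   # ~8.8e-5 RIU, consistent with the quoted 1e-4 RIU
```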
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Chang, Sin-Chung; Yu, Sheng-Tao; Wang, Xiao-Yen; Loh, Ching-Yuen; Jorgenson, Philip C. E.
1999-01-01
In this overview paper, we review the basic principles of the method of space-time conservation element and solution element for solving the conservation laws in one and two spatial dimensions. The present method is developed on the basis of local and global flux conservation in a space-time domain, in which space and time are treated in a unified manner. In contrast to modern upwind schemes, the approach here does not use the Riemann solver and the reconstruction procedure as building blocks. The drawbacks of the upwind approach, such as the difficulty of rationally extending the 1D scalar approach to systems of equations and particularly to multiple dimensions, are here contrasted with the uniformity and ease of generalization of the Conservation Element and Solution Element (CE/SE) 1D scalar schemes to systems of equations and to multiple spatial dimensions. The assured compatibility with the simplest type of unstructured meshes and the uniquely simple nonreflecting boundary conditions of the present method are also discussed. The present approach has yielded high-resolution shocks, rarefaction waves, acoustic waves, vortices, ZND detonation waves, and shock/acoustic-wave/vortex interactions. Moreover, since no directional splitting is employed, the numerical resolution of two-dimensional calculations is comparable to that of one-dimensional calculations. Some sample applications displaying the strengths and broad applicability of the CE/SE method are reviewed.
Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation
NASA Astrophysics Data System (ADS)
Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra
2017-12-01
Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super resolution image. The method also shows its effectiveness for thick biological samples.
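The joint myopic estimation described above alternates updates of the image and of the modulation parameters against one shared criterion. The sketch below shows generic block coordinate descent with SciPy on a toy criterion J; the quadratic J and the two blocks are placeholders standing in for the super-resolved image and the modulation parameters, not the paper's SIM forward model.

```python
import numpy as np
from scipy.optimize import minimize

# Block coordinate descent on a joint criterion J(x, theta): alternately
# minimize over the image-like block x and the acquisition-parameter block
# theta. In the SIM setting x would be the super-resolved image and theta
# the modulation parameters; here J is a toy smooth criterion.
A = np.array([[3.0, 1.0], [1.0, 2.0]])

def J(x, theta):
    return x @ A @ x - 2.0 * np.sin(theta[0]) * x[0] + theta[0] ** 2

x, theta = np.zeros(2), np.array([1.0])
for _ in range(20):
    x = minimize(lambda v: J(v, theta), x).x        # image update
    theta = minimize(lambda t: J(x, t), theta).x    # modulation update
print(x, theta, J(x, theta))                        # converged joint minimizer
```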
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The present treatment of elliptic regions via hyperbolic flux-splitting and high order methods proposes a flux splitting in which the corresponding Jacobians have real and positive/negative eigenvalues. While resembling the flux splitting used for hyperbolic systems, the present generalization of such splitting to elliptic regions allows the handling of mixed-type systems in a unified and heuristically stable fashion. The van der Waals equation of fluid dynamics is used. Convergence, with good resolution, to weak solutions of various Riemann problems is observed.
Entropy Splitting for High Order Numerical Simulation of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Sandham, N. D.; Yee, H. C.; Kwak, Dochan (Technical Monitor)
2000-01-01
A stable high order numerical scheme for direct numerical simulation (DNS) of shock-free compressible turbulence is presented. The method is applicable to general geometries. It contains no upwinding, artificial dissipation, or filtering. Instead the method relies on the stabilizing mechanisms of an appropriate conditioning of the governing equations and the use of compatible spatial difference operators for the interior points (interior scheme) as well as the boundary points (boundary scheme). An entropy splitting approach splits the inviscid flux derivatives into conservative and non-conservative portions. The spatial difference operators satisfy a summation-by-parts condition, leading to a stable scheme (combined interior and boundary schemes) for the initial boundary value problem via a generalized energy estimate. A Laplacian formulation of the viscous and heat conduction terms on the right-hand side of the Navier-Stokes equations is used to ensure that any tendency to odd-even decoupling associated with central schemes can be countered by the fluid viscosity. A special formulation of the continuity equation is used, based on similar arguments. The resulting methods are able to minimize the spurious high-frequency oscillations that produce nonlinear instability in pure central schemes, especially for long-time integration simulations such as DNS. For validation purposes, the methods are tested in a DNS of compressible turbulent plane channel flow at a friction Mach number of 0.1, where a very accurate turbulence database exists. It is demonstrated that the methods are robust in terms of grid resolution and in good agreement with incompressible channel data, as expected at this Mach number. Accurate turbulence statistics can be obtained with moderate grid sizes. Stability limits on the range of the splitting parameter are determined from numerical tests.
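The summation-by-parts (SBP) property invoked here can be verified directly for the standard second-order operator. The sketch below builds D = H^{-1} Q on a small grid and checks Q + Q^T = B, the discrete analogue of integration by parts; this is a textbook operator shown for illustration, not the specific schemes used in the paper.

```python
import numpy as np

n, h = 12, 1.0 / 11
# Norm matrix H (diagonal quadrature) and Q for the 2nd-order SBP operator.
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5

D = np.linalg.solve(H, Q)                # first-derivative operator D = H^{-1} Q

# SBP property: Q + Q^T = B = diag(-1, 0, ..., 0, 1), so that for grid
# functions u, v:  u^T H (D v) + (D u)^T H v = u_N v_N - u_1 v_1,
# which is what drives the generalized energy estimate.
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))           # True

x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ x**2 - 2 * x)))  # exact in the interior, O(h) at ends
```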
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2004-01-01
This project concerns the investigation and development of discontinuous Galerkin finite element methods, for general geometries and triangulations, for solving convection-dominated problems, with applications to aeroacoustics. Other related issues in high order WENO finite difference and finite volume methods have also been investigated. WENO and discontinuous Galerkin methods are two classes of high order, high resolution methods suitable for convection-dominated simulations with possibly discontinuous or sharp-gradient solutions. In [18], we first review these two classes of methods, pointing out their similarities and differences in algorithm formulation, theoretical properties, implementation issues, applicability, and relative advantages. We then present some quantitative comparisons of the third order finite volume WENO methods and discontinuous Galerkin methods for a series of test problems to assess their relative merits in accuracy and CPU timing. In [3], we review the development of the Runge-Kutta discontinuous Galerkin (RKDG) methods for non-linear convection-dominated problems. These robust and accurate methods have made their way into the mainstream of computational fluid dynamics and are quickly finding use in a wide variety of applications. They combine a special class of Runge-Kutta time discretizations, which allows the method to be non-linearly stable regardless of its accuracy, with a finite element space discretization by discontinuous approximations that incorporates the ideas of numerical fluxes and slope limiters coined during the remarkable development of high-resolution finite difference and finite volume schemes. The resulting RKDG methods are stable, high-order accurate, and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions. We review the theoretical and algorithmic aspects of these methods and show several applications including nonlinear conservation laws, the compressible and incompressible Navier-Stokes equations, and Hamilton-Jacobi-like equations.
Structured Illumination Microscopy for the Investigation of Synaptic Structure and Function.
Hong, Soyon; Wilton, Daniel K; Stevens, Beth; Richardson, Douglas S
2017-01-01
The neuronal synapse is a primary building block of the nervous system in which alterations of structure or function can result in numerous pathologies. Studying its formation and elimination is key to understanding how brains are wired during development, maintained through plasticity in adulthood, and disrupted during disease. However, due to its diffraction-limited size, investigations of the synaptic junction at the structural level have primarily relied on labor-intensive electron microscopy or ultra-thin-section array tomography. Recent advances in the field of super-resolution light microscopy now allow researchers to image synapses and associated molecules with high spatial resolution, while taking advantage of the key characteristics of light microscopy, such as easy sample preparation and the ability to detect multiple targets with molecular specificity. One such super-resolution technique, Structured Illumination Microscopy (SIM), has emerged as an attractive method to examine synapse structure and function. SIM requires little change to standard light microscopy sample preparation, but results in a twofold improvement in both lateral and axial resolution compared to widefield microscopy. The following protocol outlines a method for imaging synaptic structures at resolutions capable of resolving the intricacies of these neuronal connections.
Numerical analysis of biosonar beamforming mechanisms and strategies in bats.
Müller, Rolf
2010-09-01
Beamforming is critical to the function of most sonar systems. The conspicuous noseleaf and pinna shapes in bats suggest that beamforming mechanisms based on diffraction of the outgoing and incoming ultrasonic waves play a major role in bat biosonar. Numerical methods can be used to investigate the relationships between baffle geometry, acoustic mechanisms, and resulting beampatterns. Key advantages of numerical approaches are: efficient, high-resolution estimation of beampatterns, spatially dense predictions of near-field amplitudes, and the malleability of the underlying shape representations. A numerical approach that combines near-field predictions based on a finite-element formulation for harmonic solutions to the Helmholtz equation with a free-field projection based on the Kirchhoff integral to obtain estimates of the far-field beampattern is reviewed. This method has been used to predict physical beamforming mechanisms such as frequency-dependent beamforming with half-open resonance cavities in the noseleaf of horseshoe bats and beam narrowing through extension of the pinna aperture with skin folds in false vampire bats. The fine structure of biosonar beampatterns is discussed for the case of the Chinese noctule and methods for assessing the spatial information conveyed by beampatterns are demonstrated for the brown long-eared bat.
Christ, Andreas; Chavannes, Nicolas; Nikoloski, Neviana; Gerber, Hans-Ulrich; Poković, Katja; Kuster, Niels
2005-02-01
A new human head phantom has been proposed by CENELEC/IEEE, based on a large scale anthropometric survey. This phantom is compared to a homogeneous Generic Head Phantom and three high resolution anatomical head models with respect to specific absorption rate (SAR) assessment. The head phantoms are exposed to the radiation of a generic mobile phone (GMP) with different antenna types and a commercial mobile phone. The phones are placed in the standardized testing positions and operate at 900 and 1800 MHz. The average peak SAR is evaluated using both experimental (DASY3 near field scanner) and numerical (FDTD simulations) techniques. The numerical and experimental results compare well and confirm that the applied SAR assessment methods constitute a conservative approach.
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space restricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element methods. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose time stepsizes small enough that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
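A minimal sketch of the adaptive time-splitting idea described above: Strang splitting for a 1D Gross–Pitaevskii equation on a uniform Fourier grid, with a step-doubling local error estimator controlling the stepsize. The grid size, potential, nonlinearity strength, and tolerance are illustrative assumptions; the paper's embedded splitting pairs and adaptive finite elements are not reproduced here.

```python
import numpy as np

# Adaptive Strang splitting for a 1D Gross-Pitaevskii equation
#   i u_t = -1/2 u_xx + V(x) u + g |u|^2 u
# with a step-doubling local error estimator.
n, L, g, tol = 256, 16.0, 1.0, 1e-6
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x**2
u = np.exp(-x**2) * (np.pi / 2) ** -0.25      # normalized Gaussian start

def strang(u, dt):
    u = u * np.exp(-1j * 0.5 * dt * (V + g * np.abs(u) ** 2))       # half kick
    u = np.fft.ifft(np.exp(-1j * 0.5 * dt * k**2) * np.fft.fft(u))  # drift
    return u * np.exp(-1j * 0.5 * dt * (V + g * np.abs(u) ** 2))    # half kick

t, T, dt = 0.0, 1.0, 1e-2
while t < T:
    dt = min(dt, T - t)
    big = strang(u, dt)                           # one full step
    small = strang(strang(u, dt / 2), dt / 2)     # two half steps
    err = np.linalg.norm(small - big) / np.sqrt(n)   # local error estimate
    if err < tol:                                 # accept the step
        u, t = small, t + dt
    # 2nd-order method: exponent 1/(p+1) = 1/3 in the stepsize controller
    dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1 / 3)))
print(t, np.sum(np.abs(u) ** 2) * (L / n))        # mass stays ~1 (unitary steps)
```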
A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments
NASA Astrophysics Data System (ADS)
Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue
2013-03-01
The Penn State ``Cyber Wind Facility'' (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which ``cyber field experiments'' are designed and ``cyber data'' are collected from wind turbines operating within the atmospheric boundary layer (ABL). Conceptually the ``facility'' is akin to a high-tech wind tunnel with a controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field, forced by true atmospheric turbulence with controlled stability state. The CWF is built on state-of-the-art high-accuracy geometry and grid design and numerical methods, with high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in separated boundary layer, blade, and rotor wake regions, embedded within a high-resolution LES of the ABL. CWF experiments complement physical field experiments, which can capture wider ranges of meteorological events but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments, aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.
NAVO MSRC Navigator. Spring 2003
2003-01-01
This newsletter issue covers, among other items, a computational model run on the IBM POWER4 (MARCELLUS) in support of the Airborne Laser Challenge Project II, with data visualized using Alias|Wavefront Maya; a study of turbulence in a jet stream in the Airborne Laser context; and NAVO MSRC system upgrades. One highlighted problem notes that the resolution requirement to resolve the microjets and the flow outside in the combustor is too severe for any single numerical method.
NASA Astrophysics Data System (ADS)
Mixa, T.; Fritts, D. C.; Laughman, B.; Wang, L.; Kantha, L. H.
2015-12-01
Multiple observations provide compelling evidence that gravity wave dissipation events often occur in multi-scale environments having highly-structured wind and stability profiles extending from the stable boundary layer into the mesosphere and lower thermosphere. Such events tend to be highly localized and thus yield local energy and momentum deposition and efficient secondary gravity wave generation expected to have strong influences at higher altitudes [e.g., Fritts et al., 2013; Baumgarten and Fritts, 2014]. Lidars, radars, and airglow imagers typically cannot achieve the spatial resolution needed to fully quantify these small-scale instability dynamics. Hence, we employ high-resolution modeling to explore these dynamics in representative environments. Specifically, we describe numerical studies of gravity wave packets impinging on a sheet of high stratification and shear and the resulting instabilities and impacts on the gravity wave amplitude and momentum flux for various flow and gravity wave parameters. References: Baumgarten, Gerd, and David C. Fritts (2014). Quantifying Kelvin-Helmholtz instability dynamics observed in noctilucent clouds: 1. Methods and observations. Journal of Geophysical Research: Atmospheres, 119.15, 9324-9337. Fritts, D. C., Wang, L., & Werne, J. A. (2013). Gravity wave-fine structure interactions. Part I: Influences of fine structure form and orientation on flow evolution and instability. Journal of the Atmospheric Sciences, 70(12), 3710-3734.
An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data
Li, Hui; Jing, Linhai; Tang, Yunwei; Ding, Haifeng
2018-01-01
Numerous pansharpening methods have been proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on the quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed, through two improvements made to a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparison with some outstanding fusion methods, such as adaptive Gram-Schmidt and the generalized Laplacian pyramid. Experimental results show that the improved version can reduce spectral distortions of fused dark pixels and sharpen boundaries between different image objects, while obtaining quality indexes similar to the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands, and it proved more robust to such misalignments than the other methods. PMID:29439502
Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM)
NASA Astrophysics Data System (ADS)
Sinitskiy, Anton V.; Voth, Gregory A.
2018-01-01
Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.
Monte-Carlo simulation of spatial resolution of an image intensifier in a saturation mode
NASA Astrophysics Data System (ADS)
Xie, Yuntao; Wang, Xi; Zhang, Yujun; Sun, Xiaoquan
2018-04-01
In order to investigate the spatial resolution of an image intensifier irradiated by a high-energy pulsed laser, a three-dimensional electron avalanche model was built and the cascade process of the electrons was numerically simulated. The influence of positive wall charges, due to the failure to replenish charges extracted from the channel during the avalanche, was considered by calculating their static electric field with a particle-in-cell (PIC) method. By tracing the trajectories of electrons throughout the image intensifier, the energy of the electrons at the output of the microchannel plate and the electron distribution at the phosphor screen are numerically calculated. The simulated energy distribution of output electrons is in good agreement with experimental data from previous studies. In addition, the FWHM extension of the electron spot at the phosphor screen as a function of the number of incident electrons is calculated. The results demonstrate that the spot size increases significantly with the number of incident electrons. Furthermore, we obtained the MTFs of the image intensifier by Fourier transforming the point spread function at the phosphor screen. Comparison between the MTFs from our model and those from the analytic method shows that the spatial resolution of the image intensifier decreases significantly as the number of incident electrons increases, particularly when the number of incident electrons is greater than 100.
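The final step, an MTF computed as the Fourier transform of the point spread function, is a standard operation; a minimal sketch follows, with a Gaussian PSF standing in for the simulated electron spot and the pixel pitch and width chosen purely for illustration.

```python
import numpy as np

# MTF as the magnitude of the Fourier transform of a point spread function.
# The Gaussian PSF is a stand-in for the simulated electron spot at the
# phosphor screen; pitch and sigma are illustrative assumptions.
n, pitch = 512, 6.0                    # samples and pixel pitch in microns
x = (np.arange(n) - n / 2) * pitch
sigma = 25.0                           # PSF width in microns
psf = np.exp(-0.5 * (x / sigma) ** 2)
psf /= psf.sum()                       # normalize so MTF(0) = 1

mtf = np.abs(np.fft.rfft(np.fft.ifftshift(psf)))
freq = np.fft.rfftfreq(n, d=pitch * 1e-3)   # cycles per millimeter
print(freq[:4], mtf[:4])               # MTF starts at 1 and rolls off
```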
TopoSCALE v.1.0: downscaling gridded climate data in complex terrain
NASA Astrophysics Data System (ADS)
Fiddes, J.; Gruber, S.
2014-02-01
Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture the strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap; however, they are often not at an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method against unscaled grid-level data and a set of reference methods, using a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed, and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible due to a lack of observations (i.e. remote areas or future periods).
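The core of the pressure-level step is a simple column interpolation to subgrid elevations; a sketch follows, in which the level heights, temperatures, and DEM values are invented illustrations rather than TopoSCALE inputs.

```python
import numpy as np

# Interpolate a coarse-grid atmospheric column to the elevations of a
# high-resolution DEM subgrid (the pressure-level downscaling step).
z_levels = np.array([200., 800., 1500., 3000., 5500.])   # level heights (m)
T_levels = np.array([288., 284., 279., 270., 252.])      # temperature (K)

dem = np.array([350., 920., 1740., 2600.])               # subgrid elevations (m)
T_sub = np.interp(dem, z_levels, T_levels)               # column interpolation
print(T_sub)   # air temperature downscaled to each DEM pixel
```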
Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-01-25
Phase appearance and disappearance presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied to the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved to be effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem, and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and treatments. As a result, the volume fraction of the absent phase can be effectively calculated as zero.
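The JFNK idea, Newton iterations whose linear solves use only Jacobian-vector products approximated by finite differences, is available off the shelf. A minimal sketch on a toy nonlinear system (a 1D reaction-diffusion balance, not the two-fluid six-equation model) using SciPy's newton_krylov:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy nonlinear residual F(u) = 0: a 1D reaction-diffusion balance,
# standing in for the discretized two-fluid six-equation system.
n, h = 64, 1.0 / 65

def residual(u):
    r = np.empty_like(u)
    r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2   # diffusion
    r[0] = (u[1] - 2 * u[0]) / h**2                   # u = 0 at both walls
    r[-1] = (u[-2] - 2 * u[-1]) / h**2
    return r + 10.0 * u * (1.0 - u)                   # nonlinear source

# Jacobian-free Newton-Krylov: no Jacobian matrix is ever formed; the
# inner GMRES uses finite-difference J*v products internally.
u0 = 0.5 * np.ones(n)
sol = newton_krylov(residual, u0, method='lgmres', f_tol=1e-8)
print(np.max(np.abs(residual(sol))))                  # residual driven to ~0
```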
Lu, Hangwen; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei
2016-01-01
Differential phase contrast (DPC) is a non-interferometric quantitative phase imaging method achieved by using an asymmetric imaging procedure. We report a pupil-modulation differential phase contrast (PMDPC) imaging method that filters a sample's Fourier domain with half-circle pupils. A phase gradient image is captured with each half-circle pupil, and a quantitative high-resolution phase image is obtained after a deconvolution process using a minimum of two phase gradient images. Here, we introduce the PMDPC quantitative phase image reconstruction algorithm and realize it experimentally in a 4f system with an SLM placed at the pupil plane. In our current experimental setup, with a numerical aperture of 0.36, we obtain a quantitative phase image with a resolution of 1.73 μm after computationally removing system aberrations and refocusing. We also digitally extend the depth of field by 20 times, to ±50 μm, with a resolution of 1.76 μm. PMID:27828473
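The deconvolution step has the familiar closed form of a regularized least-squares inverse filter that combines the two gradient measurements in the Fourier domain. The sketch below uses simple antisymmetric stand-in transfer functions; the actual PMDPC kernels derived from half-circle pupils are not reproduced here.

```python
import numpy as np

# Tikhonov-regularized combination of two phase-gradient images:
# phase = IFFT( sum_i conj(H_i) * FFT(I_i) / (sum_i |H_i|^2 + reg) ).
# The transfer functions below are toy antisymmetric stand-ins for the
# half-circle-pupil PMDPC kernels.
n = 128
fx = np.fft.fftfreq(n)[None, :]
fy = np.fft.fftfreq(n)[:, None]
H = [1j * fx * np.ones((n, n)), 1j * fy * np.ones((n, n))]  # x/y gradient TFs

phase = np.zeros((n, n)); phase[40:80, 50:90] = 1.0          # toy sample

# Forward model: each measurement is the phase filtered by one kernel.
I = [np.real(np.fft.ifft2(h * np.fft.fft2(phase))) for h in H]

reg = 1e-3
num = sum(np.conj(h) * np.fft.fft2(i) for h, i in zip(H, I))
den = sum(np.abs(h) ** 2 for h in H) + reg
rec = np.real(np.fft.ifft2(num / den))
print(np.corrcoef(rec.ravel(), phase.ravel())[0, 1])  # high correlation
```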
Use of upscaled elevation and surface roughness data in two-dimensional surface water models
Hughes, J.D.; Decker, J.D.; Langevin, C.D.
2011-01-01
In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
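The distinction between the two averaging modes is easy to see on a synthetic DEM with an incised channel one fine cell wide; in the sketch below (factor-of-4 coarsening, invented elevations), block averaging smears the channel while the face-based value keeps it low at the interface it crosses.

```python
import numpy as np

# Upscaling a high-resolution DEM to a coarse grid two ways: cell-block
# averaging, and cell-face averaging along interfaces, which preserves
# channels that preferentially connect coarse cells at their faces.
f = 4
z = np.full((16, 16), 10.0)
z[:, 7] = 2.0                                  # narrow channel, 1 fine cell wide

# Cell-block average: the channel is smeared into the block mean.
zb = z.reshape(4, f, 4, f).mean(axis=(1, 3))

# Cell-face average along vertical interfaces: average the two fine columns
# adjacent to each coarse face, then take the minimum over the fine rows in
# each coarse cell, so the low channel still controls conveyance.
faces = 0.5 * (z[:, f-1::f][:, :-1] + z[:, f::f])   # fine values at coarse faces
zf = faces.reshape(4, f, 3).min(axis=1)

print(zb[0], zf[0])   # channel smeared in the block means, kept low at the face
```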
NASA Astrophysics Data System (ADS)
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial, and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately at a reasonable resolution and capture a stationary contact discontinuity sharply without interior points. And yet it is endowed with high resistance to shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the latter.
Optical coherence microscope for invariant high resolution in vivo skin imaging
NASA Astrophysics Data System (ADS)
Murali, S.; Lee, K. S.; Meemon, P.; Rolland, J. P.
2008-02-01
A non-invasive, reliable and affordable imaging system with the capability of detecting skin pathologies such as skin cancer would be a valuable tool for pre-screening and diagnostic applications. Optical Coherence Microscopy (OCM) is emerging as a building block for in vivo optical diagnosis, where high-numerical-aperture optics are introduced in the sample arm to achieve high lateral resolution. While high-numerical-aperture optics enable high lateral resolution at the focal point, dynamic focusing is required to maintain the target lateral resolution throughout the depth of the sample being imaged. In this paper, we demonstrate the ability to dynamically focus in real time, with no moving parts, to a depth of up to 2 mm in skin-equivalent tissue in order to achieve 3.5 μm lateral resolution throughout an 8 cubic millimeter sample. The built-in dynamic focusing is provided by an addressable liquid lens embedded in custom-designed optics, designed for a broadband laser source of 120 nm bandwidth centered around 800 nm. The imaging probe was designed to be low-cost and portable. Design evaluation and tolerance analysis results show that the probe is robust to manufacturing errors and produces consistently high performance throughout the imaging volume.
Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca; Palmer, Kevin; Deutsch, Clayton V.
High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details, and also demonstrates geometallurgical mineral deposit characterization for a copper-molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
NASA Astrophysics Data System (ADS)
Rutkowski, Lucile; Masłowski, Piotr; Johansson, Alexandra C.; Khodabakhsh, Amir; Foltynowicz, Aleksandra
2018-01-01
Broadband precision spectroscopy is indispensable for providing high-fidelity molecular parameters for spectroscopic databases. We have recently shown that mechanical Fourier transform spectrometers based on optical frequency combs can measure broadband high-resolution molecular spectra undistorted by the instrumental line shape (ILS) and with a highly precise frequency scale provided by the comb. The accurate measurement of the power of the comb modes interacting with the molecular sample was achieved by acquiring single-burst interferograms with nominal resolution matched to the comb mode spacing. Here we describe in detail the experimental and numerical steps needed to achieve sub-nominal resolution and retrieve ILS-free molecular spectra, i.e. with ILS-induced distortion below the noise level. We investigate the accuracy of the transition line centers retrieved by fitting to the absorption lines measured using this method. We verify the performance by measuring an ILS-free cavity-enhanced low-pressure spectrum of the 3ν1 + ν3 band of CO2 around 1575 nm with line widths narrower than the nominal resolution. We observe and quantify collisional narrowing of the absorption line shape, for the first time with a comb-based spectroscopic technique. Thus retrieval of line shape parameters with accuracy not limited by the Voigt profile is now possible for entire absorption bands acquired simultaneously.
Triangulation-based 3D surveying borescope
NASA Astrophysics Data System (ADS)
Pulwer, S.; Steglich, P.; Villringer, C.; Bauer, J.; Burger, M.; Franz, M.; Grieshober, K.; Wirth, F.; Blondeau, J.; Rautenberg, J.; Mouti, S.; Schrader, S.
2016-04-01
In this work, a measurement concept based on triangulation was developed for borescopic 3D surveying of surface defects. The integration of such a measurement system into a borescope environment requires excellent space utilization. The triangulation angle, the projected pattern, the numerical apertures of the optical system, and the viewing angle were calculated using partial coherence imaging and geometric optical ray-tracing methods. Additionally, optical aberrations and defocus were considered through the integration of Zernike polynomial coefficients. The measurement system is able to measure objects with a size of 50 μm in all dimensions with an accuracy of ±5 μm. To manage the issue of a low depth of field in a high-resolution optical system, a wavelength-dependent aperture was integrated. Thereby, we are able to control the depth of field and resolution of the optical system, and can use the borescope in measurement mode with high resolution and low depth of field, or in inspection mode with low resolution and higher depth of field. First measurements of a demonstrator system are in good agreement with our simulations.
High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers
NASA Astrophysics Data System (ADS)
Siegel, Nisan; Lupashin, Vladimir; Storrie, Brian; Brooker, Gary
2016-12-01
Fresnel incoherent correlation holography (FINCH) microscopy is a promising approach for high-resolution biological imaging but has so far been limited to use with low-magnification, low-numerical-aperture configurations. We report the use of in-line incoherent interferometers made from uniaxial birefringent α-barium borate (α-BBO) or calcite crystals that overcome the aberrations and distortions present with previous implementations that employed spatial light modulators or gradient refractive index lenses. FINCH microscopy incorporating these birefringent elements and high-numerical-aperture oil immersion objectives could outperform standard wide-field fluorescence microscopy, with, for example, a 149 nm lateral point spread function at a wavelength of 590 nm. Enhanced resolution was confirmed with sub-resolution fluorescent beads. Taking the Golgi apparatus as a biological example, three different proteins labelled with GFP and two other fluorescent dyes in HeLa cells were resolved with an image quality that is comparable to similar samples captured by structured illumination microscopy.
NASA Astrophysics Data System (ADS)
Adhikari, Surendra; Ivins, Erik R.; Larour, Eric
2016-03-01
A classical Green's function approach for computing gravitationally consistent sea-level variations associated with mass redistribution on the earth's surface, employed in contemporary sea-level models, naturally suits spectral methods for numerical evaluation. The capability of these methods to resolve high-wave-number features such as small glaciers is limited by the need for large numbers of pixels and high-degree (associated Legendre) series truncation. Incorporating a spectral model into (components of) earth system models that generally operate on a mesh system also requires repetitive forward and inverse transforms. In order to overcome these limitations, we present a method that functions efficiently on an unstructured mesh, thus capturing the physics operating at the kilometer scale yet capable of simulating geophysical observables that are inherently of global scale with minimal computational cost. The goal of the current version of this model is to provide high-resolution solid-earth, gravitational, sea-level, and rotational responses for earth system models operating in the domain of the earth's outer fluid envelope on timescales of less than about one century, when viscous effects can largely be ignored over most of the globe. The model has numerous important geophysical applications. For example, we compute time-varying global geodetic and sea-level signatures associated with recent ice-sheet changes derived from space gravimetry observations. We also demonstrate the capability of our model to simultaneously resolve kilometer-scale sources of the earth's time-varying surface mass transport, derived from high-resolution modeling of polar ice sheets, and to predict the corresponding local and global geodetic signatures.
Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation
NASA Astrophysics Data System (ADS)
Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo
2015-01-01
Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, such particles can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that the numerical issues are ultimately due to a drop in the spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited to massive particles, able to solve quickly and precisely for their orbits in highly dynamical backgrounds. The new refinement criterion we designed forces the region around each massive particle to remain at the maximum allowed resolution, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding the spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation does not alter the physical evolution of the MBHs, and accounts for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
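A minimal sketch of such a particle-anchored refinement rule, assuming a hypothetical AMR interface in which each cell exposes its centre, size, and current level (all names here are illustrative, not the authors' code):

    def needs_refinement(cell, massive_particles, max_level, bubble_radius):
        # Force maximum resolution in a fixed bubble around each massive
        # particle, independently of the local gas density.
        if cell.level >= max_level:
            return False
        for p in massive_particles:
            # distance from the cell centre to the particle
            d = sum((c - x) ** 2 for c, x in zip(cell.center, p.position)) ** 0.5
            if d < bubble_radius + 0.5 * cell.size:
                return True   # stay at the finest level near the particle
        return False          # elsewhere, defer to the density-based criterion

The maximally resolved bubble then travels with the particle, which is what suppresses the force errors at refinement boundaries.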
Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin
2014-01-01
The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (Wu et al. 2012 Proc. R. Soc. A 468, 1799–1823 (doi:10.1098/rspa.2011.0673)) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi–Dirac or Bose–Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas. PMID:24399919
The Gaia FGK benchmark stars. High resolution spectral library
NASA Astrophysics Data System (ADS)
Blanco-Cuaresma, S.; Soubiran, C.; Jofré, P.; Heiter, U.
2014-06-01
Context. An increasing number of high-resolution stellar spectra is available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform an automatic spectral analysis on massive amounts of data. When reviewing published results, biases arise that need to be addressed and minimized. Aims: We provide a homogeneous library with a common set of calibration stars (known as the Gaia FGK benchmark stars) that will allow us to assess stellar analysis methods and calibrate spectroscopic surveys. Methods: High-resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results: We built a high-quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes human subjectivity and ensures reproducibility. Additionally, it allows us to quickly adapt the library to specific needs that can arise from future spectroscopic analyses. Based on NARVAL and HARPS data obtained within the Gaia Data Processing and Analysis Consortium (DPAC) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. The library of spectra is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A98
Ultra high resolution imaging of the human head at 8 tesla: 2K x 2K for Y2K.
Robitaille, P M; Abduljalil, A M; Kangarlu, A
2000-01-01
To acquire ultra high resolution MRI images of the human brain at 8 Tesla within a clinically acceptable time frame. Gradient echo images were acquired from the human head of normal subjects using a transverse electromagnetic resonator operating in quadrature and tuned to 340 MHz. In each study, a group of six images was obtained containing a total of 208 MB of unprocessed information. Typical acquisition parameters were as follows: matrix = 2,000 x 2,000, field of view = 20 cm, slice thickness = 2 mm, number of excitations (NEX) = 1, flip angle = 45 degrees, TR = 750 ms, TE = 17 ms, receiver bandwidth = 69.4 kHz. This resulted in a total scan time of 23 minutes, an in-plane resolution of 100 μm, and a pixel volume of 0.02 mm³. The ultra high resolution images acquired in this study represent more than a 50-fold increase in in-plane resolution relative to conventional 256 x 256 images obtained with a 20 cm field of view and a 5 mm slice thickness. Nonetheless, the ultra high resolution images could be acquired both with adequate image quality and signal to noise. They revealed numerous small venous structures throughout the image plane and provided reasonable delineation between gray and white matter. The elevated signal-to-noise ratio observed in ultra high field magnetic resonance imaging can be utilized to acquire images with a level of resolution approaching the histological level under in vivo conditions. However, brain motion is likely to degrade the useful resolution. This situation may be remedied in part with cardiac gating. Nonetheless, these images represent a significant advance in our ability to examine small anatomical features with noninvasive imaging methods.
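The quoted geometry can be verified with two lines of arithmetic (a quick check; the variable names are ours):

    # In-plane resolution and pixel volume from the acquisition parameters.
    fov_mm, matrix, slice_mm = 200.0, 2000, 2.0
    in_plane_mm = fov_mm / matrix               # 0.1 mm, i.e. 100 μm
    pixel_volume_mm3 = in_plane_mm ** 2 * slice_mm
    print(in_plane_mm, pixel_volume_mm3)        # 0.1, 0.02 mm³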
NASA Astrophysics Data System (ADS)
Kim, Tae Hee; James, Robin; Narayanan, Ram M.
2017-04-01
The use of fiber-reinforced polymer or plastic (FRP) composites has been increasing rapidly in the aerospace, automotive, and marine industries and in civil engineering, because these composites offer outstanding strength and stiffness, low weight, corrosion resistance, and easy production. Generally, the advancement of materials calls for correspondingly advanced methods and technologies for inspection and failure detection during production or maintenance, especially in the area of nondestructive testing (NDT). Among numerous inspection techniques, microwave sensing methods can be used effectively for NDT of FRP composites. FRP composite materials can be produced using various structures and materials, and various defects or flaws occur due to environmental conditions encountered during operation. However, reliable, low-cost, and easy-to-operate NDT methods have not been developed and tested. FRP composites are usually produced as multilayered structures consisting of fiber plate, matrix, and core. Typical defects appearing in FRP composites are therefore disbonds, delaminations, object inclusions, and certain kinds of barely visible impact damage. In this paper, we propose a microwave NDT method, based on synthetic aperture radar (SAR) imaging algorithms, for stand-off imaging of internal delaminations. When a microwave signal is incident on a multilayer dielectric material, the reflected signal provides a good response to interfaces and transverse cracks. An electromagnetic wave model is introduced to delineate interface widths or defect depths from the reflected waves. For the purpose of numerical analysis and simulation, multilayered composite samples with various artificial defects are assumed, and their SAR images are obtained and analyzed using a variety of high-resolution wideband waveforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiswell, S
2009-01-11
Assimilation of radar velocity and precipitation fields into high-resolution model simulations can improve precipitation forecasts with decreased 'spin-up' time and improve short-term simulation of boundary layer winds (Benjamin, 2004 & 2007; Xiao, 2008), which is critical to improving plume transport forecasts. Accurate description of wind and turbulence fields is essential to useful atmospheric transport and dispersion results, and any improvement in the accuracy of these fields will make consequence assessment more valuable during both routine operation as well as potential emergency situations. During 2008, the United States National Weather Service (NWS) radars implemented a significant upgrade which increased the real-time level II data resolution to 8 times their previous 'legacy' resolution, from 1 km range gate and 1.0 degree azimuthal resolution to 'super resolution' 250 m range gate and 0.5 degree azimuthal resolution (Fig 1). These radar observations provide reflectivity, velocity and returned power spectra measurements at a range of up to 300 km (460 km for reflectivity) at a frequency of 4-5 minutes and yield up to 13.5 million point observations per level in super-resolution mode. The migration of National Weather Service (NWS) WSR-88D radars to super resolution is expected to improve warning lead times by detecting small scale features sooner with increased reliability; however, current operational mesoscale model domains utilize grid spacing several times larger than the legacy data resolution, and therefore the added resolution of radar data is not fully exploited. The assimilation of super resolution reflectivity and velocity data into high resolution numerical weather model forecasts where grid spacing is comparable to the radar data resolution is investigated here to determine the impact of the improved data resolution on model predictions.
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints and then minimizing it through numerical optimization can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.
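For a discrete model, the adjoint machinery shared by goal-oriented error estimation and parameter identification fits in a few lines. The sketch below computes the gradient of a least-squares misfit J(m) = ||u - d||²/2 subject to a linear model K(m)u = f with a single extra linear solve; it is our illustration, not the simulation codes discussed here:

    import numpy as np

    def misfit_gradient(K, dK_dm, f, d):
        # Forward solve K u = f, then one adjoint solve K^T lam = d - u.
        u = np.linalg.solve(K, f)
        lam = np.linalg.solve(K.T, d - u)
        # dJ/dm_i = lam^T (dK/dm_i) u: one cheap product per parameter,
        # so the solve count is independent of the number of parameters.
        return np.array([lam @ (dKi @ u) for dKi in dK_dm])

The same adjoint solution, weighted by local residuals, yields the goal-oriented error indicators that drive the mesh adaptation.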
Zaboikin, Michail; Freter, Carl
2018-01-01
We describe a method for measuring genome editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a sum or superposition of a minimal number of Gaussian components. Using Gaussian parameters determined by modeling of melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites, where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with the mutant frequency determination from next-generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate the proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
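A minimal sketch of the Gaussian-decomposition step, assuming the background-corrected, normalized melt-curve derivative is already in hand; the two-component model and the caller-supplied starting values are illustrative, not the published pipeline:

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(t, a, mu, sigma):
        return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

    def two_component(t, a1, mu1, s1, a2, mu2, s2):
        # wild-type component plus one extra Gaussian for the edited population
        return gauss(t, a1, mu1, s1) + gauss(t, a2, mu2, s2)

    def mutant_fraction(temps, dF_dT, p0):
        popt, _ = curve_fit(two_component, temps, dF_dT, p0=p0)
        a1, _, s1, a2, _, s2 = popt
        area1, area2 = a1 * s1, a2 * s2   # Gaussian areas up to a shared constant
        return area2 / (area1 + area2)    # proportion of the mutant component

Because only the ratio of Gaussian areas is taken, the common sqrt(2*pi) factor cancels, which is consistent with the method needing no calibration curve.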
Common-path digital holographic microscopy based on a beam displacer unit
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhang, Jiwei; Song, Yu; Wang, Kaiqiang; Wei, Kun; Zhao, Jianlin
2018-02-01
Digital holographic microscopy (DHM) has become a novel tool with the advantages of full-field, non-destructive, high-resolution 3D imaging, capturing the quantitative amplitude and phase information of microscopic specimens. It is a well-established method for digitally recording and numerically reconstructing the full complex wavefront of the samples, with a diffraction-limited lateral resolution down to 0.3 μm depending on the numerical aperture of the microscope objective. Its axial resolution, meanwhile, is below 10 nm owing to the interferometric nature of phase imaging. Compared with typical optical configurations such as the Mach-Zehnder and Michelson interferometers, common-path DHM has the advantages of a simple and compact configuration, high stability, and so on. Here, a simple, compact, and low-cost common-path DHM based on a beam displacer unit is proposed for quantitative phase imaging of biological cells. The beam displacer unit is completely compatible with commercial microscopes and can easily be set up at the output port of the microscope as a compact independent device. This technique can be used to achieve quantitative phase measurements of biological cells with an excellent temporal stability of 0.51 nm, which gives it good prospects in the biological and medical sciences. Living mouse osteoblastic cells are quantitatively measured with the system to demonstrate its capability and applicability.
Backside imaging of a microcontroller with common-path digital holography
NASA Astrophysics Data System (ADS)
Finkeldey, Markus; Göring, Lena; Schellenberg, Falk; Gerhardt, Nils C.; Hofmann, Martin
2017-03-01
The investigation of integrated circuits (ICs), such as microcontrollers (MCUs) and system-on-a-chip (SoC) devices, is a topic of growing interest. The need for fast and non-destructive imaging methods arises from the increasing importance of hardware Trojans, reverse engineering, and other security-related analyses of integrated cryptographic devices. In the field of side-channel attacks, for instance, the precise spot for laser fault attacks is important and can be determined using modern high-resolution microscopy methods. Digital holographic microscopy (DHM) is a promising technique for achieving high-resolution phase images of surface structures. These phase images provide information about changes in the refractive index of the media and about the topography. To obtain high phase stability, we use the common-path geometry to create the interference pattern. The interference pattern, or hologram, is captured with a water-cooled sCMOS camera, which provides a fast readout while maintaining a low level of noise. A challenge for these types of holograms is the interference of the waves reflected from the different interfaces inside the media. To distinguish between the phase signals from the buried layer and the surface reflection, we use specific numeric filters. To demonstrate the performance of our setup, we show results for devices under test (DUTs), using a 1064 nm laser diode as the light source. The DUTs are modern microcontrollers thinned to different thicknesses of the Si substrate. The effect of the numeric filter is analyzed in comparison with unfiltered images.
Numerical Error Estimation with UQ
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Korn, Peter; Marotzke, Jochem
2014-05-01
Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics) that are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but are interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted
NASA Astrophysics Data System (ADS)
Brothelande, E.; Lénat, J.-F.; Normier, A.; Bacri, C.; Peltier, A.; Paris, R.; Kelfoun, K.; Merle, O.; Finizola, A.; Garaebiti, E.
2016-08-01
The Yenkahe dome (Tanna Island, Vanuatu) is one of the most spectacular examples of presently active post-caldera resurgence, exhibiting a very high uplift rate over the past 1000 years (156 mm/year on average). Although numerous inhabited areas are scattered around the dome, the dynamics of this structure and the associated hazards remain poorly studied because of its remote location and dense vegetation cover. A high-resolution photogrammetric campaign was carried out in November 2011 over the dome. Georeferenced photographs were processed with "Structure from Motion" and "Multiple-view Stereophotogrammetry" methods to produce a 3D digital surface model (DSM) of the area and its associated orthophotograph. This DSM is much more accurate than the previously available SRTM and Aster digital elevation models (DEMs), particularly at the minimal (coastline) and maximal altitudes (Yasur culmination point, 390 m). While previous mapping relied mostly on low-resolution DEMs and satellite images, the high precision of the DSM allows for a detailed structural analysis of the Yenkahe dome, notably based on the quantification of fault displacements. The new structural map, inferred from the 3D reconstruction and morphological analysis of the dome, reveals a complex pattern of faults and destabilization scars reflecting a succession of constructive and destructive events. Numerous landslide scars directed toward the sea highlight the probable occurrence of a tsunami event affecting the south-eastern coast of Tanna. Simulations of landslide-triggered tsunamis show the short propagation time of such a wave (1-2 min), which could affect coastal localities even for relatively small destabilized volumes (a few million cubic meters).
NASA Astrophysics Data System (ADS)
Hanasoge, Shravan; Agarwal, Umang; Tandon, Kunj; Koelman, J. M. Vianney A.
2017-09-01
Determining the pressure differential required to achieve a desired flow rate in a porous medium requires solving Darcy's law, a Laplace-like equation, with a spatially varying tensor permeability. In various scenarios, the permeability coefficient is sampled at high spatial resolution, which makes solving Darcy's equation numerically prohibitively expensive. As a consequence, much effort has gone into creating upscaled or low-resolution effective models of the coefficient while ensuring that the estimated flow rate is well reproduced, bringing to the fore the classic tradeoff between computational cost and numerical accuracy. Here we perform a statistical study to characterize the relative success of upscaling methods on a large sample of permeability coefficients that are above the percolation threshold. We introduce a technique based on mode-elimination renormalization group theory (MG) to build coarse-scale permeability coefficients. Comparing the results with coefficients upscaled using other methods, we find that MG is consistently more accurate, particularly due to its ability to address the tensorial nature of the coefficients. As we have implemented it, MG places low computational demands, and accurate flow-rate estimates are obtained when using MG-upscaled permeabilities even near or beyond the percolation threshold.
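The flavour of block upscaling can be sketched for a scalar, isotropic field; the 2x2 series-parallel average below is a crude stand-in for the tensorial mode-elimination scheme studied in the paper:

    import numpy as np

    def upscale_2x2(k):
        # One coarsening step for a positive scalar permeability field k
        # of shape (2N, 2N): harmonic (series) mean along the flow axis,
        # arithmetic (parallel) mean across it.
        a, b = k[0::2, 0::2], k[0::2, 1::2]   # top-left, top-right of each block
        c, d = k[1::2, 0::2], k[1::2, 1::2]   # bottom-left, bottom-right
        top = 2.0 / (1.0 / a + 1.0 / b)       # cells in series along the flow
        bottom = 2.0 / (1.0 / c + 1.0 / d)
        return 0.5 * (top + bottom)           # the two rows act in parallel

    k_fine = np.random.lognormal(size=(64, 64))
    k_coarse = upscale_2x2(k_fine)            # 32 x 32 effective field

Repeating the step builds a hierarchy of effective coefficients; the paper's MG approach is instead derived from mode-elimination renormalization and retains the tensorial character that this scalar sketch ignores.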
On the application of subcell resolution to conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Chang, Shih-Hung
1989-01-01
LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling reacting flow problems and discovered that, in the very stiff case, most of the current finite difference methods developed for non-reacting flows produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used, and time evolutions are done by advancing along the characteristics. Numerical experiments using this scheme show excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
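The time-splitting backbone of such a scheme is compact. In the sketch below, advect stands for the conservative shock-capturing convection step and react integrates the stiff source ODE in each cell; both function names are hypothetical:

    def strang_step(u, dt, advect, react):
        # One Strang-split step for u_t + f(u)_x = s(u):
        # half step of stiff reaction, full advection step, half reaction.
        u = react(u, 0.5 * dt)   # stiff ODE, typically solved implicitly
        u = advect(u, dt)        # conservative shock-capturing update
        u = react(u, 0.5 * dt)
        return u

The subcell-resolution and characteristic-direction ingredients live inside these steps; splitting alone does not cure the wrong-propagation-speed problem, which is why the discontinuity locations enter the design.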
NASA Astrophysics Data System (ADS)
Ponte, Aurélien L.; Klein, Patrice; Dunphy, Michael; Le Gentil, Sylvie
2017-03-01
The performance of a tentative method that disentangles the contribution of a low-mode internal tide to sea level from that of the balanced mesoscale eddies is examined using an idealized high-resolution numerical simulation. This disentanglement is essential for properly estimating from sea level the ocean circulation related to balanced motions. The method relies on an independent observation of the sea surface water density, whose variations are (1) dominated by the balanced dynamics and (2) correlated with variations of potential vorticity at depth for the chosen regime of surface-intensified turbulence. The surface density therefore leads, via potential vorticity inversion, to an estimate of the balanced contribution to sea level fluctuations. The difference between the instantaneous sea level (presumably observed with altimetry) and the balanced estimate compares moderately well with the contribution from the low-mode tide. Application to realistic configurations remains to be tested. These results aim at motivating further development of reconstruction methods for the ocean dynamics based on potential vorticity arguments. In that context, they are particularly relevant for the upcoming wide-swath high-resolution altimetric missions (SWOT).
Comparative study of high-resolution shock-capturing schemes for a real gas
NASA Technical Reports Server (NTRS)
Montagne, J.-L.; Yee, H. C.; Vinokur, M.
1987-01-01
Recently developed second-order explicit shock-capturing methods, in conjunction with generalized flux-vector splittings, and a generalized approximate Riemann solver for a real gas are studied. The comparisons are made on different one-dimensional Riemann (shock-tube) problems for equilibrium air with various ranges of Mach numbers, densities and pressures. Six different Riemann problems are considered. These tests provide a check on the validity of the generalized formulas, since theoretical prediction of their properties appears to be difficult because of the non-analytical form of the state equation. The numerical results in the supersonic and low-hypersonic regimes indicate that these methods provide good shock-capturing capability and that the shock resolution is only slightly affected by the state equation of equilibrium air. The difference in shock resolution between the various methods varies slightly from one Riemann problem to the other, but the overall accuracy is very similar. For the one-dimensional case, the relative efficiency in terms of operation count for the different methods is within 30%. The main difference between the methods lies in their versatility in being extended to multidimensional problems with efficient implicit solution procedures.
Validation of New Wind Resource Maps
NASA Astrophysics Data System (ADS)
Elliott, D.; Schwartz, M.
2002-05-01
The National Renewable Energy Laboratory (NREL) recently led a project to validate updated state wind resource maps for the northwestern United States produced by a private U.S. company, TrueWind Solutions (TWS). The independent validation project was a cooperative activity among NREL, TWS, and meteorological consultants. The independent validation concept originated at a May 2001 technical workshop held at NREL to discuss updating the Wind Energy Resource Atlas of the United States. Part of the workshop, which included more than 20 attendees from the wind resource mapping and consulting community, was dedicated to reviewing the latest techniques for wind resource assessment. It became clear that the numerical modeling approach to wind resource mapping was rapidly gaining ground as a preferred technique and, if the trend continues, will soon become the most widely used technique around the world. The numerical modeling approach is relatively fast compared to older mapping methods and, in theory, should be quite accurate because it directly estimates the magnitude of the boundary-layer processes that affect the wind resource of a particular location. Numerical modeling output combined with high-resolution terrain data can produce useful wind resource information at a resolution of 1 km or finer. However, because the use of the numerical modeling approach is new (the last 3-5 years) and relatively unproven, meteorological consultants question the accuracy of the approach. It was clear that new state or regional wind maps produced by this method would have to undergo independent validation before the results would be accepted by the wind energy community and developers.
Integration of Local Observations into the One Dimensional Fog Model PAFOG
NASA Astrophysics Data System (ADS)
Thoma, Christina; Schneider, Werner; Masbou, Matthieu; Bott, Andreas
2012-05-01
The numerical prediction of fog requires a very high vertical resolution of the atmosphere. Owing to the prohibitive computational effort of high-resolution three-dimensional models, operational fog forecasting is usually done by means of one-dimensional fog models. An important condition for a successful fog forecast with one-dimensional models is the proper integration of observational data into the numerical simulations. The goal of the present study is to introduce new methods for incorporating these data into the one-dimensional radiation fog model PAFOG. First, it will be shown how PAFOG may be initialized with observed visibilities. Second, a nudging scheme will be presented for the inclusion of measured temperature and humidity profiles in the PAFOG simulations. The new features of PAFOG have been tested by comparing the model results with observations of the German Meteorological Service. A case study will be presented that reveals the importance of including local observations in the model calculations. Numerical results obtained with the modified PAFOG model show a distinct improvement of fog forecasts regarding the times of fog formation and dissipation as well as the vertical extent of the investigated fog events. However, the model results also reveal that further improvement of PAFOG might be possible if several empirical model parameters are optimized. This tuning can only be realized by comprehensive comparisons of model simulations with corresponding fog observations.
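The core of such a nudging scheme is a Newtonian relaxation term added to the prognostic equations; a minimal sketch for a temperature profile, with illustrative variable names and values:

    import numpy as np

    def nudge(T_model, T_obs, dt, tau):
        # Relax the model profile toward the observed profile on timescale
        # tau; equivalent to adding (T_obs - T_model)/tau to the tendency.
        return T_model + dt / tau * (T_obs - T_model)

    z = np.linspace(0.0, 500.0, 51)                  # vertical levels, m
    T = 283.0 - 0.0065 * z                           # model first guess
    T_sonde = T + np.random.normal(0.0, 0.3, z.size) # measured profile
    T = nudge(T, T_sonde, dt=10.0, tau=3600.0)

The relaxation timescale tau controls how strongly the model is pulled toward the measurements; too small a value suppresses the model's own boundary-layer dynamics.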
Bridging the scales in atmospheric composition simulations using a nudging technique
NASA Astrophysics Data System (ADS)
D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco
2010-05-01
Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of runs A and B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.
Solid immersion terahertz imaging with sub-wavelength resolution
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Schadko, Aleksander O.; Lebedev, Sergey P.; Tolstoguzov, Viktor L.; Kurlov, Vladimir N.; Reshetov, Igor V.; Spektor, Igor E.; Skorobogatiy, Maksim; Yurchenko, Stanislav O.; Zaytsev, Kirill I.
2017-05-01
We have developed a method of solid immersion THz imaging, a non-contact technique employing a THz beam focused into the evanescent-field volume and allowing a strong reduction in the dimensions of the THz caustic. We have combined numerical simulations and experimental studies to demonstrate a sub-wavelength 0.35λ0 resolution of the solid immersion THz imaging system, compared to the 0.85λ0 resolution of a standard imaging system employing only an aspherical singlet. We discuss the prospects of using the developed technique in various branches of THz science and technology, namely, for THz measurements of solid-state materials featuring sub-wavelength variations of physical properties, for highly accurate mapping of healthy and pathological tissues in THz medical diagnosis, for detection of sub-wavelength defects in THz non-destructive sensing, and for enhancement of THz nonlinear effects.
Gounaridis, Lefteris; Groumas, Panos; Schreuder, Erik; Heideman, Rene; Avramopoulos, Hercules; Kouloumentas, Christos
2016-04-04
It is still a common belief that ultra-high quality-factors (Q-factors) are a prerequisite in optical resonant cavities for high refractive index resolution and low detection limit in biosensing applications. In combination with the ultra-short steps that are necessary when the measurement of the resonance shift relies on the wavelength scanning of a laser source and conventional methods for data processing, the high Q-factor requirement makes these biosensors extremely impractical. In this work we analyze an alternative processing method based on the fast Fourier transform, and show through Monte-Carlo simulations that improvement by 2-3 orders of magnitude can be achieved in the resolution and the detection limit of the system in the presence of amplitude and spectral noise. More significantly, this improvement is maximum for low Q-factors around 10⁴ and is present also for high intra-cavity losses and large scanning steps, making the designs compatible with the low-cost aspect of lab-on-a-chip technology. Using a micro-ring resonator as model cavity and a system design with low Q-factor (10⁴), low amplitude transmission (0.85) and relatively large scanning step (0.25 pm), we show that resolution close to 0.01 pm and detection limit close to 10⁻⁷ RIU can be achieved, improving the sensing performance by more than 2 orders of magnitude compared to the performance of systems relying on a simple peak search processing method. The improvement in the limit of detection is present even when the simple method is combined with ultra-high Q-factors and ultra-short scanning steps due to the trade-off between the system resolution and sensitivity. Early experimental results are in agreement with the trends of the numerical studies.
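One way to realize such an FFT-based read-out is to estimate the spectral shift between two scans from the phase slope of their cross-spectrum instead of locating the resonance minimum. The sketch below is our construction of that idea, with an illustrative Lorentzian dip and noise-free data:

    import numpy as np

    def spectral_shift(s_ref, s_shifted, step):
        # Estimate the wavelength shift between two scans of the same
        # resonance from the phase slope of their cross-spectrum,
        # pooling all scanned points instead of searching for the dip.
        n = len(s_ref)
        f = np.fft.rfftfreq(n, d=step)
        cross = np.fft.rfft(s_ref) * np.conj(np.fft.rfft(s_shifted))
        phase = np.unwrap(np.angle(cross))
        band = slice(1, n // 8)                   # low-frequency band only
        slope = np.polyfit(f[band], phase[band], 1)[0]
        return slope / (2.0 * np.pi)              # shift, in scan units

    lam = np.arange(0.0, 200.0, 0.25)             # scan axis, 0.25 pm steps
    dip = lambda x0: 1.0 - 0.8 / (1.0 + ((lam - x0) / 20.0) ** 2)
    print(spectral_shift(dip(100.0), dip(100.3), 0.25))   # close to 0.3 pm

Because the estimate pools all scanned points rather than a handful near the minimum, it degrades gracefully for broad (low-Q) resonances and coarse scanning steps, which is the regime the paper targets.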
NASA Astrophysics Data System (ADS)
Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck
2018-02-01
The numerical modelling of the behaviour of materials at the microstructural scale has developed greatly over the last two decades. Unfortunately, conventional solution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they do not remain quantitative because of the plastic behaviour. This work presents the development of a numerical solver for finite element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first consists in maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and performance of a space-time separated representation, the simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and of time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes faster than the standard solver.
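The first concept amounts to a modified Newton iteration in which a single factorization of the initial stiffness is reused for every iteration and cycle; a linear-algebra sketch under that assumption, not the authors' solver:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def modified_newton(K0, residual, u0, tol=1e-8, max_it=200):
        # Solve R(u) = 0 reusing one factorization of the initial (elastic)
        # stiffness K0 instead of reassembling the tangent at every step.
        lu = lu_factor(K0)            # factor once, reuse throughout
        u = u0.copy()
        for _ in range(max_it):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            u -= lu_solve(lu, r)      # cheap back-substitution per iteration
        return u

Convergence is slower per iteration than with a freshly assembled tangent, but each iteration costs only a back-substitution, which is where the savings over many loading cycles come from.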
Revealing retroperitoneal liposarcoma morphology using optical coherence tomography
NASA Astrophysics Data System (ADS)
Carbajal, Esteban F.; Baranov, Stepan A.; Manne, Venu G. R.; Young, Eric D.; Lazar, Alexander J.; Lev, Dina C.; Pollock, Raphael E.; Larin, Kirill V.
2011-02-01
A new approach to distinguish normal fat, well-differentiated (WD), and dedifferentiated liposarcoma (LS) tumors is demonstrated, based on the use of optical coherence tomography (OCT). OCT images show the same structures seen with conventional histological methods. Our visual grading analysis is supported by numerical analysis of observed structures for normal fat and WDLS samples. Further development could apply the real-time and high resolution advantages of OCT for use in liposarcoma diagnosis and clinical procedures.
NASA Astrophysics Data System (ADS)
Gailler, A.; Hébert, H.; Schindelé, F.; Reymond, D.
2017-11-01
Tsunami modeling tools in the French tsunami Warning Center operational context provide rapidly derived warning levels with a dimensionless variable at basin scale. A new forecast method based on coastal amplification laws has been tested to estimate the tsunami onshore height, with a focus on the French Riviera test-site (Nice area). This fast prediction tool provides a coastal tsunami height distribution, calculated from the numerical simulation of the deep ocean tsunami amplitude and using a transfer function derived from the Green's law. Due to a lack of tsunami observations in the western Mediterranean basin, coastal amplification parameters are here defined regarding high resolution nested grids simulations. The preliminary results for the Nice test site on the basis of nine historical and synthetic sources show a good agreement with the time-consuming high resolution modeling: the linear approximation is obtained within 1 min in general and provides estimates within a factor of two in amplitude, although the resonance effects in harbors and bays are not reproduced. In Nice harbor especially, variation in tsunami amplitude is something that cannot be really assessed because of the magnitude range and maximum energy azimuth of possible events to account for. However, this method is well suited for a fast first estimate of the coastal tsunami threat forecast.
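The transfer function rests on Green's law for shoaling amplification; a sketch of the estimate, with the site-specific correction treated as a parameter calibrated against the nested-grid runs (our notation, not the operational code):

    def coastal_height(eta_deep, h_deep, h_coast, alpha=1.0):
        # Green's law shoaling estimate of the coastal tsunami height from
        # the simulated deep-ocean amplitude eta_deep at depth h_deep.
        # alpha is a site-specific amplification factor fitted to the
        # high-resolution nested-grid simulations (placeholder value here).
        return alpha * eta_deep * (h_deep / h_coast) ** 0.25

    # e.g. 5 cm offshore at 2500 m depth, extrapolated to 5 m depth:
    print(coastal_height(0.05, 2500.0, 5.0))   # ~0.24 m before site correction

Resonance in harbors and bays, which the abstract notes is not reproduced, lies outside this purely algebraic amplification.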
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-06-01
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
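The fusion step can be approximated with a standard wavelet merge, keeping coarse (color) content from the lens-based image and fine detail from the holographic reconstruction. This PyWavelets sketch is a generic stand-in for DCFM, assuming both images are registered to the same grid:

    import numpy as np
    import pywt

    def fuse_channel(color_chan, holo_mono, wavelet="db4", level=3):
        # Keep the coarse (approximation) content of one RGB channel and
        # replace its fine detail with that of the holographic image.
        c_lo = pywt.wavedec2(color_chan, wavelet, level=level)
        c_hi = pywt.wavedec2(holo_mono, wavelet, level=level)
        fused = [c_lo[0]] + list(c_hi[1:])   # color base, holographic detail
        return pywt.waverec2(fused, wavelet)

    def fuse_rgb(rgb, holo):
        # rgb: (H, W, 3) color-calibrated image; holo: (H, W) reconstruction
        return np.dstack([fuse_channel(rgb[..., i], holo) for i in range(3)])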
Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.
2014-01-01
Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications, including the density of waterbodies and frequency distributions of waterbody sizes. This study found that a small-magnitude change (1-1.5 m) in spatial resolution had little to no impact on the amount of water classified (i.e. the difference in the percentage mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m²). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomeroy, J. W., E-mail: James.Pomeroy@Bristol.ac.uk; Kuball, M.
2015-10-14
Solid immersion lenses (SILs) are shown to greatly enhance optical spatial resolution when measuring AlGaN/GaN high electron mobility transistors (HEMTs), taking advantage of the high refractive index of the SiC substrates commonly used for these devices. Solid immersion lenses can be applied to techniques such as electroluminescence emission microscopy and Raman thermography, aiding the development of device physics models. Focused ion beam milling is used to fabricate solid immersion lenses in SiC substrates with a numerical aperture of 1.3. A lateral spatial resolution of 300 nm is demonstrated at an emission wavelength of 700 nm, and an axial spatial resolution of 1.7 ± 0.3 μm at a laser wavelength of 532 nm; this is an improvement of 2.5× and 5×, respectively, compared with a conventional 0.5 numerical aperture objective lens without a SIL. These results highlight the benefit of applying the solid immersion lens technique to the optical characterization of GaN HEMTs. Further improvements may be gained through aberration compensation and by increasing the SIL numerical aperture.
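The reported lateral gain is consistent with a Rayleigh-criterion estimate once the substrate index raises the numerical aperture (a back-of-the-envelope check; the 0.61 prefactor assumes an ideal diffraction-limited system):

    # Rayleigh lateral resolution with and without the solid immersion lens.
    wavelength_nm = 700.0
    for na in (0.5, 1.3):                      # bare objective vs. SIL-assisted
        print(na, 0.61 * wavelength_nm / na)   # ~854 nm vs ~328 nm: ~2.6x gain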
Pennycook, Timothy J.; Lupini, Andrew R.; Yang, Hao; ...
2014-10-15
In this paper, we demonstrate a method to achieve high-efficiency phase contrast imaging in aberration-corrected scanning transmission electron microscopy (STEM) with a pixelated detector. The pixelated detector is used to record the Ronchigram as a function of probe position, which is then analyzed with ptychography. Ptychography has previously been used to provide super-resolution beyond the diffraction limit of the optics, alongside numerically correcting for spherical aberration. Here we rely on a hardware aberration corrector to eliminate aberrations, but use the pixelated detector data set to exploit the largest possible volume of Fourier space to create high-efficiency phase contrast images. The use of ptychography to diagnose the effects of chromatic aberration is also demonstrated. Finally, the four-dimensional dataset is used to compare different bright-field detector configurations from the same scan for a sample of bilayer graphene. Our method of high-efficiency ptychography produces the clearest images, while annular bright field produces almost no contrast for an in-focus aberration-corrected probe.
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
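A minimal sketch of the scenario look-up idea: map each offshore observation to the nearest member of the pre-computed scenario archive and read the stored inshore response. The names and the nearest-neighbour rule are ours, not the published algorithm, and wave direction is treated as a plain number (circular distance is ignored):

    import numpy as np

    def inshore_series(offshore_obs, scenario_params, scenario_inshore):
        # offshore_obs:     (T, 3) offshore (height, period, direction) series
        # scenario_params:  (S, 3) offshore conditions of the archived runs
        # scenario_inshore: (S, 3) corresponding modeled inshore conditions
        scale = scenario_params.std(axis=0)    # normalize unlike units
        d = np.linalg.norm(
            (offshore_obs[:, None] - scenario_params[None]) / scale, axis=2)
        return scenario_inshore[d.argmin(axis=1)]   # (T, 3) inshore series

Spreading each observation over several nearby scenarios instead of one would yield the uncertainty bands the authors derive from the scenario variability.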
NASA Astrophysics Data System (ADS)
Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.
2018-03-01
In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several high-order (HO) and high-resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes into the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered in evaluating ten schemes. The results showed that, with the DC method, the scheme with the lowest CPU time is in general SOU. In contrast, relative to the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are between 3.8 and 23.1% lower and between 12.6 and 56.1% lower, respectively. However, the other schemes are more time-consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% higher, respectively, when the NWF formulation is used for the second test case. Finally, further research is required to explore the computational cost of the NWF method in more complex problems.
An adaptive moving mesh method for two-dimensional ideal magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Han, Jianqiang; Tang, Huazhong
2007-01-01
This paper presents an adaptive moving mesh algorithm for two-dimensional (2D) ideal magnetohydrodynamics (MHD) that utilizes a staggered constrained transport technique to keep the magnetic field divergence-free. The algorithm consists of two independent parts: MHD evolution and mesh redistribution. The first part is a high-resolution, divergence-free, shock-capturing scheme on a fixed quadrangular mesh, while the second part is an iterative procedure. In each iteration, mesh points are first redistributed, and then a conservative-interpolation formula is used to calculate the remapped cell averages of the mass, momentum, and total energy on the resulting new mesh; the magnetic potential is remapped to the new mesh in a non-conservative way and is reconstructed to give a divergence-free magnetic field on the new mesh. Several numerical examples are given to demonstrate that the proposed method can achieve high numerical accuracy, track and resolve strong shock waves in ideal MHD problems, and preserve the divergence-free property of the magnetic field. Numerical examples include the smooth Alfvén wave problem, 2D and 2.5D shock tube problems, two rotor problems, the stringent blast problem, and the cloud-shock interaction problem.
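A 1-D sketch of the conservative-interpolation step, assuming piecewise-constant cell averages: new cell averages are formed from exact overlap integrals with the old mesh, so total mass is preserved to round-off. The paper's remapping is two-dimensional and higher order; this only illustrates the conservation mechanism.

```python
import numpy as np

def remap(edges_old, avg_old, edges_new):
    """Conservatively remap cell averages from an old 1-D mesh to a new one."""
    avg_new = np.zeros(len(edges_new) - 1)
    for j in range(len(edges_new) - 1):
        a, b = edges_new[j], edges_new[j + 1]
        total = 0.0
        for i in range(len(edges_old) - 1):
            lo = max(a, edges_old[i])
            hi = min(b, edges_old[i + 1])
            if hi > lo:                       # overlapping part of old cell i
                total += avg_old[i] * (hi - lo)
        avg_new[j] = total / (b - a)
    return avg_new

old_edges = np.linspace(0.0, 1.0, 11)
old_avg = np.sin(0.5 * (old_edges[:-1] + old_edges[1:]))
new_edges = np.sort(np.random.default_rng(0).uniform(0.0, 1.0, 9))
new_edges = np.concatenate(([0.0], new_edges, [1.0]))
new_avg = remap(old_edges, old_avg, new_edges)
# total "mass" is preserved to round-off:
print(np.sum(old_avg * np.diff(old_edges)) - np.sum(new_avg * np.diff(new_edges)))
```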
Numerical modeling of marine Gravity data for tsunami hazard zone mapping
NASA Astrophysics Data System (ADS)
Porwal, Nipun
2012-07-01
A tsunami is a series of ocean waves with very long wavelengths, ranging from 10 to 500 km. Tsunamis therefore behave as shallow-water waves and are hard to predict by many methods. Bottom pressure recorders of the Poseidon class are considered a preeminent means of detecting tsunami waves, but the acoustic modems of ocean bottom pressure (OBP) sensors placed in the vicinity of trenches deeper than 6000 m fail to relay OBP data to surface buoys. This paper therefore develops a numerical model of gravity field coefficients from the Bureau Gravimetric International (BGI): rather than serving their usual roles in geodesy, satellite orbit computation, and geophysics, the coefficients are mathematically transformed using normalized Legendre polynomials to generate high-resolution OBP data. Real-time sea-level-monitored OBP data at 0.3° by 1° spatial resolution, produced with a Kalman filter (kf080) over the past 10 years by the Estimating the Circulation and Climate of the Ocean (ECCO) project, have been correlated with the OBP data derived from the gravity field coefficients; this supports a feasibility study of future space-based tsunami detection and the identification of the most suitable sites for placing OBP sensors near deep trenches. The Levitus climatological temperature and salinity are assimilated into a version of the MITgcm using the adjoint method to obtain the sea-height segment. TOPEX/Poseidon satellite altimetry, surface momentum, heat, and freshwater fluxes from the NCEP reanalysis product, and the dynamic ocean topography DOT_DNSCMSS08_EGM08 are then used to interpret sea-bottom elevation. All datasets are then combined in the ArcGIS 9.3 raster calculator using Boolean intersection algebra and proximity analysis tools, together with a high-resolution sea-floor topographic map. Afterward, the tsunami-prone areas and the suitable BPR deployment sites identified in this research are validated against a passive microwave radiometry system for tsunami hazard zone mapping and a network of seismometers. Using such a methodology for early tsunami hazard zone mapping also increases accuracy and reduces the time needed for tsunami predictions. KEYWORDS: Tsunami, Gravity Field Coefficients, Ocean Bottom Pressure, ECCO, BGI, Sea Bottom Temperature, Sea Floor Topography.
NASA Astrophysics Data System (ADS)
Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang
2017-08-01
Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shi, J.; Chen, S. S.
2007-01-01
Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models; over the past decade, both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into a state-of-the-art, next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) category with very fast fall speeds (over 10 m/s). For an Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).
Rip current evidence by hydrodynamic simulations, bathymetric surveys and UAV observation
NASA Astrophysics Data System (ADS)
Benassai, Guido; Aucelli, Pietro; Budillon, Giorgio; De Stefano, Massimo; Di Luccio, Diana; Di Paola, Gianluigi; Montella, Raffaele; Mucerino, Luigi; Sica, Mario; Pennetta, Micla
2017-09-01
The prediction of the formation, spacing and location of rip currents is a scientific challenge that can be achieved by means of different complementary methods. In this paper the analysis of numerical and experimental data, including RPAS (remotely piloted aircraft systems) observations, allowed us to detect the presence of rip currents and rip channels at the mouth of Sele River, in the Gulf of Salerno, southern Italy. The dataset used to analyze these phenomena consisted of two different bathymetric surveys, a detailed sediment analysis and a set of high-resolution wave numerical simulations, completed with Google Earth™ images and RPAS observations. The grain size trend analysis and the numerical simulations allowed us to identify the rip current occurrence, forced by topographically constrained channels incised on the seabed, which were compared with observations.
A Simple Two Aircraft Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
1999-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management in which the crew in the cockpit, dispatchers in operation control centers, and air traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and resolution method.
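The constant-velocity prediction step has a simple closed form, sketched below: with relative position r and relative velocity v, the time-to-go is t* = -r·v/|v|² (clamped at zero) and the minimum separation is |r + v t*|. This is a generic CPA computation consistent with the description above, not the paper's implementation.

```python
import numpy as np

def cpa(p1, v1, p2, v2):
    """Time-to-go to the closest point of approach and minimum separation."""
    r = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    vv = np.dot(v, v)
    t = 0.0 if vv == 0.0 else max(0.0, -np.dot(r, v) / vv)  # time-to-go (s)
    d_min = np.linalg.norm(r + v * t)                   # minimum separation
    return t, d_min

# Head-on example: 10 km apart, 500 m lateral offset, closing at 400 m/s.
t_go, sep = cpa([0.0, 0.0], [200.0, 0.0], [10000.0, 500.0], [-200.0, 0.0])
print(f"time-to-go {t_go:.1f} s, min separation {sep:.0f} m")   # 25 s, 500 m
```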
Modeling the surface evapotranspiration over the southern Great Plains
NASA Technical Reports Server (NTRS)
Liljegren, J. C.; Doran, J. C.; Hubbe, J. M.; Shaw, W. J.; Zhong, S.; Collatz, G. J.; Cook, D. R.; Hart, R. L.
1996-01-01
We have developed a method to apply the Simple Biosphere Model of Sellers et al. to calculate the surface fluxes of sensible heat and water vapor at high spatial resolution over the domain of the US DOE's Cloud and Radiation Testbed (CART) in Kansas and Oklahoma. The CART, which is within the GCIP area of interest for the Mississippi River Basin, is an extensively instrumented facility operated as part of the DOE's Atmospheric Radiation Measurement (ARM) program. Flux values calculated with our method will be used to provide lower boundary conditions for numerical models to study the atmosphere over the CART domain.
High-resolution numerical models for smoke transport in plumes from wildland fires
Philip Cunningham; Scott Goodrick
2013-01-01
A high-resolution large-eddy simulation (LES) model is employed to examine the fundamental structure and dynamics of buoyant plumes arising from heat sources representative of wildland fires. Herein we describe several aspects of the mean properties of the simulated plumes. Mean plume trajectories are apparently well described by the traditional two-thirds law for...
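The abstract breaks off at the two-thirds law; for orientation, a common statement of that law for a bent-over buoyant plume is Briggs' rise formula, z(x) = 1.6 F_b^(1/3) x^(2/3) / U. The sketch below evaluates it with assumed inputs; the values are not from the paper.

```python
# "Two-thirds law" for a bent-over buoyant plume (Briggs): downwind of the
# source, the centerline rise grows as x^(2/3).
def plume_rise(x, F_b, U):
    """Centerline rise (m): 1.6 * F_b^(1/3) * x^(2/3) / U."""
    return 1.6 * F_b ** (1.0 / 3.0) * x ** (2.0 / 3.0) / U

F_b = 1000.0   # buoyancy flux parameter, m^4/s^3 (strong fire, assumed)
U = 5.0        # mean wind speed, m/s (assumed)
for x in (100.0, 500.0, 1000.0):
    print(f"x = {x:6.0f} m  ->  rise = {plume_rise(x, F_b, U):6.1f} m")
```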
Estimation of wind regime from combination of RCM and NWP data in the Gulf of Riga (Baltic Sea)
NASA Astrophysics Data System (ADS)
Sile, T.; Sennikovs, J.; Bethers, U.
2012-04-01
The Gulf of Riga is a semi-enclosed gulf located in the eastern part of the Baltic Sea. Reliable wind climate data are crucial for the development of wind energy. The objective of this study is to create high resolution wind parameter datasets for the Gulf of Riga using climate and numerical weather prediction (NWP) models as an alternative to methods that rely on observations, with the expectation of benefiting from a comparison of different approaches. The models used for the estimation of the wind regime are an ensemble of Regional Climate Models (RCM, ENSEMBLES, 23 runs are considered) and high resolution NWP data. Future projections provided by RCMs are of interest; however, their spatial resolution is unsatisfactory. We describe a method of spatial refinement of RCM data using NWP data to resolve small scale features. We apply the method of RCM bias correction (Sennikovs and Bethers, 2009), previously used for temperature and precipitation, to wind data, and use NWP data instead of observations. The refinement function is calculated using the contemporary climate (1981-2010) and later applied to RCM near-future (2021-2050) projections to produce a dataset with the same resolution as the NWP data. This method corrects for RCM biases that were shown to be present in the initial analysis, and an inter-model statistical analysis was carried out to estimate uncertainty. Using the datasets produced by this method, the current and future projections of wind speed and wind energy density are calculated. Acknowledgments: This research is part of the GORWIND (The Gulf of Riga as a Resource for Wind Energy) project (EU34711). The ENSEMBLES data used in this work was funded by the EU FP6 Integrated Project ENSEMBLES (Contract number 505539) whose support is gratefully acknowledged.
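A toy sketch of the refinement idea under simplifying assumptions: a per-cell multiplicative correction is fitted between NWP winds and RCM winds (already interpolated to the NWP grid) over the contemporary period, then applied to the future RCM projection. The cited bias-correction method is more elaborate; all arrays here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
nwp_now = 8.0 + rng.normal(0.0, 1.0, (120, 50))   # monthly wind, NWP grid
rcm_now = 7.0 + rng.normal(0.0, 1.0, (120, 50))   # RCM interpolated to NWP grid
rcm_fut = 7.3 + rng.normal(0.0, 1.0, (120, 50))   # future RCM, same grid

factor = nwp_now.mean(axis=0) / rcm_now.mean(axis=0)   # refinement per cell
fut_highres = rcm_fut * factor                         # corrected projection
print("mean correction factor:", factor.mean().round(3))
```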
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
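One plausible reading of the PCA/HPF integration, sketched with synthetic data: run PCA on the upsampled MS bands, inject the high-pass detail of the pan image into the first principal component, and invert the transform. The gain and filter window are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

rng = np.random.default_rng(0)
ms = rng.random((64, 64, 4))                 # LR multispectral, 4 bands
pan = rng.random((256, 256))                 # HR panchromatic

# Upsample MS to the pan grid, then PCA via SVD of the centered band matrix.
ms_up = np.dstack([zoom(ms[..., b], 4, order=1) for b in range(4)])
X = ms_up.reshape(-1, 4)
mu = X.mean(axis=0)
Xc = X - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt.T                              # principal components

detail = pan - uniform_filter(pan, size=5)   # high-pass of pan (HPF step)
pc1 = pcs[:, 0].reshape(pan.shape)
gain = pc1.std() / pan.std()                 # match dynamic ranges (assumed)
pcs[:, 0] = (pc1 + gain * detail).ravel()    # inject detail into PC1

fused = (pcs @ Vt + mu).reshape(ms_up.shape) # back to band space
print(fused.shape)                           # (256, 256, 4)
```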
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, second- to twelfth-order (3 to 13 point stencils), and Dispersion Relation Preserving schemes from 7 to 13 point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13 point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics allow for the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics, and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
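As background for the standard central schemes mentioned above, coefficients of a (2M)th-order central first-derivative stencil can be generated by matching Taylor expansions, i.e. solving a small Vandermonde system. DRP stencils instead tune the coefficients to minimize dispersion error; the sketch below covers only the standard case.

```python
import numpy as np

def central_coeffs(M):
    """Coefficients c_j with sum_j c_j f(x + j h) = h f'(x) + O(h^(2M+1))."""
    offsets = np.arange(-M, M + 1)
    A = np.vander(offsets, 2 * M + 1, increasing=True).T  # rows: powers 0..2M
    b = np.zeros(2 * M + 1)
    b[1] = 1.0                     # match the first-derivative term only
    return offsets, np.linalg.solve(A, b)

offsets, c = central_coeffs(2)     # 5-point, 4th order
print(offsets, c)                  # [1/12, -2/3, 0, 2/3, -1/12]

# Verify on f = sin: derivative at x = 0.3 with h = 0.01
h, x0 = 0.01, 0.3
approx = np.sum(c * np.sin(x0 + offsets * h)) / h
print(approx - np.cos(x0))         # ~1e-10
```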
VizieR Online Data Catalog: Abundances in the local region. II. F, G, and K dwarfs (Luck+, 2017)
NASA Astrophysics Data System (ADS)
Luck, R. E.
2017-06-01
The McDonald Observatory 2.1m Telescope and Sandiford Cassegrain Echelle Spectrograph provided much of the observational data for this study. High-resolution spectra were obtained during numerous observing runs, from 1996 to 2010. The spectra cover a continuous wavelength range from about 484 to 700nm, with a resolving power of about 60000. The wavelength range used demands two separate observations--one centered at about 520nm, and the other at about 630nm. Typical S/N values per pixel for the spectra are more than 150. Spectra of 57 dwarfs were obtained using the Hobby-Eberly telescope and High-Resolution Spectrograph. The spectra have a resolution of 30000, spanning the wavelength range of 400 to 785nm. They also have very high signal-to-noise ratios, >300 per resolution element in numerous cases. The last set of spectra were obtained from the ELODIE Archive (Moultaka et al. 2004PASP..116..693M). These spectra are fully processed, including order co-addition, and have a continuous wavelength span of 400 to 680nm and a resolution of 42000. The ELODIE spectra utilized here all have S/N>75 per pixel. (6 data files).
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Kumar, Sujay V.; Krikishen, Jayanthi; Jedlovec, Gary J.
2011-01-01
It is hypothesized that high-resolution, accurate representations of surface properties such as soil moisture and sea surface temperature are necessary to improve simulations of summertime pulse-type convective precipitation in high resolution models. This paper presents model verification results of a case study period from June-August 2008 over the Southeastern U.S. using the Weather Research and Forecasting numerical weather prediction model. Experimental simulations initialized with high-resolution land surface fields from the NASA Land Information System (LIS) and sea surface temperature (SST) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) are compared to a set of control simulations initialized with interpolated fields from the National Centers for Environmental Prediction 12-km North American Mesoscale model. The LIS land surface and MODIS SSTs provide a more detailed surface initialization at a resolution comparable to the 4-km model grid spacing. Soil moisture from the LIS spin-up run is shown to respond better to the extreme rainfall of Tropical Storm Fay in August 2008 over the Florida peninsula. The LIS has slightly lower errors and higher anomaly correlations in the top soil layer, but exhibits a stronger dry bias in the root zone. The model sensitivity to the alternative surface initial conditions is examined for a sample case, showing that the LIS/MODIS data substantially impact surface and boundary layer properties.
Zhou, Lian; Li, Xu; Zhu, Shanan; He, Bin
2011-01-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) was recently introduced as a noninvasive electrical conductivity imaging approach with high spatial resolution close to ultrasound imaging. In the present study, we test the feasibility of the MAT-MI method for breast tumor imaging using numerical modeling and computer simulation. Using the finite element method, we have built three dimensional numerical breast models with varieties of embedded tumors for this simulation study. In order to obtain an accurate and stable forward solution that does not have numerical errors caused by singular MAT-MI acoustic sources at conductivity boundaries, we first derive an integral forward method for calculating MAT-MI acoustic sources over the entire imaging volume. An inverse algorithm for reconstructing the MAT-MI acoustic source is also derived with spherical measurement aperture, which simulates a practical setup for breast imaging. With the numerical breast models, we have conducted computer simulations under different imaging parameter setups and all the results suggest that breast tumors that have large conductivity contrast to its surrounding tissues as reported in literature may be readily detected in the reconstructed MAT-MI images. In addition, our simulations also suggest that the sensitivity of imaging breast tumors using the presented MAT-MI setup depends more on the tumor location and the conductivity contrast between the tumor and its surrounding tissues than on the tumor size. PMID:21364262
NASA Astrophysics Data System (ADS)
Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana
2017-11-01
Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low resolution data sets. Our experiments on high resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
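A minimal sketch of the estimation strategy as described: learn a linear map from structural image properties to a calibrated parameter using configurations known to be optimal on low-resolution sets, then predict a configuration for a new high-resolution image. The features and values below are invented placeholders, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Structural properties per training set (hypothetical features:
# approximate vessel width in px, image width in px).
features = np.array([
    [ 7.0,  565.0],
    [ 9.0,  700.0],
    [12.0,  999.0],
])
optimal_scale = np.array([2.0, 2.6, 3.5])   # calibrated parameter per set

model = LinearRegression().fit(features, optimal_scale)
new_image = np.array([[24.0, 3504.0]])      # high-resolution fundus photo
print(model.predict(new_image))             # estimated parameter, no re-tuning
```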
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy, continued until a simulated image is found that matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method is shown using a digital micromirror device to physically simulate an object with known edge geometry. Experimental results for various high-temperature materials within the temperature range of 1000°C to 2400°C are presented.
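A 1-D sketch of the approximation loop under strong simplifications: the blurred edge is modeled as an erf profile, rendered at subpixel positions, and the L2 merit function is minimized by conjugate gradients. The paper's model is 2-D and includes object/background and aberration terms; the erf edge model here is an assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

x = np.arange(64, dtype=float)

def render(params):
    """Blurred 1-D edge: background + amplitude * erf step at subpixel x0."""
    x0, sigma, bg, amp = params
    return bg + amp * 0.5 * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

true = np.array([31.37, 1.8, 10.0, 100.0])
acquired = render(true) + np.random.default_rng(2).normal(0.0, 0.3, x.size)

merit = lambda p: np.sum((render(p) - acquired) ** 2)   # L2 merit function
fit = minimize(merit, x0=[30.0, 2.5, 8.0, 90.0], method="CG")
print("edge position:", fit.x[0])         # recovered with subpixel accuracy
```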
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
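A compact sketch of the BOS iteration on a toy blocky-model problem: a gradient step on the data term, a TV proximal step (approximated here by scikit-image's Chambolle TV denoiser), and a Bregman update of the data vector, with a discrepancy-based stop. The operator, step size, and TV weight are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
n = 64
m_true = np.zeros(n); m_true[20:40] = 1.0          # blocky "velocity" profile
G = rng.normal(0.0, 1.0 / np.sqrt(n), (48, n))     # toy linear operator
d = G @ m_true

m = np.zeros(n)
b = d.copy()
delta = 0.5 / np.linalg.norm(G, 2) ** 2            # stable step size
weight = 0.02                                      # TV strength (assumed)
for k in range(300):
    grad = G.T @ (G @ m - b)                       # forward (gradient) step
    m = denoise_tv_chambolle(m - delta * grad, weight=weight)  # TV prox step
    b = b + (d - G @ m)                            # Bregman update
    if np.linalg.norm(G @ m - d) < 1e-3 * np.linalg.norm(d):   # discrepancy stop
        break
print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```

Note that the loop needs only products with G and its transpose, which is why no matrix inversion is required.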
NASA Astrophysics Data System (ADS)
Bastin, Sophie; Champollion, Cédric; Bock, Olivier; Drobinski, Philippe; Masson, Frédéric
2005-03-01
Global Positioning System (GPS) tomography analyses of water vapor, complemented by high-resolution numerical simulations are used to investigate a Mistral/sea breeze event in the region of Marseille, France, during the ESCOMPTE experiment. This is the first time GPS tomography has been used to validate the three-dimensional water vapor concentration from numerical simulation, and to analyze a small-scale meteorological event. The high spatial and temporal resolution of GPS analyses provides a unique insight into the evolution of the vertical and horizontal distribution of water vapor during the Mistral/sea-breeze transition.
Dances with Membranes: Breakthroughs from Super-resolution Imaging
Curthoys, Nikki M.; Parent, Matthew; Mlodzianoski, Michael; Nelson, Andrew J.; Lilieholm, Jennifer; Butler, Michael B.; Valles, Matthew; Hess, Samuel T.
2017-01-01
Biological membrane organization mediates numerous cellular functions and has also been connected with an immense number of human diseases. However, until recently, experimental methodologies have been unable to directly visualize the nanoscale details of biological membranes, particularly in intact living cells. Numerous models explaining membrane organization have been proposed, but testing those models has required indirect methods; the desire to directly image proteins and lipids in living cell membranes is a strong motivation for the advancement of technology. The development of super-resolution microscopy has provided powerful tools for quantification of membrane organization at the level of individual proteins and lipids, and many of these tools are compatible with living cells. Previously inaccessible questions are now being addressed, and the field of membrane biology is developing rapidly. This chapter discusses how the development of super-resolution microscopy has led to fundamental advances in the field of biological membrane organization. We summarize the history and some models explaining how proteins are organized in cell membranes, and give an overview of various super-resolution techniques and methods of quantifying super-resolution data. We discuss the application of super-resolution techniques to membrane biology in general, and also with specific reference to the fields of actin and actin-binding proteins, virus infection, mitochondria, immune cell biology, and phosphoinositide signaling. Finally, we present our hopes and expectations for the future of super-resolution microscopy in the field of membrane biology. PMID:26015281
An artificial nonlinear diffusivity method for supersonic reacting flows with shocks
NASA Astrophysics Data System (ADS)
Fiorina, B.; Lele, S. K.
2007-03-01
A computational approach for modeling interactions between shock waves, contact discontinuities and reaction zones with a high-order compact scheme is investigated. To prevent the formation of spurious oscillations around shocks, artificial nonlinear viscosity [A.W. Cook, W.H. Cabot, A high-wavenumber viscosity for high resolution numerical method, J. Comput. Phys. 195 (2004) 594-601] based on a high-order derivative of the strain rate tensor is used. To capture temperature and species discontinuities, a nonlinear diffusivity based on the entropy gradient is added. It is shown that the damping of 'wiggles' is controlled by the model constants and is largely independent of the mesh size and the shock strength. The same holds for the numerical shock thickness and allows a determination of the L2 error. In the shock tube problem, with fluids of different initial entropy separated by the diaphragm, an artificial diffusivity is required to accurately capture the contact surface. Finally, the method is applied to a shock wave propagating into a medium with non-uniform density/entropy and to a CJ detonation wave. A multi-dimensional formulation of the model is presented and is illustrated by a 2D oblique wave reflection from an inviscid wall, by a 2D supersonic blunt body flow and by a Mach reflection problem.
Hierarchical classification in high dimensional numerous class cases
NASA Technical Reports Server (NTRS)
Kim, Byungyong; Landgrebe, D. A.
1990-01-01
As progress in new sensor technology continues, increasingly high resolution imaging sensors are being developed. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. Three methods for designing a decision tree classifier are discussed: a top down approach, a bottom up approach, and a hybrid approach. Three feature extraction techniques are implemented. Canonical and extended canonical techniques are mainly dependent upon the mean difference between two classes. An autocorrelation technique is dependent upon the correlation differences. The mathematical relationship between sample size, dimensionality, and risk value is derived.
Orzó, László
2015-06-29
Retrieving correct phase information from an in-line hologram is difficult as the object wave field and the diffractions of the zero order and the conjugate object term overlap. The existing iterative numerical phase retrieval methods are slow, especially in the case of high Fresnel number systems. Conversely, the reconstruction of the object wave field from an off-axis hologram is simple, but due to the applied spatial frequency filtering the achievable resolution is confined. Here, a new, high-speed algorithm is introduced that efficiently incorporates the data of an auxiliary off-axis hologram in the phase retrieval of the corresponding in-line hologram. The efficiency of the introduced combined phase retrieval method is demonstrated by simulated and measured holograms.
Nixdorf, Erik; Sun, Yuanyuan; Lin, Mao; Kolditz, Olaf
2017-12-15
The main objective of this study is to quantify the groundwater contamination risk of the Songhua River Basin by applying a novel approach of integrating public datasets, web services and numerical modelling techniques. To our knowledge, this study is the first to establish groundwater risk maps for the entire Songhua River Basin, one of the largest and most contamination-endangered river basins in China. Index-based groundwater risk maps were created with GIS tools at a spatial resolution of 30 arc sec by combining the results of groundwater vulnerability and hazard assessment. Groundwater vulnerability was evaluated using the DRASTIC index method based on public datasets at the highest available resolution in combination with numerical groundwater modelling. As a novel approach to overcome data scarcity at large scales, a web-mapping-service-based data query was applied to obtain an inventory of potential hazardous sites within the basin. The groundwater risk assessment demonstrated that <1% of the Songhua River Basin is at high or very high contamination risk. These areas were mainly located in the vast plain areas, with hotspots particularly in the Changchun metropolitan area. Moreover, groundwater levels and pollution point sources were found to have a significantly larger impact in assessing these areas than originally assumed by the index scheme. Moderate contamination risk was assigned to 27% of the aquifers, predominantly associated with less densely populated agricultural areas. However, the majority of aquifer area in the sparsely populated mountain ranges displayed low groundwater contamination risk. Sensitivity analysis demonstrated that this novel method is valid for regional assessments of groundwater contamination risk. Despite limitations in resolution and input data consistency, the obtained groundwater contamination risk maps will be beneficial for regional and local decision-making processes with regard to groundwater protection measures, particularly if other data availability is limited. Copyright © 2017 Elsevier B.V. All rights reserved.
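For concreteness, the index-overlay core of a DRASTIC-style vulnerability map is a weighted sum of seven rated rasters, as sketched below. The weights are the standard DRASTIC weights; the rating rasters are random stand-ins rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)                     # raster grid (e.g. 30 arc-sec cells)
layers = {                             # rating rasters, each scaled 1..10
    "Depth_to_water":         rng.integers(1, 11, shape),
    "net_Recharge":           rng.integers(1, 11, shape),
    "Aquifer_media":          rng.integers(1, 11, shape),
    "Soil_media":             rng.integers(1, 11, shape),
    "Topography":             rng.integers(1, 11, shape),
    "Impact_vadose_zone":     rng.integers(1, 11, shape),
    "hydraulic_Conductivity": rng.integers(1, 11, shape),
}
weights = [5, 4, 3, 2, 1, 5, 3]        # standard D, R, A, S, T, I, C weights

index = sum(w * r for w, r in zip(weights, layers.values()))
print(index.min(), index.max())        # bounded by 23 and 230 by construction
```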
Modeling dust growth in protoplanetary disks: The breakthrough case
NASA Astrophysics Data System (ADS)
Drążkowska, J.; Windmark, F.; Dullemond, C. P.
2014-07-01
Context. Dust coagulation in protoplanetary disks is one of the initial steps toward planet formation. Simple toy models are often not sufficient to cover the complexity of the coagulation process, and a number of numerical approaches are therefore used, among which integration of the Smoluchowski equation and various versions of the Monte Carlo algorithm are the most popular. Aims: Recent progress in understanding the processes involved in dust coagulation have caused a need for benchmarking and comparison of various physical aspects of the coagulation process. In this paper, we directly compare the Smoluchowski and Monte Carlo approaches to show their advantages and disadvantages. Methods: We focus on the mechanism of planetesimal formation via sweep-up growth, which is a new and important aspect of the current planet formation theory. We use realistic test cases that implement a distribution in dust collision velocities. This allows a single collision between two grains to have a wide range of possible outcomes but also requires a very high numerical accuracy. Results: For most coagulation problems, we find a general agreement between the two approaches. However, for the sweep-up growth driven by the "lucky" breakthrough mechanism, the methods exhibit very different resolution dependencies. With too few mass bins, the Smoluchowski algorithm tends to overestimate the growth rate and the probability of breakthrough. The Monte Carlo method is less dependent on the number of particles in the growth timescale aspect but tends to underestimate the breakthrough chance due to its limited dynamic mass range. Conclusions: We find that the Smoluchowski approach, which is generally better for the breakthrough studies, is sensitive to low mass resolutions in the high-mass, low-number tail that is important in this scenario. To study the low number density features, a new modulation function has to be introduced to the interaction probabilities. As the minimum resolution needed for breakthrough studies depends strongly on setup, verification has to be performed on a case by case basis.
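For reference, the discrete Smoluchowski coagulation equation the first approach integrates is dn_k/dt = (1/2) Σ_{i+j=k} K_ij n_i n_j − n_k Σ_i K_ik n_i. The sketch below integrates it explicitly on a linear mass grid with a constant kernel; production codes use logarithmic mass bins, which is where the resolution issues discussed above arise.

```python
import numpy as np

K = 1.0                                   # constant collision kernel (assumed)
nbins, dt, steps = 64, 0.01, 200
n = np.zeros(nbins); n[0] = 1.0           # monodisperse initial condition
# bin index i represents integer mass m = i + 1

for _ in range(steps):
    gain = np.zeros(nbins)
    for k in range(1, nbins):             # gain into mass (k+1) from i+j pairs
        gain[k] = 0.5 * K * np.sum(n[:k] * n[k-1::-1])
    loss = K * n * n.sum()                # loss from collisions with anything
    n = n + dt * (gain - loss)

# mass leaks only through the truncated high-mass tail:
print("total mass drift:", np.sum((np.arange(nbins) + 1) * n) - 1.0)
```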
High Order Approximations for Compressible Fluid Dynamics on Unstructured and Cartesian Meshes
NASA Technical Reports Server (NTRS)
Barth, Timothy (Editor); Deconinck, Herman (Editor)
1999-01-01
The development of high-order accurate numerical discretization techniques for irregular domains and meshes is often cited as one of the remaining challenges facing the field of computational fluid dynamics. In structural mechanics, the advantages of high-order finite element approximation are widely recognized. This is especially true when high-order element approximation is combined with element refinement (h-p refinement). In computational fluid dynamics, high-order discretization methods are infrequently used in the computation of compressible fluid flow. The hyperbolic nature of the governing equations and the presence of solution discontinuities makes high-order accuracy difficult to achieve. Consequently, second-order accurate methods are still predominately used in industrial applications even though evidence suggests that high-order methods may offer a way to significantly improve the resolution and accuracy for these calculations. To address this important topic, a special course was jointly organized by the Applied Vehicle Technology Panel of NATO's Research and Technology Organization (RTO), the von Karman Institute for Fluid Dynamics, and the Numerical Aerospace Simulation Division at the NASA Ames Research Center. The NATO RTO sponsored course entitled "Higher Order Discretization Methods in Computational Fluid Dynamics" was held September 14-18, 1998 at the von Karman Institute for Fluid Dynamics in Belgium and September 21-25, 1998 at the NASA Ames Research Center in the United States. During this special course, lecturers from Europe and the United States gave a series of comprehensive lectures on advanced topics related to the high-order numerical discretization of partial differential equations with primary emphasis given to computational fluid dynamics (CFD). Additional consideration was given to topics in computational physics such as the high-order discretization of the Hamilton-Jacobi, Helmholtz, and elasticity equations. This volume consists of five articles prepared by the special course lecturers. These articles should be of particular relevance to those readers with an interest in numerical discretization techniques which generalize to very high-order accuracy. The articles of Professors Abgrall and Shu consider the mathematical formulation of high-order accurate finite volume schemes utilizing essentially non-oscillatory (ENO) and weighted essentially non-oscillatory (WENO) reconstruction together with upwind flux evaluation. These formulations are particularly effective in computing numerical solutions of conservation laws containing solution discontinuities. Careful attention is given by the authors to implementational issues and techniques for improving the overall efficiency of these methods. The article of Professor Cockburn discusses the discontinuous Galerkin finite element method. This method naturally extends to high-order accuracy and has an interpretation as a finite volume method. Cockburn addresses two important issues associated with the discontinuous Galerkin method: controlling spurious extrema near solution discontinuities via "limiting" and the extension to second order advective-diffusive equations (joint work with Shu). The articles of Dr. Henderson and Professor Schwab consider the mathematical formulation and implementation of the h-p finite element methods using hierarchical basis functions and adaptive mesh refinement. 
These methods are particularly useful in computing high-order accurate solutions containing perturbative layers and corner singularities. Additional flexibility is obtained using a mortar FEM technique whereby nonconforming elements are interfaced together. Numerous examples are given by Henderson applying the h-p FEM method to the simulation of turbulence and turbulence transition.
A method for generating high resolution satellite image time series
NASA Astrophysics Data System (ADS)
Guo, Tao
2014-10-01
There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications. But it is still a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, many R&D efforts have been ongoing for years and have led to some successes, roughly in two aspects: one includes super-resolution, pan-sharpening, and similar methods, which can effectively enhance spatial resolution and generate good visual effects but hardly preserve spectral signatures and thus have limited analytical value; the other is time interpolation, a straightforward way to increase temporal frequency that in fact adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution data. Our method starts with a pair of high and low resolution data sets, and a spatial registration is done by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, reducing the use of costly high resolution data as much as possible, and it presents a highly effective way to build an economically operational monitoring solution for agriculture, forestry, land use investigation, environmental, and other applications.
Optimization of an on-board imaging system for extremely rapid radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherry Kemmerling, Erica M.; Wu, Meng, E-mail: mengwu@stanford.edu; Yang, He
2015-11-15
Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by how much their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between the gold standard and realistic registration deformation fields was computed. Results: By most global metrics (e.g., mean, median, and maximum pointwise distance), the high-resolution detector had the best performance, but the medium-resolution detector was comparable. For all medium- and high-resolution detector registrations, the mean error between the realistic and gold standard deformation fields was less than 4 mm. By pointwise metrics (e.g., tracking a small lesion), the high- and medium-resolution detectors performed similarly. For these detectors, the smallest error between the realistic and gold standard registrations was 0.6 mm and the largest error was 3.6 mm. Conclusions: The medium-resolution CT detector was selected as the best for an extremely rapid radiation therapy system. In essentially all test cases, data from this detector produced a significantly better registration than data from the low-resolution detector and a comparable registration to data from the high-resolution detector. The medium-resolution detector provides an appropriate compromise between registration accuracy and system cost.
NASA Astrophysics Data System (ADS)
Soni, V.; Hadjadj, A.; Roussel, O.
2017-12-01
In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Through a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when a constant MR tolerance is used, owing to the accumulation of error. To overcome this problem, a variable tolerance formulation is proposed and assessed through a new quality criterion, ensuring a time-convergent solution of suitable quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, thereby demonstrating the efficiency and the performance of the proposed method.
The impact of fabrication parameters and substrate stiffness in direct writing of living constructs.
Tirella, Annalisa; Ahluwalia, Arti
2012-01-01
Biomolecules and living cells can be printed in high-resolution patterns to fabricate living constructs for tissue engineering. To evaluate the impact of processing cells with rapid prototyping (RP) methods, we modeled the printing phase of two RP systems that use biomaterial inks containing living cells: a high-resolution inkjet system (BioJet) and a lower-resolution nozzle-based contact printing system (PAM2). In the first fabrication method, we reasoned that cell damage occurs principally during drop collision on the printing surface; in the second, we hypothesize that shear stresses act on cells during extrusion (within the printing nozzle). The two cases were modeled by changing the printing conditions: biomaterial substrate stiffness and volumetric flow rate, respectively, in BioJet and PAM2. Results show that during inkjet printing, impact energies of about 10⁻⁸ J are transmitted to cells, whereas extrusion energies of the order of 10⁻¹¹ J are exerted in direct printing. Viability tests of printed cells can be related to these numerical simulations, suggesting a threshold energy of 10⁻⁹ J to avoid permanent cell damage. To obtain well-defined living constructs, a combination of these methods is proposed for the fabrication of scaffolds with controlled 3D architecture and spatial distribution of biomolecules and cells. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw
2014-02-15
We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxation terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.
Time multiplexing based extended depth of focus imaging.
Ilovitsh, Asaf; Zalevsky, Zeev
2016-01-01
We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, the super resolution is achieved by generating duplication of the optical transfer function in the spectrum domain, by the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the gratings frequency and, by that changing the duplication positions, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations and validated by a laboratory experiment.
NASA Astrophysics Data System (ADS)
Qu, Yegao; Shi, Ruchao; Batra, Romesh C.
2018-02-01
We present a robust sharp-interface immersed boundary method for numerically studying high speed flows of compressible and viscous fluids interacting with arbitrarily shaped stationary or moving rigid solids. The Navier-Stokes equations are discretized on a rectangular Cartesian grid based on a low-diffusion flux splitting method for the inviscid fluxes and conservative high-order central-difference schemes for the viscous components. Discontinuities such as those introduced by shock waves and contact surfaces are captured by using a high-resolution weighted essentially non-oscillatory (WENO) scheme. Ghost cells in the vicinity of the fluid-solid interface are introduced to satisfy boundary conditions on the interface. Values of variables in the ghost cells are found by using a constrained moving least squares (CMLS) method that eliminates numerical instabilities encountered in the conventional MLS formulation. The solution of the fluid flow and the solid motion equations is advanced in time by using the third-order Runge-Kutta and the implicit Newmark integration schemes, respectively. The performance of the proposed method has been assessed by computing results for the following four problems: shock-boundary layer interaction, supersonic viscous flow past a rigid cylinder, a moving piston in a shock tube, and shock-induced lift-off of circular, rectangular, and elliptic cylinders from a flat surface, and comparing the computed results with those available in the literature.
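As background for the shock-capturing step, the sketch below performs the standard left-biased WENO5 reconstruction of an interface value from five cell averages, with Jiang-Shu smoothness indicators and linear weights. It is a generic reconstruction kernel, not the authors' solver.

```python
import numpy as np

def weno5(v):
    """WENO5 left-biased value at i+1/2 from cell averages v[i-2..i+2]."""
    eps = 1e-6
    b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 0.25*(v[0]-4*v[1]+3*v[2])**2
    b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 0.25*(v[1]-v[3])**2
    b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 0.25*(3*v[2]-4*v[3]+v[4])**2
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()                       # nonlinear weights
    p0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6  # candidate stencil values
    p1 = ( -v[1] + 5*v[2] +  2*v[3]) / 6
    p2 = (2*v[2] + 5*v[3] -   v[4]) / 6
    return w @ np.array([p0, p1, p2])

smooth = np.sin(0.1 * np.arange(5))          # smooth data -> high-order value
step = np.array([1.0, 1.0, 1.0, 0.0, 0.0])   # discontinuity -> no oscillation
print(weno5(smooth), weno5(step))
```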
Numerical Modeling of Three-Dimensional Fluid Flow with Phase Change
NASA Technical Reports Server (NTRS)
Esmaeeli, Asghar; Arpaci, Vedat
1999-01-01
We present a numerical method to compute phase change dynamics of three-dimensional deformable bubbles. The full Navier-Stokes and energy equations are solved for both phases by a front tracking/finite difference technique. The fluid boundary is explicitly tracked by discrete points that are connected by triangular elements to form a front that is used to keep the stratification of material properties sharp and to calculate the interfacial source terms. Two simulations are presented to show robustness of the method in handling complex phase boundaries. In the first case, growth of a vapor bubble in zero gravity is studied where large volume increase of the bubble is managed by adaptively increasing the front resolution. In the second case, growth of a bubble under high gravity is studied where indentation at the rear of the bubble results in a region of large curvature which challenges the front tracking in three dimensions.
NASA Astrophysics Data System (ADS)
Bonavita, M.; Torrisi, L.
2005-03-01
A new data assimilation system has been designed and implemented at the National Center for Aeronautic Meteorology and Climatology of the Italian Air Force (CNMCA) in order to improve its operational numerical weather prediction capabilities and provide more accurate guidance to operational forecasters. The system, which is undergoing testing before operational use, is based on an “observation space” version of the 3D-VAR method for the objective analysis component, and on the High Resolution Regional Model (HRM) of the Deutscher Wetterdienst (DWD) for the prognostic component. Notable features of the system include a completely parallel (MPI+OMP) implementation of the solution of analysis equations by a preconditioned conjugate gradient descent method; correlation functions in spherical geometry with thermal wind constraint between mass and wind field; derivation of the objective analysis parameters from a statistical analysis of the innovation increments.
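A toy sketch of an observation-space 3D-VAR solve consistent with the description above: form the innovation covariance HBH^T + R, solve (HBH^T + R) w = y − H x_b by preconditioned conjugate gradients, and map back with x_a = x_b + B H^T w. Sizes, covariances, and the Jacobi preconditioner are assumptions; the operational system performs this solve in parallel (MPI+OMP).

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(4)
n, p = 200, 40                       # state and observation dimensions (toy)
H = np.zeros((p, n)); H[np.arange(p), rng.choice(n, p, False)] = 1.0
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = np.exp(-(dist / 10.0) ** 2)      # Gaussian background-error covariance
R = 0.1 * np.eye(p)                  # observation-error covariance

x_b = np.zeros(n)
x_t = np.sin(np.arange(n) / 15.0)    # "truth" for synthetic innovations
y = H @ x_t + rng.normal(0.0, 0.1, p)

S = H @ B @ H.T + R                  # innovation covariance (explicit here)
M = LinearOperator((p, p), matvec=lambda v: v / np.diag(S))  # Jacobi precond.
w, info = cg(S, y - H @ x_b, M=M)    # preconditioned conjugate gradients
x_a = x_b + B @ H.T @ w              # analysis increment back to state space
print("analysis RMSE:", np.sqrt(np.mean((x_a - x_t) ** 2)))
```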
Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-08-01
Tomography is a very cost-effective method to study the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations from the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) are used as the object function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 10¹¹ electrons/m³ is also computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionospheric reconstruction methods.
Research highlights: June 1990 - May 1991
NASA Technical Reports Server (NTRS)
1991-01-01
Linear instability calculations at MSFC have suggested that the Geophysical Fluid Flow Cell (GFFC) should exhibit classic baroclinic instability at accessible parameter settings. Interest was in the mechanisms of transition to temporal chaos and the evolution of spatio-temporal chaos. In order to understand more about such transitions, high resolution numerical experiments for the physically simplest model of two-layer baroclinic instability were conducted. This model has the advantage that the numerical code is exponentially convergent and can be efficiently run for very long times, enabling the study of chaotic attractors without the often devastating effects of low-order truncation found in many previous studies. Numerical algorithms for implementing an empirical orthogonal function (EOF) analysis of the high resolution numerical results were completed. Under conditions of rapid rotation and relatively low differential heating, convection in a spherical shell takes place as columnar banana cells wrapped around the annular gap, but with axes oriented along the axis of rotation; these were clearly evident in the GFFC experiments. The results of recent numerical simulations of columnar convection and future research plans are presented.
Absolute single-photoionization cross sections of Se2+: Experiment and theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macaluso, D. A.; Aguilar, A.; Kilcoyne, A. L. D.
2015-12-28
Absolute single-photoionization cross-section measurements for Se2+ ions were performed at the Advanced Light Source at Lawrence Berkeley National Laboratory using the merged-beams photo-ion technique. Measurements were made at a photon energy resolution of 24 ± 3 meV in the photon energy range 23.5-42.5 eV, spanning the ground state and low-lying metastable state ionization thresholds. To clearly resolve the resonant structure near the ground-state threshold, high-resolution measurements were made from 30.0 to 31.9 eV at a photon energy resolution of 6.7 ± 0.7 meV. Numerous resonance features observed in the experimental spectra are assigned and their energies and quantum defects tabulated. The high-resolution cross-section measurements are compared with large-scale, state-of-the-art theoretical cross-section calculations obtained from the Dirac Coulomb R-matrix method. Suitable agreement is obtained over the entire photon energy range investigated. These results are an experimental determination of the absolute photoionization cross section of doubly ionized selenium and include a detailed analysis of the photoionization resonance spectrum of this ion.
High-resolution analysis of the mechanical behavior of tissue
NASA Astrophysics Data System (ADS)
Hudnut, Alexa W.; Armani, Andrea M.
2017-06-01
The mechanical behavior and properties of biomaterials, such as tissue, have been directly and indirectly connected to numerous malignant physiological states. For example, an increase in the Young's Modulus of tissue can be indicative of cancer. Due to the heterogeneity of biomaterials, it is extremely important to perform these measurements using whole or unprocessed tissue because the tissue matrix contains important information about the intercellular interactions and the structure. Thus, developing high-resolution approaches that can accurately measure the elasticity of unprocessed tissue samples is of great interest. Unfortunately, conventional elastography methods such as atomic force microscopy, compression testing, and ultrasound elastography either require sample processing or have poor resolution. In the present work, we demonstrate the characterization of unprocessed salmon muscle using an optical polarimetric elastography system. We compare the results of compression testing within different samples of salmon skeletal muscle with different numbers of collagen membranes to characterize differences in heterogeneity. Using the intrinsic collagen membranes as markers, we determine the resolution of the system when testing biomaterials. The device reproducibly measures the stiffness of the tissues at variable strains. By analyzing the amount of energy lost by the sample during compression, collagen membranes that are 500 μm in size are detected.
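For orientation, the simplest elasticity estimate from compression data is the slope of a linear stress-strain fit; a generic sketch, not the paper's polarimetric analysis (numbers hypothetical):

```python
import numpy as np

def youngs_modulus(strain, stress):
    """Young's modulus (Pa) as the slope of a linear fit to the
    low-strain portion of the stress-strain curve."""
    slope, _intercept = np.polyfit(strain, stress, 1)
    return slope

# Hypothetical compression data for a soft tissue sample
strain = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
stress = np.array([0.0, 210.0, 395.0, 615.0, 800.0])  # Pa
print(youngs_modulus(strain, stress))  # ~10 kPa
```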
Huang, Chenxi; Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi; Liu, Huafeng
2012-11-19
We propose a new method for realizing high-spatial-resolution detection of singularity points in optical vortex beams. The method uses a Shack-Hartmann wavefront sensor (SHWS) to record a Hartmanngram. A map of evaluation values related to phase slope is then calculated from the Hartmanngram. The position of an optical vortex is determined by comparing the map with reference maps that are calculated from numerically created spiral phases having various positions. Optical experiments were carried out to verify the method. We displayed various spiral phase distribution patterns on a phase-only spatial light modulator and measured the resulting singularity point using the proposed method. The results showed good linearity in detecting the position of singularity points. The RMS error of the measured position of the singularity point was approximately 0.056, in units normalized to the lens size of the lenslet array used in the SHWS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundquist, K A
Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. Specified diurnal heating in a valley, producing anabatic winds, is used to validate the use of flux (non-zero) boundary conditions. This anabatic flow set-up is further coupled to atmospheric physics parameterizations, which calculate surface fluxes, demonstrating that the IBM can be coupled to various land-surface parameterizations in atmospheric models. Additionally, the IB method is extended to three dimensions, using both trilinear and inverse distance weighted interpolations. Results are presented for geostrophic flow over a three-dimensional hill. It is found that while the IB method using trilinear interpolation works well for simple three-dimensional geometries, a more flexible and robust method is needed for extremely complex geometries, as found in three-dimensional urban environments. A second, more flexible, immersed boundary method is devised using inverse distance weighting, and results are compared to the first IBM approach. Additionally, the functionality to nest a domain with resolved complex geometry inside of a parent domain without resolved complex geometry is described. The new IBM approach is used to model urban terrain from Oklahoma City in a one-way nested configuration, where lateral boundary conditions are provided by the parent domain. Finally, the IB method is extended to include wall model parameterizations for rough surfaces. Two possible implementations are presented, one which uses the log law to reconstruct velocities exterior to the solid domain, and one which reconstructs shear stress at the immersed boundary, rather than velocity. These methods are tested on the three-dimensional canonical case of neutral atmospheric boundary layer flow over flat terrain.
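A minimal sketch of the first wall-model variant, reconstructing velocity from the rough-wall log law (all heights and the roughness length are hypothetical):

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def log_law_reconstruct(u_ref, z_ref, z_target, z0):
    """Reconstruct velocity at height z_target from a reference velocity u_ref
    at z_ref, assuming a rough-wall log law u(z) = (u_star/kappa) * ln(z/z0)."""
    u_star = KAPPA * u_ref / np.log(z_ref / z0)   # friction velocity from the log law
    return (u_star / KAPPA) * np.log(z_target / z0)

# Hypothetical: 8 m/s at 10 m above a z0 = 0.1 m surface, reconstructed at 2 m
print(log_law_reconstruct(8.0, 10.0, 2.0, 0.1))  # ~5.2 m/s
```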
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Chen, Ting; Tan, Sirui
Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply-dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of University of Nevada at Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.
Study on diagnosis of micro-biomechanical structure using optical coherence tomography
NASA Astrophysics Data System (ADS)
Saeki, Souichi; Hashimoto, Youhei; Saito, Takashi; Hiro, Takafumi; Matsuzaki, Masunori
2007-02-01
Acute coronary syndromes, e.g. myocardial infarctions, are caused by the rupture of unstable plaques on coronary arteries. The stability of a plaque, which depends on the biomechanical properties of its fibrous cap, is therefore crucial to diagnose. Recently, Optical Coherence Tomography (OCT) has been developed as a cross-sectional imaging method for microstructural biological tissue with a high resolution of 1-10 μm. A multi-functional OCT system is promising, e.g. as an estimator of biomechanical characteristics. It has been difficult, however, to estimate biomechanical characteristics, because OCT images consist only of speckle patterns produced by light back-scattered from tissue. In this study, we present Optical Coherence Straingraphy (OCS), built on an OCT system, which can diagnose the tissue strain distribution. It is based on a Recursive Cross-correlation technique (RC), which provides a displacement vector distribution with high resolution. Furthermore, Adjacent Cross-correlation Multiplication (ACM) is introduced as a speckle noise reduction method. Multiplying adjacent correlation maps eliminates anomalies caused by speckle noise and thus enhances the S/N in the determination of the maximum correlation coefficient. Error propagation is further prevented by introducing ACM into the recursive algorithm (RC). In addition, spatial vector interpolation by a local least-squares method is introduced to remove erroneous vectors and smooth the vector distribution. The method was applied numerically to compressed elastic heterogeneous tissue samples to verify its accuracy. Consequently, it was quantitatively confirmed that the accuracy of the displacement vectors and strain matrix components is enhanced compared with the conventional method. The proposed method was therefore validated by identifying different elastic objects at a resolution close to that defined by the optical system.
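A minimal sketch of one correlation-search level underlying such speckle tracking (integer-pixel, single block; the recursive refinement and the ACM multiplication of adjacent correlation maps are omitted):

```python
import numpy as np

def block_match(ref, cur, search=4):
    """Integer-pixel displacement of a block by exhaustive normalized
    cross-correlation over a +/- search window. cur must be padded by
    `search` pixels on each side relative to ref."""
    best, best_cc = (0, 0), -np.inf
    ref0 = (ref - ref.mean()) / ref.std()
    h, w = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sub = cur[search + dy:search + dy + h, search + dx:search + dx + w]
            cc = np.mean(ref0 * (sub - sub.mean()) / sub.std())
            best, best_cc = ((dy, dx), cc) if cc > best_cc else (best, best_cc)
    return best

# Synthetic test: embed the template shifted by (2, 1) pixels
rng = np.random.default_rng(0)
ref = rng.random((16, 16))
cur = rng.random((24, 24)) * 0.01
cur[6:22, 5:21] = ref
print(block_match(ref, cur))  # -> (2, 1)
```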
High-Resolution Regional Reanalysis in China: Evaluation of 1 Year Period Experiments
NASA Astrophysics Data System (ADS)
Zhang, Qi; Pan, Yinong; Wang, Shuyu; Xu, Jianjun; Tang, Jianping
2017-10-01
Globally, reanalysis data sets are widely used in assessing climate change, validating numerical models, and understanding the interactions between the components of a climate system. However, due to their relatively coarse resolution, most global reanalysis data sets are not suitable for direct application at the local and regional scales, with inadequate descriptions of mesoscale systems and climatic extreme incidents such as mesoscale convective systems, squall lines, tropical cyclones, regional droughts, and heat waves. In this study, by using the Gridpoint Statistical Interpolation data assimilation system and the Weather Research and Forecast mesoscale atmospheric model, we build a regional reanalysis system. This is a preliminary, first experimental attempt to construct a high-resolution reanalysis for mainland China. Four regional test bed data sets are generated for the year 2013 via three widely used methods (classical dynamical downscaling, spectral nudging, and data assimilation) and a hybrid method with data assimilation coupled with spectral nudging. Temperature at 2 m, precipitation, and upper level atmospheric variables are evaluated by comparing against observations for year-long tests. It can be concluded that the regional reanalysis with assimilation and nudging methods reproduces the atmospheric variables from the surface to upper levels, and regional extreme events such as heat waves, better than classical dynamical downscaling. Compared to the ERA-Interim global reanalysis, the hybrid nudging method performs slightly better in reproducing upper level temperature and low-level moisture over China, which improves regional reanalysis data quality.
Numerical analysis of the beam position monitor pickup for the Iranian light source facility
NASA Astrophysics Data System (ADS)
Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.
2017-03-01
In this paper, we describe the design of a button-type Beam Position Monitor (BPM) for the low-emittance storage ring of the Iranian Light Source Facility (ILSF). First, we calculate sensitivities, induced power, and intrinsic resolution by solving the Laplace equation numerically with the finite element method (FEM) in order to find the potential at each point of the BPM's electrode surface. After optimization of the designed BPM, trapped high-order modes (HOM), wakefield, and thermal loss effects are calculated. Finally, after fabrication, the BPM was experimentally tested using a test stand. The results show that the designed BPM has a linear response in an area of 2×4 mm² inside the beam pipe and sensitivities of 0.080 and 0.087 mm⁻¹ in the horizontal and vertical directions. The experimental results are in good agreement with the numerical analysis.
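For illustration, button BPM readout commonly uses a difference-over-sum estimate; a sketch that inverts the reported sensitivities (the button labeling convention here is an assumption):

```python
def bpm_position(va, vb, vc, vd, kx=1/0.080, ky=1/0.087):
    """Beam position (mm) from four button signals using the standard
    difference-over-sum estimate; kx, ky are inverse sensitivities taken
    from the reported 0.080 and 0.087 mm^-1 values."""
    s = va + vb + vc + vd
    x = kx * ((va + vd) - (vb + vc)) / s   # horizontal
    y = ky * ((va + vb) - (vc + vd)) / s   # vertical
    return x, y

# A beam slightly right of and above center (hypothetical signals)
print(bpm_position(1.05, 1.00, 0.95, 1.00))
```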
Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube
NASA Astrophysics Data System (ADS)
Zhou, Guangzhao; Xu, Kun; Liu, Feng
2018-01-01
The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.
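For readers reproducing such a study, the standard grid-convergence bookkeeping is straightforward; a sketch with hypothetical functional values (this is generic Richardson extrapolation, not the paper's gas-kinetic scheme):

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy from solutions on three grids with
    constant refinement ratio r (standard grid-convergence analysis)."""
    return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

def richardson(f_medium, f_fine, p, r=2.0):
    """Richardson-extrapolated estimate of the grid-converged value."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Hypothetical values of a shock-tube functional on three refined grids
f1, f2, f3 = 0.1250, 0.1100, 0.1060   # coarse, medium, fine
p = observed_order(f1, f2, f3)
print(p, richardson(f2, f3, p))
```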
Oxygen Measurements in Liposome Encapsulated Hemoglobin
NASA Astrophysics Data System (ADS)
Phiri, Joshua Benjamin
Liposome encapsulated hemoglobins (LEHs) are of current interest as blood substitutes. An analytical methodology for rapid non-invasive measurements of oxygen in artificial oxygen carriers is examined. High resolution optical absorption spectra are calculated by means of a one dimensional diffusion approximation. The encapsulated hemoglobin is prepared from fresh defibrinated bovine blood. Liposomes are prepared from hydrogenated soy phosphatidylcholine (HSPC), cholesterol and dicetylphosphate using a bath sonication method. An integrating sphere spectrophotometer is employed for diffuse optics measurements. Data are collected using an automated data acquisition system employing lock-in amplifiers. The concentrations of hemoglobin derivatives are evaluated from the corresponding extinction coefficients using the numerical technique of singular value decomposition, and the results are verified using Monte Carlo simulations. In situ measurements are required for the determination of hemoglobin derivatives because most encapsulation methods invariably lead to the formation of methemoglobin, a nonfunctional form of hemoglobin. The methods employed in this work lead to high resolution absorption spectra of oxyhemoglobin and other derivatives in red blood cells and liposome encapsulated hemoglobin (LEH). The analysis using the singular value decomposition method offers a quantitative means of calculating the fractions of oxyhemoglobin and other hemoglobin derivatives in LEH samples. The analytical methods developed in this work will become even more useful when production of LEH as a blood substitute is scaled up to large volumes.
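The SVD-based unmixing step amounts to a least-squares solve of Beer's law A = E·c; a sketch with hypothetical extinction coefficients and a consistent synthetic spectrum:

```python
import numpy as np

# Hypothetical extinction coefficients (rows: wavelengths; cols: HbO2, Hb, metHb)
E = np.array([[1.10, 0.30, 0.80],
              [0.45, 0.90, 0.60],
              [0.20, 0.70, 1.30],
              [0.95, 0.40, 0.50]])
absorbance = np.array([0.80, 0.615, 0.57, 0.695])  # measured spectrum (synthetic)

# Least-squares concentrations via the SVD-based pseudoinverse
concentrations = np.linalg.pinv(E) @ absorbance
fractions = concentrations / concentrations.sum()
print(fractions)  # -> [0.5, 0.3, 0.2]: fraction of each hemoglobin derivative
```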
A Navier-Stokes Solution of Hull-Ring Wing-Thruster Interaction
NASA Technical Reports Server (NTRS)
Yang, C.-I.; Hartwich, P.; Sundaram, P.
1991-01-01
Navier-Stokes simulations of high Reynolds number flow around an axisymmetric body supported in a water tunnel were made. The numerical method is based on a high-resolution, second-order accurate, implicit, finite-difference upwind scheme. Four different configurations were investigated: (1) barebody; (2) body with an operating propeller; (3) body with a ring wing; and (4) body with a ring wing and an operating propeller. Pressure and velocity components near the stern region were obtained computationally and are shown to compare favorably with the experimental data. The method correctly predicts the existence and extent of stern flow separation for the barebody and the absence of flow separation for the three other configurations with ring wing and/or propeller.
NASA Astrophysics Data System (ADS)
Schnitzler, H.; Zimmer, Klaus-Peter
2008-09-01
Similar to human binocular vision, stereomicroscopes comprise two optical paths under a convergence angle, providing a full perspective insight into the world's microstructure. The numerical aperture of stereomicroscopes has continuously increased over the years, reaching the point where the lenses of the left and right perspective paths touch each other. This constraint appeared to set an upper limit on the resolution of stereomicroscopes, as the resolution of a stereomicroscope was deduced from the numerical apertures of the two equally sized perspective channels. We present the optical design and advances in resolution of the world's first asymmetrical stereomicroscope, a technological breakthrough in many aspects of stereomicroscopy. This unique approach uses a large numerical aperture, and thus a so far unachievable high lateral resolution, in one path, and a small aperture, providing a high depth of field, in the other path ("Fusion Optics"). This new concept is a technical challenge for the optical design of the zoom system as well as for the common main objectives. Furthermore, the new concept exploits the particular way in which perspective information from binocular vision is formed in the human brain. In conjunction with a research project at the University of Zurich, Leica Microsystems consolidated the functionality of this concept into a new generation of stereomicroscopes.
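The design trade-off can be illustrated with textbook diffraction estimates (Rayleigh resolution 0.61λ/NA and depth of field ≈ nλ/NA²; the NA values below are hypothetical, not Leica's specifications):

```python
def lateral_resolution_um(wavelength_um, na):
    """Rayleigh lateral resolution d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_um / na

def depth_of_field_um(wavelength_um, na, n=1.0):
    """Diffraction-limited depth of field, DOF ~ n * lambda / NA^2."""
    return n * wavelength_um / na**2

# Hypothetical 'Fusion Optics' pair at 550 nm: one high-NA, one low-NA path
for na in (0.3, 0.1):
    print(na, lateral_resolution_um(0.55, na), depth_of_field_um(0.55, na))
# high NA: ~1.1 um resolution, ~6 um DOF; low NA: ~3.4 um resolution, ~55 um DOF
```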
Study of compressible turbulent flows in supersonic environment by large-eddy simulation
NASA Astrophysics Data System (ADS)
Genin, Franklin
The numerical resolution of turbulent flows in high-speed environments is of fundamental importance but remains a very challenging problem. First, the capture of strong discontinuities, typical of high-speed flows, requires the use of shock-capturing schemes, which are not adapted to the resolution of turbulent structures due to their intrinsic dissipation. On the other hand, low-dissipation schemes are unable to resolve shock fronts and other sharp gradients without creating high-amplitude numerical oscillations. Second, the nature of turbulence in high-speed flows differs from its incompressible behavior, and, in the context of Large-Eddy Simulation, the subgrid closure must be adapted to model the effects of compressibility and shock waves on turbulent flows. The developments described in this thesis are two-fold. First, a state-of-the-art closure approach for LES is extended to model subgrid turbulence in compressible flows. The energy transfers due to compressible turbulence and the diffusion of turbulent kinetic energy by pressure fluctuations are assessed and integrated in the Localized Dynamic k_sgs model. Second, a hybrid numerical scheme is developed for the resolution of the LES equations and of the model transport equation, which combines a central scheme for resolving turbulence with a shock-capturing method. A smoothness parameter is defined and used to switch from the base smooth solver to the upwind scheme in regions of discontinuities. It is shown that the developed hybrid methodology permits the capture of shock/turbulence interactions in direct simulations that agrees well with other reference simulations, and that the LES methodology effectively reproduces the turbulence evolution and physical phenomena involved in the interaction. This numerical approach is then employed to study a problem of practical importance in high-speed mixing. The interaction of two shock waves with a high-speed turbulent shear layer as a mixing augmentation technique is considered. It is shown that the levels of turbulence are increased through the interaction, and that the mixing is significantly improved in this flow configuration. However, the region of increased mixing is found to be localized close to the impact of the shocks, and the statistical levels of turbulence relax to their undisturbed levels a short distance downstream of the interaction. The present developments are finally applied to a practical configuration relevant to scramjet injection. The normal injection of a sonic jet into a supersonic crossflow is considered numerically and compared to the results of an experimental study. A fair agreement in the statistics of mean and fluctuating velocity fields is obtained. Furthermore, some of the instantaneous flow structures observed in experimental visualizations are identified in the present simulation. The dynamics of the interaction for the reference case, based on the experimental study, as well as for a case of higher freestream Mach number and a case of higher momentum ratio, are examined. The classical instantaneous vortical structures are identified, and their generation mechanisms, specific to supersonic flow, are highlighted. Furthermore, two vortical structures, recently revealed in low-speed jets in crossflow but never documented for high-speed flows, are identified during the flow evolution.
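A one-dimensional sketch of the switching idea (the sensor below is a generic normalized second difference, not the thesis's specific smoothness parameter):

```python
import numpy as np

def hybrid_flux(u, flux_central, flux_upwind, threshold=0.5):
    """Blend a low-dissipation central flux with a shock-capturing upwind flux
    using a smoothness sensor. u, flux_central, flux_upwind are arrays of the
    same length; interior values are returned."""
    # Normalized second difference as a crude discontinuity sensor
    d2 = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])
    sensor = d2 / (np.abs(u[2:]) + 2.0 * np.abs(u[1:-1]) + np.abs(u[:-2]) + 1e-12)
    smooth = sensor < threshold
    return np.where(smooth, flux_central[1:-1], flux_upwind[1:-1])

# A step in u triggers the upwind branch only near the discontinuity
u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(hybrid_flux(u, 0.5 * u**2, 0.5 * u**2 + 0.1))
```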
Vibration analysis of angle-ply laminated composite plates with an embedded piezoceramic layer.
Lin, Hsien-Yang; Huang, Jin-Hung; Ma, Chien-Ching
2003-09-01
An optical full-field technique, called amplitude-fluctuation electronic speckle pattern interferometry (AF-ESPI), is used in this study to investigate the force-induced transverse vibration of an angle-ply laminated composite embedded with a piezoceramic layer (piezolaminated plates). The piezolaminated plates are excited by applying time-harmonic voltages to the embedded piezoceramic layer. Because clear fringe patterns will appear only at resonant frequencies, both the resonant frequencies and mode shapes of the vibrating piezolaminated plates with five different fiber orientation angles are obtained by the proposed AF-ESPI method. A laser Doppler vibrometer (LDV) system that has the advantage of high resolution and broad dynamic range also is applied to measure the frequency response of piezolaminated plates. In addition to the two proposed optical techniques, numerical computations based on a commercial finite element package are presented for comparison with the experimental results. Three different numerical formulations are used to evaluate the vibration characteristics of piezolaminated plates. Good agreements of the measured data by the optical method and the numerical results predicted by the finite element method (FEM) demonstrate that the proposed methodology in this study is a powerful tool for the vibration analysis of piezolaminated plates.
A divergence-cleaning scheme for cosmological SPMHD simulations
NASA Astrophysics Data System (ADS)
Stasyszyn, F. A.; Dolag, K.; Beck, A. M.
2013-01-01
In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerical magnetic divergence. For instabilities arising from ∇·B related errors, we find the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. to give good results and prevent numerical artefacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic field regularizations give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergenceless by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. However, at extremely high resolution, a cleaning scheme is needed to prevent the growth of numerical ∇·B errors at small scales.
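A grid-based sketch of the Dedner-style hyperbolic/parabolic cleaning update (the paper applies the equivalent per-particle SPMHD form; c_h and c_p are the cleaning wave speed and damping scale, and the explicit Euler update is for illustration only):

```python
import numpy as np

def dedner_clean_step(bx, by, psi, dx, dt, ch, cp):
    """One explicit update of the cleaning scalar psi on a 2D grid:
    d(psi)/dt = -ch^2 * div(B) - (ch^2 / cp^2) * psi,
    after which B is corrected by -grad(psi)."""
    div_b = np.gradient(bx, dx, axis=0) + np.gradient(by, dx, axis=1)
    psi_new = psi + dt * (-ch**2 * div_b - (ch**2 / cp**2) * psi)
    bx_new = bx - dt * np.gradient(psi_new, dx, axis=0)
    by_new = by - dt * np.gradient(psi_new, dx, axis=1)
    return bx_new, by_new, psi_new

# Toy field with a deliberate divergence error
n, dx = 32, 1.0
x = np.linspace(0, 2 * np.pi, n)
bx = np.sin(x)[:, None] * np.ones(n)
by, psi = np.zeros((n, n)), np.zeros((n, n))
bx, by, psi = dedner_clean_step(bx, by, psi, dx, dt=0.1, ch=1.0, cp=2.0)
```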
NASA Astrophysics Data System (ADS)
Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.
2015-12-01
The emergence of interactive art has blurred the line between electronics, computer graphics, and art. Here we apply this art form to numerical models and show how transforming a numerical model into an interactive painting can both provide insight and solve real-world problems. The example cases include forensic reconstructions, dredging optimization, and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different temporal and spatial scales. High-resolution numerical models become interactive paintings by exchanging their velocity fields with a high-resolution (>=1M cells) image-based flow visualization that runs in an HTML5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resulting image is executed on the graphics card using WebGL, allowing 1M grid cells at 60 Hz performance on modest graphics cards. The software is provided as open source. By using different sources for the drawing, one can gain insight into several aspects of the velocity fields, including not only the commonly represented magnitude and direction, but also divergence, topology, and turbulence.
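A CPU-side sketch of the per-pixel advection step (the actual implementation runs in a WebGL fragment shader; nearest-neighbour sampling here is a simplification):

```python
import numpy as np

def advect_image(img, u, v, dt):
    """Semi-Lagrangian advection of a dye image by a velocity field (u, v):
    each pixel samples the image at its backtraced departure point."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    xs = np.clip(x - dt * u, 0, nx - 1).astype(int)   # nearest-neighbour backtrace
    ys = np.clip(y - dt * v, 0, ny - 1).astype(int)
    return img[ys, xs]

# Uniform rightward flow shifts the dye blob to the right
img = np.zeros((64, 64)); img[30:34, 10:14] = 1.0
u = np.full_like(img, 2.0); v = np.zeros_like(img)
print(advect_image(img, u, v, dt=1.0)[30:34, 12:16].sum())  # blob moved by 2 px
```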
Differential absorption lidars for remote sensing of atmospheric pressure and temperature profiles
NASA Technical Reports Server (NTRS)
Korb, C. Laurence; Schwemmer, Geary K.; Famiglietti, Joseph; Walden, Harvey; Prasad, Coorg
1995-01-01
A near-infrared differential absorption lidar technique is developed using atmospheric oxygen as a tracer for high-resolution vertical profiles of pressure and temperature with high accuracy. Solid-state tunable lasers and high-resolution spectrum analyzers are developed to carry out ground-based and airborne measurement demonstrations, and results of the measurements are presented. A numerical error analysis of high-altitude airborne and spaceborne experiments is carried out, and system concepts are developed for their implementation.
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
A Simple Two Aircraft Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2006-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operations control centers, and traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection method and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and conflict resolution methods.
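The detection step lends itself to a compact implementation; a sketch assuming straight-line extrapolation of the current state vectors (units and the head-on example are illustrative):

```python
import numpy as np

def conflict_metrics(p1, v1, p2, v2):
    """Time-to-go to the closest point of approach and the minimum separation,
    assuming both aircraft hold current speed and heading. Positions in nmi,
    velocities in nmi/h, 2D."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    t_cpa = -np.dot(dp, dv) / max(np.dot(dv, dv), 1e-12)
    t_cpa = max(t_cpa, 0.0)            # CPA in the past => current range is minimum
    min_sep = np.linalg.norm(dp + t_cpa * dv)
    return t_cpa, min_sep

# Head-on example: 10 nmi apart, 480 kt closure rate
print(conflict_metrics([0, 0], [240, 0], [10, 0], [-240, 0]))  # (~0.0208 h, 0 nmi)
```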
Highly Coarse-Grained Representations of Transmembrane Proteins
2017-01-01
Numerous biomolecules and biomolecular complexes, including transmembrane proteins (TMPs), are symmetric or at least have approximate symmetries. Highly coarse-grained models of such biomolecules, aiming at capturing the essential structural and dynamical properties on resolution levels coarser than the residue scale, must preserve the underlying symmetry. However, making these models obey the correct physics is in general not straightforward, especially at the highly coarse-grained resolution where multiple (∼3–30 in the current study) amino acid residues are represented by a single coarse-grained site. In this paper, we propose a simple and fast method of coarse-graining TMPs obeying this condition. The procedure involves partitioning transmembrane domains into contiguous segments of equal length along the primary sequence. For the coarsest (lowest-resolution) mappings, it turns out to be most important to satisfy the symmetry in a coarse-grained model. As the resolution is increased to capture more detail, however, it becomes gradually more important to match modular repeats in the secondary structure (such as helix-loop repeats) instead. A set of eight TMPs of various complexity, functionality, structural topology, and internal symmetry, representing different classes of TMPs (ion channels, transporters, receptors, adhesion, and invasion proteins), has been examined. The present approach can be generalized to other systems possessing exact or approximate symmetry, allowing for reliable and fast creation of multiscale, highly coarse-grained mappings of large biomolecular assemblies. PMID:28043122
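A sketch of the equal-length partitioning step described above (the helper name and residue numbering are hypothetical; actual CG site positions would be centroids of each segment's atoms):

```python
def partition_tm_domain(residue_ids, n_sites):
    """Partition a transmembrane domain into contiguous, nearly equal-length
    segments along the primary sequence, one coarse-grained site per segment."""
    n = len(residue_ids)
    bounds = [round(i * n / n_sites) for i in range(n_sites + 1)]
    return [residue_ids[bounds[i]:bounds[i + 1]] for i in range(n_sites)]

# 30 residues mapped to 4 CG sites (~7-8 residues per site)
print(partition_tm_domain(list(range(1, 31)), 4))
```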
Unstructured mesh adaptivity for urban flooding modelling
NASA Astrophysics Data System (ADS)
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes are placed around the buildings and steep regions when the flooding water reaches these regions. In this work, a flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.
Mumcuoglu, Tarkan; Wollstein, Gadi; Wojtkowski, Maciej; Kagemann, Larry; Ishikawa, Hiroshi; Gabriele, Michelle L.; Srinivasan, Vivek; Fujimoto, James G.; Duker, Jay S.; Schuman, Joel S.
2009-01-01
Purpose To test if improving optical coherence tomography (OCT) resolution and scanning speed improves the visualization of glaucomatous structural changes as compared with conventional OCT. Design Prospective observational case series. Participants Healthy and glaucomatous subjects in various stages of disease. Methods Subjects were scanned at a single visit with commercially available OCT (StratusOCT) and high-speed ultrahigh-resolution (hsUHR) OCT. The prototype hsUHR OCT had an axial resolution of 3.4 μm (3 times higher than StratusOCT), with an A-scan rate of 24 000 hertz (60 times faster than StratusOCT). The fast scanning rate allowed the acquisition of novel scanning patterns such as raster scanning, which provided dense coverage of the retina and optic nerve head. Main Outcome Measures Discrimination of retinal tissue layers and detailed visualization of retinal structures. Results High-speed UHR OCT provided a marked improvement in tissue visualization as compared with StratusOCT. This allowed the identification of numerous retinal layers, including the ganglion cell layer, which is specifically prone to glaucomatous damage. Fast scanning and the enhanced A-scan registration properties of hsUHR OCT provided maps of the macula and optic nerve head with unprecedented detail, including en face OCT fundus images and retinal nerve fiber layer thickness maps. Conclusion High-speed UHR OCT improves visualization of the tissues relevant to the detection and management of glaucoma. PMID:17884170
van Ditmarsch, Dave; Xavier, João B
2011-06-17
Online spectrophotometric measurements allow monitoring dynamic biological processes with high time resolution. In contrast, numerous other methods require laborious treatment of samples and can only be carried out offline. Integrating both types of measurement would allow analyzing biological processes more comprehensively. A typical example of this problem is acquiring quantitative data on rhamnolipid secretion by the opportunistic pathogen Pseudomonas aeruginosa. P. aeruginosa cell growth can be measured by optical density (OD600) and gene expression can be measured using reporter fusions with a fluorescent protein, allowing high time resolution monitoring. However, measuring the secreted rhamnolipid biosurfactants requires laborious sample processing, which makes this an offline measurement. Here, we propose a method to integrate growth curve data with endpoint measurements of secreted metabolites that is inspired by a model of exponential cell growth. If serially diluting an inoculum gives reproducible time series shifted in time, then time series of endpoint measurements can be reconstructed using calculated time shifts between dilutions. We illustrate the method using measured rhamnolipid secretion by P. aeruginosa as endpoint measurements, and we integrate these measurements with high-resolution growth curves measured by OD600 and expression of rhamnolipid synthesis genes monitored using a reporter fusion. Two-fold serial dilution allowed integrating rhamnolipid measurements at a ~0.4 h⁻¹ frequency with highly time-resolved data measured at a 6 h⁻¹ frequency. We show how this simple method can be used in combination with mutants lacking specific genes in the rhamnolipid synthesis or quorum sensing regulation to acquire rich dynamic data on P. aeruginosa virulence regulation. Additionally, the linear relation between the ratio of inocula and the time shift between curves produces high-precision measurements of maximum specific growth rates, which were determined with a precision of ~5.4%. Growth curve synchronization allows integration of rich time-resolved data with endpoint measurements to produce time-resolved quantitative measurements. Such data can be valuable to unveil the dynamic regulation of virulence in P. aeruginosa. More generally, growth curve synchronization can be applied to many biological systems, thus helping to overcome a key obstacle in studying dynamic regulation: the scarceness of quantitative time-resolved data.
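The growth-rate estimate follows from the exponential-growth model: if an inoculum diluted by factor R reproduces the same curve shifted by Δt, then R = exp(μΔt). A sketch with hypothetical numbers:

```python
import numpy as np

def growth_rate_from_shift(dilution_factor, time_shift_h):
    """Maximum specific growth rate (h^-1) from the time shift between growth
    curves of serially diluted inocula: dilution_factor = exp(mu * dt)."""
    return np.log(dilution_factor) / time_shift_h

# Two-fold dilution whose curve is shifted by ~1.05 h (hypothetical)
print(growth_rate_from_shift(2.0, 1.05))  # ~0.66 h^-1
```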
Laskar, Junaid M; Shravan Kumar, P; Herminghaus, Stephan; Daniels, Karen E; Schröter, Matthias
2016-04-20
Optically transparent immersion liquids with refractive index (n∼1.77) to match the sapphire-based aplanatic numerical aperture increasing lens (aNAIL) are necessary for achieving deep 3D imaging with high spatial resolution. We report that antimony tribromide (SbBr3) salt dissolved in liquid diiodomethane (CH2I2) provides a new high refractive index immersion liquid for optics applications. The refractive index is tunable from n=1.74 (pure) to n=1.873 (saturated), by adjusting either salt concentration or temperature; this allows it to match (or even exceed) the refractive index of sapphire. Importantly, the solution gives excellent light transmittance in the ultraviolet to near-infrared range, an improvement over commercially available immersion liquids. This refractive-index-matched immersion liquid formulation has enabled us to develop a sapphire-based aNAIL objective that has both high numerical aperture (NA=1.17) and long working distance (WD=12 mm). This opens up new possibilities for deep 3D imaging with high spatial resolution.
Enhanced linear-array photoacoustic beamforming using modified coherence factor.
Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador
2018-02-01
Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using a DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. Coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum (DMAS) beamformer instead of the existing DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF yields about 45% and 40% improvement over CF in terms of signal-to-noise ratio and full-width-at-half-maximum, respectively.
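A pixel-level sketch of the weighting idea (the MCF normalization below is our reading of "replace the DAS term in the CF numerator by DMAS"; the paper's exact definition should be consulted):

```python
import numpy as np

def coherence_factor(s):
    """Standard coherence factor: |sum(s)|^2 / (N * sum(|s|^2))."""
    n = len(s)
    return np.abs(np.sum(s))**2 / (n * np.sum(np.abs(s)**2) + 1e-12)

def dmas(s):
    """Delay-multiply-and-sum of delay-aligned channel samples s."""
    y = 0.0
    for i in range(len(s) - 1):
        prod = s[i] * s[i + 1:]
        y += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)))
    return y

def modified_cf(s):
    """Modified CF sketch: DMAS replaces the DAS term in the CF numerator."""
    n = len(s)
    return np.abs(dmas(s))**2 / (n * np.sum(np.abs(s)**2) + 1e-12)

# Delay-aligned samples across a 6-element aperture for one image pixel
s = np.array([0.9, 1.1, 1.0, 0.95, 1.05, 0.2])
print(coherence_factor(s), modified_cf(s))
```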
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
2016-11-16
In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors than the low-order methods. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so. For all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
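As an illustration of the kind of implicit time integrator at issue, a minimal second-order backward-difference (BDF2) step for a scalar ODE is sketched below (generic; a production two-phase flow code would solve the nonlinear system with a Newton-Krylov method rather than scipy's fsolve):

```python
import numpy as np
from scipy.optimize import fsolve

def bdf2_step(f, u_nm1, u_n, dt):
    """One BDF2 step for u' = f(u): solve
    (3*u_np1 - 4*u_n + u_nm1) / (2*dt) = f(u_np1) for u_np1."""
    res = lambda u: 3.0 * u - 4.0 * u_n + u_nm1 - 2.0 * dt * f(u)
    return fsolve(res, u_n)

# Decay test problem u' = -u, exact solution exp(-t)
f = lambda u: -u
u0, dt = np.array([1.0]), 0.1
u1 = u0 * np.exp(-dt)             # bootstrap the first step with the exact value
print(bdf2_step(f, u0, u1, dt), np.exp(-2 * dt))  # ~0.8186 vs 0.8187
```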
On the remote sensing of cloud properties from satellite infrared sounder data
NASA Technical Reports Server (NTRS)
Yeh, H. Y. M.
1984-01-01
A method for remote sensing of cloud parameters using infrared sounder data has been developed on the basis of the parameterized infrared transfer equation applicable to cloudy atmospheres. The method is utilized for the retrieval of cloud height, amount, and emissivity in the 11 μm region. Numerical analyses and retrieval experiments have been carried out utilizing synthetic sounder data for the theoretical study. The sensitivity of the numerical procedures to measurement and instrument errors is also examined. The retrieved results are discussed physically and compared numerically with the model atmospheres. Comparisons reveal that the recovered cloud parameters agree reasonably well with the pre-assumed values. However, for cases when relatively thin clouds and/or small cloud fractional cover within a field of view are present, the recovered cloud parameters show considerable fluctuations. Experiments on the proposed algorithm are carried out utilizing High Resolution Infrared Sounder (HIRS/2) data from NOAA 6 and TIROS-N. Results of the experiments show reasonably good comparisons with the surface reports and GOES satellite images.
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth Engine™ has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth Engine™ to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
High numerical aperture multilayer Laue lenses
Morgan, Andrew J.; Prasciolu, Mauro; Andrejczuk, Andrzej; ...
2015-06-01
The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution.
NASA Astrophysics Data System (ADS)
Davis, L. J.; Boggess, M.; Kodpuak, E.; Deutsch, M.
2012-11-01
We report on a model for the deposition of three dimensional, aggregated nanocrystalline silver films, and an efficient numerical simulation method developed for visualizing such structures. We compare our results to a model system comprising chemically deposited silver films with morphologies ranging from dilute, uniform distributions of nanoparticles to highly porous aggregated networks. Disordered silver films grown in solution on silica substrates are characterized using digital image analysis of high resolution scanning electron micrographs. While the latter technique provides little volume information, plane-projected (two dimensional) island structure and surface coverage may be reliably determined. Three parameters governing film growth are evaluated using these data and used as inputs for the deposition model, greatly reducing computing requirements while still providing direct access to the complete (bulk) structure of the films throughout the growth process. We also show how valuable three dimensional characteristics of the deposited materials can be extracted using the simulated structures.
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Lastly, benchmarked test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
Triebl, Alexander; Trötzmüller, Martin; Hartler, Jürgen; Stojakovic, Tatjana; Köfeler, Harald C
2018-01-01
An improved approach for selective and sensitive identification and quantitation of lipid molecular species using reversed phase chromatography coupled to high resolution mass spectrometry was developed. The method is applicable to a wide variety of biological matrices using a simple liquid-liquid extraction procedure. Together, this approach combines three selectivity criteria: Reversed phase chromatography separates lipids according to their acyl chain length and degree of unsaturation and is capable of resolving positional isomers of lysophospholipids, as well as structural isomers of diacyl phospholipids and glycerolipids. Orbitrap mass spectrometry delivers the elemental composition of both positive and negative ions with high mass accuracy. Finally, automatically generated tandem mass spectra provide structural insight into numerous glycerolipids, phospholipids, and sphingolipids within a single run. Method validation resulted in a linearity range of more than four orders of magnitude, good values for accuracy and precision at biologically relevant concentration levels, and limits of quantitation of a few femtomoles on column. Hundreds of lipid molecular species were detected and quantified in three different biological matrices, which cover well the wide variety and complexity of various model organisms in lipidomic research. Together with a reliable software package, this method is a prime choice for global lipidomic analysis of even the most complex biological samples. PMID:28415015
A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows
NASA Astrophysics Data System (ADS)
Wall, Clifton; Pierce, Charles D.; Moin, Parviz
2002-09-01
A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
Finite element modeling of mass transport in high-Péclet cardiovascular flows
NASA Astrophysics Data System (ADS)
Hansen, Kirk; Arzani, Amirhossein; Shadden, Shawn
2016-11-01
Mass transport plays an important role in many cardiovascular processes, including thrombus formation and atherosclerosis. These mass transport problems are characterized by Péclet numbers of up to 10⁸, leading to several numerical difficulties. The presence of thin near-wall concentration boundary layers requires very fine mesh resolution in these regions, while large concentration gradients within the flow cause numerical stabilization issues. In this work, we will discuss some guidelines for solving mass transport problems in cardiovascular flows using a stabilized Galerkin finite element method. First, we perform mesh convergence studies in a series of idealized and patient-specific geometries to determine the required near-wall mesh resolution for these types of problems, using both first- and second-order tetrahedral finite elements. Second, we investigate the use of several boundary condition types at outflow boundaries where backflow during some parts of the cardiac cycle can lead to convergence issues. Finally, we evaluate the effect of reducing the Péclet number by increasing mass diffusivity, as has been proposed by some researchers. This work was supported by the NSF GRFP and NSF Career Award #1354541.
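The meshing difficulty can be quantified with a classical scaling estimate; a sketch assuming a Lévêque-type boundary-layer thickness δ ~ L·Pe^(-1/3) for wall-bounded shear flow (the geometry numbers are hypothetical):

```python
def concentration_bl_thickness(length_scale_cm, peclet):
    """Leveque-type estimate of the near-wall concentration boundary-layer
    thickness, delta ~ L * Pe^(-1/3); illustrates why Pe ~ 1e8 demands
    very fine near-wall mesh resolution."""
    return length_scale_cm * peclet ** (-1.0 / 3.0)

# A 1 cm vessel at Pe = 1e8: boundary layer ~ 20 micrometers
print(concentration_bl_thickness(1.0, 1e8))  # ~0.00215 cm
```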
Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Larsson, Johan
2013-01-01
A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.
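For context, the equilibrium benchmark such wall models are judged against is the logarithmic law of the wall. The sketch below (standard log-law constants; the matching height, velocity and viscosity are assumed values) recovers the friction velocity u_tau from a single off-wall velocity sample by Newton iteration, which is essentially what an equilibrium wall model does at every wall face; it is not the non-equilibrium model proposed in the paper.

```python
import numpy as np

kappa, B = 0.41, 5.2     # standard log-law constants
nu = 1.5e-5              # kinematic viscosity, m^2/s (assumed)
y, U = 0.01, 20.0        # matching height (m) and sampled LES velocity (m/s)

# Solve U = u_tau * (ln(y*u_tau/nu)/kappa + B) for u_tau by Newton iteration
u_tau = 0.05 * U
for _ in range(20):
    f = u_tau * (np.log(y * u_tau / nu) / kappa + B) - U
    df = np.log(y * u_tau / nu) / kappa + B + 1.0 / kappa
    u_tau -= f / df
print(f"u_tau = {u_tau:.4f} m/s -> wall shear stress tau_w = rho * u_tau^2")
```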
The CE/SE Method: a CFD Framework for the Challenges of the New Millennium
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Yu, Sheng-Tao
2001-01-01
The space-time conservation element and solution element (CE/SE) method, which was originated and is continuously being developed at NASA Glenn Research Center, is a high-resolution, genuinely multidimensional and unstructured-mesh compatible numerical method for solving conservation laws. Since its inception in 1991, the CE/SE method has been used to obtain highly accurate numerical solutions for 1D, 2D and 3D flow problems involving shocks, contact discontinuities, acoustic waves, vortices, shock/acoustic waves/vortices interactions, shock/boundary layers interactions and chemical reactions. Without the aid of preconditioning or other special techniques, it has been applied to both steady and unsteady flows with speeds ranging from Mach number = 0.00288 to 10. In addition, the method has unique features that allow for (i) the use of very simple non-reflecting boundary conditions, and (ii) a unified wall boundary treatment for viscous and inviscid flows. The CE/SE method was developed with the conviction that, with a solid foundation in physics, a robust, coherent and accurate numerical framework can be built without involving overly complex mathematics. As a result, the method was constructed using a set of design principles that facilitate simplicity, robustness and accuracy. The most important among them are: (i) enforcing both local and global flux conservation in space and time, with flux evaluation at an interface being an integral part of the solution procedure and requiring no interpolation or extrapolation; (ii) unifying space and time and treating them as a single entity; and (iii) requiring that a numerical scheme be built from a nondissipative core scheme such that the numerical dissipation can be effectively controlled and, as a result, will not overwhelm the physical dissipation. Part I of the workshop will be devoted to a discussion of these principles along with a description of how the 1D, 2D and 3D CE/SE schemes are constructed. In Part II, various applications of the CE/SE method, particularly those involving chemical reactions and acoustics, will be presented. The workshop will be concluded with a sketch of the future research directions.
NASA Astrophysics Data System (ADS)
Laubscher, Markus; Bourquin, Stéphane; Froehly, Luc; Karamata, Boris; Lasser, Theo
2004-07-01
Current spectroscopic optical coherence tomography (OCT) methods rely on a posteriori numerical calculation. We present an experimental alternative for accessing spectroscopic information in OCT without post-processing based on wavelength de-multiplexing and parallel detection using a diffraction grating and a smart pixel detector array. Both a conventional A-scan with high axial resolution and the spectrally resolved measurement are acquired simultaneously. A proof-of-principle demonstration is given on a dynamically changing absorbing sample. The method's potential for fast spectroscopic OCT imaging is discussed. The spectral measurements obtained with this approach are insensitive to scan non-linearities or sample movements.
Hybrid Upwind Splitting (HUS) by a Field-by-Field Decomposition
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1995-01-01
We introduce and develop a new approach for upwind biasing: the hybrid upwind splitting (HUS) method. This original procedure is based on a suitable hybridization of current prominent flux vector splitting (FVS) and flux difference splitting (FDS) methods. The HUS method is designed to naturally combine the respective strengths of the above methods while excluding their main deficiencies. Specifically, the HUS strategy yields a family of upwind methods that exhibit the robustness of FVS schemes in the capture of nonlinear waves and the accuracy of some FDS schemes in the resolution of linear waves. We give a detailed construction of the HUS methods following a general and systematic procedure performed directly at the basic level of the field-by-field (i.e., wave) decomposition involved in FDS methods. For such a given decomposition, each field is endowed either with FVS or FDS numerical fluxes, depending on the nonlinear nature of the field under consideration. Such a design principle is made possible by the introduction of a convenient formalism that provides us with a unified framework for upwind methods. The HUS methods we propose bring significant improvements over current methods in terms of accuracy and robustness, and they yield entropy-satisfying approximate solutions, as is strongly supported by numerical experiments. Field-by-field hybrid numerical fluxes also achieve fairly simple and explicit expressions, and hence require a computational effort between that of FVS and FDS schemes. Several numerical experiments, ranging from stiff 1D shock-tube problems to high-speed viscous flows, are displayed to illustrate the benefits of the present approach. We assess in particular the relevance of our HUS schemes to viscous flow calculations.
Estimation of geopotential from satellite-to-satellite range rate data: Numerical results
NASA Technical Reports Server (NTRS)
Thobe, Glenn E.; Bose, Sam C.
1987-01-01
A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
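Innovation (5) rests on the fact that a Givens rotation zeroes one matrix entry at a time while preserving the least-squares problem, which is why it can be tailored to a known sparsity structure. A generic dense sketch of Givens-based least squares follows (the paper's custom solver additionally exploits the trapezoidal block structure to skip entries that are already zero):

```python
import numpy as np

def givens_lstsq(A, b):
    """Least squares via Givens QR: rotate A to upper-triangular form,
    apply the same rotations to b, then back-substitute."""
    R, y = A.astype(float).copy(), b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):           # annihilate entries below diagonal
            if R[i, j] == 0.0:
                continue                         # structure-aware skip
            r = np.hypot(R[j, j], R[i, j])
            c, s = R[j, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])
            R[[j, i], j:] = G @ R[[j, i], j:]
            y[[j, i]] = G @ y[[j, i]]
    return np.linalg.solve(R[:n, :n], y[:n])     # back-substitution

A, b = np.random.randn(8, 3), np.random.randn(8)
print(np.allclose(givens_lstsq(A, b), np.linalg.lstsq(A, b, rcond=None)[0]))
```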
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
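The aliasing mechanism and its cure are easy to demonstrate in a single element. For a degree-p solution, the flux u^2 has degree 2p, so a quadrature rule sized for the solution points integrates the projection inexactly and corrupts the retained modes; over-integration restores exactness. A minimal single-element sketch (Legendre modal basis on [-1, 1]; not tied to any particular FR implementation):

```python
import numpy as np
from numpy.polynomial import legendre

p = 4
coeff = np.random.randn(p + 1)                    # modal solution u

def project_flux(nq):
    """L2-project the flux f = u^2 onto P_p with an nq-point Gauss rule."""
    xq, wq = legendre.leggauss(nq)
    fq = legendre.legval(xq, coeff) ** 2          # flux at quadrature points
    V = legendre.legvander(xq, p)                 # P_k(xq), k = 0..p
    norms = 2.0 / (2.0 * np.arange(p + 1) + 1.0)  # ||P_k||^2 on [-1, 1]
    return (V * wq[:, None]).T @ fq / norms

exact = project_flux(3 * p)        # enough points to integrate exactly
over = project_flux(2 * p + 1)     # over-integration: still exact
collo = project_flux(p + 1)        # rule sized to the solution points
print("over-integration error:", np.abs(over - exact).max())   # ~machine eps
print("aliasing error:        ", np.abs(collo - exact).max())  # finite
```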
Accelerated high-resolution photoacoustic tomography via compressed sensing
NASA Astrophysics Data System (ADS)
Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward
2016-12-01
Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
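A toy analogue shows the principle of recovering structure from sub-sampled data with a sparsity-promoting penalty (this is only a schematic stand-in for the paper's variational/Bregman solvers, and the sizes and weights are assumptions): reconstruct a piecewise-constant 1D signal from far fewer random linear samples than unknowns by gradient descent on a least-squares term plus smoothed total variation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 60                                  # unknowns vs. measurements
x_true = np.zeros(n)
x_true[40:90], x_true[120:150] = 1.0, -0.5      # piecewise-constant "image"

A = rng.standard_normal((m, n)) / np.sqrt(m)    # sub-sampled acquisition
b = A @ x_true

lam, eps, step = 0.02, 1e-3, 0.05               # conservative step size
x = np.zeros(n)
for _ in range(5000):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)                # gradient of smoothed TV
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= w
    tv_grad[1:] += w
    x -= step * (A.T @ (A @ x - b) + lam * tv_grad)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```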
NASA Astrophysics Data System (ADS)
Ricchi, Antonio; Marcello Miglietta, Mario; Barbariol, Francesco; Benetazzo, Alvise; Bonaldo, Davide; Falcieri, Francesco M.; Russo, Aniello; Sclavo, Mauro; Carniel, Sandro
2017-04-01
Between 19 and 22 January 2014 a baroclinic wave from the Atlantic region underwent cutoff over the Strait of Gibraltar. The resulting depression remained active for approximately 80 hours, passing offshore of the North African coast and crossing the Tyrrhenian Sea and the Adriatic Sea before turning south. During the first phase (close to the Balearic Islands) and when passing over the Adriatic, the depression assumed the characteristics of a TLC (tropical-like cyclone). Sea surface temperature (SST) is a very important factor for a proper numerical simulation of these events; hence we chose to model this TLC event using the COAWST suite (Coupled Ocean Atmosphere Wave Sediment Transport Modelling System). In the first phase of our work we identified the best model configuration to reproduce the phenomenon, extensively testing the different microphysics and PBL (planetary boundary layer) schemes available in the numerical model WRF (Weather Research and Forecasting). In the second phase, in order to evaluate the impact of SST, we used the best physical set-up that reproduces the phenomenon in terms of intensity, trajectory and timing, and implemented the SST in the model in four different ways: i) from a spectroradiometer at 8.3 km resolution, updated every six hours; ii) from a dataset provided by "MyOcean" at 1 km resolution, updated every six hours; iii) from the COAWST suite run in coupled atmosphere-ocean configuration; iv) from the COAWST suite in fully coupled atmosphere-ocean-wave configuration. Results show the importance of the selected microphysics scheme for correctly reproducing the TLC trajectory, and of the use of high-resolution, high-frequency SST fields updated every hour to reproduce the diurnal cycle. Coupled numerical runs produce less intense heat fluxes, which in turn result in better TLC trajectories and more realistic timing and intensity when compared with standalone simulations, even when the latter use a high-resolution SST. Last, a temporary increase of the mixed-layer depth along the TLC trajectory was exhibited by the fully coupled run during the two phases of maximum intensity of the phenomenon, when the wave field is more developed and acts more intensely on the vertical mixing. We will discuss how these results can be improved or further validated in proximity of land by using satellite information that will become available within the framework of the H2020 CEASELESS project.
Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale
NASA Astrophysics Data System (ADS)
González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.
2017-12-01
Tsunami hazard assessment is tackled by means of numerical simulations, which give as a result the areas flooded inland by the tsunami wave. To get this, some input data are required, e.g., high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters. The computational cost of these kinds of simulations is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of applying empirical-analytical formulations to calculate run-up at several coastal profiles (e.g., Synolakis, 1987), combined with numerical simulations offshore that do not include coastal inundation. In this case the numerical simulations are faster, but limitations are added because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models coupled ad hoc for this work: a non-linear shallow water equations (NLSWE) model for the offshore part of the propagation and a volume-of-fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles were parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves were also parameterized, by their height and period. As an application of the numerical flume methodology, the parameterized coastal profiles and tsunami waves were combined to build a populated database of run-up calculations; the combinations were simulated in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the run-up of any new tsunami wave to be calculated by interpolation on the database, in a short period of time, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale domain (regional or national scale).
2011-01-01
Background Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM-based measurement's ability to detect increases in arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions, which may affect the tortuosity measurement, different depths of the area imaged, and different imaging artifacts that require filtering. Methods The stability and accuracy of alternative centerline algorithms were validated in numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. Results In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition, one negative control population of different ethnicity had significantly less arterial tortuosity than the other two. Conclusions Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations. PMID:22166145
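The distance factor metric itself is simple: the arc length of the vessel centerline divided by the straight-line distance between its endpoints (some authors report this ratio minus one; conventions vary). A minimal sketch, including helices like those used in the study's numerical validation:

```python
import numpy as np

def distance_factor_metric(points):
    """DFM for an ordered (n, 3) array of centerline points: arc length
    divided by endpoint chord length. Straight vessels score near 1;
    more tortuous vessels score higher."""
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

# Tighter coiling (more turns per unit height) should raise the DFM
t = np.linspace(0, 6 * np.pi, 500)
loose = np.c_[np.cos(t), np.sin(t), 0.50 * t]
tight = np.c_[np.cos(t), np.sin(t), 0.15 * t]
print("loose helix DFM:", distance_factor_metric(loose))
print("tight helix DFM:", distance_factor_metric(tight))
```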
NASA Technical Reports Server (NTRS)
Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.
1984-01-01
Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
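The function-domain iteration can be sketched compactly. The classical Van Cittert scheme, which the paper's iterative technique resembles, repeatedly adds back the residual between the data and the re-blurred estimate; on smoothed data it sharpens overlapping peaks without an explicit matrix inverse. A toy example with two overlapping Gaussian mass peaks (all shapes and widths are assumed):

```python
import numpy as np

def van_cittert(y, psf, iters=50, relax=1.0):
    """Iterative deconvolution: x <- x + relax * (y - psf * x).
    Converges when the PSF's spectrum lies in (0, 2); noisy data
    should be smoothed first, as the paper emphasizes."""
    x = y.copy()
    for _ in range(iters):
        x = x + relax * (y - np.convolve(x, psf, mode="same"))
    return x

m = np.linspace(0.0, 10.0, 400)                     # mass-to-charge axis
truth = np.exp(-((m - 4.6) / 0.15) ** 2) + 0.8 * np.exp(-((m - 5.2) / 0.15) ** 2)
psf = np.exp(-(np.linspace(-1, 1, 81) / 0.25) ** 2)
psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")      # peaks now overlap
sharp = van_cittert(blurred, psf)

valley = np.argmin(np.abs(m - 4.9))                 # between the two peaks
print("valley depth before:", blurred.max() - blurred[valley])
print("valley depth after: ", sharp.max() - sharp[valley])
```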
An M-estimator for reduced-rank system identification.
Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T
2017-01-15
High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.
2005-01-01
Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans, where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
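One ingredient of the hydraulic model is easy to illustrate: Manning's equation relates discharge to flow depth, roughness and slope, and inverting it for depth is a one-dimensional root-finding problem. A minimal sketch for a wide rectangular channel (the discharge, width, slope and roughness below are assumed illustrative values, not data from the study):

```python
from scipy.optimize import brentq

def manning_depth(Q, B, S, n=0.035):
    """Normal-flow depth h from Manning's equation
    Q = (1/n) * A * R^(2/3) * sqrt(S), with area A = B*h and hydraulic
    radius R ~ h for a wide channel (B >> h). SI units throughout."""
    f = lambda h: (1.0 / n) * (B * h) * h ** (2.0 / 3.0) * S ** 0.5 - Q
    return brentq(f, 1e-6, 50.0)

# e.g. a 100 m^3/s flash flood in a 50 m wide distributary on a 1% slope
print(f"flow depth ~ {manning_depth(100.0, 50.0, 0.01):.2f} m")
```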
Soft x-ray holographic tomography for biological specimens
NASA Astrophysics Data System (ADS)
Gao, Hongyi; Chen, Jianwen; Xie, Honglan; Li, Ruxin; Xu, Zhizhan; Jiang, Shiping; Zhang, Yuxuan
2003-10-01
In this paper we present experimental results on X-ray holography and holographic tomography, and a new holographic tomography method called pre-amplified holographic tomography is proposed. Due to their shorter wavelength and larger penetration depths, X-rays provide the potential of higher resolution in imaging techniques, and have the ability to image intact, living, hydrated cells without slicing, dehydration, chemical fixation or staining. Recently, using the X-ray source at the National Synchrotron Radiation Laboratory in Hefei, we successfully performed soft X-ray holography experiments on a biological specimen. The specimen used in the experiments was garlic clove epidermis; we recorded its X-ray hologram and then reconstructed it by computer programs, and the features of the cell walls, the nuclei and some cytoplasm were clearly resolved. However, there still exist some problems in the realization of practical 3D microscopic imaging due to the near-unity refractive index of matter. There is no X-ray optics with a sufficiently high numerical aperture to achieve a depth resolution comparable to the transverse resolution. On the other hand, computed tomography needs a record of hundreds of views of the test object at different angles for high resolution, because the number of views required for a densely packed object is equal to the object radius divided by the desired depth resolution. Clearly, this is impractical for a radiation-sensitive biological specimen. Moreover, the X-ray diffraction effect blurs the projection data, which badly degrades the resolution of the reconstructed image. In order to observe the 3D structure of biological specimens, McNulty proposed a method for 3D imaging called "holographic tomography (HT)", in which several holograms of the specimen are recorded from various illumination directions and combined in the reconstruction step. This permits the specimen to be sampled over a wide range of spatial frequencies to improve the depth resolution. At NSRL, we performed soft X-ray holographic tomography experiments, with spider filament as the specimen and PMMA as the recording medium. By 3D CT reconstruction of the projection data, the three-dimensional density distribution of the specimen was obtained. We also developed a new X-ray holographic tomography method called pre-amplified holographic tomography, which permits digital real-time 3D reconstruction with high resolution and a simple and compact experimental setup.
Key issues review: numerical studies of turbulence in stars
NASA Astrophysics Data System (ADS)
Arnett, W. David; Meakin, Casey
2016-10-01
Three major problems of single-star astrophysics are convection, magnetic fields and rotation. Numerical simulations of convection in stars now have sufficient resolution to be truly turbulent, with effective Reynolds numbers of Re > 10^4, and some turbulent boundary layers have been resolved. Implications of these developments are discussed for stellar structure, evolution and explosion as supernovae. Methods for three-dimensional (3D) simulations of stars are compared and discussed for 3D atmospheres, solar rotation, core-collapse and stellar boundary layers. Reynolds-averaged Navier-Stokes (RANS) analysis of the numerical simulations has been shown to provide a novel and quantitative estimate of resolution errors. Present treatments of stellar boundaries require revision, even for early burning stages (e.g. for mixing regions during He-burning). As stellar core-collapse is approached, asymmetry and fluctuations grow, rendering spherically symmetric models of progenitors more unrealistic. The numerical resolution of several different types of 3D stellar simulations is compared; it is suggested that core-collapse simulations may be under-resolved. The Rayleigh-Taylor instability in explosions has a deep connection to convection, for which the abundance structure in supernova remnants may provide evidence.
A spatially adaptive total variation regularization method for electrical resistance tomography
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2015-12-01
The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
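The heart of the method is the per-pixel switch between the two regularizers. Difference curvature compares the second derivative along the gradient with the one across it: both are comparable in flat (or merely noisy) regions, while the along-gradient term dominates at edges. A sketch of the indicator and the resulting blending weight on a synthetic image (the normalization and test image are assumptions; the paper applies this inside the ERT inversion):

```python
import numpy as np

def difference_curvature(u, eps=1e-8):
    """D = | |u_nn| - |u_tt| |: second derivative along the gradient
    minus the one across it. Large D flags edges, small D flat regions."""
    gy, gx = np.gradient(u)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    den = gx**2 + gy**2 + eps
    u_nn = (gx**2 * gxx + 2 * gx * gy * gxy + gy**2 * gyy) / den
    u_tt = (gy**2 * gxx - 2 * gx * gy * gxy + gx**2 * gyy) / den
    return np.abs(np.abs(u_nn) - np.abs(u_tt))

n = 64
yy, xx = np.mgrid[:n, :n]
img = (np.hypot(xx - 32, yy - 32) < 12).astype(float)   # conductivity disc
img += 0.05 * np.random.default_rng(1).standard_normal((n, n))

D = difference_curvature(img)
w = D / D.max()      # -> TV-like penalty near edges, Tikhonov-like in flats
edge = np.abs(np.hypot(xx - 32, yy - 32) - 12) < 1.5
print("mean weight on edge ring:", w[edge].mean())
print("mean weight elsewhere:  ", w[~edge].mean())
```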
Slic Superpixels for Object Delineation from Uav Data
NASA Astrophysics Data System (ADS)
Crommelinck, S.; Bennett, R.; Gerke, M.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.
2017-08-01
Unmanned aerial vehicles (UAV) are increasingly investigated with regard to their potential to create and update (cadastral) maps. UAVs provide a flexible and low-cost platform for high-resolution data, from which object outlines can be accurately delineated. This delineation could be automated with image analysis methods to improve existing mapping procedures that are cost-, time- and labor-intensive and of little reproducibility. This study investigates a superpixel approach, namely simple linear iterative clustering (SLIC), in terms of its applicability to UAV data. The approach is investigated in terms of its applicability to high-resolution UAV orthoimages and in terms of its ability to delineate object outlines of roads and roofs. Results show that the approach is applicable to UAV orthoimages of 0.05 m GSD and extents of 100 million and 400 million pixels. Further, the approach delineates the objects with the high accuracy provided by the UAV orthoimages, at completeness rates of up to 64%. The approach is not suitable as a standalone approach for object delineation. However, it shows high potential for combination with further methods that delineate objects at higher correctness rates in exchange for a lower localization quality. This study provides a basis for future work that will focus on the incorporation of multiple methods for an interactive, comprehensive and accurate object delineation from UAV data. This aims to support numerous application fields such as topographic and cadastral mapping.
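SLIC is available off the shelf in scikit-image, so the segmentation step of such a pipeline can be sketched in a few lines (the file name is hypothetical and the parameter values are illustrative, not the study's settings):

```python
from skimage import io, segmentation

image = io.imread("uav_orthoimage.tif")     # hypothetical RGB orthoimage
labels = segmentation.slic(
    image,
    n_segments=5000,       # target superpixel count, sets segment size
    compactness=10.0,      # colour adherence vs. shape regularity trade-off
    start_label=1,
)
# Superpixel boundaries are the candidate object outlines
outlined = segmentation.mark_boundaries(image, labels)
io.imsave("superpixel_outlines.png", (outlined * 255).astype("uint8"))
```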
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Redolfi, M.
2017-12-01
Accessing or acquiring high-quality, low-cost topographic data has never been easier, due to recent developments of the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided-river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys, providing unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modeled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, both high-resolution time series and long-term temporal coverage, provided significantly improved calibration routines that refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations: simulations tuned for statistical agreement struggled to represent braided planforms (evolving toward meandering), and parameterizations that ensured braiding produced exaggerated activation and bank erosion rates. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS approach, a group of four algorithms, referred to as the 'DVS algorithms', which fulfill the DDM paradigm (i.e., the solution of global problems is obtained by the resolution of local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS algorithms has been demonstrated in the solution of several boundary-value problems of interest in geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1, 2]. For these problems DVS algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is in the process of applying the DVS method to a widely used simulator for the first time; here we present the advances of the application of this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras Iván "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)
Guillen Bonilla, José Trinidad; Guillen Bonilla, Alex; Rodríguez Betancourtt, Verónica M.; Guillen Bonilla, Héctor; Casillas Zamora, Antonio
2017-01-01
The application of optical fiber sensors in scientific and industrial instrumentation is very attractive due to their numerous advantages. In the civil engineering industry, for example, quasi-distributed sensors made with optical fiber are used for reliable strain and temperature measurements. Here, a quasi-distributed sensor in the frequency domain is discussed. The sensor consists of a series of low-finesse Fabry-Perot interferometers, where each Fabry-Perot interferometer acts as a local sensor. The Fabry-Perot interferometers are formed by pairs of identical low-reflectivity Bragg gratings imprinted in a single-mode fiber. All interferometer sensors have different cavity lengths, producing frequency-domain multiplexing. The optical signal represents the superposition of all interference patterns, which can be decomposed using the Fourier transform. The frequency spectrum was analyzed and the sensor's properties were defined. Following that, a quasi-distributed sensor was numerically simulated. Our sensor simulation considers sensor properties, signal processing, system noise, and instrumentation. The numerical results show the behavior of resolution versus signal-to-noise ratio. The Fabry-Perot sensor offers both a high-resolution and a low-resolution mode; both are obtainable because the Fourier Domain Phase Analysis (FDPA) algorithm produces two evaluations of the Bragg wavelength shift. PMID:28420083
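The frequency-domain multiplexing can be illustrated with a toy model (all lengths, indices and the sweep range below are assumed): each low-finesse cavity of length L contributes a fringe of frequency 2 n_eff L / c in the reflected spectrum, so a Fourier transform of the spectrum separates the sensors as distinct peaks whose positions give the cavity lengths.

```python
import numpy as np
from scipy.signal import find_peaks

c, n_eff = 3e8, 1.45
nu = np.linspace(193e12, 194e12, 4096)          # optical frequency sweep (Hz)
lengths = [2e-3, 5e-3, 9e-3]                    # cavity lengths (m)
spectrum = sum(0.1 * np.cos(2 * np.pi * (2 * n_eff * L / c) * nu)
               for L in lengths)                # superposed fringe patterns

F = np.abs(np.fft.rfft(spectrum))
delay = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])   # round-trip delay axis (s)
idx, _ = find_peaks(F, height=0.5 * F.max())
print("recovered cavity lengths (mm):", delay[idx] * c / (2 * n_eff) * 1e3)
```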
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.
Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction-limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background, resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities, as compared to wide field, in the focused light-sheet.
Ehler, Martin; Dobrosotskaya, Julia; Cunningham, Denise; Wong, Wai T.; Chew, Emily Y.; Czaja, Wojtek; Bonner, Robert F.
2015-01-01
We introduce and describe a novel non-invasive in-vivo method for mapping local rod rhodopsin distribution in the human retina over a 30-degree field. Our approach is based on analyzing the brightening of detected lipofuscin autofluorescence within small pixel clusters in registered imaging sequences taken with a commercial 488nm confocal scanning laser ophthalmoscope (cSLO) over a 1 minute period. We modeled the kinetics of rhodopsin bleaching by applying variational optimization techniques from applied mathematics. The physical model and the numerical analysis with its implementation are outlined in detail. This new technique enables the creation of spatial maps of the retinal rhodopsin and retinal pigment epithelium (RPE) bisretinoid distribution with an ≈ 50μm resolution. PMID:26196397
Feasibility study of an optically coherent telescope array in space
NASA Technical Reports Server (NTRS)
Traub, W. A.
1983-01-01
Numerical methods of image construction which can be used to produce very high angular resolution images at optical wavelengths of astronomical objects from an orbiting array of telescopes are discussed, and a concept is presented for a phase-coherent optical telescope array which may be deployed by the Space Shuttle in the 1990s. The system would start as a four-element linear array with a 12 m baseline. The initial module is a minimum-redundancy array with a photon-counting collecting area three times larger than that of the Space Telescope and a one-dimensional resolution of better than 0.01 arc seconds in the visible range. Later additions to the array would build up facility capability. The advantages of a VLBI observatory in space are considered, as well as apertures for the telescopes.
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral-element large-eddy simulations (LES) of a turbulent channel flow as a function of spatial resolution, compared with direct numerical simulation (DNS). The Reynolds number Re = 6800, based on the bulk velocity and channel half-width, is considered. The filtered governing equations are closed with the dynamic Smagorinsky model for the subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS grid, predicted the friction velocity within a 2.0% accuracy interval.
NASA Astrophysics Data System (ADS)
Belair, S.; Bernier, N.; Tong, L.; Mailhot, J.
2008-05-01
The 2010 Winter Olympic and Paralympic Games will take place in Vancouver, Canada, from 12 to 28 February 2010 and from 12 to 21 March 2010, respectively. In order to provide the best possible guidance achievable with current state-of-the-art science and technology, Environment Canada is currently setting up an experimental numerical prediction system for these special events. This system consists of a 1-km limited-area atmospheric model that will be integrated for 16 h, twice a day, with improved microphysics compared with the system currently operational at the Canadian Meteorological Centre. In addition, several new and original tools will be used to adapt and refine predictions near and at the surface. Very high-resolution two-dimensional surface systems, with 100-m and 20-m grid size, will cover the Vancouver Olympic area. Using adaptation methods to improve the forcing from the lower-resolution atmospheric models, these 2D surface models better represent surface processes, and thus lead to better predictions of snow conditions and near-surface air temperature. Based on a similar strategy, a single-point model will be implemented to better predict surface characteristics at each station of an observing network especially installed for the 2010 events. The main advantage of this single-point system is that surface observations are used as forcing for the land surface models, and can even be assimilated (although this is not expected in the first version of this new tool) to improve initial conditions of surface variables such as snow depth and surface temperatures. Another adaptation tool, based on 2D stationary solutions of a simple dynamical system, will be used to produce near-surface winds on the 100-m grid, consistent with the high-resolution orography. The configuration of the experimental numerical prediction system will be presented at the conference, together with preliminary results for winter 2007-2008.
Tempest: Tools for Addressing the Needs of Next-Generation Climate Models
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.; Pinheiro, M. C.; Fong, J.
2015-12-01
Tempest is a comprehensive simulation-to-science infrastructure that tackles the needs of next-generation, high-resolution, data intensive climate modeling activities. This project incorporates three key components: TempestDynamics, a global modeling framework for experimental numerical methods and high-performance computing; TempestRemap, a toolset for arbitrary-order conservative and consistent remapping between unstructured grids; and TempestExtremes, a suite of detection and characterization tools for identifying weather extremes in large climate datasets. In this presentation, the latest advances with the implementation of this framework will be discussed, and a number of projects now utilizing these tools will be featured.
A high resolution cavity BPM for the CLIC Test Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chritin, N.; Schmickler, H.; Soby, L.
2010-08-01
Within the framework of the development of a high-resolution BPM system for the CLIC Main Linac, we present the design of a cavity BPM prototype. It consists of a waveguide-loaded dipole-mode resonator and a monopole-mode reference cavity, both operating at 15 GHz to be compatible with the bunch frequencies at the CLIC Test Facility. Requirements, design concept, numerical analysis, and practical considerations are discussed.
High spatial resolution passive microwave sounding systems
NASA Technical Reports Server (NTRS)
Staelin, D. H.; Rosenkranz, P. W.; Bonanni, P. G.; Gasiewski, A. W.
1986-01-01
Two extensive series of flights aboard the ER-2 aircraft were conducted with the MIT 118 GHz imaging spectrometer together with a 53.6 GHz nadir channel and a TV camera record of the mission. Other microwave sensors, including a 183 GHz imaging spectrometer were flown simultaneously by other research groups. Work also continued on evaluating the impact of high-resolution passive microwave soundings upon numerical weather prediction models.
GENESIS: new self-consistent models of exoplanetary spectra
NASA Astrophysics Data System (ADS)
Gandhi, Siddharth; Madhusudhan, Nikku
2017-12-01
We are entering the era of high-precision and high-resolution spectroscopy of exoplanets. Such observations herald the need for robust self-consistent spectral models of exoplanetary atmospheres to investigate intricate atmospheric processes and to make observable predictions. Spectral models of plane-parallel exoplanetary atmospheres exist, mostly adapted from other astrophysical applications, with different levels of sophistication and accuracy. There is a growing need for a new generation of models custom-built for exoplanets and incorporating state-of-the-art numerical methods and opacities. The present work is a step in this direction. Here we introduce GENESIS, a plane-parallel, self-consistent, line-by-line exoplanetary atmospheric modelling code that includes (a) formal solution of radiative transfer using the Feautrier method, (b) radiative-convective equilibrium with temperature correction based on the Rybicki linearization scheme, (c) latest absorption cross-sections, and (d) internal flux and external irradiation, under the assumptions of hydrostatic equilibrium, local thermodynamic equilibrium and thermochemical equilibrium. We demonstrate the code here with cloud-free models of giant exoplanetary atmospheres over a range of equilibrium temperatures, metallicities, C/O ratios and spanning non-irradiated and irradiated planets, with and without thermal inversions. We provide the community with theoretical emergent spectra and pressure-temperature profiles over this range, along with those for several known hot Jupiters. The code can generate self-consistent spectra at high resolution and has the potential to be integrated into general circulation and non-equilibrium chemistry models as it is optimized for efficiency and convergence. GENESIS paves the way for high-fidelity remote sensing of exoplanetary atmospheres at high resolution with current and upcoming observations.
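Of the ingredients listed, the Feautrier method is the most self-contained: it recasts the transfer equation along a ray as the two-point boundary-value problem d²j/dτ² = j − S for the mean-intensity-like Feautrier variable j, which discretizes to a tridiagonal system. A one-ray sketch with a prescribed source function follows (the full code couples many rays and frequencies with the temperature correction; the boundary conditions here are the simplest first-order choices):

```python
import numpy as np
from scipy.linalg import solve_banded

def feautrier(tau, S):
    """Solve -j'' + j = S on a grid of optical depths tau, with
    dj/dtau = j at the surface (no incoming radiation) and j = S at
    the deepest point (diffusion limit)."""
    n, dt = tau.size, np.diff(tau)
    ab = np.zeros((3, n))               # banded storage (upper, diag, lower)
    rhs = S.copy()
    ab[1, 0] = 1.0 + 1.0 / dt[0]        # surface boundary condition
    ab[0, 1] = -1.0 / dt[0]
    rhs[0] = 0.0
    for i in range(1, n - 1):           # interior second differences
        h = 0.5 * (dt[i - 1] + dt[i])
        ab[2, i - 1] = -1.0 / (dt[i - 1] * h)
        ab[0, i + 1] = -1.0 / (dt[i] * h)
        ab[1, i] = 1.0 / (dt[i - 1] * h) + 1.0 / (dt[i] * h) + 1.0
    ab[1, -1] = 1.0                     # j = S at depth
    return solve_banded((1, 1), ab, rhs)

tau = np.geomspace(1e-4, 1e2, 200)
j = feautrier(tau, np.ones_like(tau))   # constant source function S = 1
print("surface j ~", j[0], "(analytic value 0.5 for S = 1)")
```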
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
A study of optical scattering methods in laboratory plasma diagnosis
NASA Technical Reports Server (NTRS)
Phipps, C. R., Jr.
1972-01-01
Electron velocity distributions are deduced along axes parallel and perpendicular to the magnetic field in a pulsed, linear Penning discharge in hydrogen by means of a laser Thomson scattering experiment. Results obtained are numerical averages of many individual measurements made at specific space-time points in the plasma evolution. Because of the high resolution in k-space and the relatively low maximum electron density (2 × 10^13 cm^-3), special techniques were required to obtain measurable scattering signals. These techniques are discussed and experimental results are presented.
Parshintsev, Jevgeni; Vaikkinen, Anu; Lipponen, Katriina; Vrkoslav, Vladimir; Cvačka, Josef; Kostiainen, Risto; Kotiaho, Tapio; Hartonen, Kari; Riekkola, Marja-Liisa; Kauppila, Tiina J
2015-07-15
On-line chemical characterization methods of atmospheric aerosols are essential to increase our understanding of physicochemical processes in the atmosphere, and to study biosphere-atmosphere interactions. Several techniques, including aerosol mass spectrometry, are nowadays available, but they all suffer from some disadvantages. In this research, desorption atmospheric pressure photoionization high-resolution (Orbitrap) mass spectrometry (DAPPI-HRMS) is introduced as a complementary technique for the fast analysis of aerosol chemical composition without the need for sample preparation. Atmospheric aerosols from city air were collected on a filter, desorbed in a DAPPI source with a hot stream of toluene and nitrogen, and ionized using a vacuum ultraviolet lamp at atmospheric pressure. To study the applicability of the technique for ambient aerosol analysis, several samples were collected onto filters and analyzed, with the focus being on selected organic acids. To compare the DAPPI-HRMS data with results obtained by an established method, each filter sample was divided into two equal parts, and the second half of the filter was extracted and analyzed by liquid chromatography/mass spectrometry (LC/MS). The DAPPI results agreed with the measured aerosol particle number. In addition to the targeted acids, the LC/MS and DAPPI-HRMS methods were found to detect different compounds, thus providing complementary information about the aerosol samples. DAPPI-HRMS showed several important oxidation products of terpenes, and numerous compounds were tentatively identified. Thanks to the soft ionization, high mass resolution, fast analysis, simplicity and on-line applicability, the proposed methodology has high potential in the field of atmospheric research. Copyright © 2015 John Wiley & Sons, Ltd.
Explicit filtering in large eddy simulation using a discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Brazell, Matthew J.
The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries, and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework, in which the solution is approximated using a polynomial basis and the higher modes of the solution correspond to the higher-order basis functions. By removing the high-order modes, the filtered solution retains only low-frequency content, much like the output of an explicit low-pass filter. The explicit filter implementation is tested on a simple 1D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution, and it removes energy accumulated in the higher modes by aliasing; however, it is unable to remove the numerical errors that cause numerical dissipation. A second test case solves the 3D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant-coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant-coefficient Smagorinsky model is over-dissipative; this is generally not desirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially in the laminar-turbulent transition region, as expected. The dynamic Heinz model, which is based on an improved formulation, handles the laminar-turbulent transition region well while also showing additional robustness.
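In modal form the explicit filter is a one-line operation per element: scale (or zero) the coefficients of the high-order basis functions. A single-element sketch with a Legendre modal basis and the common exponential filter profile (the cutoff and filter parameters are illustrative, not the thesis settings):

```python
import numpy as np
from numpy.polynomial import legendre

def modal_filter(coeff, cutoff, alpha=36.0, order=16):
    """Damp Legendre modes above `cutoff` by
    exp(-alpha * ((k - cutoff)/(p - cutoff))**order); zeroing the
    modes outright is the sharp-cutoff special case."""
    p = coeff.size - 1
    k = np.arange(p + 1)
    sigma = np.ones(p + 1)
    hi = k > cutoff
    sigma[hi] = np.exp(-alpha * ((k[hi] - cutoff) / (p - cutoff)) ** order)
    return coeff * sigma

# Degree-9 modal solution with energy piled into the top modes,
# as happens through aliasing in an under-resolved nonlinear run
rng = np.random.default_rng(2)
u = rng.standard_normal(10) * np.r_[np.ones(5), 0.2 * np.ones(5)]
u_f = modal_filter(u, cutoff=4)
x = np.linspace(-1.0, 1.0, 5)
print("unfiltered:", legendre.legval(x, u).round(3))
print("filtered:  ", legendre.legval(x, u_f).round(3))
```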
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Patra, Prabir; Breivik, Knut
2015-04-01
Lagrangian transport models based on time series of Eulerian fields provide a computationally affordable way of achieving very high resolution for limited areas and time periods. This makes them especially suitable for the analysis of point-wise measurements of atmospheric tracers. We present an application illustrated with examples of greenhouse gases from anthropogenic emissions in urban areas and biogenic emissions in Japan, and of pollutants in the Arctic. We assess the algorithmic complexity of the numerical implementation as well as the use of non-procedural techniques such as object-oriented programming. We discuss aspects related to the quantification of uncertainty from prior information in the presence of model error and a limited number of observations. The case of non-linear constraints is explored using direct numerical optimisation methods.
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hashimoto, C.; Hori, M.
2016-12-01
Estimation of coseismic/postseismic slip using postseismic deformation observation data is an important topic in the field of geodetic inversion. Estimation methods for this purpose are expected to be improved by introducing numerical simulation tools (e.g. the finite element (FE) method) for viscoelastic deformation, in which the computation model is of high fidelity to the available high-resolution crustal data. The authors have proposed a large-scale simulation method using such FE high-fidelity models (HFM), assuming the use of a large-scale computation environment such as the K computer in Japan (Ichimura et al. 2016). On the other hand, the values of viscosity in the heterogeneous viscoelastic structure of the high-fidelity model are not trivially determined. In this study, we developed an adjoint-based optimization method incorporating HFM, in which fault slip and asthenosphere viscosity are simultaneously estimated. We carried out numerical experiments using synthetic crustal deformation data. We constructed an HFM in a domain of 2048x1536x850 km, which includes the Tohoku region in northeast Japan, based on Ichimura et al. (2013). We used the model geometry data sets of JTOPO30 (2003), Koketsu et al. (2008) and the CAMP standard model (Hashimoto et al. 2004). The geometry of crustal structures in the HFM is resolved at 1 km, resulting in 36 billion degrees of freedom. Synthetic crustal deformation data due to prescribed coseismic slip and afterslip at the locations of GEONET stations, GPS/A observation points, and S-net are used. The target inverse analysis is formulated as minimization of the L2 norm of the difference between the FE simulation results and the observation data with respect to viscosity and fault slip, combining the quasi-Newton algorithm with the adjoint method. This combination decreases the number of forward analyses necessary in the optimization calculation. As a result, we are now able to finish the estimation in less than 17 hours using 2560 nodes of the K computer. Thus, the target inverse analysis is completed in a realistic time because of the combination of the fast solver and the adjoint method. In the future, we would like to apply the method to actual data.
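A hedged sketch of the estimation loop described above: minimize the L2 misfit between forward simulation results and observed deformation with a quasi-Newton method, using an adjoint-style gradient so each iteration needs one forward and one adjoint evaluation. The linear operator `G` below is a hypothetical stand-in for the authors' finite element HFM; all names and sizes are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_params, n_obs = 20, 50                       # slip + viscosity parameters, data points
G = rng.standard_normal((n_obs, n_params))     # stand-in forward operator
m_true = rng.standard_normal(n_params)
d_obs = G @ m_true + 0.01 * rng.standard_normal(n_obs)

def misfit_and_gradient(m):
    r = G @ m - d_obs                          # "forward solve": predicted minus observed
    J = 0.5 * float(r @ r)                     # L2 misfit
    grad = G.T @ r                             # "adjoint solve": gradient in one pass
    return J, grad

result = minimize(misfit_and_gradient, np.zeros(n_params),
                  jac=True, method="L-BFGS-B") # quasi-Newton iteration
print("converged:", result.success, "final misfit:", result.fun)
```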
Fercher, A; Hitzenberger, C; Sticker, M; Zawadzki, R; Karamata, B; Lasser, T
2001-12-03
Dispersive samples introduce a wavelength-dependent phase distortion to the probe beam. This leads to a noticeable loss of depth resolution in high resolution OCT using broadband light sources. The standard technique to avoid this consequence is to balance the dispersion of the sample by arranging a dispersive material in the reference arm. However, the impact of dispersion is depth dependent, and a corresponding depth-dependent dispersion balancing technique is difficult to implement. Here we present a numerical dispersion compensation technique for Partial Coherence Interferometry (PCI) and Optical Coherence Tomography (OCT) based on numerical correlation of the depth scan signal with a depth-variant kernel. It can be used a posteriori and provides depth-dependent dispersion compensation. Examples of dispersion compensated depth scan signals obtained from microscope cover glasses are presented.
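As a rough illustration of the correlation idea, the sketch below correlates a depth-scan signal with a quadratic-phase kernel whose chirp rate grows with depth. The chirp kernel shape and its linear depth scaling are assumptions made for illustration, not the authors' calibrated dispersion model.

```python
import numpy as np

n, half = 1024, 32
rng = np.random.default_rng(1)
a_scan = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # stand-in complex depth scan
tau = np.arange(-half, half + 1)

def compensate(sig, alpha_of_z):
    """Correlate the depth scan with a depth-variant quadratic-phase kernel."""
    padded = np.pad(sig, half)
    out = np.empty_like(sig)
    for z in range(len(sig)):
        kernel = np.exp(-1j * alpha_of_z(z) * tau ** 2)   # kernel varies with depth z
        out[z] = np.sum(padded[z:z + 2 * half + 1] * kernel)
    return out

# assumed: dispersion mismatch (chirp rate) grows linearly with depth
compensated = compensate(a_scan, lambda z: 1e-4 * z)
print(compensated.shape)
```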
Advanced imaging of the macrostructure and microstructure of bone
NASA Technical Reports Server (NTRS)
Genant, H. K.; Gordon, C.; Jiang, Y.; Link, T. M.; Hans, D.; Majumdar, S.; Lang, T. F.
2000-01-01
Noninvasive and/or nondestructive techniques are capable of providing more macro- or microstructural information about bone than standard bone densitometry. Although the latter provides important information about osteoporotic fracture risk, numerous studies indicate that bone strength is only partially explained by bone mineral density. Quantitative assessment of macro- and microstructural features may improve our ability to estimate bone strength. The methods available for quantitatively assessing macrostructure include (besides conventional radiographs) quantitative computed tomography (QCT) and volumetric quantitative computed tomography (vQCT). Methods for assessing microstructure of trabecular bone noninvasively and/or nondestructively include high-resolution computed tomography (hrCT), micro-computed tomography (muCT), high-resolution magnetic resonance (hrMR), and micromagnetic resonance (muMR). vQCT, hrCT and hrMR are generally applicable in vivo; muCT and muMR are principally applicable in vitro. Although considerable progress has been made in the noninvasive and/or nondestructive imaging of the macro- and microstructure of bone, considerable challenges and dilemmas remain. From a technical perspective, the balance between spatial resolution versus sampling size, or between signal-to-noise versus radiation dose or acquisition time, needs further consideration, as do the trade-offs between the complexity and expense of equipment and the availability and accessibility of the methods. The relative merits of in vitro imaging and its ultrahigh resolution but invasiveness versus those of in vivo imaging and its modest resolution but noninvasiveness also deserve careful attention. From a clinical perspective, the challenges for bone imaging include balancing the relative advantages of simple bone densitometry against the more complex architectural features of bone or, similarly, the deeper research requirements against the broader clinical needs. The considerable potential biological differences between the peripheral appendicular skeleton and the central axial skeleton have to be addressed further. Finally, the relative merits of these sophisticated imaging techniques have to be weighed with respect to their applications as diagnostic procedures requiring high accuracy or reliability on one hand and their monitoring applications requiring high precision or reproducibility on the other. Copyright 2000 S. Karger AG, Basel.
High resolution modelling of extreme precipitation events in urban areas
NASA Astrophysics Data System (ADS)
Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave
2015-04-01
The present day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding and high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high resolution information of the sewer system and of the terrain into account [1, 2]. The combination of the high resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with significant soil consolidation, and its low-lying areas are prone to urban flooding. The simulation results are compared with measurements in the sewer network. References: [1] Stelling, G.S., 2012. Quadtree flood simulations with subgrid digital elevation models. Water Management 165 (WM1):1329-1354. [2] Casulli, V. and Stelling, G.S., 2013. A semi-implicit numerical model for urban drainage systems. International Journal for Numerical Methods in Fluids 73:600-614. DOI: 10.1002/fld.3817
NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansteen, V.; Pontieu, B. De; Carlsson, M.
2015-10-01
Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When including a domain from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models, with different simulated areas, magnetic field strength and topology, and numerical resolution. These are compared in order to characterize the heating in the 3D MHD simulations which self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that heating is episodic and highly structured in space, but occurs along loop-shaped structures, and moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, this process occurring more readily in the higher-resolution model, leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local heating shows timescales of the order of two minutes in one model and one minute in the other, higher-resolution, model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
In this study, the classical Welander's oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also derived in the original stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors compared to the low-order methods. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so: for all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
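The damping effect that makes low-order schemes misjudge stability can be seen on a scalar test problem. This toy (not the solver above) integrates y' = iωy, whose exact amplitude is constant in time: backward Euler (BDF1) damps the oscillation strongly, while second-order BDF2 barely does.

```python
import numpy as np

omega, dt, nsteps = 2.0 * np.pi, 0.05, 200
lam = 1j * omega

# BDF1 (backward Euler): y_{n+1} = y_n / (1 - lam*dt)
y = 1.0 + 0j
for _ in range(nsteps):
    y /= (1.0 - lam * dt)
print("BDF1 amplitude after 200 steps:", abs(y))   # far below 1: spurious damping

# BDF2: (3/2) y_{n+1} - 2 y_n + (1/2) y_{n-1} = dt * lam * y_{n+1}
y0, y1 = 1.0 + 0j, 1.0 / (1.0 - lam * dt)          # bootstrap the first step with BDF1
for _ in range(nsteps - 1):
    y0, y1 = y1, (2.0 * y1 - 0.5 * y0) / (1.5 - lam * dt)
print("BDF2 amplitude after 200 steps:", abs(y1))  # much closer to 1
```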
Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2002-01-01
A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuvers. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also utilized to compute dynamic stability derivatives. The results show that both the body roll rate and canard dither motion influence the roll-averaged forces and moments on the body. At the relatively low roll rates analyzed in the current work these dynamic effects are modest; however, the dynamic computations are effective in predicting the dynamic stability derivatives, which can be significant for highly-maneuverable missiles.
A high-stability non-contact dilatometer for low-amplitude temperature-modulated measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luckabauer, Martin; Sprengel, Wolfgang; Würschum, Roland
2016-07-15
Temperature modulated thermophysical measurements can deliver valuable insights into the phase transformation behavior of many different materials. While especially for non-metallic systems at low temperatures numerous powerful methods exist, no high-temperature device suitable for modulated measurements of bulk metallic alloy samples is available for routine use. In this work a dilatometer for temperature modulated isothermal and non-isothermal measurements in the temperature range from room temperature to 1300 K is presented. The length measuring system is based on a two-beam Michelson laser interferometer with an incremental resolution of 20 pm. The non-contact measurement principle allows for resolving sinusoidal length change signals with amplitudes in the sub-500 nm range and physically decouples the length measuring system from the temperature modulation and heating control. To demonstrate the low-amplitude capabilities, results for the thermal expansion of nickel for two different modulation frequencies are presented. These results prove that the novel method can be used to routinely resolve length-change signals of metallic samples with temperature amplitudes well below 1 K. This high resolution in combination with the non-contact measurement principle significantly extends the application range of modulated dilatometry towards high-stability phase transformation measurements on complex alloys.
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding in the June and July 2007 UK floods. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
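For readers unfamiliar with the inertial formulation, the following 1-D sketch conveys its structure (grid sizes, Manning roughness, time step, and initial condition are illustrative assumptions, and the advection term is dropped as in simplified inertial solvers): the momentum update is explicit in the water-surface slope and semi-implicit in friction.

```python
import numpy as np

g, n_manning = 9.81, 0.035
dx, dt = 1.0, 0.1                      # 1 m cells, 0.1 s step (assumed)
z = np.zeros(100)                      # flat bed
h = np.full(100, 0.01); h[:10] = 1.0   # initial depth with a "flood" at one end
q = np.zeros(99)                       # unit-width flux at the cell faces

for _ in range(1000):
    eta = z + h                        # water surface elevation
    hflow = np.maximum(np.maximum(eta[:-1], eta[1:]) - np.maximum(z[:-1], z[1:]), 1e-6)
    slope = (eta[1:] - eta[:-1]) / dx
    # explicit acceleration term, semi-implicit Manning friction term
    q = (q - g * hflow * dt * slope) / (
        1.0 + g * dt * n_manning ** 2 * np.abs(q) / hflow ** (7.0 / 3.0))
    h[1:-1] += dt * (q[:-1] - q[1:]) / dx   # mass conservation in interior cells
    h[0] -= dt * q[0] / dx
    h[-1] += dt * q[-1] / dx

print("total volume (conserved):", h.sum() * dx)
```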
NASA Astrophysics Data System (ADS)
Qiu, Jianrong; Shen, Yi; Shangguan, Ziwei; Bao, Wen; Yang, Shanshan; Li, Peng; Ding, Zhihua
2018-04-01
Although methods have been proposed to maintain high transverse resolution over an increased depth range, it is not straightforward to scale down the bulk-optic solutions to miniaturized probes for optical coherence tomography (OCT). In this paper, we propose a highly efficient fiber-based filter in an all-fiber OCT probe to realize an extended depth of focus (DOF) while maintaining a high transverse resolution. Mode interference in the probe is exploited to modulate the complex field with a controllable radial distribution. The principle of DOF extension by the fiber-based filter is theoretically analyzed. Numerical simulations are conducted to evaluate the performance of the designed probes. A DOF extension ratio of 2.6 over a conventional Gaussian beam is obtainable in one proposed probe at a focused beam diameter of 4.6 μm. Coupling efficiencies of the internal interfaces of the proposed probe are below -40 dB except for the last probe-air interface, which can also be suppressed to -44 dB after a minor modification of the filter lengths. The length tolerance of the proposed probe is determined to be -28/+20 μm, which is readily satisfied in fabrication. With the merits of extended DOF, high resolution, high efficiency and easy fabrication, the proposed probe is promising for endoscopic applications.
Pixel-based absolute surface metrology by three flat test with shifted and rotated maps
NASA Astrophysics Data System (ADS)
Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang
2018-03-01
The traditional three flat test provides the absolute profile along only one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation within the three flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with the multi-rotation method proposed before, it needs only a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It achieves pixel-level spatial resolution without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests are implemented and show the high-accuracy recovery capability of the proposed method.
Reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning.
Song, Ying; Zhu, Zhen; Lu, Yang; Liu, Qiegen; Zhao, Jun
2014-03-01
To improve the magnetic resonance imaging (MRI) data acquisition speed while maintaining the reconstruction quality, a novel method is proposed for multislice MRI reconstruction from undersampled k-space data based on compressed-sensing theory using dictionary learning. There are two aspects to improve the reconstruction quality. One is that spatial correlation among slices is used by extending the atoms in dictionary learning from patches to blocks. The other is that the dictionary-learning scheme is used at two resolution levels; i.e., a low-resolution dictionary is used for sparse coding and a high-resolution dictionary is used for image updating. Numerical experiments are carried out on in vivo 3D MR images of brains and abdomens with a variety of undersampling schemes and ratios. The proposed method (dual-DLMRI) achieves better reconstruction quality than conventional reconstruction methods, with the peak signal-to-noise ratio being 7 dB higher. The advantages of the dual dictionaries are obvious compared with the single dictionary. Parameter variations ranging from 50% to 200% only bias the image quality within 15% in terms of the peak signal-to-noise ratio. Dual-DLMRI effectively uses the a priori information in the dual-dictionary scheme and provides dramatically improved reconstruction quality. Copyright © 2013 Wiley Periodicals, Inc.
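A heavily simplified sketch of the patch-dictionary machinery underlying this kind of approach (a single dictionary only; the dual low-/high-resolution pairing and the k-space data-consistency step are omitted), using scikit-learn's dictionary learning on a stand-in image:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(0)
image = rng.random((64, 64))                       # stand-in for an MR slice
patches = extract_patches_2d(image, (8, 8))
X = patches.reshape(len(patches), -1)
X_mean = X.mean(axis=1, keepdims=True)             # remove per-patch DC component

dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, random_state=0)
codes = dico.fit_transform(X - X_mean)             # sparse coding against learned atoms
X_hat = codes @ dico.components_ + X_mean          # patch estimates from sparse codes
recon = reconstruct_from_patches_2d(X_hat.reshape(patches.shape), image.shape)
print("relative error:", np.linalg.norm(recon - image) / np.linalg.norm(image))
```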
High-resolution subgrid models: background, grid generation, and implementation
NASA Astrophysics Data System (ADS)
Sehili, Aissa; Lang, Günther; Lippert, Christoph
2014-04-01
The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
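The core bookkeeping of the subgrid approach is simple enough to sketch (toy data, not the operational Elbe model): a coarse cell stores the bed elevations of its subgrid pixels, so the wet volume and wetted area for any water level follow exactly from the high-resolution data, and the cell can be wet, partially wet, or dry without any drying threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
subgrid_bed = rng.uniform(-2.0, 1.0, size=(16, 16))  # bed elevations in ONE coarse cell (m)
pixel_area = 1.0                                     # subgrid pixel area (assumed, m^2)

def wet_volume_and_area(eta):
    """Exact water volume and wetted area in the coarse cell at water level eta."""
    depth = np.maximum(eta - subgrid_bed, 0.0)       # nonnegative depth per subgrid pixel
    return depth.sum() * pixel_area, np.count_nonzero(depth) * pixel_area

for eta in (-2.5, -0.5, 1.5):                        # dry, partially wet, fully wet
    volume, area = wet_volume_and_area(eta)
    print(f"eta = {eta:+.1f} m: volume = {volume:8.2f} m^3, wet area = {area:6.1f} m^2")
```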
Nonlinear Conservation Laws and Finite Volume Methods
NASA Astrophysics Data System (ADS)
Leveque, Randall J.
Introduction Software Notation Classification of Differential Equations Derivation of Conservation Laws The Euler Equations of Gas Dynamics Dissipative Fluxes Source Terms Radiative Transfer and Isothermal Equations Multi-dimensional Conservation Laws The Shock Tube Problem Mathematical Theory of Hyperbolic Systems Scalar Equations Linear Hyperbolic Systems Nonlinear Systems The Riemann Problem for the Euler Equations Numerical Methods in One Dimension Finite Difference Theory Finite Volume Methods Importance of Conservation Form - Incorrect Shock Speeds Numerical Flux Functions Godunov's Method Approximate Riemann Solvers High-Resolution Methods Other Approaches Boundary Conditions Source Terms and Fractional Steps Unsplit Methods Fractional Step Methods General Formulation of Fractional Step Methods Stiff Source Terms Quasi-stationary Flow and Gravity Multi-dimensional Problems Dimensional Splitting Multi-dimensional Finite Volume Methods Grids and Adaptive Refinement Computational Difficulties Low-Density Flows Discrete Shocks and Viscous Profiles Start-Up Errors Wall Heating Slow-Moving Shocks Grid Orientation Effects Grid-Aligned Shocks Magnetohydrodynamics The MHD Equations One-Dimensional MHD Solving the Riemann Problem Nonstrict Hyperbolicity Stiffness The Divergence of B Riemann Problems in Multi-dimensional MHD Staggered Grids The 8-Wave Riemann Solver Relativistic Hydrodynamics Conservation Laws in Spacetime The Continuity Equation The 4-Momentum of a Particle The Stress-Energy Tensor Finite Volume Methods Multi-dimensional Relativistic Flow Gravitation and General Relativity References
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
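The greedy expansion step can be sketched compactly (fitness in the LFM spirit; the analytic computation of resolution levels and the hierarchy construction described above are not reproduced here):

```python
import networkx as nx

def fitness(G, community, alpha=1.0):
    """Local fitness: internal degree over total degree of the community."""
    k_in = 2 * G.subgraph(community).number_of_edges()
    k_out = sum(1 for u in community for v in G[u] if v not in community)
    return k_in / (k_in + k_out) ** alpha if (k_in + k_out) else 0.0

def expand_seed(G, seed, alpha=1.0):
    """Greedily grow the natural community of a seed node."""
    community = {seed}
    improved = True
    while improved:
        improved = False
        frontier = {v for u in community for v in G[u]} - community
        for v in frontier:
            if fitness(G, community | {v}, alpha) > fitness(G, community, alpha):
                community.add(v)      # accept any fitness-increasing neighbour
                improved = True
    return community

G = nx.karate_club_graph()            # standard benchmark graph for illustration
print(sorted(expand_seed(G, seed=0, alpha=1.0)))
```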
An investigation of the convective region of numerically simulated squall lines
NASA Astrophysics Data System (ADS)
Bryan, George Howard
High resolution numerical simulations are utilized to investigate the thermodynamic and kinematic structure of the convective region of squall lines. A new numerical modeling system was developed for this purpose. The model incorporates several new and/or recent advances in numerical modeling, including: a mass- and energy-conserving equation set, based on the compressible system of equations; third-order Runge-Kutta time integration, with high (third to sixth) order spatial discretization; and a new method for conserved-variable mixing in saturated environments, utilizing an exact definition for ice-liquid water potential temperature. A benchmark simulation for moist environments was designed to evaluate the new model. It was found that the mass- and energy-conserving equation set was necessary to produce acceptable results, and that traditional equation sets have a cool bias that leads to systematic underprediction of vertical velocity. The model was developed to run on massively-parallel distributed memory computing systems. This allows for simulations with very high resolution. In this study, squall lines were simulated with grid spacing of 125 m over a 300 km x 60 km x 18 km domain. Results show that the 125 m simulations contain sub-cloud-scale turbulent eddies that stretch and distort plumes of high equivalent potential temperature (θe) that rise from the pre-squall-line boundary layer. In contrast, with 1 km grid spacing the high θe plumes rise in a laminar manner, and require parameterized subgrid terms to diffuse the high θe air. The high resolution output is used to refine the conceptual model of the structure and lifecycle of moist absolutely unstable layers (MAULs). Moist absolute instability forms in the inflow region of the squall line and is subsequently removed by turbulent processes of varying scales. Three general MAUL regimes (MRs) are identified: a laminar MR, characterized by deep (~2 km) MAULs that extend continuously in both the cross-line and along-line directions; a convective MR, containing deep (~10 km) cellular pulses and plumes; and a turbulent MR, characterized by numerous moist turbulent eddies that are a few km (or smaller) in scale. The character of the laminar MR is of particular interest. Parcels in this region experience moist absolute instability for 11-17 minutes before beginning to overturn. Conventional theory suggests that overturning would ensue immediately in these conditions. Two explanations are offered to elucidate why this layer persists without overturning. First, it is found that buoyancy forcing (defined as the sum of buoyancy and the vertical pressure gradient due to the buoyancy field) is reduced in the laminar MR as compared to that of an isolated parcel. The geometry of the laminar MR is directly responsible for this reduction in buoyancy forcing; specifically, the MAUL extends continuously in the along-line direction and for 10 km in the cross-line direction, which inhibits the development of vertical motions due to mass continuity considerations. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Ogawa, Masahiko; Shidoji, Kazunori
2011-03-01
High-resolution stereoscopic images are effective for use in virtual reality and teleoperation systems. However, the higher the image resolution, the higher the cost of computer processing and communication. To reduce this cost, numerous earlier studies have suggested the use of multi-resolution images, which have high resolution in regions of interest and low resolution in other areas. However, observers can perceive unpleasant sensations and incorrect depth because they can see low-resolution areas in their field of vision. In this study, we conducted an experiment to investigate the relationship between the viewing field and the perception of image resolution, and determined thresholds of image-resolution perception for various positions in the viewing field. The results showed that participants could not distinguish between the high-resolution stimulus and a reduced stimulus of 63 ppi at positions more than 8 deg outside the gaze point. Moreover, at positions shifted a further 11 and 13 deg from the gaze point, participants could not distinguish between the high-resolution stimulus and reduced stimuli whose resolution densities were 42 and 25 ppi. Hence, we propose a composition of multi-resolution images that achieves data reduction (compression) without observers perceiving unpleasant sensations or incorrect depth.
Optical path difference microscopy with a Shack-Hartmann wavefront sensor.
Gong, Hai; Agbana, Temitope E; Pozzi, Paolo; Soloviev, Oleg; Verhaegen, Michel; Vdovin, Gleb
2017-06-01
In this Letter, we show that a Shack-Hartmann wavefront sensor can be used for the quantitative measurement of the specimen optical path difference (OPD) in an ordinary incoherent optical microscope, if the spatial coherence of the illumination light in the plane of the specimen is larger than the microscope resolution. To satisfy this condition, the illumination numerical aperture should be smaller than the numerical aperture of the imaging lens. This principle has been successfully applied to build a high-resolution reference-free instrument for the characterization of the OPD of micro-optical components and microscopic biological samples.
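The reconstruction behind such an instrument is, at its core, integration of measured slopes. The sketch below (assumed square lenslet geometry and noise-free synthetic slopes, not the instrument's calibrated pipeline) recovers an OPD map from x/y gradients by least squares:

```python
import numpy as np

n = 16                                                     # lenslet grid size (assumed)
yy, xx = np.mgrid[0:n, 0:n]
opd_true = 0.02 * ((xx - n / 2) ** 2 + (yy - n / 2) ** 2)  # synthetic defocus-like OPD

gx = np.diff(opd_true, axis=1).ravel()                     # "measured" x-slopes
gy = np.diff(opd_true, axis=0).ravel()                     # "measured" y-slopes

# finite-difference operator D such that D @ opd.ravel() = stacked slopes
rows = []
for i in range(n):
    for j in range(n - 1):                                 # x-differences within each row
        r = np.zeros(n * n); r[i * n + j + 1] = 1.0; r[i * n + j] = -1.0; rows.append(r)
for i in range(n - 1):
    for j in range(n):                                     # y-differences within each column
        r = np.zeros(n * n); r[(i + 1) * n + j] = 1.0; r[i * n + j] = -1.0; rows.append(r)
D = np.array(rows)

opd_hat, *_ = np.linalg.lstsq(D, np.concatenate([gx, gy]), rcond=None)
opd_hat = opd_hat.reshape(n, n)
opd_hat += opd_true.mean() - opd_hat.mean()                # fix the unconstrained piston term
print("max reconstruction error:", np.abs(opd_hat - opd_true).max())
```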
Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution
NASA Astrophysics Data System (ADS)
Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.
2017-12-01
We present research employing single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. Our numerical experiments perform x15 upscaling of the GEBCO grid over three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have these 100 m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry versus bicubic or Spline-In-Tension algorithms through upscaling under these conditions: 1) rough topography is present in both training and testing areas, and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged SISR enhancement successful versus bicubic interpolation when Student's hypothesis tests showed significant improvement of the root-mean-square error (RMSE) between upscaled bathymetry and the 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
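The significance criterion reduces to a paired comparison of per-area errors. A sketch with hypothetical RMSE values (the numbers below are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rmse_bicubic = rng.normal(12.0, 2.0, size=30)                # per-tile RMSE (m), invented
rmse_sisr = rmse_bicubic - rng.normal(1.5, 0.5, size=30)     # SISR assumed slightly better

t_stat, p_value = stats.ttest_rel(rmse_bicubic, rmse_sisr)   # paired Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
if p_value < 0.05 and rmse_sisr.mean() < rmse_bicubic.mean():
    print("SISR enhancement judged significant at p < 0.05")
```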
Generic Sensor Modeling Using Pulse Method
NASA Technical Reports Server (NTRS)
Helder, Dennis L.; Choi, Taeyoung
2005-01-01
Recent development of high spatial resolution satellites such as IKONOS, Quickbird and Orbview enables observation of the Earth's surface with sub-meter resolution. Compared to the 30 meter resolution of Landsat 5 TM, the amount of information in the output image is dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Various methods, sometimes classified by target shape, were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data using a bridge target as a pulse input. Because of a high resolution sensor's small Ground Sampling Distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area with accurate control of ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model was developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from that sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity was studied by performing simulations on known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested by using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both ground targets and the imaging system. As a continuation of the research using the developed sensor model, this report is dedicated to MTF estimation via the pulse input method, characterized using Fermi edge detection and the 4th order MSG interpolation method. The relationship between pulse width and the MTF value at Nyquist was studied, including error detection and correction schemes. Pulse target angle sensitivity was studied by using synthetic targets angled from 2 to 12 degrees. From the ground and system noise simulations, a minimum SNR value is suggested for a stable MTF value at Nyquist for the pulse method. A target width error detection and adjustment technique based on a smooth transition of the MTF profile is presented, which is specifically applicable only to the pulse method with 3 pixel wide targets.
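The essence of the pulse method can be shown in a few lines (a hedged toy with a Gaussian stand-in PSF; the report's Fermi edge detection and MSG interpolation are omitted): the MTF is the ratio of the Fourier magnitudes of the imaged and ideal pulse profiles, and the zeros of the ideal pulse spectrum are what make the estimate sensitive to pulse width.

```python
import numpy as np

n, width, sigma = 256, 3.0, 1.2            # samples, pulse width (px), blur (assumed)
x = np.arange(n) - n // 2
ideal = (np.abs(x) <= width / 2).astype(float)     # ideal 3-pixel pulse target profile

psf = np.exp(-0.5 * (x / sigma) ** 2)
psf /= psf.sum()                                   # stand-in system PSF
imaged = np.real(np.fft.ifft(np.fft.fft(ideal) * np.fft.fft(np.fft.ifftshift(psf))))

freqs = np.fft.rfftfreq(n, d=1.0)                  # cycles/pixel; last bin is Nyquist (0.5)
mtf = np.abs(np.fft.rfft(imaged)) / np.abs(np.fft.rfft(ideal))
# note: the estimate is unreliable near the zeros of the ideal pulse spectrum (f = k/width)
print("MTF at Nyquist:", mtf[-1])
```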
Optimal and fast E/B separation with a dual messenger field
NASA Astrophysics Data System (ADS)
Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-05-01
We adapt our recently proposed dual messenger algorithm for spin field reconstruction and showcase its efficiency and effectiveness in Wiener filtering polarized cosmic microwave background (CMB) maps. Unlike conventional preconditioned conjugate gradient (PCG) solvers, our preconditioner-free technique can deal with high-resolution joint temperature and polarization maps with inhomogeneous noise distributions and arbitrary mask geometries with relative ease. Various convergence diagnostics illustrate the high quality of the dual messenger reconstruction. In contrast, the PCG implementation fails to converge to a reasonable solution for the specific problem considered. The implementation of the dual messenger method is straightforward and guarantees numerical stability and convergence. We show how the algorithm can be modified to generate fluctuation maps, which, combined with the Wiener filter solution, yield unbiased constrained signal realizations, consistent with observed data. This algorithm presents a pathway to exact global analyses of high-resolution and high-sensitivity CMB data for a statistically optimal separation of E and B modes. It is therefore relevant for current and next-generation CMB experiments, in the quest for the elusive primordial B-mode signal.
Time-reversal transcranial ultrasound beam focusing using a k-space method
Jing, Yun; Meral, F. Can; Clement, Greg T.
2012-01-01
This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, assuming quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it is shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as only 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
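The accuracy advantage of k-space approaches comes from using the exact dispersion relation in the spatial spectrum. The sketch below shows only the homogeneous-medium special case (the full method handles the heterogeneous skull map; all parameters here are illustrative): in Fourier space the lossless wave equation admits the exact two-level update p(t+dt) = 2 cos(c|k|dt) p(t) - p(t-dt), so the spatial discretization contributes no dispersion error.

```python
import numpy as np

n, dx, c, dt = 128, 1e-3, 1500.0, 2e-7     # grid, spacing (m), speed (m/s), step (s)
kx = 2 * np.pi * np.fft.fftfreq(n, dx)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
propagator = 2.0 * np.cos(c * k * dt)      # exact for a homogeneous medium

# initial condition: a small Gaussian "point source" (numerically excited, as above)
x = (np.arange(n) - n // 2) * dx
p_prev = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * (4 * dx) ** 2))
p_curr = p_prev.copy()                     # approximately zero initial velocity

for _ in range(200):
    p_hat = np.fft.fft2(p_curr)
    p_next = np.real(np.fft.ifft2(propagator * p_hat)) - p_prev
    p_prev, p_curr = p_curr, p_next

print("field energy proxy:", np.sum(p_curr ** 2))
```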
USDA-ARS?s Scientific Manuscript database
Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...
NASA Astrophysics Data System (ADS)
Hirt, Christian; Kuhn, Michael
2017-08-01
Theoretically, spherical harmonic (SH) series expansions of the external gravitational potential are guaranteed to converge outside the Brillouin sphere enclosing all field-generating masses. Inside that sphere, the series may be convergent or divergent. The series convergence behavior is a highly unstable quantity that is little studied for high-resolution mass distributions. Here we shed light on the behavior of SH series expansions of the gravitational potential of the Moon. We present a set of systematic numerical experiments where the gravity field generated by the topographic masses is forward-modeled in spherical harmonics and with numerical integration techniques at various heights and different levels of resolution, increasing from harmonic degree 90 to 2160 (about 61 to 2.5 km scales). The numerical integration is free from any divergence issues and therefore suitable to reliably assess convergence versus divergence of the SH series. Our experiments provide unprecedented detailed insights into the divergence issue. We show that the SH gravity field of degree-180 topography is convergent anywhere in free space. When the resolution of the topographic mass model is increased to degree 360, divergence starts to affect very high degree gravity signals over regions deep inside the Brillouin sphere. For degree-2160 topography/gravity models, severe divergence (with amplitudes of several thousand mGal) prohibits accurate gravity modeling over most of the topography. As a key result, we formulate a new hypothesis to predict divergence: if the potential degree variances show a minimum, then the SH series expansions diverge somewhere inside the Brillouin sphere and modeling of the internal potential becomes relevant.
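The proposed predictor is easy to operationalize. The sketch below (with an invented coefficient spectrum, not the lunar models) computes potential degree variances and checks for an interior minimum:

```python
import numpy as np

def degree_variances(C, S):
    """c_n = sum over orders m of C_nm^2 + S_nm^2, with degree n as the row index."""
    return np.array([(C[n, :n + 1] ** 2 + S[n, :n + 1] ** 2).sum()
                     for n in range(C.shape[0])])

nmax = 360
rng = np.random.default_rng(0)
n_arr = np.arange(1, nmax + 1)
# hypothetical per-coefficient scale: decays at low degrees, rises at high degrees
scale = 1e-5 * (n_arr ** -1.5 + 1e-4 * n_arr ** 0.5)
C = rng.standard_normal((nmax, nmax)) * scale[:, None]
S = rng.standard_normal((nmax, nmax)) * scale[:, None]

cn = degree_variances(C, S)
n_min = int(np.argmin(cn[2:])) + 2            # skip the lowest degrees
print("degree-variance minimum at n =", n_min)
if 2 < n_min < nmax - 1:
    print("interior minimum found: divergence inside the Brillouin sphere predicted")
```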
Novel Multistatic Adaptive Microwave Imaging Methods for Early Breast Cancer Detection
NASA Astrophysics Data System (ADS)
Xie, Yao; Guo, Bin; Li, Jian; Stoica, Petre
2006-12-01
Multistatic adaptive microwave imaging (MAMI) methods are presented and compared for early breast cancer detection. Due to the significant contrast between the dielectric properties of normal and malignant breast tissues, developing microwave imaging techniques for early breast cancer detection has attracted much interest lately. MAMI is one of the microwave imaging modalities and employs multiple antennas that take turns to transmit ultra-wideband (UWB) pulses while all antennas are used to receive the reflected signals. MAMI can be considered as a special case of the multi-input multi-output (MIMO) radar with the multiple transmitted waveforms being either UWB pulses or zeros. Since the UWB pulses transmitted by different antennas are displaced in time, the multiple transmitted waveforms are orthogonal to each other. The challenge to microwave imaging is to improve resolution and suppress strong interferences caused by the breast skin, nipple, and so forth. The MAMI methods we investigate herein utilize the data-adaptive robust Capon beamformer (RCB) to achieve high resolution and interference suppression. We will demonstrate the effectiveness of our proposed methods for breast cancer detection via numerical examples with data simulated using the finite-difference time-domain method based on a 3D realistic breast model.
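The adaptive core of such methods is the Capon weight computation. The snippet below shows standard Capon weights with fixed diagonal loading as a simple robust variant (the paper's RCB instead derives the loading from a steering-vector uncertainty set; array size, snapshot count, and steering vector are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, snapshots = 8, 200                    # antennas, time samples (assumed)
a = np.ones(m, dtype=complex)            # steering vector toward the focal point (assumed)
X = rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots))

R = X @ X.conj().T / snapshots           # sample covariance of the received data
R += 1e-2 * np.trace(R).real / m * np.eye(m)   # diagonal loading for robustness

w = np.linalg.solve(R, a)
w /= a.conj() @ w                        # Capon: w = R^-1 a / (a^H R^-1 a)
power = (w.conj() @ R @ w).real          # beamformer output power at this focal point
print("output power:", power)
```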
Automated Detection of Salt Marsh Platforms : a Topographic Method
NASA Astrophysics Data System (ADS)
Goodwin, G.; Mudd, S. M.; Clubb, F. J.
2017-12-01
Monitoring the topographic evolution of coastal marshes is a crucial step toward improving the management of these valuable landscapes under the pressure of relative sea level rise and anthropogenic modification. However, determining their geometrically complex boundaries currently relies on spectral vegetation detection methods or requires labour-intensive field surveys and digitisation. We propose a novel method to reproducibly isolate saltmarsh scarps and platforms from a DEM. Field observations and numerical models show that saltmarshes mature into sub-horizontal platforms delineated by sub-vertical scarps: based on this premise, we identify scarps as lines of local maxima on a slope*relief raster, then fill landmasses from the scarps upward, thus isolating mature marsh platforms. Non-dimensional search parameters allow batch-processing of data without recalibration. We test our method using lidar-derived DEMs of six saltmarshes in England with varying tidal ranges and geometries, for which topographic platforms were manually isolated from tidal flats. Agreement between manual and automatic segregation exceeds 90% for resolutions of 1 m, with all but one site maintaining this performance for resolutions up to 3.5 m. For resolutions of 1 m, automatically detected platforms are comparable in surface area and elevation distribution to digitised platforms. We also find that our method allows the accurate detection of local block failures 3 times larger than the DEM resolution. Detailed inspection reveals that although tidal creeks were digitised as part of the marsh platform, automatic detection classifies them as part of the tidal flat, causing an increase in false negatives and overall platform perimeter. This suggests our method would benefit from combination with existing creek detection algorithms. Fallen blocks and pioneer zones are inconsistently identified, particularly in macro-tidal marshes, leading to differences between digitisation and the automated method: this also suggests that these areas must be carefully considered when analysing erosion and accretion processes. Ultimately, we have shown that automatic detection of marsh platforms from high-resolution topography is possible and sufficient to monitor and analyse topographic evolution.
Random element method for numerical modeling of diffusional processes
NASA Technical Reports Server (NTRS)
Ghoniem, A. F.; Oppenheim, A. K.
1982-01-01
The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed by the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions except for a statistical error introduced by using a finite number of elements; the error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
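The principle is easily demonstrated for 1-D diffusion of an initial point release (a minimal sketch, not the original algorithm's element transport): Gaussian random walks reproduce the exact solution up to a statistical error that shrinks as the number of elements grows.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, dt, nsteps, n_elements = 0.1, 0.01, 100, 200_000   # diffusivity, step, counts (assumed)

x = np.zeros(n_elements)                 # all elements start at the origin
for _ in range(nsteps):
    x += rng.normal(0.0, np.sqrt(2.0 * nu * dt), n_elements)  # Brownian increment

t = nsteps * dt
hist, edges = np.histogram(x, bins=np.linspace(-3, 3, 61), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = np.exp(-centers ** 2 / (4 * nu * t)) / np.sqrt(4 * np.pi * nu * t)
# deviation is purely statistical and decreases with more elements
print("max deviation from exact solution:", np.abs(hist - exact).max())
```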
Hou, Bin; Wang, Yunhong; Liu, Qingjie
2016-01-01
Characterizations of up-to-date information of the Earth's surface are an important application providing insights for urban planning, resources monitoring and environmental studies. A large number of change detection (CD) methods have been developed for this purpose, utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further provides challenges to traditional CD methods and opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes with HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. The study was checked for effectiveness using both visual and numerical evaluation. PMID:27618903
Unsupervised detection of salt marsh platforms: a topographic method
NASA Astrophysics Data System (ADS)
Goodwin, Guillaume C. H.; Mudd, Simon M.; Clubb, Fiona J.
2018-03-01
Salt marshes filter pollutants, protect coastlines against storm surges, and sequester carbon, yet are under threat from sea level rise and anthropogenic modification. The sustained existence of the salt marsh ecosystem depends on the topographic evolution of marsh platforms. Quantifying marsh platform topography is vital for improving the management of these valuable landscapes. The determination of platform boundaries currently relies on supervised classification methods requiring near-infrared data to detect vegetation, or demands labour-intensive field surveys and digitisation. We propose a novel, unsupervised method to reproducibly isolate salt marsh scarps and platforms from a digital elevation model (DEM), referred to as Topographic Identification of Platforms (TIP). Field observations and numerical models show that salt marshes mature into subhorizontal platforms delineated by subvertical scarps. Based on this premise, we identify scarps as lines of local maxima on a slope raster, then fill landmasses from the scarps upward, thus isolating mature marsh platforms. We test the TIP method using lidar-derived DEMs from six salt marshes in England with varying tidal ranges and geometries, for which topographic platforms were manually isolated from tidal flats. Agreement between manual and unsupervised classification exceeds 94 % for DEM resolutions of 1 m, with all but one site maintaining an accuracy superior to 90 % for resolutions up to 3 m. For resolutions of 1 m, platforms detected with the TIP method are comparable in surface area to digitised platforms and have similar elevation distributions. We also find that our method allows for the accurate detection of local block failures as small as 3 times the DEM resolution. Detailed inspection reveals that although tidal creeks were digitised as part of the marsh platform, unsupervised classification categorises them as part of the tidal flat, causing an increase in false negatives and overall platform perimeter. This suggests our method may benefit from combination with existing creek detection algorithms. Fallen blocks and high tidal flat portions, associated with potential pioneer zones, can also lead to differences between our method and supervised mapping. Although pioneer zones prove difficult to classify using a topographic method, we suggest that these transition areas should be considered when analysing erosion and accretion processes, particularly in the case of incipient marsh platforms. Ultimately, we have shown that unsupervised classification of marsh platforms from high-resolution topography is possible and sufficient to monitor and analyse topographic evolution.
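A toy version of the two TIP stages on a synthetic DEM. The slope threshold and the simple "fill everything above the scarp base" step below are assumptions made for brevity; the published method uses non-dimensional search parameters and fills landmasses upward from the detected scarps.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
scarp_profile = 0.5 / (1.0 + np.exp(-(x - 0.5) * 60.0))     # tidal flat -> scarp -> platform
dem = scarp_profile + 0.002 * rng.standard_normal((n, n))   # add small surface roughness

gy, gx = np.gradient(dem)
slope = np.hypot(gx, gy)
local_max = slope == ndimage.maximum_filter(slope, size=3)  # lines of local slope maxima
scarps = local_max & (slope > 0.5 * slope.max())            # assumed relative threshold

scarp_base = dem[scarps].min()                              # platform begins above the scarp base
platform = dem >= scarp_base
platform = ndimage.binary_opening(platform, iterations=2)   # remove isolated pixels
print("platform fraction of DEM:", round(float(platform.mean()), 3))
```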
Image scanning fluorescence emission difference microscopy based on a detector array.
Li, Y; Liu, S; Liu, D; Sun, S; Kuang, C; Ding, Z; Liu, X
2017-06-01
We propose a novel imaging method that significantly enhances the three-dimensional resolution of confocal microscopy, and experimentally demonstrate a new fluorescence emission difference (FED) method for the first time, based on parallel detection with a detector array. Following the principles of photon reassignment in image scanning microscopy, the images captured by the detector array are reassigned; by selecting appropriate reassignment patterns, a resolution-enhanced image can be achieved with the fluorescence emission difference method. Two specific methods are proposed in this paper, showing that the difference between an image scanning microscopy image and a confocal image achieves an improvement of transverse resolution by approximately 43% compared with confocal microscopy, and the axial resolution can also be enhanced by at least 22% experimentally and 35% theoretically. Moreover, the methods presented in this paper can improve the lateral resolution by around 10% over fluorescence emission difference and 15% over Airyscan. The mechanism of our methods is verified by numerical simulations and experimental results, and it has significant potential in biomedical applications. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Kim, Hyun Dae; Liu, Nan-Suey
1992-01-01
A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
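For reference, the limiting mechanism that makes a scheme TVD can be shown on 1-D linear advection (a generic minmod-limited MUSCL sketch, not the paper's Navier-Stokes scheme): slopes are limited so the update creates no new extrema and the total variation cannot grow.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope, zero across extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

n, c = 200, 0.5                         # cells, CFL number (unit speed, unit spacing)
u = np.zeros(n); u[40:80] = 1.0         # square wave: discontinuities test the TVD property

for _ in range(100):
    up, um = np.roll(u, -1), np.roll(u, 1)
    sigma = minmod(u - um, up - u)      # limited slope in each cell
    flux = u + 0.5 * (1 - c) * sigma    # upwind flux at the right face of each cell
    u = u - c * (flux - np.roll(flux, 1))

print("total variation:", np.abs(np.diff(u)).sum())  # stays bounded by the initial TV (= 2)
```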
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
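The calibration step itself is a deconvolution by the modeled transfer function. A minimal sketch with an invented transfer-function shape and a random stand-in image (the study uses physics-based speckle transfer function models with AO-derived inputs):

```python
import numpy as np

n = 256
f = np.fft.fftfreq(n)
fr = np.hypot(f[:, None], f[None, :])
stf = np.exp(-(fr / 0.15) ** (5.0 / 3.0))           # assumed transfer-function shape

rng = np.random.default_rng(0)
truth = rng.random((n, n))                          # stand-in for granulation intensity
observed = np.real(np.fft.ifft2(np.fft.fft2(truth) * stf))  # atmosphere-degraded image

# deconvolve with the model transfer function, flooring it for numerical stability
recon = np.real(np.fft.ifft2(np.fft.fft2(observed) / np.maximum(stf, 1e-3)))
photometric_error = np.abs(recon - truth) / truth.mean()
print("median photometric error:", np.median(photometric_error))
```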
High resolution simulations of a variable HH jet
NASA Astrophysics Data System (ADS)
Raga, A. C.; de Colle, F.; Kajdič, P.; Esquivel, A.; Cantó, J.
2007-04-01
Context: In many papers, the flows in Herbig-Haro (HH) jets have been modeled as collimated outflows with a time-dependent ejection. In particular, a supersonic variability of the ejection velocity leads to the production of "internal working surfaces" which (for appropriate forms of the time-variability) can produce emitting knots that resemble the chains of knots observed along HH jets. Aims: In this paper, we present axisymmetric simulations of an "internal working surface" in a radiative jet (produced by an ejection velocity variability). We concentrate on a given parameter set (i.e., on a jet with a constant ejection density, and a sinusoidal velocity variability with a 20 yr period and a 40 km s-1 half-amplitude), and carry out a study of the behaviour of the solution for increasing numerical resolutions. Methods: In our simulations, we solve the gasdynamic equations together with a 17-species atomic/ionic network, and we are therefore able to compute emission coefficients for different emission lines. Results: We compute 3 adaptive grid simulations, with 20, 163 and 1310 grid points (at the highest grid resolution) across the initial jet radius. From these simulations we see that successively more complex structures are obtained for increasing numerical resolutions. Such an effect is seen in the stratifications of the flow variables as well as in the predicted emission line intensity maps. Conclusions: We find that while the detailed structure of an internal working surface depends on resolution, the predicted emission line luminosities (integrated over the volume of the working surface) are surprisingly stable. This is definitely good news for the future computation of predictions from radiative jet models for carrying out comparisons with observations of HH objects.
The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...
2016-12-20
Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
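Since the abstract names yt as the project's common analysis toolkit, a brief hedged sketch of the kind of cross-code analysis it enables may be useful; the snapshot filename is hypothetical, and the field names shown are the standard yt 4.x gas fields (older versions spell mass as cell_mass).

```python
# Hedged sketch of common yt-based analysis across code outputs; the
# snapshot file name is hypothetical, the yt calls are standard API.
import yt

ds = yt.load("snapshot_000.hdf5")                # any yt-supported code output
# Gas surface density: project density along the disk axis.
p = yt.ProjectionPlot(ds, "z", ("gas", "density"))
p.save("surface_density.png")
# Density distribution: mass-weighted 1D profile over all cells.
ad = ds.all_data()
prof = yt.ProfilePlot(ad, ("gas", "density"), ("gas", "mass"), weight_field=None)
prof.save("density_pdf.png")
```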
Tsunami hazard maps of spanish coast at national scale from seismic sources
NASA Astrophysics Data System (ADS)
Aniel-Quiroga, Íñigo; González, Mauricio; Álvarez-Gómez, José Antonio; García, Pablo
2017-04-01
Tsunamis are a moderately frequent phenomenon in the NEAM (North East Atlantic and Mediterranean) region, and consequently in Spain, as historic and recent events have affected this area. For example, the 1755 earthquake and tsunami affected the Spanish Atlantic coasts of Huelva and Cadiz, and the 2003 Boumerdès earthquake triggered a tsunami that reached the coast of the Balearic Islands in less than 45 minutes. The risk in Spain is real, and its population and tourism make it vulnerable to this kind of catastrophic event. The Indian Ocean tsunami in 2004 and the tsunami in Japan in 2011 prompted the worldwide development and application of tsunami risk reduction measures, which have been taken as a priority in this field. On November 20th 2015, the directive of the Spanish civil protection agency on tsunami emergency planning was presented. As part of the Spanish National Security strategy, this document specifies the structure of the action plans at different levels: national, regional and local. In this sense, the first step is the proper evaluation of the tsunami hazard at national scale. This work deals with the assessment of the tsunami hazard in Spain by means of numerical simulations, focused on the elaboration of tsunami hazard maps at national scale. To this end, following a deterministic approach, the seismic structures whose earthquakes could generate the worst tsunamis affecting the coast of Spain have been compiled and characterized. These worst-case sources have been propagated numerically over a reconstructed bathymetry, built from the best-resolution data available. This high-resolution bathymetry was joined with a 25-m resolution DTM to generate a continuous offshore-onshore surface, allowing the calculation of the flooded areas produced by each selected source. The numerical model applied for the calculation of the tsunami propagations was COMCOT. The maps resulting from the numerical simulations show not only the tsunami amplitude at coastal areas but also the run-up and inundation length from the coastline. The run-up has been calculated with the numerical model, complemented by an alternative method based on interpolation over a tsunami run-up database created ad hoc. These estimated variables allow the identification of the areas most affected in case of tsunami, and they are also the basis for local authorities to evaluate the necessity of new higher-resolution studies at local scale in specific areas.
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Resolution Gas Chromatography (HRGC) with High Resolution Mass Spectrometry (HRMS) is the method of choice... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Resolution Gas Chromatography (HRGC) with High Resolution Mass Spectrometry (HRMS) is the method of choice... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas...
ULTRA-SHARP solution of the Smith-Hutton problem
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1992-01-01
Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First-order methods showed significant artificial diffusion, whereas higher-order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low-order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternative strategy of using non-artificially-diffusive high-order methods, while maintaining strictly monotonic transitions through the use of simple flux-limited constraints. Limited third-order upwinding is usually found to be the most cost-effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.
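As a concrete, hedged illustration of limited third-order upwinding, the sketch below advances 1D linear advection (u > 0, periodic grid) with the kappa = 1/3 upwind-biased face value bounded by the Koren limiter. This is a simple MUSCL-type stand-in in the spirit of a universal limiter, not the paper's exact ULTRA-SHARP formulation.

```python
# Flux-limited third-order upwinding for 1D linear advection (u > 0),
# periodic domain; the Koren limiter bounds the kappa = 1/3 face value.
import numpy as np

def koren(r):
    # Limiter whose unlimited branch (2 + r)/3 recovers third order.
    return np.maximum(0.0, np.minimum.reduce(
        [2.0 * r, (2.0 + r) / 3.0, np.full_like(r, 2.0)]))

def advect_step(phi, u, dx, dt):
    c = u * dt / dx                             # Courant number
    dU = phi - np.roll(phi, 1)                  # upwind-side slope
    dD = np.roll(phi, -1) - phi                 # downwind-side slope
    r = dU / np.where(np.abs(dD) > 1e-12, dD, 1e-12)
    face = phi + 0.5 * koren(r) * dD            # limited right-face value
    return phi - c * (face - np.roll(face, 1))  # conservative update
```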
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. The method directly relates digitally measured intensities to the water content of the porous medium and requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content to reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis, together with numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank, and of the water content maps produced by the photographic measurement technique and the numerical simulations, demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with a view to its extension to heterogeneous media. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
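A minimal sketch of the image-processing chain named above (normalization, filtering, background subtraction, scaling, calibration), assuming hypothetical inputs and an assumed linear intensity-to-water-content relation; the actual calibration curve in the method is built from states observed during each experiment.

```python
# Hedged sketch of the photographic pipeline; `raw`, `dark`, and the
# calibration anchors (I_dry, I_sat) are hypothetical inputs.
import numpy as np
from scipy.ndimage import median_filter

def water_content_map(raw, dark, I_dry, I_sat, theta_res, theta_sat):
    img = median_filter(raw - dark, size=3)   # background subtraction + denoise
    img = img / img.max()                     # normalization
    # Linear calibration anchored by in-experiment dry/saturated states;
    # linearity is an assumption of this sketch, not the paper's claim.
    frac = np.clip((img - I_dry) / (I_sat - I_dry), 0.0, 1.0)
    return theta_res + frac * (theta_sat - theta_res)
```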
NASA Astrophysics Data System (ADS)
Luo, Chun-Ling; Zhuo, Ling-Qing
2017-01-01
Imaging through atmospheric turbulence is a topic with a long history, and grand challenges still exist in the remote sensing and astronomical observation fields. In this letter, we propose a simple scheme to improve the resolution of imaging through turbulence based on the computational ghost imaging (CGI) and computational ghost diffraction (CGD) setup via laser beam shaping techniques. A unified theory of CGI and CGD through turbulence with a multi-Gaussian shaped incoherent source is developed, and numerical examples are given to show clearly the effects of the system parameters on CGI and CGD. Our results show that the atmospheric effect on the CGI and CGD system is closely related to the propagation distance between the source and the object. In addition, by properly increasing the beam order of the multi-Gaussian source, we can improve the resolution of CGI and CGD through turbulence relative to the commonly used Gaussian source. Our results may therefore find applications in remote sensing and astronomical observation.
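The reconstruction step common to CGI and CGD is the ensemble correlation between the computed illumination patterns and the bucket-detector signal. A minimal sketch, with hypothetical inputs and no turbulence model:

```python
# Minimal CGI reconstruction: correlate computed speckle patterns with
# bucket signals. `patterns` is a hypothetical (N, H, W) stack of shaped
# fields and `bucket` the N single-pixel measurements.
import numpy as np

def cgi_reconstruct(patterns, bucket):
    B = bucket - bucket.mean()
    I = patterns - patterns.mean(axis=0)
    # Ensemble correlation <dB * dI(x, y)> recovers the object.
    return np.tensordot(B, I, axes=(0, 0)) / len(bucket)
```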
Selker, Frank; Selker, John S.
2018-01-01
There are few methods to provide high-resolution in-situ characterization of flow in aquifers and reservoirs. We present a method that has the potential to quantify lateral and vertical (magnitude and direction) components of flow with spatial resolution of about one meter and temporal resolution of about one day. A fiber optic distributed temperature sensor is used with a novel heating system. Temperatures before heating may be used to evaluate background geothermal gradient and vertical profile of thermal diffusivity. The innovation presented is the use of variable energy application along the well, in this case concentrated heating at equally-spaced (2 m) localized areas (0.5 m). Relative to uniform warming this offers greater opportunity to estimate water movement, reduces required heating power, and increases practical length that can be heated. Numerical simulations are presented which illustrate expected behaviors. We estimate relative advection rates near the well using the times at which various locations diverge from a heating trajectory expected for pure conduction in the absence of advection. The concept is demonstrated in a grouted 600 m borehole with 300 heated patches, though evidence of vertical water movement was not seen. PMID:29596339
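A hedged sketch of the divergence-time idea: fit the logarithmic warming trend expected for pure conduction from a line source, then report when the measured trace departs from it. The fit window and tolerance are assumptions, not values from the study.

```python
# Detect departure from a conduction-only heating trajectory at one
# heated patch; `t` (s) and `dT` (K) are hypothetical measurements.
import numpy as np

def divergence_time(t, dT, fit_window=(30.0, 120.0), tol=0.05):
    m = (t >= fit_window[0]) & (t <= fit_window[1])
    a, b = np.polyfit(np.log(t[m]), dT[m], 1)   # dT ~ a*ln(t) + b (conduction)
    resid = dT - (a * np.log(t) + b)
    late = t > fit_window[1]
    diverged = late & (np.abs(resid) > tol * np.abs(dT))
    return t[diverged][0] if diverged.any() else None
```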
Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca
2017-11-01
The performance characterization of complex engineering systems often relies on accurate, but computationally intensive, numerical simulations. It is also well recognized that, in order to obtain a reliable numerical prediction, the propagation of uncertainties needs to be included. Uncertainty Quantification (UQ) therefore plays a fundamental role in building confidence in predictive science. Despite great improvements in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
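A minimal two-fidelity control-variate sketch of the estimator family described above; `hf` and `lf_paired` are hypothetical paired samples, and `lf_extra` the cheap additional low-fidelity samples that improve the estimate without new HF realizations.

```python
# Two-fidelity control-variate mean estimator exploiting HF/LF correlation.
import numpy as np

def mfmc_mean(hf, lf_paired, lf_extra):
    rho = np.corrcoef(hf, lf_paired)[0, 1]
    alpha = rho * hf.std() / lf_paired.std()     # control-variate weight
    mu_lf = np.concatenate([lf_paired, lf_extra]).mean()
    # HF mean corrected by the better-resolved LF mean; variance shrinks
    # as rho -> 1 without any additional HF model evaluations.
    return hf.mean() + alpha * (mu_lf - lf_paired.mean())
```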
Ultra high energy resolution focusing monochromator for inelastic X-ray scattering spectrometer
Suvorov, Alexey; Cunsolo, Alessandro; Chubar, Oleg; ...
2015-11-25
Further development of a focusing monochromator concept for X-ray energy resolution of 0.1 meV and below is presented. Theoretical analysis of several optical layouts based on this concept was supported by numerical simulations performed in the “Synchrotron Radiation Workshop” software package using the physical-optics approach and careful modeling of partially-coherent synchrotron (undulator) radiation. Along with the energy resolution, the spectral shape of the energy resolution function was investigated. We show that under certain conditions the decay of the resolution function tails can be faster than that of the Gaussian function.
NASA Astrophysics Data System (ADS)
Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo
2017-07-01
A tsunami model using high-resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas, since geometric data quality is one of the factors that determine the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses the validation and error estimation of a tsunami model created using high-resolution geometric data in Sadeng Port. The model is validated against the wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge, and will be used for numerical tsunami modeling involving earthquake-tsunami parameters derived from the seismic gap. The validation results using a t-test (Student) show that the tsunami heights from the model and the observations at the Sadeng tide gauge are statistically equal at the 95% confidence level; the RMSE and NRMSE are 0.428 m and 22.12%, respectively, while the difference in tsunami wave travel time is 12 minutes.
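The quoted validation statistics can be reproduced with a few lines; the sketch below assumes co-timed arrays `model` and `gauge`, and reads the t-test as a paired (Student) test, which is one plausible interpretation of the abstract.

```python
# Validation statistics: paired t-test, RMSE, and range-normalized NRMSE.
import numpy as np
from scipy import stats

def validate(model, gauge):
    t, p = stats.ttest_rel(model, gauge)         # means equal at 95% if p > 0.05
    rmse = np.sqrt(np.mean((model - gauge) ** 2))
    nrmse = rmse / (gauge.max() - gauge.min())   # one common normalization choice
    return t, p, rmse, nrmse
```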
Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis
Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2017-01-01
Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066
An efficient multi-dimensional implementation of VSIAM3 and its applications to free surface flows
NASA Astrophysics Data System (ADS)
Yokoi, Kensuke; Furuichi, Mikito; Sakai, Mikio
2017-12-01
We propose an efficient multidimensional implementation of VSIAM3 (volume/surface integrated average-based multi-moment method). Although VSIAM3 is a highly capable fluid solver based on a multi-moment concept and has been used for a wide variety of fluid problems, it could not simulate some simple benchmark problems well (for instance, lid-driven cavity flows) due to relatively high numerical viscosity. In this paper, we resolve the issue by using an efficient multidimensional approach. The proposed VSIAM3 is shown to capture lid-driven cavity flows up to a Reynolds number of Re = 7500 with a Cartesian grid of 128 × 128, which the original VSIAM3 could not do. We also tested the proposed framework on free surface flow problems (droplet collision and separation at We = 40, and droplet splashing on a superhydrophobic substrate). The numerical results of the proposed VSIAM3 show reasonable agreement with these experiments, capturing droplet collision and separation at We = 40 with a low numerical resolution (8 meshes across the initial droplet diameter). We also simulated free surface flows including particles, towards non-Newtonian flow applications. These results show that the proposed VSIAM3 can robustly simulate interactions among air, particles (solid), and liquid.
Numerical Estimation of the Outer Bank Resistance Characteristics in AN Evolving Meandering River
NASA Astrophysics Data System (ADS)
Wang, D.; Konsoer, K. M.; Rhoads, B. L.; Garcia, M. H.; Best, J.
2017-12-01
Few studies have examined the three-dimensional flow structure and its interaction with bed morphology within elongate loops of large meandering rivers. The present study uses a numerical model to simulate the flow pattern and sediment transport, especially the flow close to the outer bank, at two elongate meander loops of the Wabash River, USA. The numerical grid for the model is based on a combination of airborne LIDAR data on the floodplains and multibeam data within the river channel. A Finite Element Method (FEM) is used to solve the non-hydrostatic RANS equations with a k-epsilon turbulence closure scheme. High-resolution topographic data allow detailed numerical simulation of flow patterns along the outer bank, and model calibration involves comparing simulated velocities to ADCP measurements at 41 cross sections near this bank. Results indicate that flow along the outer bank is strongly influenced by large resistance elements, including woody debris, large erosional scallops within the bank face, and outcropping bedrock. In general, patterns of bank migration conform to zones of high near-bank velocity and shear stress. Using the existing model, different virtual events can be simulated to explore the impacts of different resistance characteristics on patterns of flow, sediment transport, and bank erosion.
Mesoscale eddies in a high resolution OGCM and coupled ocean-atmosphere GCM
NASA Astrophysics Data System (ADS)
Yu, Y.; Liu, H.; Lin, P.
2017-12-01
The present study describes high-resolution climate modeling efforts, including oceanic, atmospheric and coupled general circulation models (GCMs), at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics (IAP). The high-resolution OGCM is based on the latest version of the LASG/IAP Climate system Ocean Model (LICOM2.1), with horizontal and vertical resolutions increased to 1/10° and 55 layers, respectively. Forced by surface fluxes from reanalysis and observed data, the model has been integrated for more than 80 model years. Compared with the simulation of the coarse-resolution OGCM, the eddy-resolving OGCM not only better simulates the spatial-temporal features of mesoscale eddies and the paths and positions of the western boundary currents, but also reproduces the large meander of the Kuroshio Current and its interannual variability. The complex structures of the equatorial Pacific currents and of currents in the coastal ocean of China are also better captured, owing to the increased horizontal and vertical resolution. We then coupled the high-resolution OGCM to NCAR CAM4 at 25 km resolution, in which the mesoscale air-sea interaction processes are better captured.
NASA Astrophysics Data System (ADS)
Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan
2017-11-01
Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effect of quadrature choices (full mass matrix vs spectral elements), over-integration to manage aliasing errors, and explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices in periodic and non-periodic domains the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program Contract DE-NA0002378.
A numerical solution method for acoustic radiation from axisymmetric bodies
NASA Technical Reports Server (NTRS)
Caruthers, John E.; Raviprakash, G. K.
1995-01-01
A new and very efficient numerical method for solving equations of the Helmholtz type is specialized for problems having axisymmetric geometry. It is then demonstrated by application to the classical problem of acoustic radiation from a vibrating piston set in a stationary infinite plane. The method utilizes 'Green's Function Discretization' to obtain an accurate resolution of the waves using only 2-3 points per wavelength. Locally valid free-space Green's functions, used in the discretization step, are obtained by quadrature. Results are computed for a range of grid spacing/piston radius ratios at a frequency parameter, omega R/c(sub 0), of 2 pi. In this case, the minimum required grid resolution appears to be fixed by the need to resolve a step boundary condition at the piston edge rather than by the length scale imposed by the wavelength of the acoustic radiation. It is also demonstrated that a local near-field radiation boundary procedure allows the domain to be truncated very near the radiating source with little effect on the solution.
Micro-computed tomography pore-scale study of flow in porous media: Effect of voxel resolution
NASA Astrophysics Data System (ADS)
Shah, S. M.; Gray, F.; Crawshaw, J. P.; Boek, E. S.
2016-09-01
A fundamental understanding of flow in porous media at the pore-scale is necessary to be able to upscale average displacement processes from core to reservoir scale. The study of fluid flow in porous media at the pore-scale consists of two key procedures: imaging, the reconstruction of three-dimensional (3D) pore space images; and modelling, such as single- and two-phase flow simulations with Lattice-Boltzmann (LB) or Pore-Network (PN) modelling. Here we analyse pore-scale results to predict petrophysical properties such as porosity, single-phase permeability and multi-phase properties at different length scales. The fundamental issue is to understand the image resolution dependency of transport properties, in order to upscale the flow physics from pore to core scale. In this work, we use a high-resolution micro-computed tomography (micro-CT) scanner to image and reconstruct three-dimensional pore-scale images of five sandstones (Bentheimer, Berea, Clashach, Doddington and Stainton) and five complex carbonates (Ketton, Estaillades, Middle Eastern sample 3, Middle Eastern sample 5 and Indiana Limestone 1) at four different voxel resolutions (4.4 μm, 6.2 μm, 8.3 μm and 10.2 μm), scanning the same physical field of view. Implementing three-phase segmentation (macro-pore phase, intermediate phase and grain phase) on the pore-scale images helps to understand the importance of connected macro-porosity in the fluid flow for the samples studied. We then compute the petrophysical properties for all the samples using PN and LB simulations in order to study the influence of voxel resolution on petrophysical properties. We then introduce a numerical coarsening scheme which is used to coarsen a high-resolution image (4.4 μm) to lower resolutions (6.2 μm, 8.3 μm and 10.2 μm) and study the impact of coarsening on macroscopic and multi-phase properties. Numerical coarsening of high-resolution data is found to be superior to using a lower-resolution scan because it avoids the problem of partial volume effects and reduces the scaling effect by preserving the pore-space properties influencing the transport properties. This is demonstrated in this study by comparing several pore-network properties, such as the number of pores and throats, the average pore and throat radius, and the coordination number, between scan-based and numerically coarsened data.
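A minimal sketch of numerical coarsening by integer-factor block averaging, an assumed stand-in for the (unspecified) coarsening scheme used in the study:

```python
# Block-average a high-resolution micro-CT volume by integer factor f.
import numpy as np

def coarsen(vol, f):
    nz, ny, nx = (s // f for s in vol.shape)
    v = vol[:nz * f, :ny * f, :nx * f].astype(float)
    # Average f x f x f blocks of grey levels before re-segmentation,
    # mimicking downsampling without scanner partial-volume artefacts.
    return v.reshape(nz, f, ny, f, nx, f).mean(axis=(1, 3, 5))
```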
NASA Astrophysics Data System (ADS)
Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe
2018-02-01
A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.
NASA Astrophysics Data System (ADS)
Puckett, E. G.; Turcotte, D. L.; He, Y.; Lokavarapu, H. V.; Robey, J.; Kellogg, L. H.
2017-12-01
Geochemical observations of mantle-derived rocks favor a nearly homogeneous upper mantle, the source of mid-ocean ridge basalts (MORB), and heterogeneous lower mantle regions. Plumes that generate ocean island basalts are thought to sample the lower mantle regions and exhibit more heterogeneity than MORB. These regions have been associated with lower mantle structures known as large low shear velocity provinces below Africa and the South Pacific. The isolation of these regions is attributed to compositional differences and density stratification that, consequently, have been the subject of computational and laboratory modeling designed to determine the parameter regime in which layering is stable and to understand how layering evolves. Mathematical models of persistent compositional interfaces in the Earth's mantle may be inherently unstable, at least in some regions of the parameter space relevant to the mantle. Computing approximations to solutions of such problems presents severe challenges, even to state-of-the-art numerical methods. Some numerical algorithms for modeling the interface between distinct compositions smear the interface at the boundary between compositions, such as methods that add numerical diffusion or 'artificial viscosity' in order to stabilize the algorithm. We present two new algorithms for maintaining high-resolution and sharp computational boundaries in computations of these types of problems: a discontinuous Galerkin method with a bound-preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these new methods with two approaches widely used for modeling the advection of two distinct thermally driven compositional fields in mantle convection computations: a high-order accurate finite element advection algorithm with entropy viscosity and a particle method. We compare the performance of these four algorithms on three problems, including computing an approximation to the solution of an initially compositionally stratified fluid at Ra = 10^5 with buoyancy numbers B that vary from no stratification at B = 0 to stratified flow at large B.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Bernhard W.; Mane, Anil U.; Elam, Jeffrey W.
X-ray detectors that combine two-dimensional spatial resolution with a high time resolution are needed in numerous applications of synchrotron radiation. Most detectors with this combination of capabilities are based on semiconductor technology and are therefore limited in size. Furthermore, the time resolution is often realised through rapid time-gating of the acquisition, followed by a slower readout. Here, a detector technology is realised based on relatively inexpensive microchannel plates that uses GHz waveform sampling for a millimeter-scale spatial resolution and better than 100 ps time resolution. The technology is capable of continuous streaming of time- and location-tagged events at rates greater than 10^7 events per cm^2. Time-gating can be used for improved dynamic range.
NASA Astrophysics Data System (ADS)
Liu, X.; Gurnis, M.; Stadler, G.; Rudi, J.; Ratnaswamy, V.; Ghattas, O.
2017-12-01
Dynamic topography, or uncompensated topography, is controlled by internal dynamics and provides constraints on the buoyancy structure and rheological parameters in the mantle. Compared with other surface manifestations such as the geoid, dynamic topography is very sensitive to shallower and more regional mantle structure. For example, the significant dynamic topography above subduction zones potentially provides a rich mine for inferring rheological and mechanical properties such as plate coupling, flow, and lateral viscosity variations, all critical in plate tectonics. However, employing subduction zone topography in inversion studies requires a better understanding of the topography from forward models, especially the influence of the viscosity formulation, numerical resolution, and other factors. One common approach to formulating a fault between the subducted slab and the overriding plate in viscous flow models assumes a thin weak zone. However, due to the large lateral variation in viscosity, topography from free-slip numerical models typically has an artificially large magnitude as well as high-frequency undulations over the subduction zone, which adds to the difficulty of comparing model results with observations. In this study, we formulate a weak zone with transversely isotropic (TI) viscosity, where the tangential viscosity is much smaller than the viscosity in the normal direction. As with isotropic weak-zone models, TI models effectively decouple subducted slabs from the overriding plates. However, we find that the topography in TI models is largely reduced compared with that in weak-zone models assuming an isotropic viscosity. Moreover, the artificial 'toothpaste' squeezing effect observed in isotropic weak-zone models vanishes in TI models, although the difference becomes less significant when the dip angle is small. We also implement a free-surface condition in our numerical models, which has a smoothing effect on the topography. With the improved model configuration, we can use the adjoint inversion method in a high-resolution model and employ topography, in addition to other observables such as plate motion, to infer critical mechanical and rheological parameters in the subduction zone.
NASA Technical Reports Server (NTRS)
Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.
2014-01-01
A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify (a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and (b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based virtual-testing framework for graphite/epoxy composites at the constituent level.
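A highly simplified sketch of the slice-to-slice tracking skeleton: predict each fiber's (x, y) in the next cross-sectional image with a constant-drift model and associate detections by gated nearest neighbour. The paper's Global Nearest Neighbor plus Kalman filter machinery is considerably richer; the gate value and structure here are illustrative only.

```python
# Skeleton of fiber tracking across image slices by gated nearest-neighbour
# association with a constant-drift prediction (a crude Kalman stand-in).
import numpy as np

def track_fibers(slices, gate=3.0):
    """slices: list of (M_k, 2) fiber-centroid arrays, one per cross-section."""
    tracks = [[p] for p in slices[0]]
    for dets in slices[1:]:
        if not len(dets):
            continue
        used = set()
        for tr in tracks:
            drift = tr[-1] - tr[-2] if len(tr) > 1 else np.zeros(2)
            pred = tr[-1] + drift                 # constant-velocity prediction
            d = np.linalg.norm(dets - pred, axis=1)
            j = int(np.argmin(d))
            if d[j] < gate and j not in used:     # gated nearest-neighbour match
                tr.append(dets[j])
                used.add(j)
    return tracks
```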
Super-resolution differential interference contrast microscopy by structured illumination.
Chen, Jianling; Xu, Yan; Lv, Xiaohua; Lai, Xiaomin; Zeng, Shaoqun
2013-01-14
We propose a structured illumination differential interference contrast (SI-DIC) microscopy, breaking the diffraction resolution limit of differential interference contrast (DIC) microscopy. SI-DIC extends the bandwidth of coherent transfer function of the DIC imaging system, thus the resolution is improved. With 0.8 numerical aperture condenser and objective, the reconstructed SI-DIC image of 53 nm polystyrene beads reveals lateral resolution of approximately 190 nm, doubling that of the conventional DIC image. We also demonstrate biological observations of label-free cells with improved spatial resolution. The SI-DIC microscopy can provide sub-diffraction resolution and high contrast images with marker-free specimens, and has the potential for achieving sub-diffraction resolution quantitative phase imaging.
Bianchi, S; Rajamanickam, V P; Ferrara, L; Di Fabrizio, E; Liberale, C; Di Leonardo, R
2013-12-01
The use of individual multimode optical fibers in endoscopy applications has the potential to provide highly miniaturized and noninvasive probes for microscopy and optical micromanipulation. A few different strategies have been proposed recently, but they all suffer from intrinsically low resolution related to the low numerical aperture of multimode fibers. Here, we show that two-photon polymerization allows for direct fabrication of micro-optics components on the fiber end, resulting in an increase of the numerical aperture to a value that is close to 1. Coupling light into the fiber through a spatial light modulator, we were able to optically scan a submicrometer spot (300 nm FWHM) over an extended region, facing the opposite fiber end. Fluorescence imaging with improved resolution is also demonstrated.
The development and validation of command schedules for SeaWiFS
NASA Astrophysics Data System (ADS)
Woodward, Robert H.; Gregg, Watson W.; Patt, Frederick S.
1994-11-01
An automated method for developing and assessing spacecraft and instrument command schedules is presented for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) project. SeaWiFS is to be carried on the polar-orbiting SeaStar satellite in 1995. The primary goal of the SeaWiFS mission is to provide global ocean chlorophyll concentrations every four days by employing onboard recorders and a twice-a-day data downlink schedule. Global Area Coverage (GAC) data with about 4.5 km resolution will be used to produce the global coverage. Higher resolution (1.1 km resolution) Local Area Coverage (LAC) data will also be recorded to calibrate the sensor. In addition, LAC will be continuously transmitted from the satellite and received by High Resolution Picture Transmission (HRPT) stations. The methods used to generate commands for SeaWiFS employ numerous hierarchical checks as a means of maximizing coverage of the Earth's surface and fulfilling the LAC data requirements. The software code is modularized and written in Fortran with constructs to mirror the pre-defined mission rules. The overall method is specifically developed for low orbit Earth-observing satellites with finite onboard recording capabilities and regularly scheduled data downlinks. Two software packages using the Interactive Data Language (IDL) for graphically displaying and verifying the resultant command decisions are presented. Displays can be generated which show portions of the Earth viewed by the sensor and spacecraft sub-orbital locations during onboard calibration activities. An IDL-based interactive method of selecting and testing LAC targets and calibration activities for command generation is also discussed.
Gravitational geons in asymptotically anti-de Sitter spacetimes
NASA Astrophysics Data System (ADS)
Martinon, Grégoire; Fodor, Gyula; Grandclément, Philippe; Forgács, Peter
2017-06-01
We report on numerical constructions of fully non-linear geons in asymptotically anti-de Sitter (AdS) spacetimes in four dimensions. Our approach is based on 3 + 1 formalism and spectral methods in a gauge combining maximal slicing and spatial harmonic coordinates. We are able to construct several families of geons seeded by different families of spherical harmonics. We can reach unprecedentedly high amplitudes, with mass of order ∼1/2 of the AdS length, and with deviations of the order of 50% compared to third order perturbative approaches. The consistency of our results with numerical resolution is carefully checked and we give extensive precision monitoring techniques. All global quantities, such as mass and angular momentum, are computed using two independent frameworks that agree with each other at the 0.1% level. We also provide strong evidence for the existence of ‘excited’ (i.e. with one radial node) geon solutions of Einstein equations in asymptotically AdS spacetimes by constructing them numerically.
The void spectrum in two-dimensional numerical simulations of gravitational clustering
NASA Technical Reports Server (NTRS)
Kauffmann, Guinevere; Melott, Adrian L.
1992-01-01
An algorithm for deriving a spectrum of void sizes from two-dimensional high-resolution numerical simulations of gravitational clustering is tested, and it is verified that it produces the correct results where those results can be anticipated. The method is used to study the growth of voids as clustering proceeds. It is found that the most stable indicator of the characteristic void 'size' in the simulations is the mean fractional area covered by voids of diameter d, in a density field smoothed at its correlation length. Very accurate scaling behavior is found in power-law numerical models as they evolve. Eventually, this scaling breaks down as the nonlinearity reaches larger scales. It is shown that this breakdown is a manifestation of the undesirable effect of boundary conditions on simulations, even with the very large dynamic range possible here. A simple criterion is suggested for deciding when simulations with modest large-scale power may systematically underestimate the frequency of larger voids.
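One plausible reading of the void-size measurement, as a hedged sketch: smooth the 2D density field at its correlation length, threshold underdense regions, label connected voids, and convert areas to effective diameters. The threshold and smoothing details are assumptions, not the paper's prescription.

```python
# Void diameters (in pixels) from a smoothed, thresholded 2D density field.
import numpy as np
from scipy import ndimage

def void_spectrum(density, corr_len_pix, delta_thresh=-0.8):
    smooth = ndimage.gaussian_filter(density, corr_len_pix)
    delta = smooth / smooth.mean() - 1.0
    labels, n = ndimage.label(delta < delta_thresh)   # connected underdense regions
    areas = ndimage.sum(np.ones_like(delta), labels, index=range(1, n + 1))
    return 2.0 * np.sqrt(np.asarray(areas) / np.pi)   # effective diameters
```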
Interfaces detection after corneal refractive surgery by low coherence optical interferometry
Verrier, I.; Veillas, C.; Lépine, T.; Nguyen, F.; Thuret, G.; Gain, P.
2010-01-01
Detecting prior LASIK refractive surgery during the storage of corneas in eye banks will become a challenge as the many patients who have undergone this procedure reach the age of cornea donation. The subtle changes in corneal structure and refraction are highly suspected to negatively influence clinical results in recipients of such corneas. In order to detect the interfaces of LASIK corneas, we developed a low coherence interferometry technique using a broadband continuum source. Real-time signal recording, without moving any optical elements and without the need for a Fourier transform operation, combined with good measurement resolution, is the main asset of this interferometer. The associated numerical processing is based on a method initially used in astronomy and offers an optimal correlation signal without the need to image the whole cornea, which would be time consuming. The detection of corneal interfaces – both the outer and inner surfaces and the buried interface corresponding to the surgical wound – is then achieved directly by this innovative combination of interferometry and an original numerical process. PMID:21258562
Investigating Anomalies in the Output Generated by the Weather Research and Forecasting (WRF) Model
NASA Astrophysics Data System (ADS)
Decicco, Nicholas; Trout, Joseph; Manson, J. Russell; Rios, Manny; King, David
2015-04-01
The Weather Research and Forecasting (WRF) model is an advanced mesoscale numerical weather prediction (NWP) model comprising two numerical cores, the Numerical Mesoscale Modeling (NMM) core and the Advanced Research WRF (ARW) core. An investigation was conducted to determine the source of erroneous output generated by the NMM core. Of particular concern were the appearance of zero values at regularly spaced grid cells in the output fields, and the NMM core's apparent (mis)use of static geographic information at a resolution lower than that of the nesting level for which the core is performing computation. A brief discussion of the high-level modular architecture of the model is presented, as well as the methods used to identify the cause of these problems. Presented here are the initial results from a research grant, ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA''.
Modeling of Powder Bed Manufacturing Defects
NASA Astrophysics Data System (ADS)
Mindt, H.-W.; Desmaison, O.; Megahed, M.; Peralta, A.; Neumann, J.
2018-01-01
Powder bed additive manufacturing offers unmatched capabilities. The deposition resolution achieved is extremely high, enabling the production of innovative functional products and materials. Achieving the desired final quality is, however, hampered by many potential defects that have to be managed over the course of the manufacturing process. Defects observed in products manufactured via powder bed fusion have been studied experimentally. In this effort we relied on experiments reported in the literature and, when experimental data were not sufficient, performed additional experiments to provide an extended foundation for defect analysis. There is large interest in reducing the effort and cost of additive manufacturing process qualification and certification using integrated computational materials engineering. A prerequisite is, however, that numerical methods can indeed capture defects. A multiscale multiphysics platform is developed and applied to predict and explain the origin of several defects that have been observed experimentally during laser-based powder bed fusion processes. The models utilized are briefly introduced, and their ability to capture the observed defects is verified. The root cause of the defects is explained by analyzing the numerical results, thus confirming the ability of numerical methods to provide a foundation for rapid process qualification.
Numerical restoration of surface vortices in Nb films measured by a scanning SQUID microscope
NASA Astrophysics Data System (ADS)
Ito, Atsuki; Thanh Huy, Ho; Dang, Vu The; Miyoshi, Hiroki; Hayashi, Masahiko; Ishida, Takekazu
2017-07-01
In the present work, we investigated the profile of a vortex appearing in a pure Nb film (500 nm in thickness, 10 mm x 10 mm) by using a scanning SQUID microscope. We found that the local magnetic distribution thus observed is broadened compared to the true vortex profile in the superconducting film. We therefore applied a numerical method to improve the spatial resolution of the scanning SQUID microscope. The method is based on the inverse Biot-Savart law and the Fourier transformation to recover a real-space image. We found that the numerical analysis gives a smaller vortex than the raw vortex profile observed by the scanning microscope.
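A minimal sketch of the Fourier-space restoration idea: a magnetic image recorded at sensor standoff height z is the surface field attenuated by exp(-kz), so dividing by that kernel (with a high-k cutoff against noise amplification) sharpens the vortex. The standoff `z`, pixel size `dx`, and cutoff are hypothetical inputs.

```python
# Restore a surface field image from a standoff measurement by inverting
# the exp(-k z) decay kernel in Fourier space, with a crude k-cutoff.
import numpy as np

def restore(bz_img, z, dx, kmax_frac=0.5):
    ky = 2 * np.pi * np.fft.fftfreq(bz_img.shape[0], dx)
    kx = 2 * np.pi * np.fft.fftfreq(bz_img.shape[1], dx)
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    kern = np.exp(k * z)                           # inverse of the exp(-k z) decay
    kern[k > kmax_frac * k.max()] = 0.0            # regularization against noise
    return np.real(np.fft.ifft2(np.fft.fft2(bz_img) * kern))
```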
NASA Astrophysics Data System (ADS)
Kumkar, Yogesh V.; Sen, P. N.; Chaudhari, Hemankumar S.; Oh, Jai-Ho
2018-02-01
In this paper, an attempt has been made to conduct a numerical experiment with the high-resolution global model GME to predict the tropical storms in the North Indian Ocean during the year 2007. Numerical integrations using the icosahedral-hexagonal grid point global model GME were performed to study the evolution of the tropical cyclones Akash, Gonu, Yemyin and Sidr over the North Indian Ocean during 2007. It is seen that the GME forecast underestimates cyclone intensity, but the model captures the evolution of that intensity, especially the weakening during landfall, which is primarily due to the cutoff of the water vapor supply in the boundary layer as cyclones approach the coastal region. A series of numerical simulations of tropical cyclones has been performed with GME to examine the model's capability in predicting the intensity and track of the cyclones. The model performance is evaluated by calculating root mean square errors as cyclone track errors.
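The track-error evaluation reduces to great-circle distances between forecast and best-track cyclone centres; a minimal sketch with latitude/longitude arrays in degrees (the pairing of forecast and observed positions by valid time is assumed):

```python
# Haversine track errors between forecast and observed cyclone centres.
import numpy as np

def track_errors(lat_f, lon_f, lat_o, lon_o, R=6371.0):
    p1, p2 = np.radians(lat_f), np.radians(lat_o)
    dlat, dlon = p2 - p1, np.radians(lon_o - lon_f)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    d = 2 * R * np.arcsin(np.sqrt(a))            # great-circle distance (km)
    return d, np.sqrt(np.mean(d ** 2))           # per-time errors and RMSE
```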
Large Scale, High Resolution, Mantle Dynamics Modeling
NASA Astrophysics Data System (ADS)
Geenen, T.; Berg, A. V.; Spakman, W.
2007-12-01
To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetic and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom, convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D spherical models. We also applied the above-mentioned method to a high-resolution (~1 km) 2D mantle convection model with temperature-, pressure- and phase-dependent rheology including several phase transitions. We focus on a model of a subducting lithospheric slab which is subject to strong folding at the bottom of the mantle's D" region, which includes the postperovskite phase boundary. For a detailed description of this model we refer to the poster [Mantle convection models of the D" region, U17]. [Saad, 2003] Saad, Y. (2003). Iterative Methods for Sparse Linear Systems. [Sala, 2006] Sala, M. (2006). An Object-Oriented Framework for the Development of Scalable Parallel Multilevel Preconditioners. ACM Transactions on Mathematical Software, 32(3). [Patankar, 1980] Patankar, S. V. (1980). Numerical Heat Transfer and Fluid Flow, Hemisphere, Washington.
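A small-scale serial illustration of the ILU-plus-GMRES combination discussed above, using SciPy; the abstract's systems (10^7-10^8 unknowns, parallel Schwarz/AMG preconditioning) are far beyond this sketch, which only shows the preconditioning pattern.

```python
# ILU-preconditioned GMRES on a small 1D Laplacian test system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, fill_factor=10)              # more fill-in: better, costlier
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
x, info = spla.gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"gmres info = {info}")
```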
Optical circular deflector with attosecond resolution for ultrashort electron beam
Zhang, Zhen; Du, Yingchao; Tang, Chuanxiang; ...
2017-05-25
A novel method using a high-power laser as a circular deflector is proposed for the measurement of femtosecond (fs) and sub-fs electron beams. In the scheme, the electron beam interacts with a laser pulse operating in a radially polarized doughnut mode (TEM01*) in a helical undulator, generating angular kicks along the beam in two directions at the same time. The phase difference between the two angular kicks makes the beam form a ring after a propagation section with appropriate phase advance, which can reveal the current profile of the electron beam. Detailed theoretical analysis of the method and numerical results with reasonable parameters are both presented. Lastly, it is shown that the temporal resolution can reach ~100 attoseconds, which is a significant improvement for the diagnostics of ultrashort electron beams.
A new method to extract modal parameters using output-only responses
NASA Astrophysics Data System (ADS)
Kim, Byeong Hwa; Stubbs, Norris; Park, Taehyo
2005-04-01
This work proposes a new output-only modal analysis method to extract the mode shapes and natural frequencies of a structure. The proposed method is based on a single-degree-of-freedom approach in the time domain. For a set of given mode-isolated signals, the un-damped mode shapes are extracted utilizing the singular value decomposition of the output energy correlation matrix with respect to sensor locations. The natural frequencies are extracted from a noise-free signal that is projected onto the estimated modal basis. The proposed method is particularly efficient when a high resolution of the mode shape is essential. The accuracy of the method is numerically verified using a set of time histories simulated using a finite-element method. The feasibility and practicality of the method are verified using experimental data collected at the newly constructed King Storm Water Bridge in California, United States.
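A bare-bones sketch of the mode-shape extraction step: SVD of the output energy correlation matrix assembled from a mode-isolated response; `Y` is a hypothetical (n_sensors, n_samples) array.

```python
# Mode shape from the dominant singular vector of the output energy
# correlation matrix of a band-pass-isolated (single-mode) response.
import numpy as np

def mode_shape(Y):
    E = Y @ Y.T / Y.shape[1]           # output energy correlation across sensors
    U, s, _ = np.linalg.svd(E)
    phi = U[:, 0]                      # dominant singular vector ~ un-damped shape
    return phi / np.max(np.abs(phi))   # unit-normalized mode shape
```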
NASA Astrophysics Data System (ADS)
Bentz, Brian Z.
Many human cancer cell types over-express folate receptors, and this provides an opportunity to develop targeted anti-cancer drugs. For these drugs to be effective, their kinetics must be well understood in vivo and in deep tissue where tumors occur. We demonstrate a method for imaging these parameters by incorporating a kinetic compartment model and fluorescence into optical diffusion tomography (ODT). The kinetics were imaged in a live mouse, and found to be in agreement with previous in vitro studies, demonstrating the validity of the method and its feasibility as an effective tool in preclinical drug development studies. Progress in developing optical imaging for biomedical applications requires customizable and often complex objects known as "phantoms" for testing and evaluation. We present new optical phantoms fabricated using inexpensive 3D printing methods with multiple materials, allowing for the placement of complex inhomogeneities in heterogeneous or anatomically realistic geometries, as opposed to previous phantoms which were limited to simple shapes formed by molds or machining. Furthermore, we show that Mie theory can be used to design the optical properties to match a target tissue. The phantom fabrication methods are versatile, can be applied to optical imaging methods besides diffusive imaging, and can be used in the calibration of live animal imaging data. Applications of diffuse optical imaging in the operating theater have been limited in part due to computational burden. We present an approach for the fast localization of arteries in the roof of the mouth that has the potential to reduce complications. Furthermore, we use the extracted position information to fabricate a custom surgical guide using 3D printing that could protect the arteries during surgery. The resolution of ODT is severely limited by the attenuation of high spatial frequencies. We present a super-resolution method achieved through the point localization of fluorescent inhomogeneities in a tissue-like scattering medium, and examine the localization uncertainty numerically and experimentally. Furthermore, we show numerical results for the localization of multiple fluorescent inhomogeneities by distinguishing them based on temporal characteristics. Potential applications include imaging neuron activation in the brain.
Higashiura, Akifumi; Ohta, Kazunori; Masaki, Mika; Sato, Masaru; Inaka, Koji; Tanaka, Hiroaki; Nakagawa, Atsushi
2013-11-01
Recently, many technical improvements in macromolecular X-ray crystallography have increased the number of structures deposited in the Protein Data Bank and improved the resolution limit of protein structures. Almost all high-resolution structures have been determined using a synchrotron radiation source in conjunction with cryocooling techniques, which are required in order to minimize radiation damage. However, optimization of cryoprotectant conditions is a time-consuming and difficult step. To overcome this problem, the high-pressure cryocooling method was developed (Kim et al., 2005) and successfully applied to many protein-structure analyses. In this report, using the high-pressure cryocooling method, the X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. Structural comparisons between high- and ambient-pressure cryocooled crystals at ultra-high resolution illustrate the versatility of this technique. This is the first ultra-high-resolution X-ray structure obtained using the high-pressure cryocooling method.
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis and classification due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning have been developed over the past decades, most are aimed at pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in convolutional neural network (CNN) classification results, and to further partition them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to fully automatic and effective VHR image classification.
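A hedged sketch of the fusion logic described above, with a plain softmax-confidence threshold standing in for the rough set boundary region (an assumption, not the paper's formulation); scikit-learn's DecisionTreeClassifier arbitrates the uncertain pixels from stacked CNN and MLP class probabilities.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fuse(cnn_tr, mlp_tr, y_tr, cnn_te, mlp_te, thresh=0.9):
    """cnn_*/mlp_*: (n_pixels, n_classes) probability maps; y_tr: labels."""
    tree = DecisionTreeClassifier(max_depth=6)
    tree.fit(np.hstack([cnn_tr, mlp_tr]), y_tr)   # learn the arbitration rule
    out = cnn_te.argmax(axis=1)                   # trust CNN by default
    uncertain = cnn_te.max(axis=1) < thresh       # stand-in "boundary region"
    out[uncertain] = tree.predict(np.hstack([cnn_te, mlp_te])[uncertain])
    return out

# Tiny synthetic usage with fake probability maps.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(4), size=200)           # fake CNN probabilities
q = rng.dirichlet(np.ones(4), size=200)           # fake MLP probabilities
y = p.argmax(axis=1)
labels = fuse(p[:100], q[:100], y[:100], p[100:], q[100:])
```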
NASA Astrophysics Data System (ADS)
Zhang, Kai; Yang, Fanlin; Zhang, Hande; Su, Dianpeng; Li, QianQian
2017-06-01
The correlation between seafloor morphological features and biological complexity has been identified in numerous recent studies. This research focused on the potential for accurate characterization of coral reefs based on high-resolution bathymetry from multiple sources. A standard deviation (STD) based method for quantitatively characterizing terrain complexity was developed that includes robust estimation to correct for irregular bathymetry and a calibration for the depth-dependent variability of measurement noise. Airborne lidar and shipborne sonar bathymetry measurements from Yuanzhi Island, South China Sea, were merged to generate seamless high-resolution coverage of coral bathymetry from the shoreline to deep water. The new algorithm was applied to the Yuanzhi Island surveys to generate maps of quantitative terrain complexity, which were then compared to in situ video observations of coral abundance. The terrain complexity parameter is significantly correlated with seafloor coral abundance, demonstrating the potential for accurately and efficiently mapping coral abundance through seafloor surveys, including combinations of surveys using different sensors.
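A sketch of an STD-based terrain complexity map with a robust spread estimate (median absolute deviation) in place of the plain standard deviation, in the spirit of the robust estimation step described above. The window size and the depth-dependent noise model are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def terrain_complexity(depth, win=5, noise_coeff=0.001):
    """Windowed robust STD of a bathymetry grid, minus a depth-dependent
    noise floor (crudely modeled here as noise_coeff * |depth|)."""
    pad = win // 2
    z = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth, dtype=float)
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            w = z[i:i + win, j:j + win]
            mad = np.median(np.abs(w - np.median(w)))
            robust_std = 1.4826 * mad        # MAD -> std for Gaussian data
            noise = noise_coeff * abs(depth[i, j])
            out[i, j] = max(robust_std - noise, 0.0)
    return out

demo = terrain_complexity(np.random.rand(32, 32) * -20.0)
```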
Kayen, Robert E.; Barnhardt, Walter A.; Ashford, Scott; Rollins, Kyle
2000-01-01
A ground penetrating radar (GPR) experiment at the Treasure Island Test Site [TILT] was performed to non-destructively image the soil column for changes in density prior to, and following, a liquefaction event. The intervening liquefaction was achieved by controlled blasting. A geotechnical borehole radar technique was used to acquire high-resolution 2-D radar velocity data. This method of non-destructive site characterization uses radar trans-illumination surveys through the soil column and tomographic data manipulation techniques to construct radar velocity tomograms, from which averaged void ratios can be derived at 0.25-0.5 m pixel footprints. Tomograms of void ratio were constructed through the relation between soil porosity and dielectric constant. Both pre- and post-blast tomograms were collected and indicate that liquefaction related densification occurred at the site. Volumetric strains estimated from the tomograms correlate well with the observed settlement at the site. The 2-D imagery of void ratio can serve as high-resolution data layers for numerical site response analysis. (Woodard-USGS)
Eulerian adaptive finite-difference method for high-velocity impact and penetration problems
NASA Astrophysics Data System (ADS)
Barton, P. T.; Deiterding, R.; Meiron, D.; Pullin, D.
2013-05-01
Owing to the complex processes involved, faithful prediction of high-velocity impact events demands a simulation method delivering efficient calculations based on comprehensively formulated constitutive models. Such an approach is presented herein, employing a weighted essentially non-oscillatory (WENO) method within an adaptive mesh refinement (AMR) framework for the numerical solution of hyperbolic partial differential equations. Applied widely in computational fluid dynamics, these methods are well suited to the involved locally non-smooth finite deformations, circumventing any requirement for artificial viscosity functions for shock capturing. Application of the methods is facilitated through using a model of solid dynamics based upon hyper-elastic theory comprising kinematic evolution equations for the elastic distortion tensor. The model for finite inelastic deformations is phenomenologically equivalent to Maxwell's model of tangential stress relaxation. Closure relations tailored to the expected high-pressure states are proposed and calibrated for the materials of interest. Sharp interface resolution is achieved by employing level-set functions to track boundary motion, along with a ghost material method to capture the necessary internal boundary conditions for material interactions and stress-free surfaces. The approach is demonstrated for the simulation of high velocity impacts of steel projectiles on aluminium target plates in two and three dimensions.
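For readers unfamiliar with the WENO building block named above, here is a minimal fifth-order WENO reconstruction of the left interface state from cell averages of a periodic scalar field, using the Jiang-Shu smoothness indicators; the paper applies such machinery to the full solid-dynamics system, which is not reproduced here.

```python
import numpy as np

def weno5_left(u):
    """Fifth-order WENO reconstruction of u at x_{i+1/2} from the left,
    for a periodic array of cell averages."""
    eps = 1e-6
    um2, um1, u0 = np.roll(u, 2), np.roll(u, 1), u
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    # Three third-order candidate stencils.
    p0 = (2 * um2 - 7 * um1 + 11 * u0) / 6
    p1 = (-um1 + 5 * u0 + 2 * up1) / 6
    p2 = (2 * u0 + 5 * up1 - up2) / 6
    # Jiang-Shu smoothness indicators.
    b0 = 13/12 * (um2 - 2*um1 + u0)**2 + 0.25 * (um2 - 4*um1 + 3*u0)**2
    b1 = 13/12 * (um1 - 2*u0 + up1)**2 + 0.25 * (um1 - up1)**2
    b2 = 13/12 * (u0 - 2*up1 + up2)**2 + 0.25 * (3*u0 - 4*up1 + up2)**2
    # Nonlinear weights from the ideal weights (0.1, 0.6, 0.3).
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0 * p0 + a1 * p1 + a2 * p2) / (a0 + a1 + a2)

u_left = weno5_left(np.sin(np.linspace(0, 2*np.pi, 64, endpoint=False)))
```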
Non-oscillatory central differencing for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1988-01-01
Many of the recently developed high resolution schemes for hyperbolic conservation laws are based on upwind differencing. The building block for these schemes is the averaging of an appropriate Godunov solver; its time consuming part involves the field-by-field decomposition which is required in order to identify the direction of the wind. Instead, the use of the more robust Lax-Friedrichs (LxF) solver is proposed. The main advantage is simplicity: no Riemann problems are solved and hence field-by-field decompositions are avoided. The main disadvantage is the excessive numerical viscosity typical to the LxF solver. This is compensated for by using high-resolution MUSCL-type interpolants. Numerical experiments show that the quality of results obtained by such convenient central differencing is comparable with those of the upwind schemes.
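A compact sketch of one staggered Nessyahu-Tadmor step for the inviscid Burgers equation u_t + (u²/2)_x = 0, with minmod-limited MUSCL slopes and a midpoint predictor in place of any Riemann solver. Grid, CFL number and initial data are illustrative, and the alternating staggering of the grid between steps is glossed over.

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def nt_step(u, lam):                 # lam = dt/dx, periodic boundaries
    f = 0.5 * u**2
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slope of u
    df = minmod(np.roll(f, -1) - f, f - np.roll(f, 1))  # limited slope of f
    u_half = u - 0.5 * lam * df                         # midpoint predictor
    f_half = 0.5 * u_half**2
    # Staggered average over [x_j, x_{j+1}] plus central flux difference;
    # no field-by-field decomposition is needed.
    return (0.5 * (u + np.roll(u, -1)) + 0.125 * (du - np.roll(du, -1))
            - lam * (np.roll(f_half, -1) - f_half))

x = np.linspace(0, 1, 400, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.5
for _ in range(200):
    lam = 0.4 / max(np.abs(u).max(), 1e-12)   # CFL 0.4 => lam = dt/dx
    u = nt_step(u, lam)
```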
Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev
2013-01-01
Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key capability in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies of single emitters that can reach an order of magnitude below the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in longer times needed to construct the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when implemented on raw data prior to localization, can improve the localization accuracy of standard existing methods, and also enables the localization of overlapping particles, allowing the use of increased fluorophore activation density and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in localization precision compared to single-fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a 42% decrease in the collection time of super-resolution data with the same resolution. PMID:24466491
Axial field shaping under high-numerical-aperture focusing
NASA Astrophysics Data System (ADS)
Jabbour, Toufic G.; Kuebler, Stephen M.
2007-03-01
Kant [J. Mod. Opt. 47, 905 (2000)] reported a formulation for solving the inverse problem of vector diffraction, which accurately models high-NA focusing. Here, Kant's formulation is adapted to the method of generalized projections to obtain an algorithm for designing diffractive optical elements (DOEs) that reshape the axial point-spread function (PSF). The algorithm is applied to design a binary phase-only DOE that superresolves the axial PSF with controlled increase in axial sidelobes. An 11-zone DOE is identified that axially narrows the PSF central lobe by 29% while maintaining the sidelobe intensity at or below 52% of the peak intensity. This DOE could improve the resolution achievable in several applications without significantly complicating the optical system.
A Data Assimilation System For Operational Weather Forecast In Galicia Region (nw Spain)
NASA Astrophysics Data System (ADS)
Balseiro, C. F.; Souto, M. J.; Pérez-Muñuzuri, V.; Brewster, K.; Xue, M.
Regional weather forecast models, such as the Advanced Regional Prediction System (ARPS), over complex environments with varying local influences require an accurate meteorological analysis that should include all available local meteorological measurements. In this work, the ARPS Data Analysis System (ADAS) (Xue et al. 2001) is applied as a three-dimensional weather analysis tool to include surface station and rawinsonde data with the NCEP AVN forecasts as the analysis background. Currently in ADAS, a set of five meteorological variables is considered during the analysis: horizontal grid-relative wind components, pressure, potential temperature and specific humidity. The analysis is used for high-resolution numerical weather prediction for the Galicia region. The analysis method used in ADAS is based on the successive correction scheme of Bratseth (1986), which asymptotically approaches the result of a statistical (optimal) interpolation, but at lower computational cost. As in the optimal interpolation scheme, the Bratseth interpolation method can take into account the relative error between background and observational data; it is therefore relatively insensitive to large variations in data density and can integrate data of mixed accuracy. This method can be applied economically in an operational setting, providing significant improvement over the background model forecast as well as over any analysis without high-resolution local observations. A one-way nesting is applied for weather forecasting in the Galicia region, and the use of this assimilation system in both domains shows better results not only in the initial conditions but also in all forecast periods. Bratseth, A. M. (1986): "Statistical interpolation by means of successive corrections." Tellus, 38A, 439-447. Souto, M. J., Balseiro, C. F., Pérez-Muñuzuri, V., Xue, M., Brewster, K. (2001): "Impact of cloud analysis on numerical weather prediction in the Galician region of Spain." Submitted to Journal of Applied Meteorology. Xue, M., Wang, D., Gao, J., Brewster, K., Droegemeier, K. K. (2001): "The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation." Meteor. Atmos. Phys., accepted.
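A one-dimensional, hedged sketch of a Bratseth-style successive-corrections analysis: gridded and observation-space estimates are corrected iteratively with Gaussian weights, with an observation-error inflation on the diagonal so the iteration converges toward the optimal-interpolation solution. The length scale and error ratio are illustrative assumptions.

```python
import numpy as np

def bratseth(xg, bg, xo, yo, L=100e3, obs_err_ratio=0.25, n_iter=10):
    """xg/bg: grid coordinates and background; xo/yo: obs positions/values."""
    W = np.exp(-0.5 * ((xg[:, None] - xo[None, :]) / L) ** 2)   # grid <- obs
    Wo = np.exp(-0.5 * ((xo[:, None] - xo[None, :]) / L) ** 2)  # obs <- obs
    Wo = Wo + obs_err_ratio * np.eye(xo.size)   # diagonal inflation (R/B ratio)
    norm = Wo.sum(axis=1)
    xa = bg.copy()
    ya = np.interp(xo, xg, bg)                  # background at obs points
    for _ in range(n_iter):
        innov = yo - ya
        xa = xa + W @ (innov / norm)            # correct the gridded field
        ya = ya + Wo @ (innov / norm)           # and the obs-space estimate
    return xa

xg = np.linspace(0, 1e6, 200)
xa = bratseth(xg, np.zeros(200), np.array([2e5, 5e5, 8e5]),
              np.array([1.0, -0.5, 0.8]))
```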
Resolution requirements for numerical simulations of transition
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Krist, Steven E.; Hussaini, M. Yousuff
1989-01-01
The resolution requirements for direct numerical simulations of transition to turbulence are investigated. A reliable resolution criterion is determined from the results of several detailed simulations of channel and boundary-layer transition.
A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM
NASA Astrophysics Data System (ADS)
Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui
2014-12-01
Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that the source spacing has an obvious influence on the investigation depth and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution of low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy for guiding geosteering drilling, and that it is suitable for simulating the response of resistivity LWD tools.
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
USDA-ARS?s Scientific Manuscript database
The availability of numerous spectral, spatial, and contextual features with object-based image analysis (OBIA) renders the selection of optimal features a time consuming and subjective process. While several feature selection methods have been used in conjunction with OBIA, a robust comparison of th...
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement in the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
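The sketch below shows plain MLEM iterations with a simple stopping test based on stalling of the Poisson log-likelihood; this stand-in criterion only marks where the paper's heuristic statistical rule would plug in, and does not reproduce it.

```python
import numpy as np

def mlem(A, y, n_max=200, tol=1e-4):
    """MLEM for y ~ Poisson(A x), with a crude likelihood-stall stopping test."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                         # A^T 1 (sensitivity image)
    ll_old = -np.inf
    for k in range(n_max):
        proj = A @ x
        # Multiplicative MLEM update.
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
        ll = np.sum(y * np.log(np.maximum(proj, 1e-12)) - proj)
        if ll - ll_old < tol * abs(ll_old):      # heuristic stopping point
            return x, k
        ll_old = ll
    return x, n_max

rng = np.random.default_rng(1)
A = rng.random((64, 32)); truth = rng.random(32) * 10
y = rng.poisson(A @ truth).astype(float)
x_hat, stop_it = mlem(A, y)
```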
Pasin, Daniel; Cawley, Adam; Bidny, Sergei; Fu, Shanlin
2017-10-01
The proliferation of new psychoactive substances (NPS) in recent years has resulted in the development of numerous analytical methods for the detection and identification of known and unknown NPS derivatives. High-resolution mass spectrometry (HRMS) has been identified as the method of choice for broad screening of NPS in a wide range of analytical contexts because of its ability to measure accurate masses using data-independent acquisition (DIA) techniques. Additionally, it has shown promise for non-targeted screening strategies that have been developed in order to detect and identify novel analogues without the need for certified reference materials (CRMs) or comprehensive mass spectral libraries. This paper reviews the applications of HRMS for the analysis of NPS in forensic drug chemistry and analytical toxicology. It provides an overview of the sample preparation procedures in addition to data acquisition, instrumental analysis, and data processing techniques. Furthermore, it gives an overview of the current state of non-targeted screening strategies with discussion of future directions and perspectives of this technique.
Particle displacement tracking applied to air flows
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640×480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 sec. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
Thermophysical modelling for high-resolution digital terrain models
NASA Astrophysics Data System (ADS)
Pelivan, I.
2018-07-01
A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions necessary for reaching model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of the initial boundary conditions, thermal inertia, illumination conditions, and the resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows for assigning other facet-specific properties such as local thermal inertia. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavourable illumination conditions, such as little to no sunlight for a period of time, as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environment, and lander shadowing. In combination with an appropriate roughness model, the method is very suitable for application to disc-integrated and disc-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.
Triebl, Alexander; Trötzmüller, Martin; Hartler, Jürgen; Stojakovic, Tatjana; Köfeler, Harald C
2017-05-15
An improved approach for selective and sensitive identification and quantitation of lipid molecular species using reversed phase chromatography coupled to high resolution mass spectrometry was developed. The method is applicable to a wide variety of biological matrices using a simple liquid-liquid extraction procedure. Together, this approach combines multiple selectivity criteria: reversed phase chromatography separates lipids according to their acyl chain length and degree of unsaturation and is capable of resolving positional isomers of lysophospholipids, as well as structural isomers of diacyl phospholipids and glycerolipids. Orbitrap mass spectrometry delivers the elemental composition of both positive and negative ions with high mass accuracy. Finally, automatically generated tandem mass spectra provide structural insight into numerous glycerolipids, phospholipids, and sphingolipids within a single run. Calibration showed linearity ranges of more than four orders of magnitude, good values for accuracy and precision at biologically relevant concentration levels, and limits of quantitation of a few femtomoles on column. Hundreds of lipid molecular species were detected and quantified in three different biological matrices, which cover well the wide variety and complexity of various model organisms in lipidomic research. Together with a software package, this method is a prime choice for global lipidomic analysis of even the most complex biological samples.
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization, applying the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
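A rough stand-in for the optimization described above: a projected ISTA iteration minimizing ||Kf − s||² + λ||f||₁ subject to f ≥ 0 on a discretized Laplace kernel. The paper's PDCO interior-point solver is replaced by this simpler scheme, and the T2 grid, noise level and regularization weight are assumptions.

```python
import numpy as np

def nn_l1_laplace(t, T2, s, lam=1e-3, n_iter=5000):
    """Non-negative, L1-regularized inversion of s(t) = sum_j f_j e^{-t/T2_j}."""
    K = np.exp(-t[:, None] / T2[None, :])        # discretized Laplace kernel
    step = 1.0 / np.linalg.norm(K, 2) ** 2       # 1/L, L = ||K||_2^2
    f = np.zeros(T2.size)
    for _ in range(n_iter):
        grad = K.T @ (K @ f - s)
        # Proximal step for lam*||f||_1 combined with projection onto f >= 0.
        f = np.maximum(f - step * (grad + lam), 0.0)
    return f

t = np.linspace(1e-3, 1.0, 400)                  # acquisition times (s)
T2 = np.logspace(-3, 0, 100)                     # candidate relaxation grid
truth = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.1) ** 2)
s = np.exp(-t[:, None] / T2[None, :]) @ truth + 0.001 * np.random.randn(t.size)
f_hat = nn_l1_laplace(t, T2, s)
```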
Quasi-most unstable modes: a window to 'À la carte' ensemble diversity?
NASA Astrophysics Data System (ADS)
Homar Santaner, Victor; Stensrud, David J.
2010-05-01
The atmospheric scientific community is nowadays facing the ambitious challenge of providing useful forecasts of atmospheric events that produce high societal impact. The low level of social resilience to false alarms creates tremendous pressure on forecasting offices to issue accurate, timely and reliable warnings. Currently, no operational numerical forecasting system is able to respond to the societal demand for high-resolution (in time and space) predictions in the 12-72 h time span. The main reasons for such deficiencies are the lack of adequate observations and the high nonlinearity of the numerical models that are currently used. The whole weather forecasting problem is intrinsically probabilistic, and current methods aim at coping with the various sources of uncertainty and the error propagation throughout the forecasting system. This probabilistic perspective is often created by generating ensembles of deterministic predictions that are aimed at sampling the most important sources of uncertainty in the forecasting system. The ensemble generation/sampling strategy is a crucial aspect of their performance and various methods have been proposed. Although global forecasting offices have been using ensembles of perturbed initial conditions for medium-range operational forecasts since 1994, no consensus exists regarding the optimum sampling strategy for high-resolution short-range ensemble forecasts. Bred vectors, however, have been hypothesized to better capture the growing modes in the highly nonlinear mesoscale dynamics of severe episodes than singular vectors or observation perturbations. Yet even this technique is not able to produce enough diversity in the ensembles to accurately and routinely predict extreme phenomena such as severe weather. Thus, we propose a new method to generate ensembles of initial-condition perturbations that is based on the breeding technique. Given a standard bred mode, a set of customized perturbations is derived with specified amplitudes and horizontal scales. This allows the ensemble to excite growing modes across a wider range of scales. Results show that this approach produces significantly more spread in the ensemble prediction than standard bred modes alone. Several examples illustrating the benefits of this approach for severe weather forecasts will be provided.
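To make the breeding idea concrete, the sketch below runs a breeding cycle on the Lorenz-96 toy model and then rescales the resulting bred vector to several amplitudes, a crude low-dimensional analogue of the customized amplitude/scale perturbations proposed above; all model and cycle parameters are illustrative.

```python
import numpy as np

def l96_rhs(x, F=8.0):
    """Lorenz-96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step(x, dt=0.01):                          # one RK4 step
    k1 = l96_rhs(x); k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2); k4 = l96_rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

rng = np.random.default_rng(0)
ctrl = rng.standard_normal(40)
pert = ctrl + 1e-3 * rng.standard_normal(40)
amp = 1e-3
for cycle in range(50):                        # breeding cycles
    for _ in range(20):
        ctrl, pert = step(ctrl), step(pert)
    bred = pert - ctrl                         # bred vector (growing mode)
    pert = ctrl + amp * bred / np.linalg.norm(bred)   # rescale, then re-breed

# Customized perturbations: the same bred structure at several amplitudes.
members = [ctrl + a * bred / np.linalg.norm(bred) for a in (0.5, 1.0, 2.0)]
```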
Detailed Characterization of Nearshore Processes During NCEX
NASA Astrophysics Data System (ADS)
Holland, K.; Kaihatu, J. M.; Plant, N.
2004-12-01
Recent technology advances have allowed the coupling of remote sensing methods with advanced wave and circulation models to yield detailed characterizations of nearshore processes. This methodology was demonstrated as part of the Nearshore Canyon EXperiment (NCEX) in La Jolla, CA during Fall 2003. An array of high-resolution, color digital cameras was installed to monitor an alongshore distance of nearly 2 km out to depths of 25 m. This digital imagery was analyzed over the three-month period through an automated process to produce hourly estimates of wave period, wave direction, breaker height, shoreline position, sandbar location, and bathymetry at numerous locations during daylight hours. Interesting wave propagation patterns in the vicinity of the canyons were observed. In addition, directional wave spectra and swash/surf flow velocities were estimated using more computationally intensive methods. These measurements were used to provide forcing and boundary conditions for the Delft3D wave and circulation model, giving additional estimates of nearshore processes such as dissipation and rip currents. An optimal approach for coupling these remotely sensed observations to the numerical model was selected to yield accurate but also timely characterizations. This involved assimilation of directional spectral estimates near the offshore boundary to mimic forcing conditions achieved under traditional approaches involving nested domains. Measurements of breaker heights and flow speeds were also used to adaptively tune model parameters to provide enhanced accuracy. Comparisons of model predictions and video observations show significant correlation. As compared to nesting within larger-scale and coarser resolution models, the advantage of providing boundary conditions using remote sensing is much improved resolution and fidelity. For example, rip current development was both modeled and observed. These results indicate that this approach to data-model coupling is tenable and may be useful in the near-real-time characterizations required by many applied scenarios.
CLIP-related methodologies and their application to retrovirology.
Bieniasz, Paul D; Kutluay, Sebla B
2018-05-02
Virtually every step of HIV-1 replication and numerous cellular antiviral defense mechanisms are regulated by the binding of a viral or cellular RNA-binding protein (RBP) to distinct sequence or structural elements on HIV-1 RNAs. Until recently, these protein-RNA interactions were studied largely by in vitro binding assays complemented with genetics approaches. However, these methods are highly limited in the identification of the relevant targets of RBPs in physiologically relevant settings. Development of crosslinking-immunoprecipitation sequencing (CLIP) methodology has revolutionized the analysis of protein-nucleic acid complexes. CLIP combines immunoprecipitation of covalently crosslinked protein-RNA complexes with high-throughput sequencing, providing a global account of RNA sequences bound by an RBP of interest in cells (or virions) at near-nucleotide resolution. Numerous variants of the CLIP protocol have recently been developed, some with major improvements over the original. Herein, we briefly review these methodologies and give examples of how CLIP has been successfully applied to retrovirology research.
High-order fractional partial differential equation transform for molecular surface construction.
Hu, Langhua; Chen, Duan; Wei, Guo-Wei
2013-01-01
Fractional derivatives and fractional calculus play a significant role in the theoretical modeling of scientific and engineering problems. However, only relatively low order fractional derivatives are used at present. In general, it is not obvious what role a high fractional derivative can play and how to make use of arbitrarily high-order fractional derivatives. This work introduces arbitrarily high-order fractional partial differential equations (PDEs) to describe fractional hyperdiffusions. The fractional PDEs are constructed via a fractional variational principle. A fast fractional Fourier transform (FFFT) is proposed to numerically integrate the high-order fractional PDEs so as to avoid the stringent stability constraints in solving high-order evolution PDEs. The proposed high-order fractional PDEs are applied to the surface generation of proteins. We first validate the proposed method with a variety of test examples in two- and three-dimensional settings. The impact of high-order fractional derivatives on surface analysis is examined. We also construct a fractional PDE transform based on arbitrarily high-order fractional PDEs. We demonstrate that the use of arbitrarily high-order derivatives gives rise to time-frequency localization, control of the spectral distribution, and regulation of the spatial resolution in the fractional PDE transform. Consequently, the fractional PDE transform enables the mode decomposition of images, signals, and surfaces. The effect of the propagation time on the quality of the resulting molecular surfaces is also studied. The computational efficiency of the present surface generation method is compared with the MSMS approach in Cartesian representation. We further validate the present method by examining some benchmark indicators of macromolecular surfaces, i.e., surface area, surface enclosed volume, surface electrostatic potential and solvation free energy. Extensive numerical experiments and comparison with an established surface model indicate that the proposed high-order fractional PDEs are robust, stable and efficient for biomolecular surface generation.
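In Fourier space a fractional hyperdiffusion u_t = −(−Δ)^(s/2) u reduces to independent modal decay, which is the essence of the FFFT integration named above: each mode is damped by exp(−|k|^s t) for an arbitrary, possibly non-integer, order s. The one-dimensional sketch below illustrates this; order, grid and time are illustrative choices.

```python
import numpy as np

def fractional_diffuse(u, dx, s=4.5, t=1e-4):
    """Exact-in-time Fourier integrator for u_t = -(-Laplacian)^(s/2) u in 1D."""
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-np.abs(k) ** s * t)))

x = np.linspace(0, 1, 512, endpoint=False)
u0 = (np.abs(x - 0.5) < 0.1).astype(float)        # sharp box "surface"
u1 = fractional_diffuse(u0, x[1] - x[0], s=4.5)   # fractional order s = 4.5
```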
Recent advances in flexible low power cholesteric LCDs
NASA Astrophysics Data System (ADS)
Khan, Asad; Shiyanovskaya, Irina; Montbach, Erica; Schneider, Tod; Nicholson, Forrest; Miller, Nick; Marhefka, Duane; Ernst, Todd; Doane, J. W.
2006-05-01
Bistable reflective cholesteric displays are a liquid crystal display technology developed to fill a market need for very low power displays. Their unique look, high reflectivity, bistability, and simple structure make them an ideal flat panel display choice for handheld or other portable devices where small lightweight batteries with long lifetimes are important. Applications ranging from low resolution large signs to ultra high resolution electronic books can utilize cholesteric displays to not only benefit from the numerous features, but also create enabling features that other flat panel display technologies cannot. Flexible displays are the focus of attention of numerous research groups and corporations worldwide. Cholesteric displays have been demonstrated to be highly amenable to flexible substrates. This paper will review recent advances in flexible cholesteric displays including both phase separation and emulsification approaches to encapsulation. Both approaches provide unique benefits to various aspects of manufacturability, processes, flexibility, and conformability.
NASA Astrophysics Data System (ADS)
González, J. A.; Guzmán, F. S.
2018-03-01
We present a method for estimating the velocity of a wandering black hole and the equation of state for the gas around it based on a catalog of numerical simulations. The method uses machine-learning methods based on convolutional neural networks applied to the classification of images resulting from numerical simulations. Specifically we focus on the supersonic velocity regime and choose the direction of the black hole to be parallel to its spin. We build a catalog of 900 simulations by numerically solving Euler's equations onto the fixed space-time background of a black hole, for two parameters: the adiabatic index Γ with values in the range [1.1, 5 /3 ], and the asymptotic relative velocity of the black hole with respect to the surroundings v∞, with values within [0.2 ,0.8 ]c . For each simulation we produce a 2D image of the gas density once the process of accretion has approached a stationary regime. The results obtained show that the implemented convolutional neural networks are able to correctly classify the adiabatic index 87.78% of the time within an uncertainty of ±0.0284 , while the prediction of the velocity is correct 96.67% of the time within an uncertainty of ±0.03 c . We expect that this combination of a massive number of numerical simulations and machine-learning methods will help us analyze more complicated scenarios related to future high-resolution observations of black holes, like those from the Event Horizon Telescope.
NASA Astrophysics Data System (ADS)
Cavalié, T.; Billebaud, F.; Encrenaz, T.; Dobrijevic, M.; Brillet, J.; Forget, F.; Lellouch, E.
2008-10-01
Aims: We have recorded high spectral resolution spectra and derived precise atmospheric temperature profiles and wind velocities in the atmosphere of Mars. We have compared observations of the planetary mean thermal profile and mesospheric wind velocities on the disk, obtained with our millimetric observations of CO rotational lines, to predictions from the Laboratoire de Météorologie Dynamique (LMD) Mars General Circulation Model, as provided through the Mars Climate Database (MCD) numerical tool. Methods: We observed the atmosphere of Mars at CO(1-0) and CO(2-1) wavelengths with the IRAM 30-m antenna in June 2001 and November 2005. We retrieved the mean thermal profile of the planet from high and low spectral resolution data with an inversion method detailed here. High spectral resolution spectra were used to derive mesospheric wind velocities on the planetary disk. We also report here the use of 13CO(2-1) line core shifts to measure wind velocities at 40 km. Results: Neither the Mars Year 24 (MY24) nor the Dust Storm scenario from the Mars Climate Database (MCD) provides satisfactory fits to the 2001 and 2005 data when retrieving the thermal profiles. The Warm scenario only provides good fits for altitudes lower than 30 km. The atmosphere is warmer than predicted up to 60 km and then becomes colder. Dust loading could be the reason for this mismatch. The MCD MY24 scenario predicts a thermal inversion layer between 40 and 60 km, which is not retrieved from the high spectral resolution data. Our results are generally in agreement with other observations from 10 to 40 km in altitude, but our results obtained from the high spectral resolution spectra differ in the 40-70 km layer, where the instruments are the most sensitive. The wind velocities we retrieve from our 12CO observations confirm MCD predictions for 2001 and 2005. Velocities obtained from 13CO observations are consistent with MCD predictions in 2001, but are lower than predicted in 2005.
A Novel Hyperbolization Procedure for The Two-Phase Six-Equation Flow Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samet Y. Kadioglu; Robert Nourgaliev; Nam Dinh
2011-10-01
We introduce a novel approach for the hyperbolization of the well-known two-phase six-equation flow model. The six-equation model has been frequently used in many two-phase flow applications, such as bubbly fluid flows in nuclear reactors. One major drawback of this model is that it can be arbitrarily non-hyperbolic, resulting in difficulties such as numerical instability. Non-hyperbolic behavior can be associated with complex eigenvalues of the characteristic matrix of the system. Complex eigenvalues are often due to certain flow parameter choices, such as the definition of the interfacial pressure terms. In our method, we prevent the characteristic matrix from acquiring complex eigenvalues by fine-tuning the interfacial pressure terms with an iterative procedure. In this way, the characteristic matrix possesses all real eigenvalues, meaning that the characteristic wave speeds are all real and the overall two-phase flow model therefore becomes hyperbolic. The main advantage of this is that one can apply less diffusive, highly accurate high-resolution numerical schemes that often rely on explicit calculations of real eigenvalues. We note that existing non-hyperbolic models are discretized mainly with low-order, highly dissipative numerical techniques in order to avoid stability issues.
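The core of the procedure is an eigenvalue test: increase an interfacial-pressure coefficient until the characteristic matrix has only real eigenvalues. The sketch below runs that loop on a 2×2 stand-in matrix whose eigenvalues are u ± sqrt(c + p_int); it is not the actual six-equation characteristic matrix.

```python
import numpy as np

def char_matrix(p_int, u=1.0, c=-2.0):
    # 2x2 stand-in: eigenvalues are u ± sqrt(c + p_int), complex while
    # c + p_int < 0, real once the interfacial term is tuned large enough.
    return np.array([[u, 1.0],
                     [c + p_int, u]])

p_int = 0.0
while np.max(np.abs(np.linalg.eigvals(char_matrix(p_int)).imag)) > 1e-12:
    p_int += 0.05                      # iterative fine-tuning step
print("all-real eigenvalues reached at p_int ≈", p_int)
```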
Navier-Stokes simulations of unsteady transonic flow phenomena
NASA Technical Reports Server (NTRS)
Atwood, C. A.
1992-01-01
Numerical simulations of two classes of unsteady flows are obtained via the Navier-Stokes equations: a blast-wave/target interaction problem class and a transonic cavity flow problem class. The method developed for the viscous blast-wave/target interaction problem assumes a laminar, perfect gas implemented in a structured finite-volume framework. The approximately factored implicit scheme uses Newton subiterations to obtain the spatially and temporally second-order accurate time history of the blast-waves with stationary targets. The inviscid flux is evaluated using either of two upwind techniques, while the full viscous terms are computed by central differencing. Comparisons of unsteady numerical, analytical, and experimental results are made in two and three dimensions for Couette flows, a starting shock-tunnel, and a shock-tube blockage study. The results show accurate wave speed resolution and nonoscillatory discontinuity capturing of the predominantly inviscid flows. Viscous effects were increasingly significant at large post-interaction times. While the blast-wave/target interaction problem benefits from high-resolution methods applied to the Euler terms, the transonic cavity flow problem requires the use of an efficient scheme implemented in a geometrically flexible overset mesh environment. Hence, the Reynolds-averaged Navier-Stokes equations implemented in a diagonal form are applied to the cavity flow class of problems. Comparisons between numerical and experimental results are made in two dimensions for free shear layers and both rectangular and quieted cavities, and in three dimensions for Stratospheric Observatory For Infrared Astronomy (SOFIA) geometries. The acoustic behavior of the rectangular and three-dimensional cavity flows compares well with experiment in terms of frequency, magnitude, and quieting trends. However, there is a more rapid decrease in computed acoustic energy with frequency than observed experimentally, owing to numerical dissipation. In addition, optical phase distortion due to the time-varying density field is modelled using geometrical constructs. The computed optical distortion trends compare with the experimentally inferred result, but underpredict the fluctuating phase difference magnitude.
NASA Astrophysics Data System (ADS)
Nunes, Ana
2015-04-01
Extreme meteorological events have played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for the analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, centered on a regional modeling system consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually produce improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions and providing accurate hydrometeorological variables to higher resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making regarding natural and social disasters, reducing impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40 and 25 km of the regional climate model.
Multi-dimensional upwinding-based implicit LES for the vorticity transport equations
NASA Astrophysics Data System (ADS)
Foti, Daniel; Duraisamy, Karthik
2017-11-01
Complex turbulent flows such as rotorcraft and wind turbine wakes are characterized by the presence of strong coherent structures that can be compactly described by vorticity variables. The vorticity-velocity formulation of the incompressible Navier-Stokes equations is employed to increase numerical efficiency. Compared to the traditional velocity-pressure formulation, high order numerical methods and sub-grid scale models for the vorticity transport equation (VTE) have not been fully investigated. Consistent treatment of the convection and stretching terms also needs to be addressed. Our belief is that, by carefully designing sharp gradient-capturing numerical schemes, coherent structures can be more efficiently captured using the vorticity-velocity formulation. In this work, a multidimensional upwind approach for the VTE is developed using the generalized Riemann problem-based scheme devised by Parish et al. (Computers & Fluids, 2016). The algorithm obtains high resolution by augmenting the upwind fluxes with transverse and normal direction corrections. The approach is investigated with several canonical vortex-dominated flows including isolated and interacting vortices and turbulent flows. The capability of the technique to represent sub-grid scale effects is also assessed. Navy contract titled ``Turbulence Modelling Across Disparate Length Scales for Naval Computational Fluid Dynamics Applications,'' through Continuum Dynamics, Inc.
Analysis of impact of general-purpose graphics processor units in supersonic flow modeling
NASA Astrophysics Data System (ADS)
Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.
2017-06-01
Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPU) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities for the use of GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50× speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
Performance of Low Dissipative High Order Shock-Capturing Schemes for Shock-Turbulence Interactions
NASA Technical Reports Server (NTRS)
Sandham, N. D.; Yee, H. C.
1998-01-01
Accurate and efficient direct numerical simulation of turbulence in the presence of shock waves represents a significant challenge for numerical methods. The objective of this paper is to evaluate the performance of high order compact and non-compact central spatial differencing employing total variation diminishing (TVD) shock-capturing dissipations as characteristic based filters for two model problems combining shock wave and shear layer phenomena. A vortex pairing model evaluates the ability of the schemes to cope with shear layer instability and eddy shock waves, while a shock wave impingement on a spatially-evolving mixing layer model studies the accuracy of computation of vortices passing through a sequence of shock and expansion waves. A drastic increase in accuracy is observed if a suitable artificial compression formulation is applied to the TVD dissipations. With this modification to the filter step the fourth-order non-compact scheme shows improved results in comparison to second-order methods, while retaining the good shock resolution of the basic TVD scheme. For this characteristic based filter approach, however, the benefits of compact schemes or schemes with higher than fourth order are not sufficient to justify the higher complexity near the boundary and/or the additional computational cost.
NASA Astrophysics Data System (ADS)
Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José; Fink, Mathias; Boccara, Claude
2018-02-01
Despite obvious improvements in visualization of the in vivo cornea through faster imaging speeds and higher axial resolutions, cellular imaging remains an unresolved task for OCT, as en face viewing with high lateral resolution is required. The latter is possible with FFOCT, a method that relies on a camera, moderate-numerical-aperture (NA) objectives and an incoherent light source to provide en face images with micrometer-level resolution. Recently, we demonstrated for the first time the ability of FFOCT to capture images from the in vivo human cornea [1]. In the current paper we present an extensive study of the appearance of healthy in vivo human corneas under FFOCT examination. En face corneal images with micrometer-level resolution were obtained from three healthy subjects. For each subject it was possible to acquire images through the entire corneal depth and to visualize the epithelium structures, Bowman's layer, sub-basal nerve plexus (SNP) fibers, anterior, middle and posterior stroma, and endothelial cells with nuclei. Dimensions and densities of the structures visible with FFOCT are in agreement with those seen by other cornea imaging methods. The cellular-level detail in the images, together with the relatively large field of view (FOV) and contactless imaging, makes this device a promising candidate for becoming a new tool in ophthalmological diagnostics.
NASA Astrophysics Data System (ADS)
Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Kemper, Björn
2017-02-01
The modular combination of optical microscopes with digital holographic microscopy (DHM) has been proven to be a powerful tool for quantitative live cell imaging. The introduction of a condenser and different microscope objectives (MO) simplifies the usage of the technique and makes it easier to measure different kinds of specimens at different magnifications. However, the high flexibility of illumination and imaging also causes variable phase aberrations that need to be eliminated for high resolution quantitative phase imaging. Existing phase aberration compensation methods either require additional elements in the reference arm or need specimen-free reference areas or separate reference holograms to build suitable digital phase masks. These inherent requirements make them impractical for use with highly variable illumination and imaging systems and prevent on-line monitoring of living cells. In this paper, we present a simple numerical method for phase aberration compensation based on the analysis of holograms in the spatial frequency domain, with capabilities for on-line quantitative phase imaging. From a single-shot off-axis hologram, the whole phase aberration can be eliminated automatically without numerical fitting or pre-knowledge of the setup. The capabilities and robustness for quantitative phase imaging of living cancer cells are demonstrated.
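A hedged sketch of the spatial-frequency-domain idea: locate the off-axis carrier peak of the hologram spectrum, keep one sideband, re-center it to remove the tilt term, then fit and subtract a low-order polynomial surface from the demodulated phase. The DC suppression, window size and quadratic fit below are crude stand-ins for the paper's procedure and assume a smooth, low-dynamic residual phase (no unwrapping).

```python
import numpy as np

def compensate(holo, win=15):
    H = np.fft.fft2(holo)
    H[0, 0] = 0                                   # knock out the DC peak (crude)
    ny, nx = H.shape
    py, px = np.unravel_index(np.argmax(np.abs(H)), H.shape)
    # Keep a window around the carrier peak (selects one sideband).
    Y, X = np.ogrid[0:ny, 0:nx]
    d2 = (np.minimum(np.abs(Y - py), ny - np.abs(Y - py)) ** 2
          + np.minimum(np.abs(X - px), nx - np.abs(X - px)) ** 2)
    H = np.where(d2 <= win ** 2, H, 0)
    H = np.roll(H, (-py, -px), axis=(0, 1))       # center carrier -> remove tilt
    phase = np.angle(np.fft.ifft2(H))
    # Fit and subtract a quadratic surface (defocus/astigmatism-like terms).
    y, x = np.mgrid[0:ny, 0:nx]
    B = np.column_stack([np.ones(phase.size), x.ravel(), y.ravel(),
                         (x ** 2).ravel(), (y ** 2).ravel(), (x * y).ravel()])
    coef, *_ = np.linalg.lstsq(B, phase.ravel(), rcond=None)
    return phase - (B @ coef).reshape(phase.shape)

y, x = np.mgrid[0:256, 0:256]
aberr = 2e-5 * ((x - 128) ** 2 + (y - 128) ** 2)  # small, to avoid wrapping
holo = 1 + np.cos(0.5 * x + aberr)                # off-axis carrier + aberration
flat = compensate(holo)
```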
Improved wavefront correction for coherent image restoration.
Zelenka, Claudius; Koch, Reinhard
2017-08-07
Coherent imaging has a wide range of applications in, for example, microscopy, astronomy, and radar imaging. Particularly interesting is the field of microscopy, where the optical quality of the lens is the main limiting factor. In this article, novel algorithms for the restoration of blurred images in a system with known optical aberrations are presented. Physically motivated by scalar diffraction theory, the new algorithms are based on Haugazeau POCS and FISTA, and are faster and more robust than methods presented earlier. With the new approach the level of restoration quality on real images is very high; blurring and ringing caused by defocus can be effectively removed. In classical microscopy, lenses with very low aberration must be used, which puts a practical limit on their size and numerical aperture. A coherent microscope using the novel restoration method overcomes this limitation. In contrast to incoherent microscopy, severe optical aberrations including defocus can be removed, hence the requirements on the quality of the optics are lower. This can be exploited for a substantial price reduction of the optical system. It can also be used to achieve higher resolution than in classical microscopy, using lenses with high numerical aperture and high aberration. All this makes coherent microscopy superior to traditional incoherent microscopy in suitable applications.
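To make the FISTA ingredient concrete, here is a minimal sketch of FISTA-based deconvolution of a coherent field with a known aberrated transfer function. It assumes an L1 (sparsity) penalty and periodic convolution; the paper's actual data term, regularizer, and the Haugazeau POCS variant are not reproduced.

```python
import numpy as np

def fista_deconvolve(blurred, otf, n_iter=100, lam=1e-3):
    """Sketch of FISTA for restoring a complex field blurred by a known
    coherent transfer function `otf` (FFT of the coherent PSF).
    Minimizes ||A x - b||^2 + lam*||x||_1 with A = convolution by the PSF."""
    A  = lambda x: np.fft.ifft2(np.fft.fft2(x) * otf)
    At = lambda x: np.fft.ifft2(np.fft.fft2(x) * np.conj(otf))
    L = np.max(np.abs(otf))**2            # Lipschitz constant of the gradient
    x = np.zeros_like(blurred)
    z, t = x, 1.0
    for _ in range(n_iter):
        grad = At(A(z) - blurred)         # gradient of the data term at z
        v = z - grad / L
        # Complex soft-thresholding: proximal step for the L1 penalty.
        mag = np.abs(v)
        x_new = np.where(mag > 0, v / np.maximum(mag, 1e-12), 0) \
                * np.maximum(mag - lam / L, 0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x
```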
High-resolution seismic reflection profiling for mapping shallow aquifers in Lee County, Florida
Missimer, T.M.; Gardner, Richard Alfred
1976-01-01
High-resolution continuous seismic reflection profiling equipment was utilized to define the configuration of sedimentary layers underlying part of Lee County, Florida. About 45 miles (72 kilometers) of profiles were made on the Caloosahatchee River Estuary and San Carlos Bay. Two different acoustic energy sources, a high-resolution boomer and a 45-electrode high-resolution sparker, both having a power input of 300 joules, were used to obtain both adequate penetration and good resolution. The seismic profiles show that much of the strata of middle Miocene to Holocene age apparently are extensively folded but not faulted. Initial interpretations indicate that: (1) the top of the Hawthorn Formation (which contains the upper Hawthorn aquifer) has much relief due chiefly to apparent folding; (2) the limestone, sandstone, and unconsolidated sand and phosphorite, which together compose the sandstone aquifer, appear to be discontinuous; (3) the green clay unit of the Tamiami Formation contains large-scale angular beds dipping eastward; and (4) numerous deeply cut alluvium-filled paleochannels underlie the Caloosahatchee River. (Woodard-USGS)
Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition
NASA Astrophysics Data System (ADS)
Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen
2017-04-01
Full Waveform Inversion (FWI) can be used to build high resolution velocity models, but many challenges remain in seismic field data processing. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is very reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI is highly resistant to Gaussian noise. The WMD method is a promising candidate for land seismic FWI, because it can reconstruct low-frequency information, lower the dominant frequency in the adjoint source, and resist noise.
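A multi-step (multi-scale) inversion loop of the general kind referred to above can be sketched as follows. This is not the paper's algorithm: `invert_band` is a hypothetical single-band FWI solver (adjoint-state gradient plus line search) supplied by the user, and the Gaussian smoother is a crude stand-in for a proper band-pass filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lowpass(data, fmax, dt=0.004):
    """Crude low-pass along the time axis via Gaussian smoothing
    (illustrative stand-in for a proper band-pass filter)."""
    sigma = 1.0 / (2.0 * np.pi * fmax * dt)
    return gaussian_filter1d(data, sigma, axis=-1)

def multistep_fwi(model0, data, freq_bands, invert_band):
    """Sketch of a multi-step FWI strategy: invert low-pass-filtered data
    first to recover long-wavelength structure, then refine the model
    with progressively higher frequency bands."""
    model = model0.copy()
    for fmax in freq_bands:               # e.g. [3.0, 6.0, 12.0] Hz
        data_f = lowpass(data, fmax)      # restrict data to the current band
        model = invert_band(model, data_f, fmax)   # hypothetical band solver
    return model
```

The rationale is the one stated in the abstract: the low bands constrain the long-wavelength model components first, reducing the dependence on the initial model before higher frequencies sharpen the result.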
Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki
2008-08-01
Integration of ultrasonic measurement and numerical simulation is a possible way to break through the limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector, determined from measured and computed Doppler velocities at feedback points, are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating the low time resolution of ultrasonic measurement were investigated in a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53%, respectively, in the feedback domain covering the aneurysm. The local maximum wall shear stress was estimated at the proper position and with a value within 1% deviation. A properly designed intermittent feedback, applied only at the times when measurement data were obtained, had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcoming the insufficient time resolution of ultrasonic measurement.
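The feedback mechanism can be pictured with the minimal sketch below: a body-force-like term proportional to the velocity error at the feedback points is added to the momentum-equation right-hand side, and intermittent feedback simply skips the term at steps without measurement data. The proportional form and the gain `K_v` are illustrative assumptions, not the paper's optimal-estimation formulation.

```python
import numpy as np

def umi_feedback(u_computed, u_measured, mask, K_v):
    """Sketch of a UMI-style feedback signal: at feedback points
    (mask == True) a term proportional to the velocity error steers the
    simulation toward the measured Doppler velocities."""
    err = np.where(mask, u_computed - u_measured, 0.0)
    return -K_v * err   # add this term to the momentum-equation RHS

def rhs_with_feedback(rhs, u, u_meas, mask, K_v, have_measurement):
    """Intermittent feedback: apply the signal only at time steps where
    a measurement frame exists, as in the abstract's intermittent scheme."""
    if have_measurement:
        rhs = rhs + umi_feedback(u, u_meas, mask, K_v)
    return rhs
```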