A study of pressure-based methodology for resonant flows in non-linear combustion instabilities
NASA Technical Reports Server (NTRS)
Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.
1992-01-01
This paper presents a systematic assessment of a wide variety of spatial and temporal differencing schemes applied on nonstaggered grids by pressure-based methods for fast transient flow problems. The present study finds that for steady-state flow problems, pressure-based methods can be very competitive with density-based methods. For transient flow problems, pressure-based methods using the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.
Yao, Qian; Cao, Xiao-Mei; Zong, Wen-Gang; Sun, Xiao-Hui; Li, Ze-Rong; Li, Xiang-Yuan
2018-05-31
The isodesmic reaction method is applied to calculate the potential energy surface (PES) along the reaction coordinates and the rate constants of the barrierless reactions for unimolecular dissociation reactions of alkanes into two alkyl radicals and their reverse recombination reactions. The reaction class is divided into 10 subclasses depending upon the type of carbon atoms in the reaction centers. A correction scheme based on isodesmic reaction theory is proposed to correct the PESs at the UB3LYP/6-31+G(d,p) level. To validate the accuracy of this scheme, a comparison of the PESs at the B3LYP level and the corrected PESs with the PESs at the CASPT2/aug-cc-pVTZ level is performed for 13 representative reactions; the deviations of the PESs at the B3LYP level are up to 35.18 kcal/mol and are reduced to within 2 kcal/mol after correction, indicating that the PESs for barrierless reactions in a subclass can be calculated with meaningful accuracy at a low level of ab initio theory using our correction scheme. High-pressure-limit rate constants and pressure-dependent rate constants of these reactions are calculated based on their corrected PESs, and the results show that the pressure dependence of the rate constants cannot be ignored, especially at high temperatures. Furthermore, the impact of molecular size on the pressure-dependent rate constants of decomposition reactions of alkanes and their reverse reactions has been studied. The present work provides an effective method to generate meaningfully accurate PESs for large molecular systems.
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening, and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
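The grid-transfer step at the heart of CGP can be illustrated with a minimal one-dimensional analogue. This is not the paper's finite-element mapping; it is a sketch assuming simple injection for restriction and linear interpolation for prolongation:

```python
import numpy as np

def restrict(fine):
    """Injection: keep every other fine-grid node (assumes an odd number
    of fine nodes so endpoints are preserved)."""
    return fine[::2]

def prolong(coarse, n_fine):
    """Linear interpolation of a coarse-grid field back to the fine grid."""
    xc = np.linspace(0.0, 1.0, len(coarse))
    xf = np.linspace(0.0, 1.0, n_fine)
    return np.interp(xf, xc, coarse)
```

For a linear field the round trip restrict-then-prolong is exact, which is the kind of consistency a CGP-style mapping between the velocity and pressure grids relies on.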
NASA Astrophysics Data System (ADS)
Piatkowski, Marian; Müthing, Steffen; Bastian, Peter
2018-03-01
In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The major focus of the paper is threefold: i) We propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG. ii) We present a novel postprocessing technique in the Helmholtz projection step based on H(div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting, and ensures that the projected velocity satisfies the discrete continuity equation exactly. As a consequence it also provides local mass conservation of the projected velocity. iii) Numerical results demonstrate the properties of the scheme for different polynomial degrees applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular we address second-order convergence in time of the splitting scheme as well as its long-time stability.
Investigation of Convection and Pressure Treatment with Splitting Techniques
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei; Liou, Meng-Sing
1995-01-01
Treatment of convective and pressure fluxes in the Euler and Navier-Stokes equations using splitting formulas for convective velocity and pressure is investigated. Two schemes - controlled variation scheme (CVS) and advection upstream splitting method (AUSM) - are explored for their accuracy in resolving sharp gradients in flows involving moving or reflecting shock waves as well as a one-dimensional combusting flow with a strong heat release source term. For two-dimensional compressible flow computations, these two schemes are implemented in one of the pressure-based algorithms, whose very basis is the separate treatment of convective and pressure fluxes. For the convective fluxes in the momentum equations as well as the estimation of mass fluxes in the pressure correction equation (which is derived from the momentum and continuity equations) of the present algorithm, both first- and second-order (with minmod limiter) flux estimations are employed. Some issues resulting from the conventional use in pressure-based methods of a staggered grid, for the location of velocity components and pressure, are also addressed. Using the second-order fluxes, both CVS and AUSM type schemes exhibit sharp resolution. Overall, the combination of upwinding and splitting for the convective and pressure fluxes separately exhibits robust performance for a variety of flows and is particularly amenable to adoption in pressure-based methods.
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1993-01-01
A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals and in some cases surpasses that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the name of the scheme is coined as Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.
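The interface Mach number and pressure splitting described above can be sketched for the 1-D Euler equations as follows. This is a minimal reading of the Liou-Steffen construction; the pressure-splitting polynomial shown is one common variant, so treat the exact coefficients as an assumption:

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def split_mach(M):
    """van Leer-type split Mach numbers M+ and M- used in AUSM."""
    if abs(M) <= 1.0:
        return 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2
    return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))

def split_pressure(M, p):
    """AUSM pressure splitting (second-degree polynomial variant)."""
    if abs(M) <= 1.0:
        pp = 0.25 * p * (M + 1.0) ** 2 * (2.0 - M)
        pm = 0.25 * p * (M - 1.0) ** 2 * (2.0 + M)
        return pp, pm
    return 0.5 * p * (M + abs(M)) / M, 0.5 * p * (M - abs(M)) / M

def ausm_flux(rho_L, u_L, p_L, rho_R, u_R, p_R):
    """1-D AUSM interface flux for the Euler equations."""
    a_L = np.sqrt(GAMMA * p_L / rho_L)
    a_R = np.sqrt(GAMMA * p_R / rho_R)
    H_L = GAMMA / (GAMMA - 1.0) * p_L / rho_L + 0.5 * u_L ** 2
    H_R = GAMMA / (GAMMA - 1.0) * p_R / rho_R + 0.5 * u_R ** 2
    Mp, _ = split_mach(u_L / a_L)
    _, Mm = split_mach(u_R / a_R)
    m_half = Mp + Mm                      # interface advection Mach number
    # upwind the convective vector by the sign of m_half
    phi = (np.array([rho_L * a_L, rho_L * a_L * u_L, rho_L * a_L * H_L])
           if m_half >= 0.0 else
           np.array([rho_R * a_R, rho_R * a_R * u_R, rho_R * a_R * H_R]))
    pp, _ = split_pressure(u_L / a_L, p_L)
    _, pm = split_pressure(u_R / a_R, p_R)
    flux = m_half * phi
    flux[1] += pp + pm                    # pressure acts only on momentum
    return flux
```

A quick sanity check: for a uniform supersonic state the interface flux reduces to the exact Euler flux of that state, since the splitting becomes pure upwinding.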
Thermodynamic evaluation of transonic compressor rotors using the finite volume approach
NASA Technical Reports Server (NTRS)
Moore, J.; Nicholson, S.; Moore, J. G.
1985-01-01
Research at NASA Lewis Research Center gave the opportunity to incorporate new control volumes in the Denton 3-D finite-volume time marching code. For duct flows, the new control volumes require no transverse smoothing and this allows calculations with large transverse gradients in properties without significant numerical total pressure losses. Possibilities for improving the Denton code to obtain better distributions of properties through shocks were demonstrated. Much better total pressure distributions through shocks are obtained when the interpolated effective pressure, needed to stabilize the solution procedure, is used to calculate the total pressure. This simple change largely eliminates the undershoot in total pressure downstream of a shock. Overshoots and undershoots in total pressure can then be further reduced by a factor of 10 by adopting the effective density method, rather than the effective pressure method. Use of a Mach number dependent interpolation scheme for pressure then removes the overshoot in static pressure downstream of a shock. The stability of interpolation schemes used for the calculation of effective density is analyzed and a Mach number dependent scheme is developed, combining the advantages of the correct perfect gas equation for subsonic flow with the stability of 2-point and 3-point interpolation schemes for supersonic flow.
A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations
NASA Technical Reports Server (NTRS)
Ghosh, Amitabha
1997-01-01
This report discusses some analytical procedures to enhance the real-time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12-Foot Pressure Tunnel. WICS calculations involve solving large linear systems reasonably quickly, which necessitates exploring further improvements in solution time. This paper therefore presents some of the associated theory of the solution of linear systems. It then discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.
Calculations of separated 3-D flows with a pressure-staggered Navier-Stokes equations solver
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
A Navier-Stokes equations solver based on a pressure correction method with a pressure-staggered mesh and calculations of separated three-dimensional flows are presented. It is shown that the velocity-pressure decoupling, which occurs when various pressure correction algorithms are used on pressure-staggered meshes, is caused by the ill-conditioned discrete pressure correction equation. The use of a partial differential equation for the incremental pressure eliminates the velocity-pressure decoupling mechanism by itself and yields accurate numerical results. Example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a square duct with a 90-degree bend. For the lid-driven cavity flow, the present numerical results compare more favorably with the measured data than those obtained using a formally third-order accurate quadratic upwind interpolation scheme. For the curved duct flow, the present numerical method yields a grid-independent solution with a very small number of grid points. The calculated velocity profiles are in good agreement with the measured data.
NASA Technical Reports Server (NTRS)
Partridge, William P.; Laurendeau, Normand M.
1997-01-01
We have experimentally assessed the quantitative nature of planar laser-induced fluorescence (PLIF) measurements of NO concentration in a unique atmospheric pressure, laminar, axial inverse diffusion flame (IDF). The PLIF measurements were assessed relative to a two-dimensional array of separate laser saturated fluorescence (LSF) measurements. We demonstrated and evaluated several experimentally-based procedures for enhancing the quantitative nature of PLIF concentration images. Because these experimentally-based PLIF correction schemes require only the ability to make PLIF and LSF measurements, they produce a more broadly applicable PLIF diagnostic compared to numerically-based correction schemes. We experimentally assessed the influence of interferences on both narrow-band and broad-band fluorescence measurements at atmospheric and high pressures. Optimum excitation and detection schemes were determined for the LSF and PLIF measurements. Single-input and multiple-input, experimentally-based PLIF enhancement procedures were developed for application in test environments with both negligible and significant quench-dependent error gradients. Each experimentally-based procedure provides an enhancement of approximately 50% in the quantitative nature of the PLIF measurements, and results in concentration images nominally as quantitative as LSF point measurements. These correction procedures can be applied to other species, including radicals, for which no experimental data are available from which to implement numerically-based PLIF enhancement procedures.
Air-braked cycle ergometers: validity of the correction factor for barometric pressure.
Finn, J P; Maxwell, B F; Withers, R T
2000-10-01
Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output correction for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level was applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0%, but this included up to 0.73% of propagated error associated with errors in the measurement of: (a) temperature, (b) relative humidity, (c) barometric pressure, and (d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the +/- 2.0% of true load specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
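The physical basis of such a correction can be sketched as follows. This is an illustrative dry-air ideal-gas version only (the study also accounts for temperature and humidity measurement errors, and the sign convention assumes the display was calibrated at reference density):

```python
R_D = 287.05  # specific gas constant of dry air, J/(kg K)

def air_density(p_pa, t_kelvin):
    """Dry-air ideal-gas density; humidity is neglected in this sketch."""
    return p_pa / (R_D * t_kelvin)

def true_power(displayed_w, p_pa, t_kelvin,
               p_ref=101325.0, t_ref=293.15):
    """An air-braked ergometer dissipates P = k * rho * omega^3 at a given
    flywheel speed, so a display calibrated at a reference air density can
    be corrected by the ratio of ambient to reference density."""
    return displayed_w * air_density(p_pa, t_kelvin) / air_density(p_ref, t_ref)
```

At the reference conditions the correction factor is exactly 1; at a lower barometric pressure the same flywheel speed dissipates proportionally less power.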
Dahlgren, Björn; Reif, Maria M; Hünenberger, Philippe H; Hansen, Niels
2012-10-09
The raw ionic solvation free energies calculated on the basis of atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [Kastenholz, M. A.; Hünenberger, P. H. J. Chem. Phys. 2006, 124, 224501 and Reif, M. M.; Hünenberger, P. H. J. Chem. Phys. 2011, 134, 144104], the application of an appropriate correction scheme allows for a conversion of the methodology-dependent raw data into methodology-independent results. In this work, methodology-independent derivative thermodynamic hydration and aqueous partial molar properties are calculated for the Na(+) and Cl(-) ions at P° = 1 bar and T° = 298.15 K, based on the SPC water model and on ion-solvent Lennard-Jones interaction coefficients previously reoptimized against experimental hydration free energies. The hydration parameters considered are the hydration free energy and enthalpy. The aqueous partial molar parameters considered are the partial molar entropy, volume, heat capacity, volume-compressibility, and volume-expansivity. Two alternative calculation methods are employed to access these properties. Method I relies on the difference in average volume and energy between two aqueous systems involving the same number of water molecules, either in the absence or in the presence of the ion, along with variations of these differences corresponding to finite pressure or/and temperature changes. Method II relies on the calculation of the hydration free energy of the ion, along with variations of this free energy corresponding to finite pressure or/and temperature changes. Both methods are used considering two distinct variants in the application of the correction scheme.
In variant A, the raw values from the simulations are corrected after the application of finite differences in pressure or/and temperature, based on correction terms specifically designed for derivative parameters at P° and T°. In variant B, these raw values are corrected prior to differentiation, based on corresponding correction terms appropriate for the different simulation pressures P and temperatures T. The results corresponding to the different calculation schemes show that, except for the hydration free energy itself, accurate methodological independence and quantitative agreement with even the most reliable experimental parameters (ion-pair properties) are not yet reached. Nevertheless, approximate internal consistency and qualitative agreement with experimental results can be achieved, but only when an appropriate correction scheme is applied, along with a careful consideration of standard-state issues. In this sense, the main merit of the present study is to set a clear framework for these types of calculations and to point toward directions for future improvements, with the ultimate goal of reaching a consistent and quantitative description of single-ion hydration thermodynamics in molecular dynamics simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franzelli, B.; Riber, E.; Sanjose, M.
A reduced two-step scheme (called 2S-KERO-BFER) for kerosene-air premixed flames is presented in the context of Large Eddy Simulation of reacting turbulent flows in industrial applications. The chemical mechanism is composed of two reactions corresponding to the fuel oxidation into CO and H2O, and the CO-CO2 equilibrium. To ensure the validity of the scheme for rich combustion, the pre-exponential constants of the two reactions are tabulated versus the local equivalence ratio. The fuel and oxidizer exponents are chosen to guarantee the correct dependence of laminar flame speed with pressure. Due to a lack of experimental results, the detailed mechanism of Dagaut, composed of 209 species and 1673 reactions, and the skeletal mechanism of Luche, composed of 91 species and 991 reactions, have been used to validate the reduced scheme. Computations of one-dimensional laminar flames have been performed with the 2S-KERO-BFER scheme using the CANTERA and COSILAB software packages for a wide range of pressures ([1; 12] atm), fresh gas temperatures ([300; 700] K), and equivalence ratios ([0.6; 2.0]). Results show that the flame speed is correctly predicted for the whole range of parameters, showing a maximum for stoichiometric flames, a decrease for rich combustion and a satisfactory pressure dependence. The burnt gas temperature and the dilution by Exhaust Gas Recirculation are also well reproduced. Moreover, the results for ignition delay time are in good agreement with the experiments.
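The rate expressions in such a two-step scheme have the generic Arrhenius form with fuel and oxidizer exponents; a sketch with placeholder constants (in 2S-KERO-BFER the pre-exponential is additionally tabulated against the local equivalence ratio, which is omitted here):

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(A, Ea, T, conc_fuel, conc_ox, n_fuel, n_ox):
    """Generic reaction rate q = A [F]^nF [O]^nO exp(-Ea / (R T)).
    The exponents n_fuel and n_ox are the tunable knobs that set the
    pressure dependence of the laminar flame speed; all values passed
    in here are placeholders, not the published scheme constants."""
    return (A * conc_fuel ** n_fuel * conc_ox ** n_ox
            * math.exp(-Ea / (R_GAS * T)))
```

With a positive activation energy the rate grows monotonically with temperature, which is the behavior the tabulated pre-exponentials modulate across equivalence ratios.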
a Cell Vertex Algorithm for the Incompressible Navier-Stokes Equations on Non-Orthogonal Grids
NASA Astrophysics Data System (ADS)
Jessee, J. P.; Fiveland, W. A.
1996-08-01
The steady, incompressible Navier-Stokes (N-S) equations are discretized using a cell vertex, finite volume method. Quadrilateral and hexahedral meshes are used to represent two- and three-dimensional geometries respectively. The dependent variables include the Cartesian components of velocity and pressure. Advective fluxes are calculated using bounded, high-resolution schemes with a deferred correction procedure to maintain a compact stencil. This treatment ensures bounded, non-oscillatory solutions while maintaining low numerical diffusion. The mass and momentum equations are solved with the projection method on a non-staggered grid. The coupling of the pressure and velocity fields is achieved using the Rhie and Chow interpolation scheme modified to provide solutions independent of time steps or relaxation factors. An algebraic multigrid solver is used for the solution of the implicit, linearized equations. A number of test cases are analysed and presented. The standard benchmark cases include a lid-driven cavity, flow through a gradual expansion and laminar flow in a three-dimensional curved duct. Predictions are compared with data, results of other workers and with predictions from a structured, cell-centred, control volume algorithm whenever applicable. Sensitivity of results to the advection differencing scheme is investigated by applying a number of higher-order flux limiters: the MINMOD, MUSCL, OSHER, CLAM and SMART schemes. As expected, studies indicate that higher-order schemes largely mitigate the diffusion effects of first-order schemes but also show no clear preference among the higher-order schemes themselves with respect to accuracy. The effect of the deferred correction procedure on global convergence is discussed.
Location verification algorithm of wearable sensors for wireless body area networks.
Wang, Hua; Wen, Yingyou; Zhao, Dazhe
2018-01-01
Knowledge of the location of sensor devices is crucial for many medical applications of wireless body area networks, as wearable sensors are designed to monitor vital signs of a patient while the wearer still has the freedom of movement. However, clinicians or patients can misplace the wearable sensors, thereby causing a mismatch between their physical locations and their correct target positions. An error of more than a few centimeters raises the risk of mistreating patients. The present study aims to develop a scheme to calculate and detect the position of wearable sensors without beacon nodes. A new scheme was proposed to verify the location of wearable sensors mounted on the patient's body by inferring differences in atmospheric air pressure and received signal strength indication measurements from wearable sensors. Extensive two-sample t tests were performed to validate the proposed scheme. The proposed scheme could easily recognize a 30-cm horizontal body range and a 65-cm vertical body range to correctly perform sensor localization and limb identification. All experiments indicate that the scheme is suitable for identifying wearable sensor positions in an indoor environment.
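The vertical part of such a scheme rests on the hydrostatic relation dp = -rho * g * dh: a sensor worn higher on the body reads a slightly lower absolute pressure. A toy sketch (this is an illustrative reconstruction, not the authors' algorithm; the 65-cm figure from the abstract is reused only to set a plausible threshold):

```python
RHO_AIR = 1.204   # air density near 20 C, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def height_difference_m(p_ref_pa, p_sensor_pa):
    """Hydrostatic estimate of how far the sensor sits above the
    reference sensor; a higher sensor reads a lower pressure."""
    return (p_ref_pa - p_sensor_pa) / (RHO_AIR * G)

def classify_vertical(p_ref_pa, p_sensor_pa, threshold_m=0.325):
    """Toy limb check using half the 65-cm resolvable vertical range
    reported in the abstract as an illustrative decision threshold."""
    dh = height_difference_m(p_ref_pa, p_sensor_pa)
    if dh > threshold_m:
        return "above reference"
    if dh < -threshold_m:
        return "below reference"
    return "near reference"
```

Note the scale involved: a 65 cm height difference corresponds to only about 8 Pa, which is why the paper combines pressure with received signal strength rather than relying on pressure alone.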
A predictor-corrector scheme for vortex identification
NASA Technical Reports Server (NTRS)
Singer, Bart A.; Banks, David C.
1994-01-01
A new algorithm for identifying and characterizing vortices in complex flows is presented. The scheme uses both the vorticity and pressure fields. A skeleton line along the center of a vortex is produced by a two-step predictor-corrector scheme. The technique uses the vector field to move in the direction of the skeleton line and the scalar field to correct the location in the plane perpendicular to the skeleton line. A general vortex cross section can be concisely defined with five parameters at each point along the skeleton line. The details of the method and examples of its use are discussed.
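One step of the two-step predictor-corrector idea can be sketched with callable vorticity and pressure fields: predict along the vorticity vector, then correct within the perpendicular plane toward a pressure minimum. The gradient-descent corrector and all parameter values here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def num_grad(f, x, h=1e-5):
    """Central-difference gradient of a scalar field f at point x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def skeleton_step(x, vorticity, pressure, ds=0.1, n_corr=20, lr=0.05):
    """One predictor-corrector step toward a vortex skeleton line.
    Predictor: advance distance ds along the normalized vorticity vector.
    Corrector: descend the pressure gradient restricted to the plane
    perpendicular to the local line direction."""
    w = vorticity(x)
    t = w / np.linalg.norm(w)          # local skeleton-line direction
    xp = x + ds * t                    # predictor
    for _ in range(n_corr):
        g = num_grad(pressure, xp)
        g_perp = g - np.dot(g, t) * t  # keep the correction in-plane
        xp = xp - lr * g_perp
    return xp
```

For an idealized columnar vortex (vorticity along z, pressure minimum on the axis), repeated steps advance along z while the corrector pulls the point onto the axis.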
An accurate front capturing scheme for tumor growth models with a free boundary limit
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Tang, Min; Wang, Li; Zhou, Zhennan
2018-07-01
We consider a class of tumor growth models under the combined effects of density-dependent pressure and cell multiplication, with a free boundary model as its singular limit when the pressure-density relationship becomes highly nonlinear. In particular, the constitutive law connecting pressure p and density ρ is p(ρ) = m/(m−1) ρ^(m−1), and when m ≫ 1, the cell density ρ may evolve its support according to a pressure-driven geometric motion with a sharp interface along its boundary. The nonlinearity and degeneracy in the diffusion bring great challenges in numerical simulations. Prior to the present paper, there was a lack of a standard mechanism to numerically capture the front propagation speed as m ≫ 1. In this paper, we develop a numerical scheme based on a novel prediction-correction reformulation that can accurately approximate the front propagation even when the nonlinearity is extremely strong. We show that the semi-discrete scheme naturally connects to the free boundary limit equation as m → ∞. With proper spatial discretization, the fully discrete scheme has improved stability, preserves positivity, and can be implemented without nonlinear solvers. Finally, extensive numerical examples in both one and two dimensions are provided to verify the claimed properties in various applications.
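The stiffness that motivates the scheme is visible directly in the constitutive law: for large m, the pressure is vanishingly small below ρ = 1 and rises steeply near it, which is what produces the sharp interface in the limit. A one-line check:

```python
def pressure(rho, m):
    """Constitutive law p(rho) = m/(m-1) * rho**(m-1) from the model."""
    return m / (m - 1.0) * rho ** (m - 1.0)
```

Evaluating at ρ = 0.5 for m = 100 gives a value on the order of 1e-30, while at ρ = 1 the pressure stays close to 1 for every m; this near-degenerate behavior is exactly why standard diffusion solvers struggle to capture the front speed.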
Highly Parallel Alternating Directions Algorithm for Time Dependent Problems
NASA Astrophysics Data System (ADS)
Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.
2011-11-01
In our work, we consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two and three dimensional parabolic problems in which the second-order derivative, with respect to each space variable, is treated implicitly while the other variable is made explicit at each time sub-step. In order to achieve a good parallel performance the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
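Each of those one-dimensional second-order elliptic solves reduces to a tridiagonal linear system, which is what makes the direction-splitting substitution cheap and parallelizable by grid lines. A standard Thomas-algorithm sketch (illustrative, not the authors' MPI implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all length-n arrays;
    a[0] and c[-1] are ignored). O(n): forward sweep then
    back substitution. Assumes the system is well conditioned
    (e.g. diagonally dominant, as 1-D elliptic discretizations are)."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Replacing one large 2-D or 3-D Poisson solve per time step with a set of independent line solves like this is the source of the reported parallel efficiency.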
Investigation of supersonic jet plumes using an improved two-equation turbulence model
NASA Technical Reports Server (NTRS)
Lakshmanan, B.; Abdol-Hamid, Khaled S.
1994-01-01
Supersonic jet plumes were studied using a two-equation turbulence model employing corrections for compressible dissipation and pressure-dilatation. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that two-equation models employing corrections for compressible dissipation and pressure-dilatation yield improved agreement with the experimental data. In addition, the numerical study demonstrates that the computed results are sensitive to the effect of grid refinement and insensitive to the type of velocity profiles used at the inflow boundary for the cases considered in the present study.
Signal processing of aircraft flyover noise
NASA Technical Reports Server (NTRS)
Kelly, Jeffrey J.
1991-01-01
A detailed analysis of signal processing concerns for measuring aircraft flyover noise is presented. Development of a de-Dopplerization scheme for both corrected time history and spectral data is discussed, along with an analysis of motion effects on measured spectra. A computer code was written to implement the de-Dopplerization scheme. Input to the code is the aircraft position data and the pressure time histories. To facilitate ensemble averaging, a uniform level flyover is considered, but the code can accept more general flight profiles. The effect of spectral smearing and its removal is discussed. Using data acquired from an XV-15 tilt rotor flyover test, comparisons are made between the measured and corrected spectra. Frequency shifts are accurately accounted for by the method. It is shown that correcting for spherical spreading, Doppler amplitude, and frequency can give some idea about source directivity. The analysis indicated that smearing increases with frequency and is more severe on approach than on recession.
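The frequency part of such a correction follows from the standard moving-source Doppler relation f_obs = f_src / (1 − M cos θ). A sketch (angle conventions vary between texts, so treat the geometry definition here as an assumption rather than the paper's exact formulation):

```python
import numpy as np

def dedopplerized_frequency(f_observed, v_source, c, theta_rad):
    """Recover the emitted frequency from a ground measurement of a
    moving source. theta_rad is the angle between the flight-path
    direction and the source-to-observer line at emission time:
    theta = 0 means the aircraft is flying straight at the observer."""
    mach = v_source / c
    return f_observed * (1.0 - mach * np.cos(theta_rad))
```

On approach (theta near 0) the measured frequency is shifted up and the correction shifts it back down; at overhead passage (theta near 90 degrees) the shift vanishes, which matches the abstract's observation that smearing is worse on approach than on recession.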
Viscous compressible flow direct and inverse computation and illustrations
NASA Technical Reports Server (NTRS)
Yang, T. T.; Ntone, F.
1986-01-01
An algorithm for laminar and turbulent viscous compressible two-dimensional flows is presented. For the application of precise boundary conditions over an arbitrary body surface, a body-fitted coordinate system is used in the physical plane. A thin-layer approximation of the Navier-Stokes equations is introduced to keep the viscous terms relatively simple. The flow field computation is performed in the transformed plane. A factorized, implicit scheme is used to facilitate the computation. Sample calculations for Couette flow, developing pipe flow, an isolated airfoil, two-dimensional compressor cascade flow, and segmental compressor blade design are presented. To a certain extent, the effective use of the direct solver depends on the user's skill in setting up the gridwork, the time step size and the choice of the artificial viscosity. The design feature of the algorithm, an iterative scheme to correct geometry for a specified surface pressure distribution, works well for subsonic flows. A more elaborate correction scheme is required in treating transonic flows where local shock waves may be involved.
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases.
The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivative of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process as described is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
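The bootstrap described above can be checked on a toy problem. The sketch below is a hypothetical stand-in (all values invented, not from the report): for 1D linear acoustics against a rigid wall at x = 0, the standing wave p(x,t) = cos(kx)cos(ckt) satisfies p_tt = c²p_xx with no flow through the wall, and converting time derivatives of the velocity condition into normal pressure derivatives via the momentum equation forces every odd-order ∂ⁿp/∂xⁿ to vanish at the wall.

```python
import numpy as np

# Toy check of the derivative bootstrap: 1D linear acoustics at a rigid wall at
# x = 0 (invented numbers; a hypothetical stand-in for the surface treatment above).
# The standing wave p(x,t) = cos(k x) cos(c k t) satisfies p_tt = c^2 p_xx and the
# no-through-flow condition, so all odd normal pressure derivatives vanish at x = 0.
c, k, t = 340.0, 2.0, 0.1

def p(xx):
    return np.cos(k * xx) * np.cos(c * k * t)

h = 1e-3
p1 = (p(h) - p(-h)) / (2 * h)                                     # first derivative
p3 = (p(2 * h) - 2 * p(h) + 2 * p(-h) - p(-2 * h)) / (2 * h ** 3)  # third derivative
print(abs(p1) < 1e-8, abs(p3) < 1e-4)  # prints: True True
```

Both finite-difference estimates vanish because the governing equations force the odd wall-normal derivatives to zero, which is exactly the information the bootstrap feeds back into the boundary treatment.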
Density enhancement mechanism of upwind schemes for low Mach number flows
NASA Astrophysics Data System (ADS)
Lin, Bo-Xi; Yan, Chao; Chen, Shu-Sheng
2018-06-01
Many all-speed Roe schemes have been proposed to improve performance at low speeds. Among them, the F-Roe and T-D-Roe schemes have been found to give incorrect density fluctuations in low Mach number flows, which are expected to scale with the square of the Mach number. Asymptotic analysis reveals how the density fluctuation problem arises from the incorrect order of the term $\tilde{\rho}\tilde{a}\tilde{U}\Delta U$ in the energy equation. It is known that changing the upwind scheme coefficients of the pressure-difference dissipation term $D^p$ and the velocity-difference dissipation term in the momentum equation $D^{\rho U}$ to the orders of $O(c^{-1})$ and $O(c^0)$, respectively, can improve the accuracy of pressure and velocity at low speeds. This paper shows that corresponding changes in the energy equation can also improve the density accuracy at low speeds. We apply this modification to a recently proposed scheme, TV-MAS, to obtain a new scheme, TV-MAS2. Unsteady Gresho vortex flow, double shear-layer flow, and low Mach number flows over an inviscid cylinder and a NACA0012 airfoil show that the energy equation modification in these schemes recovers the expected square-Mach scaling of density fluctuations, in good agreement with the corresponding asymptotic analysis. This density correction is therefore expected to be widely applicable in all-speed compressible flow solvers.
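The expected square-Mach scaling can be motivated by a back-of-the-envelope isentropic estimate (a sketch with invented values, not the paper's asymptotic analysis): pressure fluctuations in low Mach number flow are O(ρu²), and the acoustic relation δρ = δp/c² then gives δρ/ρ = Ma².

```python
# Back-of-the-envelope check of the scaling above: pressure fluctuations in low
# Mach number flow are O(rho u^2); with the acoustic relation drho = dp / c^2,
# the relative density fluctuation is exactly Ma^2. All numbers are illustrative.
rho, c = 1.2, 340.0
ratios = []
for Ma in (1e-1, 1e-2, 1e-3):
    u = Ma * c
    dp = rho * u * u            # O(rho u^2) pressure fluctuation
    drho = dp / c ** 2          # acoustic density fluctuation
    ratios.append((drho / rho) / Ma ** 2)
print(ratios)                   # each ratio is 1: density fluctuations scale as Ma^2
```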
Effect of Combined Loading Due to Bending and Internal Pressure on Pipe Flaw Evaluation Criteria
NASA Astrophysics Data System (ADS)
Miura, Naoki; Sakai, Shinsuke
Considering a rule for the rationalization of maintenance of Light Water Reactor piping, reliable flaw evaluation criteria are essential for determining how a detected flaw will be detrimental to continuous plant operation. Ductile fracture is one of the dominant failure modes that must be considered for carbon steel piping and can be analyzed by elastic-plastic fracture mechanics. Analytical efforts have provided various flaw evaluation criteria using load correction factors, such as the Z-factors in the JSME codes on fitness-for-service for nuclear power plants and in Section XI of the ASME Boiler and Pressure Vessel Code. The present Z-factors were conventionally determined with conservatism and simplicity in mind; however, the effect of internal pressure, which is an important factor under actual plant conditions, was not adequately considered. Recently, a J-estimation scheme, LBB.ENGC, for the ductile fracture analysis of circumferentially through-wall-cracked pipes subjected to combined loading was developed for more accurate prediction under more realistic conditions. This method explicitly incorporates the contributions of both bending and tension due to internal pressure by means of a scheme that is compatible with an arbitrary combined-loading history. In this study, the effect of internal pressure on the flaw evaluation criteria was investigated using the new J-estimation scheme. The Z-factor obtained in this study was compared with the presently used Z-factors, and the predictability of the current flaw evaluation criteria was quantitatively evaluated in consideration of the internal pressure.
A Rotational Pressure-Correction Scheme for Incompressible Two-Phase Flows with Open Boundaries
Dong, S.; Wang, X.
2016-01-01
Two-phase outflows refer to situations where the interface formed between two immiscible incompressible fluids passes through open portions of the domain boundary. We present several new forms of open boundary conditions for two-phase outflow simulations within the phase field framework, as well as a rotational pressure-correction-based algorithm for numerically treating these open boundary conditions. Our algorithm gives rise to linear algebraic systems for the velocity and the pressure that involve only constant and time-independent coefficient matrices after discretization, despite the variable density and variable viscosity of the two-phase mixture. By comparing simulation results with theory and experimental data, we show that the method produces physically accurate results. We also present numerical experiments to demonstrate the long-term stability of the method in situations where large density contrast, large viscosity contrast, and backflows occur at the two-phase open boundaries. PMID:27163909
Evaluation of a Multigrid Scheme for the Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.
2004-01-01
A fast multigrid solver for the steady, incompressible Navier-Stokes equations is presented. The multigrid solver is based upon a factorizable discrete scheme for the velocity-pressure form of the Navier-Stokes equations. This scheme correctly distinguishes between the advection-diffusion and elliptic parts of the operator, allowing efficient smoothers to be constructed. To evaluate the multigrid algorithm, solutions are computed for flow over a flat plate, parabola, and a Karman-Trefftz airfoil. Both nonlifting and lifting airfoil flows are considered, with a Reynolds number range of 200 to 800. Convergence and accuracy of the algorithm are discussed. Using Gauss-Seidel line relaxation in alternating directions, multigrid convergence behavior approaching that of O(N) methods is achieved. The computational efficiency of the numerical scheme is compared with that of Runge-Kutta and implicit upwind based multigrid methods.
Partial Molar Volumes of Aqua Ions from First Principles.
Wiktor, Julia; Bruneval, Fabien; Pasquarello, Alfredo
2017-08-08
Partial molar volumes of ions in water solution are calculated through pressures obtained from ab initio molecular dynamics simulations. The correct definition of pressure in charged systems subject to periodic boundary conditions requires access to the variation of the electrostatic potential upon a change of volume. We develop a scheme for calculating such a variation in liquid systems by setting up an interface between regions of different density. This also allows us to determine the absolute deformation potentials for the band edges of liquid water. With the properly defined pressures, we obtain partial molar volumes of a series of aqua ions in very good agreement with experimental values.
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage due to wind-related disasters is increasing owing to global climate change. Many studies have examined the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires very good command of the software, the correct input parameters, and an optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must therefore be performed to determine a suitable cell count for the final model, ensuring an accurate result with less computing time. This study demonstrates the numerical procedure for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (Cp) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used and produces results as accurate as the finer grid scheme, as the difference in Cp values was found to be insignificant.
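The acceptance logic of such a grid sensitivity test can be sketched as below. This is a hypothetical illustration: the Cp profiles are synthetic placeholders, not the study's data, and the tolerance is an invented threshold for "insignificant" difference.

```python
import numpy as np

# Hypothetical grid-sensitivity check in the spirit of the study above: accept the
# cheaper mesh when its pressure coefficients (Cp) differ insignificantly from the
# finest mesh. The Cp profiles are synthetic placeholders, not the paper's data.
x = np.linspace(0.0, 1.0, 21)                     # stations along the wall/roof profile
cp_fine = -1.2 * np.sin(np.pi * x)                # reference profile (finest grid)
cp_medium = cp_fine + 0.01 * np.cos(np.pi * x)    # small discretization difference
cp_coarse = cp_fine + 0.15 * np.cos(np.pi * x)    # large discretization difference

def grid_converged(cp, cp_ref, tol=0.05):
    """An insignificant Cp difference means the cheaper grid is acceptable."""
    return float(np.max(np.abs(cp - cp_ref))) < tol

print(grid_converged(cp_medium, cp_fine))   # prints: True  (medium grid suffices)
print(grid_converged(cp_coarse, cp_fine))   # prints: False (coarse grid too inaccurate)
```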
Plasma equilibrium with fast ion orbit width, pressure anisotropy, and toroidal flow effects
Gorelenkov, Nikolai N.; Zakharov, Leonid E.
2018-04-27
Here, we formulate the problem of tokamak plasma equilibrium including the toroidal flow and fast ion (or energetic particle, EP) pressure anisotropy and the finite drift orbit width (FOW) effects. The problem is formulated via the standard Grad-Shafranov equation (GShE) amended by the solvability condition, which imposes physical constraints on the allowed spatial dependencies of the anisotropic pressure. The GShE problem employs the pressure coupling scheme and includes the dominant diagonal terms and non-diagonal corrections to the standard pressure tensor. The anisotropic tensor elements are obtained via the distribution function represented in a factorized form via the constants of motion. The considered effects on the plasma equilibrium are estimated analytically, where possible, to understand their importance for the GShE tokamak plasma problem.
A computer code for multiphase all-speed transient flows in complex geometries. MAST version 1.0
NASA Technical Reports Server (NTRS)
Chen, C. P.; Jiang, Y.; Kim, Y. M.; Shang, H. M.
1991-01-01
The operation of the MAST code, which computes transient solutions to the multiphase flow equations applicable to all-speed flows, is described. Two-phase flows are formulated based on the Eulerian-Lagrangian scheme, in which the continuous phase is described by the Navier-Stokes equations (or Reynolds equations for turbulent flows) and the dispersed phase is formulated by a Lagrangian tracking scheme. The numerical solution algorithm utilized for fluid flows is a newly developed pressure-implicit algorithm based on the operator-splitting technique in generalized nonorthogonal coordinates. This operator split allows separate operation on each of the variable fields to handle the pressure-velocity coupling. The resulting pressure-correction equation is hyperbolic in nature and is effective for Mach numbers ranging from the incompressible limit to supersonic flow regimes. The present code adopts a nonstaggered grid arrangement; thus, the velocity components and other dependent variables are collocated at the same grid points. A sequence of benchmark-quality problems, including incompressible, subsonic, transonic, supersonic, and gas-droplet two-phase flows, as well as spray-combustion problems, was performed to demonstrate the robustness and accuracy of the present code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedict, Lorin X.; Aberg, Daniel; Soderlind, Per
2015-10-26
We explore the use of particular variants of DFT + U and DFT + orbital polarization (OP) to calculate the electronic structure and magnetic properties of YCo5 under hydrostatic pressures up to 600 kbar. While the specific DFT + U (with U = 0.75 eV) and DFT + OP schemes we employ produce magneto-crystalline anisotropy energies for YCo5 in good agreement with experiments performed at ambient conditions, our DFT + U results are shown to greatly overestimate the pressure at which a high-spin to low-spin (HS-LS) transition is known to occur. In contrast, our DFT + OP results predict the HS-LS transition to occur at the same stress as DFT, and in better agreement with experiment. This sensitivity suggests that care should be taken when attempting to model magnetic properties with self-interaction and/or correlation corrections to DFT for this and related materials, and highlights the usefulness of moderate pressure as an additional parameter to vary when discriminating between candidate theoretical schemes.
2011-07-01
These results demonstrate the performance of the IOP-based BRDF correction scheme, which is composed of the R model along with the IOP retrieval. The scheme was applied to both oceanic and coastal waters; the results were very consistent qualitatively and quantitatively, and thus validate the IOP-based BRDF correction system.
An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particle trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers accurately describe the convection properties and enhance the vorticity and entropy capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed, and the combined scheme is tested on a constant stagnation pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced by the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.
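The diffusion-reduction claim above can be illustrated with a toy comparison (parameters invented, not from the paper): advect a sharp scalar profile once around a periodic domain with a first-order upwind Eulerian update versus exact Lagrangian particle tracking. The Eulerian result smears; the values carried by particle markers return unchanged.

```python
import numpy as np

# Toy comparison behind the diffusion-reduction idea above: advect a sharp scalar
# profile once around a periodic domain with (a) a first-order upwind Eulerian
# update and (b) exact Lagrangian particle tracking. Parameters are invented.
n, c, dt = 100, 1.0, 5e-3                         # CFL = c*dt/dx = 0.5
dx = 1.0 / n
x = np.arange(n) * dx
q0 = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)   # sharp top-hat profile

q = q0.copy()
for _ in range(int(round(1.0 / (c * dt)))):       # one full period
    q = q - c * dt / dx * (q - np.roll(q, 1))     # upwind update (diffusive)

# Lagrangian markers simply translate; the carried values return unchanged.
q_lagr = q0.copy()

print(q.max())       # smeared well below 1 by numerical diffusion
print(q_lagr.max())  # exactly 1.0: the particle solution preserves the profile
```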
Mass-corrections for the conservative coupling of flow and transport on collocated meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich
2016-01-15
Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions, to obtain local or even strong mass conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes, or higher-order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.
Yu, Yu-Ning; Doctor, Faiyaz; Fan, Shou-Zen; Shieh, Jiann-Shing
2018-04-13
During surgical procedures, the bispectral index (BIS) is a well-known measure used to determine the patient's depth of anesthesia (DOA). However, BIS readings can be subject to interference from many factors during surgery, and other parameters such as blood pressure (BP) and heart rate (HR) can provide more stable indicators. Nevertheless, anesthesiologists still consider BIS a primary measure to determine whether the patient is correctly anaesthetized, while relying on the other physiological parameters to monitor and ensure that the patient's status is maintained. The automatic control of administering anesthesia using intelligent control systems has been the subject of recent research in order to alleviate the burden on the anesthetist to manually adjust drug dosage in response to physiological changes for sustaining DOA. A system proposed for the automatic control of anesthesia based on type-2 Self-Organizing Fuzzy Logic Controllers (T2-SOFLCs) has been shown to be effective in the control of DOA under simulated scenarios, while contending with uncertainties due to signal noise and dynamic changes in the pharmacodynamic (PD) and pharmacokinetic (PK) effects of the drug on the body. This study considers both BIS and BP as part of an adaptive automatic control scheme, which can adjust to the monitoring of either parameter in response to changes in the availability and reliability of BIS signals during surgery. Simulations of different control schemes, using BIS data obtained during real surgical procedures to emulate noise and interference factors, have been conducted. The use of either or both combined parameters for controlling the delivery of Propofol to maintain safe target set points for DOA is evaluated. The results show that combining BIS and BP in the proposed adaptive control scheme can ensure that the target set points and the correct amount of drug in the body are maintained, even with the intermittent loss of BIS signal that could otherwise disrupt an automated control system.
NASA Astrophysics Data System (ADS)
Bakholdin, Igor
2018-02-01
Various models of a tube with elastic walls are investigated: with controlled pressure, filled with incompressible fluid, and filled with compressible gas. The non-linear theory of hyperelasticity is applied. The walls of the tube are described with a complete membrane model. It is proposed to use a linear plate model in order to take the bending resistance of the walls into account. The walls of the tube were previously treated as inviscid and incompressible; here, the compressibility of the wall material and the viscosity of the material, whether gas or liquid, are considered. The equations are solved numerically. A three-layer time- and space-centered reversible numerical scheme and a similar two-layer space-reversible numerical scheme, with approximation of time derivatives by the Runge-Kutta method, are used. A method of correcting the numerical schemes by inclusion of terms with high-order derivatives is developed. Simplified hyperbolic equations are derived.
NASA Astrophysics Data System (ADS)
Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.
2018-01-01
In this paper the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered by means of the k-ε model. Regarding the transport diffusion stage, new schemes of high order of accuracy are developed. The CVC Kolgan-type scheme and the ADER methodology are extended to 3D; the latter is modified in order to profit from the dual mesh employed by the projection algorithm, and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method is carried out for the advection-diffusion-reaction equation. Within the projection stage the pressure correction is computed by a piecewise linear finite element method. Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and at assessing the performance of the method on several realistic test problems.
Multigrid calculation of internal flows in complex geometries
NASA Technical Reports Server (NTRS)
Smith, K. M.; Vanka, S. P.
1992-01-01
The development, validation, and application of a general-purpose multigrid solution algorithm and computer program for the computation of elliptic flows in complex geometries is presented. This computer program combines several desirable features, including a curvilinear coordinate system, a collocated arrangement of the variables, and the Full Multi-Grid/Full Approximation Scheme (FMG/FAS). Provisions are made for the inclusion of embedded obstacles and baffles inside the flow domain. The momentum and continuity equations are solved in a decoupled manner, and a pressure-correction equation is used to update the pressures such that the fluxes at the cell faces satisfy local mass continuity. Despite the computational overhead required in the restriction and prolongation phases of the multigrid cycling, the superior convergence results in reduced overall CPU time. The numerical scheme and selected results of several validation flows are presented. Finally, the procedure is applied to study the flowfield in a side-inlet dump combustor and twin-jet impingement from a simulated aircraft fuselage.
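The restriction/prolongation machinery mentioned above can be sketched in its simplest form: a two-grid correction cycle for the 1D Poisson problem. This is a generic illustration of the multigrid idea under invented parameters, not the FMG/FAS algorithm of the paper.

```python
import numpy as np

# Bare-bones two-grid correction for the 1D Poisson problem -u'' = f with
# homogeneous Dirichlet conditions: smooth, restrict the residual, solve the
# coarse error equation, prolongate, smooth again. A generic sketch of the
# multigrid idea, not the FMG/FAS algorithm of the paper.
def gauss_seidel(u, f, h, sweeps):
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_grid_cycle(u, f, h):
    u = gauss_seidel(u, f, h, 3)                           # pre-smoothing
    r = np.zeros_like(u)                                   # residual r = f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = np.zeros((len(u) + 1) // 2)                       # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    hc = 2 * h                                             # coarse solve of A_c e = r_c
    m = len(rc) - 2
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.full(m - 1, 1.0), 1)
         - np.diag(np.full(m - 1, 1.0), -1)) / hc ** 2
    ec = np.zeros(len(rc))
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongation
    return gauss_seidel(u + e, f, h, 3)                    # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                         # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(5):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))) < 1e-2)        # prints: True
```

The trade-off noted in the abstract is visible in the structure: restriction, the coarse solve, and prolongation are pure overhead per cycle, but each cycle removes smooth error components that plain smoothing would reduce only very slowly.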
NASA Astrophysics Data System (ADS)
Nonaka, Andrew; Day, Marcus S.; Bell, John B.
2018-01-01
We present a numerical approach for low Mach number combustion that conserves both mass and energy while remaining on the equation of state to a desired tolerance. We present both unconfined and confined cases, where in the latter the ambient pressure changes over time. Our overall scheme is a projection method for the velocity coupled to a multi-implicit spectral deferred corrections (SDC) approach to integrate the mass and energy equations. The iterative nature of SDC methods allows us to incorporate a series of pressure discrepancy corrections naturally that lead to additional mass and energy influx/outflux in each finite volume cell in order to satisfy the equation of state. The method is second order, and satisfies the equation of state to a desired tolerance with increasing iterations. Motivated by experimental results, we test our algorithm on hydrogen flames with detailed kinetics. We examine the morphology of thermodiffusively unstable cylindrical premixed flames in high-pressure environments for confined and unconfined cases. We also demonstrate that our algorithm maintains the equation of state for premixed methane flames and non-premixed dimethyl ether jet flames.
Scalability and performance of data-parallel pressure-based multigrid methods for viscous flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blosch, E.L.; Shyy, W.
1996-05-01
A full-approximation storage multigrid method for solving the steady-state 2-d incompressible Navier-Stokes equations on staggered grids has been implemented in Fortran on the CM-5, using the array aliasing feature in CM-Fortran to avoid declaring fine-grid-sized arrays on all levels while still allowing a variable number of grid levels. Thus, the storage cost scales with the number of unknowns, allowing us to consider significantly larger problems than would otherwise be possible. Timings over a range of problem sizes and numbers of processors, up to 4096 x 4096 on 512 nodes, show that the smoothing procedure, a pressure-correction technique, is scalable and that the restriction and prolongation steps are nearly so. The performance obtained for the multigrid method is 333 Mflops out of the theoretical peak 4 Gflops on a 32-node CM-5. In comparison, a single-grid computation obtained 420 Mflops. The decrease is due to the inefficiency of the smoothing iterations on the coarse grid levels. W cycles cost much more and are much less efficient than V cycles, due to the increased contribution from the coarse grids. The convergence rate characteristics of the pressure-correction multigrid method are investigated in a Re = 5000 lid-driven cavity flow and a Re = 300 symmetric backward-facing step flow, using either a defect-correction scheme or a second-order upwind scheme. A heuristic technique relating the convergence tolerances for the coarse grids to the truncation error of the discretization has been found effective and robust. With second-order upwinding on all grid levels, a 5-level 320 x 80 step flow solution was obtained in 20 V cycles, which corresponds to a smoothing rate of 0.7, and required 25 s on a 32-node CM-5. Overall, the convergence rates obtained in the present work are comparable to the most competitive findings reported in the literature. 62 refs., 13 figs.
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect FTIR open-path data quality are the inability to obtain a true I0, or background, spectrum; spectral interferences of atmospheric gases such as water vapor and carbon dioxide; and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting. Applications of the algorithm have proven successful in circumventing open-path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
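The advantage of a non-linear fitting scheme over classical least squares is that the model need not be linear in its parameters. The sketch below is a generic Gauss-Newton fit of a synthetic absorption-like feature; the model, parameter names, and values are invented for illustration and are not NONLIN's actual model.

```python
import numpy as np

# Generic Gauss-Newton nonlinear least-squares fit of a synthetic absorption-like
# Gaussian feature, illustrating the kind of non-linear fitting used in place of
# classical least squares. The model and all parameter values are invented.
x = np.linspace(-1.0, 1.0, 201)
amp_true, w_true = 0.8, 25.0
y = amp_true * np.exp(-w_true * x ** 2)            # noiseless synthetic "spectrum"

p = np.array([0.7, 20.0])                          # initial guess: amplitude, width
for _ in range(30):
    g = np.exp(-p[1] * x ** 2)
    model = p[0] * g
    J = np.column_stack([g, -p[0] * x ** 2 * g])   # Jacobian wrt (amplitude, width)
    p = p + np.linalg.lstsq(J, y - model, rcond=None)[0]

print(np.allclose(p, [amp_true, w_true], atol=1e-4))   # prints: True
```

Because the width parameter enters the model inside the exponential, no reparameterization can make this problem linear; CLS would require linearizing assumptions that the iterative fit avoids.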
A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene Marie
1992-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particle trajectories. The Lagrangian correction technique does not require any a priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing, with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.
Numerical Field Model Simulation of Fire and Heat Transfer in a Rectangular Compartment
1992-09-01
...zero. However, due to the approximation inherent in the numerical scheme, we will be satisfied if S tends toward zero as determined by comparison... zero, the appropriate coefficient (A) corresponding to that boundary is also set equal to zero. After the local pressure correction (P') is determined... the chamber just prior to starting the fire. It is assumed that the air is uniformly at rest; thus all components of velocity are set equal to zero.
Power corrections in the N -jettiness subtraction scheme
Boughezal, Radja; Liu, Xiaohui; Petriello, Frank
2017-03-30
We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for bothmore » $$q\\bar{q}$$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.« less
Practical scheme for error control using feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene
2004-05-01
We describe a scheme for quantum error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al., Phys. Rev. A 65, 042301 (2001)] is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.
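The benchmark mentioned above, correcting bit flips, rests on the counting argument behind the 3-qubit repetition code: any single flip is corrected, and only two or more simultaneous flips cause a logical error. A minimal sketch of that counting (plain classical majority vote, not the paper's weak-measurement feedback protocol):

```python
from itertools import product

def logical_error_rate(p):
    """Probability that majority-vote decoding of a 3-bit repetition code
    fails when each bit flips independently with probability p."""
    rate = 0.0
    for flips in product([0, 1], repeat=3):
        if sum(flips) >= 2:  # two or more flips defeat the majority vote
            prob = 1.0
            for f in flips:
                prob *= p if f else (1.0 - p)
            rate += prob
    return rate
```

For p below 1/2 the encoded error rate 3p²(1 − p) + p³ is smaller than the bare rate p, which is the basic gain any bit-flip corrector, including feedback-based ones, aims for.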
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
The two popular packet-combining-based error correction techniques are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC offers better throughput than APC but suffers from a higher packet error rate. Because the state of a wireless channel is random and time-varying, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired level of throughput; better throughput can be achieved if the transmission scheme is chosen to match the channel condition. Based on this approach, an adaptive packet combining scheme is proposed: it adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme as appropriate. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC, and APC schemes.
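The PC operation underlying both schemes can be sketched directly: XOR two received copies to find the disagreeing bit positions, then try the possible corrections against an integrity check. A minimal illustration (the `is_valid` callback standing in for a real CRC is an assumption of this sketch):

```python
from itertools import product

def pc_combine(copy1, copy2, is_valid):
    """Packet Combining sketch: XOR two received copies of the same packet
    to locate the bit positions where they disagree, then search bit
    inversions over those positions until the integrity check passes."""
    diff = [i for i in range(len(copy1)) if copy1[i] != copy2[i]]
    for choice in product([0, 1], repeat=len(diff)):
        candidate = list(copy1)
        for pos, flip in zip(diff, choice):
            if flip:
                candidate[pos] ^= 1  # take copy2's bit at this position
        if is_valid(candidate):
            return candidate
    return None  # both copies share an error in the same position
```

The `return None` branch is exactly the PC limitation the abstract notes: a double error in the same bit location of both copies never shows up in the XOR, which is what APC (and the adaptive scheme) try to address.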
NASA Astrophysics Data System (ADS)
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
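Each implicit Runge-Kutta stage in the approach above amounts to an implicit solve, which is where the PISO-like pressure-velocity iteration sits. The structure can be sketched on the scalar test equation with a two-stage, L-stable SDIRK scheme (a generic textbook scheme with gamma = 1 - sqrt(2)/2; the paper's actual RK tableaux and Rhie-Chow/PISO coupling are not reproduced here):

```python
def sdirk2_step(y, h, lam):
    """One step of the two-stage, second-order, L-stable SDIRK scheme
    applied to y' = lam * y. Each stage is a small implicit solve; for
    this linear problem the solve is done in closed form."""
    g = 1.0 - 2.0 ** 0.5 / 2.0
    # stage 1: solve k1 = lam * (y + h*g*k1)
    k1 = lam * y / (1.0 - h * g * lam)
    # stage 2: solve k2 = lam * (y + h*((1-g)*k1 + g*k2))
    k2 = lam * (y + h * (1.0 - g) * k1) / (1.0 - h * g * lam)
    return y + h * ((1.0 - g) * k1 + g * k2)
```

Halving the step size should reduce the error by roughly a factor of four, confirming the design order of two.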
NASA Astrophysics Data System (ADS)
Bulovich, S. V.; Smirnov, E. M.
2018-05-01
The paper covers application of the artificial viscosity technique to numerical simulation of unsteady one-dimensional multiphase compressible flows on the basis of the multi-fluid approach. The system of governing equations is written under the assumption of pressure equilibrium between the "fluids" (phases). No interfacial exchange is taken into account. A model for evaluating the artificial viscosity coefficient is suggested that (i) assumes this coefficient is identical for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation to evaluate a scale speed of sound. Performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts physically correct behavior of the moving shock and expansion waves, and that subsequent refinement of the computational grid results in monotonic convergence to an asymptotic time-dependent solution, without non-physical oscillations.
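The Wood equation mentioned above gives the sound speed of a pressure-equilibrated mixture: 1/(ρ_m c_m²) = Σ α_i/(ρ_i c_i²) with ρ_m = Σ α_i ρ_i. A small sketch of that formula (the phase properties below are illustrative values, not taken from the paper):

```python
def wood_sound_speed(fractions, densities, speeds):
    """Wood (mixture) speed of sound for pressure-equilibrated phases.
    fractions: per-phase volume fractions (summing to 1)
    densities: per-phase densities, speeds: per-phase sound speeds."""
    rho_m = sum(a * r for a, r in zip(fractions, densities))
    # mixture compressibility: sum of alpha_i / (rho_i * c_i^2)
    comp = sum(a / (r * c * c) for a, r, c in zip(fractions, densities, speeds))
    return (1.0 / (rho_m * comp)) ** 0.5
```

A well-known property, and a reason this scale matters for artificial viscosity, is that the mixture speed can be far below that of either pure phase: a half-and-half air-water mixture has a Wood speed of only a few tens of m/s.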
Development of a pressure based multigrid solution method for complex fluid flows
NASA Technical Reports Server (NTRS)
Shyy, Wei
1991-01-01
In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
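The FMG/FAS machinery is elaborate, but the underlying V-cycle idea (smooth, restrict the residual, correct from the coarse grid, smooth again) can be sketched on a 1D Poisson model problem. This is a linear-correction V-cycle, not the paper's FAS scheme for the nonlinear flow equations; the weighted-Jacobi smoother and full-weighting/linear-interpolation transfers are standard textbook choices assumed here:

```python
def jacobi(u, f, h, sweeps, w=2.0/3.0):
    """Weighted-Jacobi smoother for -u'' = f with u(0)=u(1)=0."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def restrict(r):
    """Full-weighting restriction to the next coarser grid."""
    m = (len(r) - 1) // 2 + 1
    return [0.0] + [0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
                    for i in range(1, m - 1)] + [0.0]

def prolong(e, n):
    """Linear interpolation of the coarse-grid correction."""
    out = [0.0] * n
    for i in range(1, len(e) - 1):
        out[2*i] += e[i]
        out[2*i - 1] += 0.5 * e[i]
        out[2*i + 1] += 0.5 * e[i]
    return out

def v_cycle(u, f, h):
    n = len(u)
    if n <= 3:
        return jacobi(u, f, h, 50)      # coarsest grid: smooth to convergence
    u = jacobi(u, f, h, 3)              # pre-smooth
    r = restrict(residual(u, f, h))
    e = v_cycle([0.0] * len(r), r, 2*h) # coarse-grid correction
    u = [ui + ei for ui, ei in zip(u, prolong(e, n))]
    return jacobi(u, f, h, 3)           # post-smooth
```

A few V-cycles drive the algebraic error below the discretization error, which is the speed-up over single-grid iteration that the abstract quantifies for the flow solver.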
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource-constrained wireless devices are error resilience, security and speed. Three joint schemes are presented in this research, which can be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A…
Simulation of blast action on civil structures using ANSYS Autodyn
NASA Astrophysics Data System (ADS)
Fedorova, N. N.; Valger, S. A.; Fedorov, A. V.
2016-10-01
The paper presents the results of 3D numerical simulations of shock wave propagation in a cityscape area, computed with the ANSYS Autodyn software. Several test cases are investigated numerically. On the basis of the computations, the complex transient flowfield structure formed in the vicinity of prismatic bodies was obtained and analyzed, and the simulation results were compared to experimental data. The ability of two numerical schemes to correctly predict the pressure history at several gauges placed on the walls of the obstacles is also studied.
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1987-01-01
The technique of obtaining second-order, oscillation-free, total-variation-diminishing (TVD) scalar difference schemes by adding a limited diffusive flux ('smoothing') to a second-order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell-by-cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second-order spatial accuracy was found to have an extremely restrictive time-step limitation. Switching to an implicit scheme removed the time-step limitation.
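A standard example of the "centered scheme plus limited diffusive flux" construction discussed above is a MUSCL scheme with a minmod limiter for linear advection. The sketch below is an illustrative textbook scheme, not the entropy-based construction of the paper; for Courant numbers between 0 and 1 it is TVD:

```python
def minmod(a, b):
    """Limiter: pick the smaller slope, or zero at an extremum."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_step(u, c):
    """One MUSCL/minmod step for u_t + u_x = 0 on a periodic grid.
    c = dt/dx is the Courant number (requires 0 < c <= 1)."""
    n = len(u)
    s = [minmod(u[i] - u[i-1], u[(i+1) % n] - u[i]) for i in range(n)]
    # reconstructed value at the right face of cell i (upwind biased)
    flux = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]
    return [u[i] - c * (flux[i] - flux[i-1]) for i in range(n)]

def total_variation(u):
    n = len(u)
    return sum(abs(u[i] - u[i-1]) for i in range(n))
```

Advecting a square wave shows the defining property: the total variation never grows, so no spurious oscillations appear, which is exactly the "oscillation-free" behavior the entropy-based schemes also guarantee.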
High-order flux correction/finite difference schemes for strand grids
NASA Astrophysics Data System (ADS)
Katz, Aaron; Work, Dalon
2015-02-01
A novel high-order method combining unstructured flux correction along body surfaces and high-order finite differences normal to surfaces is formulated for unsteady viscous flows on strand grids. The flux correction algorithm is applied in each unstructured layer of the strand grid, and the layers are then coupled together via a source term containing derivatives in the strand direction. Strand-direction derivatives are approximated to high-order via summation-by-parts operators for first derivatives and second derivatives with variable coefficients. We show how this procedure allows for the proper truncation error canceling properties required for the flux correction scheme. The resulting scheme possesses third-order design accuracy, but often exhibits fourth-order accuracy when higher-order derivatives are employed in the strand direction, especially for highly viscous flows. We prove discrete conservation for the new scheme and time stability in the absence of the flux correction terms. Results in two dimensions are presented that demonstrate improvements in accuracy with minimal computational and algorithmic overhead over traditional second-order algorithms.
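The summation-by-parts (SBP) property invoked above means the derivative operator mimics integration by parts discretely: D = H⁻¹Q with Q + Qᵀ = diag(−1, 0, …, 0, 1). A sketch of the classical second-order SBP first-derivative operator (a textbook operator, not the paper's high-order variable-coefficient second-derivative operators):

```python
def sbp_second_order(n, h):
    """Classical second-order SBP first-derivative pair (H, D):
    H = h*diag(1/2, 1, ..., 1, 1/2) is the norm (quadrature) matrix,
    D is central inside with one-sided boundary closures."""
    H = [h] * n
    H[0] = H[-1] = h / 2
    D = [[0.0] * n for _ in range(n)]
    D[0][0], D[0][1] = -1/h, 1/h            # one-sided at the left boundary
    D[-1][-2], D[-1][-1] = -1/h, 1/h        # one-sided at the right boundary
    for i in range(1, n - 1):               # central difference inside
        D[i][i-1], D[i][i+1] = -0.5/h, 0.5/h
    return H, D
```

The test below verifies both the SBP identity Q + Qᵀ = B and exactness on linear functions, the two properties that give the time-stability results cited in the abstract.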
Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2015-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of the Reynolds-averaged Navier-Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1986-01-01
The technique of obtaining second-order, oscillation-free, total-variation-diminishing (TVD) scalar difference schemes by adding a limited diffusion flux (smoothing) to a second-order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell-by-cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second-order spatial accuracy was found to have an extremely restrictive time step limitation (Δt < Δx²). Switching to an implicit scheme removed the time step limitation.
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
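The probability of correct decoding for a single block code has the standard binomial form for bounded-distance decoding over a binary symmetric channel. A small sketch of that expression (the generic formula for a t-error-correcting code, not the paper's full cascaded-scheme analysis, which also accounts for detection and erasures):

```python
from math import comb

def p_correct_decoding(n, t, eps):
    """Probability that a t-error-correcting length-n block code decodes
    a word correctly over a binary symmetric channel with bit-error
    rate eps: sum over i <= t of C(n, i) * eps^i * (1-eps)^(n-i)."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1))
```

For example, with t = 0 this collapses to (1 − eps)^n, the probability of an error-free word, and the probability degrades monotonically as the channel worsens.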
An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.
Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi
2014-12-15
In this paper, a novel analog gamma correction scheme with a logarithmic image sensor dedicated to minimize the quantization noise of the high dynamic applications is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the same high dynamic range of logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and the experimental results measured for our designed test structure, which is fabricated with 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
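The motivation for folding gamma correction into the conversion can be seen numerically: quantizing first and applying gamma afterwards stretches the quantization steps exactly where the gamma curve is steepest, i.e. in the dark region. A toy comparison (an idealized uniform 8-bit ADC model, not the VCO-based circuit of the paper):

```python
def quantize(x, bits=8):
    """Uniform quantizer on [0, 1]."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

def gamma(x, g=2.2):
    """Display-style gamma curve y = x^(1/g)."""
    return x ** (1.0 / g)

# Dark-region ramp, where the gamma curve is steepest.
xs = [i / 10000.0 for i in range(1, 101)]  # x in (0, 0.01]
# gamma applied digitally, after a linear ADC:
after = [abs(gamma(quantize(x)) - gamma(x)) for x in xs]
# gamma folded into the conversion (quantize the gamma-mapped value):
before = [abs(quantize(gamma(x)) - gamma(x)) for x in xs]
```

The error of the quantize-then-gamma path dwarfs that of the gamma-then-quantize path in the dark region, which is the quantization-noise penalty the analog scheme avoids.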
Autonomous Quantum Error Correction with Application to Quantum Metrology
NASA Astrophysics Data System (ADS)
Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.
2017-04-01
We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Stable and Spectrally Accurate Schemes for the Navier-Stokes Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Jun; Liu, Jie
2011-01-01
In this paper, we present an accurate, efficient and stable numerical method for the incompressible Navier-Stokes equations (NSEs). The method is based on (1) an equivalent pressure Poisson equation formulation of the NSE with proper pressure boundary conditions, which facilitates the design of high-order and stable numerical methods, and (2) the Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T), which is very stable, efficient, and of arbitrary order in time. Numerical tests with known exact solutions in three dimensions show that the new method is spectrally accurate in time, and a numerical order of convergence of 9 was observed. Two-dimensional computational results of flow past a cylinder and flow in a bifurcated tube are also reported.
A new approximation for pore pressure accumulation in marine sediment due to water waves
NASA Astrophysics Data System (ADS)
Jeng, D.-S.; Seymour, B. R.; Li, J.
2007-01-01
The residual mechanism of wave-induced pore water pressure accumulation in marine sediments is re-examined. An analytical approximation is derived using a linear relation for pore pressure generation in cyclic loading, and mistakes in previous solutions (Int. J. Numer. Anal. Methods Geomech. 2001; 25:885-907; J. Offshore Mech. Arctic Eng. (ASME) 1989; 111(1):1-11) are corrected. A numerical scheme is then employed to solve the case with a non-linear relation for pore pressure generation. Both analytical and numerical solutions are verified against experimental data (Laboratory and field investigation of wave-sediment interaction. Joseph H. Defrees Hydraulics Laboratory, School of Civil and Environmental Engineering, Cornell University, Ithaca, NY, 1983), and provide a better prediction of pore pressure accumulation than the previous solution (J. Offshore Mech. Arctic Eng. (ASME) 1989; 111(1):1-11). The parametric study concludes that pore pressure accumulation, and use of the full non-linear relation for pore pressure generation, become more important under the following conditions: (1) large wave amplitude, (2) longer wave period, (3) shallow water, (4) shallow soil and (5) softer soils with a low consolidation coefficient.
Velocity and pressure fields associated with near-wall turbulence structures
NASA Technical Reports Server (NTRS)
Johansson, Arne V.; Alfredsson, P. Henrik; Kim, John
1990-01-01
Computer-generated databases containing velocity and pressure fields in three-dimensional space at a sequence of time steps were used for the investigation of near-wall turbulence structures, their space-time evolution, and their associated pressure fields. The main body of the results was obtained from simulation data for turbulent channel flow at a Reynolds number of 180 (based on half-channel height and friction velocity) with a grid of 128 x 129 x 128 points. The flow was followed over a total time of 141 viscous time units. Spanwise centering of the detected structures was found to be essential in order to obtain a correct magnitude of the associated Reynolds stress contribution. A positive wall-pressure peak is found immediately beneath the center of the structure. The maximum amplitude of the pressure pattern was, however, found in the buffer region at the center of the shear layer. It was also found that these flow structures often reach a maximum strength in connection with an asymmetric spanwise motion, which motivated the construction of a conditional sampling scheme that preserves this asymmetry.
Pressure-based high-order TVD methodology for dynamic stall control
NASA Astrophysics Data System (ADS)
Yang, H. Q.; Przekwas, A. J.
1992-01-01
The quantitative prediction of the dynamics of separating unsteady flows, such as dynamic stall, is of crucial importance. This six-month SBIR Phase 1 study has developed several new pressure-based methodologies for solving the 3D Navier-Stokes equations in both stationary and moving (body-conforming) coordinates. The present pressure-based algorithm is equally efficient for low-speed incompressible flows and high-speed compressible flows. The discretization of convective terms by the presently developed high-order TVD schemes requires no artificial dissipation and can properly resolve the concentrated vortices in the wing-body flow with minimum numerical diffusion. It is demonstrated that the proposed Newton's iteration technique not only increases the convergence rate but also strongly couples the iteration between pressure and velocities. The proposed hyperbolization of the pressure correction equation is shown to increase the solver's efficiency. The above methodologies were implemented in an existing CFD code, REFLEQS. The modified code was used to simulate both static and dynamic stall on two- and three-dimensional wing-body configurations. Three-dimensional effects and flow physics are discussed.
Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki
2014-01-01
Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques are applicable either to susceptibility- or eddy-current-induced distortion alone, with a few exceptions. The present study compared the correction efficiency of the FSL tools "eddy_correct" and the combination of "eddy" and "topup" in terms of diffusion-derived fractional anisotropy (FA). Brain diffusion images were acquired from 10 healthy subjects using 30- and 60-direction encoding schemes based on electrostatic repulsive forces. For the 30-direction encoding, two sets of diffusion images were acquired with the same parameters, except for the phase-encode blips, which had opposing polarities along the anteroposterior direction. For the 60-direction encoding, non-diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips, together with non-diffusion-weighted images acquired with the same parameters except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. Images corrected with eddy and topup possessed higher FA values than uncorrected images and images corrected with eddy_correct using trilinear (the FSL default) or spline interpolation in most white matter skeletons, for both encoding schemes. Furthermore, the 60-direction encoding scheme was superior, as measured by increased FA values, to the 30-direction scheme, despite comparable acquisition time.
This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging, rather than the eddy_correct tool (especially with trilinear interpolation), using the 60-direction encoding scheme.
Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.
Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas
2018-02-01
There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which is contradictory to accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band-specific functional networks measured with magnetoencephalography (MEG), are often treated independently. Here, we discuss how a multilayer network framework allows for integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. Using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrate that all schemes remained sensitive to changes in network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results, in a similar manner as for single layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons.
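Of the four correction schemes, the MST is the simplest to sketch: it keeps only the strongest links that span the network, fixing the link density at n − 1 edges regardless of group. A minimal Kruskal-style sketch (illustrative only; for connectivity matrices the strongest-link tree, i.e. a maximum spanning tree, is the usual convention, equivalent to the MST of an inverted weight):

```python
def max_spanning_tree(weights):
    """Kruskal's algorithm on a symmetric connectivity matrix, keeping
    the strongest links that connect all nodes without forming loops."""
    n = len(weights)
    edges = sorted(((weights[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)               # strongest links first
    parent = list(range(n))                    # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                           # keep edge only if it joins components
            parent[ri] = rj
            tree.append((i, j, w))
    return tree
```

Because every subject's backbone has exactly n − 1 edges, group comparisons on the tree are free of the link-density bias discussed above, which is precisely why the MST is used as a per-layer correction.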
NASA Astrophysics Data System (ADS)
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, with regard to transmission errors and data loss in the process of communication, the proposed scheme has the ability to error-check and error-correct in real time. In order to guarantee security, a fractional-order complex chaotic system with shifting of the order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
Conductivity Cell Thermal Inertia Correction Revisited
NASA Astrophysics Data System (ADS)
Eriksen, C. C.
2012-12-01
Salinity measurements made with a CTD (conductivity-temperature-depth instrument) rely on accurate estimation of water temperature within their conductivity cell. Lueck (1990) developed a theoretical framework for heat transfer between the cell body and water passing through it. Based on this model, Lueck and Picklo (1990) introduced the practice of correcting for cell thermal inertia by filtering a temperature time series using two parameters, an amplitude α and a decay time constant τ, a practice now widely used. Typically these two parameters are chosen for a given cell configuration and internal flushing speed by a statistical method applied to a particular data set. Here, thermal inertia correction theory has been extended to apply to flow speeds spanning well over an order of magnitude, both within and outside a conductivity cell, to provide predictions of α and τ from cell geometry and composition. The extended model enables thermal inertia correction for the variable flows encountered by conductivity cells on autonomous gliders and floats, as well as tethered platforms. The length scale formed as the product of cell encounter speed of isotherms, α, and τ can be used to gauge the size of the temperature correction for a given thermal stratification. For cells flushed by dynamic pressure variation induced by platform motion, this length varies by less than a factor of 2 over more than a decade of speed variation. The magnitude of correction for free-flow flushed sensors is comparable to that of pumped cells, but at an order of magnitude in energy savings. Flow conditions around a cell's exterior are found to be of comparable importance to thermal inertia response as flushing speed. Simplification of cell thermal response to a single normal mode is most valid at slow speed. Error in thermal inertia estimation arises from both neglect of higher modes and numerical discretization of the correction scheme, both of which can be easily quantified. 
Consideration of thermal inertia correction enables assessment of various CTD sampling schemes. Spot sampling by pumping a cell intermittently provides particular challenges, and may lead to biases in inferred salinity that are comparable to climate signals reported from profiling float arrays.
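The two-parameter correction of Lueck and Picklo is a first-order recursive filter in the measured temperature. A sketch in the discrete form commonly used in CTD processing (the mapping from α and τ to the filter coefficients below follows that convention and is an assumption of this sketch, as are the example parameter values):

```python
def thermal_mass_correction(temps, dt, alpha=0.03, tau=7.0):
    """Recursive cell thermal-inertia correction (Lueck & Picklo style).
    temps: temperature samples, dt: sample interval (s),
    alpha: amplitude parameter, tau: decay time constant (s).
    Returns the correction term to apply at each sample; values here
    are illustrative, not calibrated for any particular cell."""
    a = 2.0 * alpha / (dt / tau + 2.0)
    b = 1.0 - 2.0 * a / alpha
    corr = [0.0]
    for k in range(1, len(temps)):
        # first-order recursion driven by the temperature increments
        corr.append(-b * corr[-1] + a * (temps[k] - temps[k - 1]))
    return corr
```

The behavior matches the physical picture in the abstract: the correction is zero in steady conditions, jumps when the cell crosses a thermal step, and then decays with the cell's relaxation time.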
APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study
NASA Astrophysics Data System (ADS)
Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak
2017-04-01
In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits, and hence may still carry useful information; the receiver may be able to combine information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in the transmitted packet, in which two received copies are XORed to obtain the bit locations of erroneous bits; the packet is then corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but does not correct double-bit errors if they occur in the same bit location of the erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC on the Gilbert two-state model has been studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki
2014-01-01
Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although several methods exist to mitigate these problems, most are applicable either to susceptibility-induced or to eddy-current-induced distortion alone, with few exceptions. The present study compared the correction efficiency of the FSL tools “eddy_correct” and the combination of “eddy” and “topup” in terms of diffusion-derived fractional anisotropy (FA). Brain diffusion images were acquired from 10 healthy subjects using 30- and 60-direction encoding schemes based on electrostatic repulsive forces. For the 30-direction encoding, two sets of diffusion images were acquired with identical parameters except for the phase-encode blips, which had opposing polarities along the anteroposterior direction. For the 60-direction encoding, non–diffusion-weighted and diffusion-weighted images were obtained with forward phase-encode blips, together with non–diffusion-weighted images identical except for phase-encode blips of opposing polarity. FA images with and without distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images left uncorrected or corrected with eddy_correct using trilinear (the FSL default) or spline interpolation in most white matter skeletons, for both encoding schemes. Furthermore, the 60-direction encoding scheme was superior to the 30-direction scheme, as measured by increased FA values, despite comparable acquisition time.
This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging over the eddy_correct tool, especially with trilinear interpolation, using the 60-direction encoding scheme. PMID:25405472
NASA Astrophysics Data System (ADS)
Zwanenburg, Philip; Nadarajah, Siva
2016-02-01
The aim of this paper is to demonstrate the equivalence between filtered Discontinuous Galerkin (DG) schemes and the Energy Stable Flux Reconstruction (ESFR) schemes, expanding on previous demonstrations in 1D [1] and for straight-sided elements in 3D [2]. We first derive the DG and ESFR schemes in strong form and compare the respective flux penalization terms, highlighting the implications of the fundamental assumptions for stability in the ESFR formulations, notably that all ESFR scheme correction fields can be interpreted as modally filtered DG correction fields. We present the result in the general context of all higher-dimensional curvilinear element formulations. Through a demonstration that there exists a weak form of the ESFR schemes which is both discretely and analytically equivalent to the strong form, we then extend the results obtained for the strong formulations to demonstrate that ESFR schemes can be interpreted as a DG scheme in weak form in which the discontinuous edge flux is substituted for the numerical edge flux correction. Theoretical derivations are then verified with numerical results obtained from a 2D Euler test case with curved boundaries. Given the current choice of high-order DG-type schemes and the question as to which might be best for a specific application, the main significance of this work is the bridge it provides between them. Clearly outlining the similarities between the schemes leads to the important conclusion that it is always less efficient to use ESFR schemes than the weak DG scheme when solving problems implicitly.
Simple wavefront correction framework for two-photon microscopy of in-vivo brain
Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.
2015-01-01
We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763
Acoustic forcing of a liquid drop
NASA Technical Reports Server (NTRS)
Lyell, M. J.
1992-01-01
The development of systems such as acoustic levitation chambers will allow for the positioning and manipulation of material samples (drops) in a microgravity environment. This provides the capability for fundamental studies in droplet dynamics as well as containerless processing work. Such systems use acoustic radiation pressure forces to position or to further manipulate (e.g., oscillate) the sample. The primary objective was to determine the effect of a viscous acoustic field/tangential radiation pressure forcing on drop oscillations. To this end, the viscous acoustic field is determined. Modified (forced) hydrodynamic field equations which result from a consistent perturbation expansion scheme are solved. This is done in the separate cases of an unmodulated and a modulated acoustic field. The effect of the tangential radiation stress on the hydrodynamic field (drop oscillations) is found to manifest as a correction to the velocity field in a sublayer region near the drop/host interface. Moreover, the forcing due to the radiation pressure vector at the interface is modified by inclusion of tangential stresses.
Elastic constants and pressure derivative of elastic constants of Si1-xGex solid solution
NASA Astrophysics Data System (ADS)
Jivani, A. R.; Baria, J. K.; Vyas, P. S.; Jani, A. R.
2013-02-01
Elastic properties of the Si1-xGex solid solution with arbitrary atomic concentration x are studied using the pseudo-alloy atom model, based on pseudopotential theory and a higher-order perturbation scheme, with the application of our proposed model potential. The local-field correction function proposed by Sarkar et al is used to study the Si-Ge system. The elastic constants and pressure derivatives of the elastic constants of the solid solution are investigated for different concentrations x of Ge. The calculated values of these physical properties of the Si-Ge system are found to be functions of x: the elastic constants (C11, C12 and C44) decrease linearly with increasing concentration x, while the pressure derivatives of the elastic constants (C11, C12 and C44) increase with the Ge concentration x. This study provides a better set of theoretical results for this solid solution for further comparison with theoretical or experimental results.
Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.
Hoya, T; Chambers, J A
2001-01-01
In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping good classification performance over the past patterns stored in the network. In the paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. The redundancy introduced in the growing phase is then removed in the dual-stage network shrinking. Both long- and short-term memory models, motivated by biological studies of the brain, are considered in the network shrinking. The learning capability of the proposed scheme is investigated through extensive simulation studies.
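The growing phase described above can be illustrated with a toy GRNN-style classifier; this is a sketch of the idea only (the dual-stage shrinking and the authors' adaptive training are omitted, and all names are illustrative):

```python
import numpy as np

class IncrementalGRNN:
    """Toy GRNN-like classifier with network growing: misclassified
    patterns from an incoming batch are added as new centres until the
    whole batch is classified correctly (sketch of the growing phase)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma
        self.centres = []  # stored pattern vectors
        self.labels = []   # their class labels

    def predict(self, x):
        if not self.centres:
            return None
        # Gaussian kernel weight of each stored centre
        w = [np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))
             for c in self.centres]
        scores = {}
        for wi, yi in zip(w, self.labels):
            scores[yi] = scores.get(yi, 0.0) + wi
        return max(scores, key=scores.get)

    def grow(self, batch_x, batch_y, max_passes=10):
        for _ in range(max_passes):
            wrong = [(x, y) for x, y in zip(batch_x, batch_y)
                     if self.predict(x) != y]
            if not wrong:
                return  # whole batch classified correctly
            # add one misclassified pattern as a new centre and repeat
            x, y = wrong[0]
            self.centres.append(np.asarray(x, float))
            self.labels.append(y)
```

After growing on a small batch, every pattern in that batch is classified correctly, mirroring the stopping condition stated in the abstract.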
Corrections to the General (2,4) and (4,4) FDTD Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meierbachtol, Collin S.; Smith, William S.; Shao, Xuan-Min
The sampling weights associated with two general higher-order FDTD schemes were derived by Smith, et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher-order FDTD dispersion relation was re-derived using Mathematica. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher-order schemes from the Smith, et al. 2012 article instead use the sampling weights and stability conditions listed in this document.
Loss Tolerance in One-Way Quantum Computation via Counterfactual Error Correction
NASA Astrophysics Data System (ADS)
Varnava, Michael; Browne, Daniel E.; Rudolph, Terry
2006-09-01
We introduce a scheme for fault tolerantly dealing with losses (or other “leakage” errors) in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively using an adaptive strategy of measurement—no coherent measurements or coherent correction is required. Since the scheme relies on inferring information about what would have been the outcome of a measurement had one been able to carry it out, we call this counterfactual error correction.
Pressure induced structural phase transition in solid oxidizer KClO3: A first-principles study
NASA Astrophysics Data System (ADS)
Yedukondalu, N.; Ghule, Vikas D.; Vaitheeswaran, G.
2013-05-01
High pressure behavior of potassium chlorate (KClO3) has been investigated from 0 to 10 GPa by means of first principles density functional theory calculations. The calculated ground state parameters, transition pressure, and phonon frequencies using a semiempirical dispersion correction scheme are in excellent agreement with experiment. It is found that KClO3 undergoes a pressure induced first order phase transition, with an associated volume collapse of 6.4%, from the monoclinic (P21/m) to the rhombohedral (R3m) structure at 2.26 GPa, in good accord with experimental observation. However, the transition pressure is underestimated (0.11 GPa) by local density approximation and overestimated (3.57 GPa) by generalized gradient approximation functionals. Mechanical stability of both phases is explained from the calculated single crystal elastic constants. In addition, the zone center phonon frequencies have been calculated using density functional perturbation theory at ambient as well as at high pressure, and the lattice modes are found to soften under pressure between 0.6 and 1.2 GPa. The present study reveals that the observed structural phase transition leads to changes in the decomposition mechanism of KClO3, which corroborates the experimental results.
NASA Technical Reports Server (NTRS)
Smith, D. R.; Leslie, F. W.
1984-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
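A successive correction analysis of the kind PROAM implements can be sketched with Cressman-type weights; this is an illustrative toy under assumed inputs, not the PROAM algorithm itself:

```python
import numpy as np

def successive_correction(obs_xy, obs_val, grid_xy, radius, n_passes=3):
    """Minimal successive-correction objective analysis (sketch).
    Each pass evaluates the current analysis at the observation sites
    and spreads the residuals back onto the grid with Cressman-type
    distance-dependent weights within an influence radius."""
    analysis = np.full(len(grid_xy), np.mean(obs_val))  # first guess
    for _ in range(n_passes):
        # residual at each observation: obs minus analysis nearby
        resid = []
        for (ox, oy), v in zip(obs_xy, obs_val):
            d2 = [(ox - gx) ** 2 + (oy - gy) ** 2 for gx, gy in grid_xy]
            resid.append(v - analysis[int(np.argmin(d2))])
        # spread residuals with weights w = (R^2 - d^2) / (R^2 + d^2)
        for j, (gx, gy) in enumerate(grid_xy):
            num = den = 0.0
            for (ox, oy), r in zip(obs_xy, resid):
                d2 = (ox - gx) ** 2 + (oy - gy) ** 2
                if d2 < radius ** 2:
                    w = (radius ** 2 - d2) / (radius ** 2 + d2)
                    num += w * r
                    den += w
            if den > 0:
                analysis[j] += num / den
    return analysis
```

Repeated passes draw the analysis toward the observations, which is the sense in which the multiple-pass technique increases accuracy.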
Angular spectral framework to test full corrections of paraxial solutions.
Mahillo-Isla, R; González-Morales, M J
2015-07-01
Different correction methods for paraxial solutions have been used when such solutions extend beyond the paraxial regime. Authors have chosen correction methods guided either by experience or by educated hypotheses pertinent to the particular problem being tackled. This article provides a framework for classifying full-wave correction schemes, so that for a given solution of the paraxial wave equation the best available correction scheme can be selected. Some common correction methods are considered and evaluated within the proposed scope. A further contribution is the derivation of the necessary conditions that two solutions of the Helmholtz equation must satisfy for a common solution of the parabolic wave equation to be a paraxial approximation of both.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
Estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for processing Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies these weights directly at all wavelengths in the multiple-scattering domain, even though the multiple-scattering aerosol reflectance is non-linearly related to the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is itself non-linear. To avoid these issues, we propose an alternative scheme that estimates the aerosol reflectance using the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models, and then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS.
To assess the performance of the algorithm with respect to errors in the surface water reflectance or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. In simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters; in the in situ match-ups, they were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme across both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
Effect of ambient temperature and humidity on emissions of an idling gas turbine
NASA Technical Reports Server (NTRS)
Kauffman, C. W.
1977-01-01
The effects of inlet pressure, temperature, and humidity on the oxides of nitrogen produced by an engine operating at takeoff power setting were investigated and numerous correction factors were formulated. The effect of ambient relative humidity on gas turbine idle emissions was ascertained. Experimentally, a nonvitiating combustor rig was employed to simulate changing combustor inlet conditions as generated by changing ambient conditions. Emissions measurements were made at the combustor exit. For carbon monoxide, a reaction kinetic scheme was applied within each zone of the combustor where initial species concentrations reflected not only local combustor characteristics but also changing ambient conditions.
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gorjala, Bhargavi
1991-01-01
Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error at an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls within the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable-rate channels. We implement these two variable-rate coding schemes on a token ring network. The timed token protocol supports two classes of messages, asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time; the remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by carrying the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and of the image coding algorithms is studied.
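The quantizer overload at edges that motivates both schemes can be seen in a few lines of plain first-order DPCM; this is a sketch only (the recursive quantizer and side-information channel of the proposed schemes are not shown):

```python
def dpcm(signal, step=4, levels=8):
    """Plain first-order DPCM with a uniform quantizer (sketch).
    At a sharp edge the prediction error exceeds the quantizer's
    outer level and is clipped ("overload"), producing the large
    edge error that the edge-correcting schemes repair with side
    information."""
    half = levels // 2
    codes, recon = [], []
    pred = 0
    for s in signal:
        e = s - pred  # prediction error
        q = max(-half, min(half - 1, round(e / step)))  # clip = overload
        codes.append(q)
        pred = pred + q * step  # reconstruction tracked by the decoder
        recon.append(pred)
    return codes, recon
```

On a smooth ramp the reconstruction stays within one quantizer step, while a large step (an edge) leaves a large residual error, illustrating the abstract's point about edge performance.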
Recent assimilation developments of FOAM the Met Office ocean forecast system
NASA Astrophysics Data System (ADS)
Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert
2015-04-01
FOAM is the Met Office's operational ocean forecasting system. The system comprises a range of models, from a 1/4 degree resolution global model to 1/12 degree resolution regional models and shelf seas models at 7 km resolution. It is made up of the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE and the NEMOVAR assimilation run in 3D-VAR FGAT mode. Work is ongoing to transition to a higher resolution global ocean model at 1/12 degrees and to run FOAM in coupled models. The FOAM system generally performs well. One area of concern, however, is the performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which, in the extra-tropics, uses geostrophic balance to produce velocity increments that balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that there are sometimes persistent temperature and salinity errors which are not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius, which means that observations in the extra-tropics influence the model only on short length scales. In order to maximise the information extracted from the observations and to correct large-scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients.
A related scheme which varies the correlation length scale in the shelf seas is also described.
An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs
NASA Astrophysics Data System (ADS)
Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang
2018-06-01
There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken. Under such conditions, the brittleness of the rocks is a very important factor, but it has so far been difficult to predict. In this paper, the study area is the tight oil reservoirs of the Lucaogou formation (Permian, Jimusaer sag, Junggar basin). Based on the transformation between dynamic and static rock mechanics parameters and a correction for confining pressure, an improved method is proposed for quantitatively predicting rock brittleness from well logs in tight oil reservoirs. First, 19 typical tight oil core samples are selected in the study area, and their static Young’s modulus, static Poisson’s ratio and petrophysical parameters are measured; in addition, the static brittleness indices of four further tight oil cores are measured under different confining pressures. Second, the dynamic Young’s modulus, Poisson’s ratio and brittleness index are calculated from the compressional and shear wave velocities. By combining the measured and calculated results, a transformation model between the dynamic and static brittleness index is built that accounts for the influence of porosity and clay content; comparison of the predicted brittleness indices with the measured results shows that the model has high accuracy. Third, on the basis of the experimental data at different confining pressures, an amplifying factor of the brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the improved models are applied to formation evaluation from well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, indicating that the improved models have better application effects. This research improves the brittleness index prediction method for tight oil reservoirs.
It is of great importance in the optimization of fracturing layer selection and fracturing construction schemes and in the improvement of oil recovery.
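For context on the brittleness index discussed above, one widely used elastic-parameter definition (a Rickman-type normalization, which may differ from the exact definition used in this paper) can be sketched as:

```python
def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
    """One common elastic brittleness index (Rickman-type; an assumed
    form, not necessarily this paper's definition): the average of
    Young's modulus normalized upward and Poisson's ratio normalized
    downward, so stiff, low-Poisson rocks score as more brittle.

    E, nu           : Young's modulus and Poisson's ratio of the rock
    E_min..nu_max   : normalization bounds over the studied interval
    Returns a value in [0, 1] when E and nu lie within the bounds.
    """
    En = (E - E_min) / (E_max - E_min)
    nun = (nu_max - nu) / (nu_max - nu_min)
    return 0.5 * (En + nun)
```

The stiffest, lowest-Poisson sample maps to 1 and the softest, highest-Poisson sample to 0, which is the sense in which the index ranks fracability.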
Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C
2017-09-01
To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address (a) a phase term induced by echo misalignments, which can be measured with a reference scan using reversed readout polarity; (b) a phase term induced by the concomitant gradient field, which can be predicted from the gradient waveforms; and (c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant gradient field-induced PDFF bias and the performance of estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps for PDFF accuracy and robustness. The simulation, phantom, and in vivo results showed, in agreement with theory, an echo time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging gave accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Yuhara, Daisuke; Brumby, Paul E; Wu, David T; Sum, Amadeu K; Yasuoka, Kenji
2018-05-14
To develop prediction methods of three-phase equilibrium (coexistence) conditions of methane hydrate by molecular simulations, we examined the use of NVT (isometric-isothermal) molecular dynamics (MD) simulations. NVT MD simulations of coexisting solid hydrate, liquid water, and vapor methane phases were performed at four different temperatures, namely, 285, 290, 295, and 300 K. NVT simulations do not require complex pressure control schemes in multi-phase systems, and the growth or dissociation of the hydrate phase can lead to significant pressure changes in the approach toward equilibrium conditions. We found that the calculated equilibrium pressures tended to be higher than those reported by previous NPT (isobaric-isothermal) simulation studies using the same water model. The deviations of equilibrium conditions from previous simulation studies are mainly attributable to the employed calculation methods of pressure and Lennard-Jones interactions. We monitored the pressure in the methane phase, far from the interfaces with other phases, and confirmed that it was higher than the total pressure of the system calculated by previous studies. This fact clearly highlights the difficulties associated with the pressure calculation and control for multi-phase systems. The treatment of Lennard-Jones interactions without tail corrections in MD simulations also contributes to the overestimation of equilibrium pressure. Although improvements are still required to obtain accurate equilibrium conditions, NVT MD simulations exhibit potential for the prediction of equilibrium conditions of multi-phase systems.
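The Lennard-Jones tail correction whose neglect the abstract identifies as one source of overestimated equilibrium pressures has a standard closed form for a truncated, unshifted LJ potential (the Allen-Tildesley expression); the sketch below is an illustration, not code from this study:

```python
import math

def lj_pressure_tail(rho, eps, sigma, rc):
    """Standard long-range (tail) correction to the pressure for a
    Lennard-Jones potential truncated at cutoff rc, assuming a
    uniform pair distribution beyond the cutoff:

        P_tail = (16/3) * pi * rho^2 * eps * sigma^3
                 * [ (2/3) * (sigma/rc)^9 - (sigma/rc)^3 ]

    rho   : number density
    eps   : LJ well depth
    sigma : LJ diameter
    rc    : cutoff radius
    """
    sr3 = (sigma / rc) ** 3
    sr9 = sr3 ** 3
    return (16.0 / 3.0) * math.pi * rho ** 2 * eps * sigma ** 3 * (
        (2.0 / 3.0) * sr9 - sr3)
```

For typical cutoffs (a few sigma) the correction is negative, so omitting it biases the computed pressure upward, consistent with the overestimation discussed above; it vanishes as the cutoff grows.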
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
The effect of artificial diffusion on discrete shock structures is examined for a family of schemes which includes scalar diffusion, convective upwind and split pressure (CUSP) schemes, and upwind schemes with characteristic splitting. The analysis leads to conditions on the diffusive flux such that stationary discrete shocks can contain a single interior point. The simplest formulation which meets these conditions is a CUSP scheme in which the coefficient of the pressure differences is fully determined by the coefficient of convective diffusion. It is also shown how both the characteristic and CUSP schemes can be modified to preserve constant stagnation enthalpy in steady flow, leading to four variants: the E- and H-characteristic schemes, and the E- and H-CUSP schemes. Numerical results are presented which confirm the properties of these schemes.
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Zou, Li; Gong, Longyan; Cheng, Weiwen; Zheng, Baoyu; Chen, Hanwu
2016-10-01
A free-space optical (FSO) communication link with multiplexed orbital angular momentum (OAM) modes has been demonstrated to greatly enhance the system capacity without a corresponding increase in spectral bandwidth, but the performance of the link is unavoidably degraded by atmospheric turbulence (AT). In this paper, we propose a turbulence mitigation scheme to improve the AT tolerance of the OAM-multiplexed FSO communication link using both channel coding and wavefront correction. In the scheme, we first utilize a wavefront correction method to mitigate the phase distortion, and then use a channel code to further correct the errors in each OAM mode. The improvement of AT tolerance is discussed relative to the performance of the link with or without channel coding/wavefront correction. The results show that the bit error rate performance is improved greatly. The detrimental effect of AT on the OAM-multiplexed FSO communication link can be removed by the proposed scheme even in the relatively strong turbulence regime, such as Cn2 = 3.6 × 10^-14 m^(-2/3).
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
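As a rough illustration of the token-multiset comparison described above, a response equation can be tokenized and scored against a reference equation as the overlap of their token multisets. This is a hedged sketch with hypothetical function names, not the paper's SCCS implementation:

```python
from collections import Counter
import re

def tokenize(equation: str):
    """Split an equation string into numbers, symbols, and operators
    (a simplified stand-in for the tokenization step in the abstract)."""
    return re.findall(r"[0-9]+|[a-zA-Z]+|[=+\-*/()]", equation.replace(" ", ""))

def multiset_similarity(response: str, reference: str) -> float:
    """Score a response against a reference as the overlap of their token
    multisets: 1.0 for structurally identical equations, 0.0 for disjoint."""
    r, s = Counter(tokenize(response)), Counter(tokenize(reference))
    overlap = sum((r & s).values())        # multiset intersection size
    total = max(sum(r.values()), sum(s.values()))
    return overlap / total if total else 1.0
```

A correct step such as "2x + 3 = 7" scores 1.0 against "2x+3=7" regardless of spacing, while a diverging step scores strictly less.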
On the Difference Between Additive and Subtractive QM/MM Calculations
Cao, Lili; Ryde, Ulf
2018-01-01
The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e., the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic, and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended. PMID:29666794
NASA Astrophysics Data System (ADS)
Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.
2017-08-01
Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
Convergence of generalized MUSCL schemes
NASA Technical Reports Server (NTRS)
Osher, S.
1984-01-01
Semi-discrete generalizations of the second order extension of Godunov's scheme, known as the MUSCL scheme, are constructed, starting with any three point E scheme. They are used to approximate scalar conservation laws in one space dimension. For convex conservation laws, each member of a wide class is proven to be a convergent approximation to the correct physical solution. Comparison with another class of high resolution convergent schemes is made.
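A minimal sketch of the slope-limited second-order reconstruction idea behind MUSCL-type schemes, using the generic minmod limiter (an illustrative example, not Osher's specific E-scheme construction):

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope when signs agree,
    zero otherwise (suppresses spurious oscillations at discontinuities)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """Second-order MUSCL reconstruction of left/right states at interior
    cell interfaces from cell averages u[0..n-1]."""
    du = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])  # limited slopes, cells 1..n-2
    uL = u[1:-2] + 0.5 * du[:-1]   # left state at interface i+1/2
    uR = u[2:-1] - 0.5 * du[1:]    # right state at interface i+1/2
    return uL, uR
```

On smooth (linear) data the left and right states coincide, recovering second-order accuracy; at a jump the limiter reduces the slopes to zero, falling back to first-order Godunov behavior.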
A numerical study of the steady scalar convective diffusion equation for small viscosity
NASA Technical Reports Server (NTRS)
Giles, M. B.; Rose, M. E.
1983-01-01
A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.
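The upwinding behaviour described above can be illustrated on a simpler model problem: a first-order upwind discretization of the steady convection-diffusion equation u' = eps * u'' resolves the boundary layer monotonically. This is an illustrative stand-in under simplified assumptions, not the compact scheme of the paper:

```python
import numpy as np

def solve_convection_diffusion(eps, n):
    """Steady 1-D model problem u' = eps * u'' on (0,1), u(0)=0, u(1)=1,
    with first-order upwind convection and central diffusion on n interior
    points; the upwinding keeps the discrete solution free of oscillations."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = 1.0 / h + 2.0 * eps / h**2      # diagonal: upwind + diffusion
        if i > 0:
            A[i, i - 1] = -1.0 / h - eps / h**2   # upwind neighbor
        if i < n - 1:
            A[i, i + 1] = -eps / h**2             # downwind diffusion only
        else:
            b[i] = eps / h**2                     # boundary u(1) = 1
    return np.linalg.solve(A, b)
```

The computed profile rises monotonically from 0 toward the boundary layer at x = 1, mirroring the correct inviscid-region/viscous-region behavior noted in the abstract.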
Li, Ping; Zhou, Yong; Li, Haijin; Xu, Qinfeng; Meng, Xianguang; Wang, Xiaoyong; Xiao, Min; Zou, Zhigang
2015-01-31
Correction for 'All-solid-state Z-scheme system arrays of Fe2V4O13/RGO/CdS for visible light-driving photocatalytic CO2 reduction into renewable hydrocarbon fuel' by Ping Li et al., Chem. Commun., 2015, 51, 800-803.
Robot-Arm Dynamic Control by Computer
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.
1987-01-01
Feedforward and feedback schemes linearize responses to control inputs. Method for control of robot arm based on computed nonlinear feedback and state transformations to linearize system and decouple robot end-effector motions along each of the Cartesian axes, augmented with optimal scheme for correction of errors in workspace. Major new feature of control method: optimal error-correction loop operates directly on task level, not on joint-servocontrol level.
Measurement-free implementations of small-scale surface codes for quantum-dot qubits
NASA Astrophysics Data System (ADS)
Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.
2018-01-01
The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
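The simplest classical relative of a one-dimensional bit-flip-only code like the one mentioned above is the repetition code with majority-vote decoding. The toy sketch below illustrates that principle only; it is not the authors' Toffoli-plus-reset construction:

```python
from collections import Counter

def encode_repetition(bit, n=3):
    """Encode one logical bit into n physical bits (bit-flip repetition code)."""
    return [bit] * n

def correct_bit_flips(codeword):
    """Majority-vote correction: recovers the logical bit as long as fewer
    than half of the physical bits were flipped."""
    return Counter(codeword).most_common(1)[0][0]
```

With n = 3 the code tolerates any single bit flip, which is the classical analogue of the distance-3 behavior targeted by small surface codes.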
Application Of Multi-grid Method On China Seas' Temperature Forecast
NASA Astrophysics Data System (ADS)
Li, W.; Xie, Y.; He, Z.; Liu, K.; Han, G.; Ma, J.; Li, D.
2006-12-01
Correlation scales have been used in traditional scheme of 3-dimensional variational (3D-Var) data assimilation to estimate the background error covariance for the numerical forecast and reanalysis of atmosphere and ocean for decades. However there are still some drawbacks of this scheme. First, the correlation scales are difficult to be determined accurately. Second, the positive definition of the first-guess error covariance matrix cannot be guaranteed unless the correlation scales are sufficiently small. Xie et al. (2005) indicated that a traditional 3D-Var only corrects some certain wavelength errors and its accuracy depends on the accuracy of the first-guess covariance. And in general, short wavelength error can not be well corrected until long one is corrected and then inaccurate first-guess covariance may mistakenly take long wave error as short wave ones and result in erroneous analysis. For the purpose of quickly minimizing the errors of long and short waves successively, a new 3D-Var data assimilation scheme, called multi-grid data assimilation scheme, is proposed in this paper. By assimilating the shipboard SST and temperature profiles data into a numerical model of China Seas, we applied this scheme in two-month data assimilation and forecast experiment which ended in a favorable result. Comparing with the traditional scheme of 3D-Var, the new scheme has higher forecast accuracy and a lower forecast Root-Mean-Square (RMS) error. Furthermore, this scheme was applied to assimilate the SST of shipboard, AVHRR Pathfinder Version 5.0 SST and temperature profiles at the same time, and a ten-month forecast experiment on sea temperature of China Seas was carried out, in which a successful forecast result was obtained. Particularly, the new scheme is demonstrated a great numerical efficiency in these analyses.
Outgassing rate analysis of a velvet cathode and a carbon fiber cathode
NASA Astrophysics Data System (ADS)
Li, An-Kun; Fan, Yu-Wei; Qian, Bao-Liang; Zhang, Zi-cheng; Xun, Tao
2017-11-01
In this paper, the outgassing rates of a carbon fiber array cathode and a polymer velvet cathode are tested and discussed. Two different measurement methods are used in the experiments. In one scheme, a method based on dynamic pressure equilibrium is used: the cathode works in the repetitive mode in a vacuum diode, a dynamic equilibrium pressure is reached when the outgassing into the chamber equals the pumping capacity of the pump, and the outgassing rate is deduced from this equilibrium pressure. In the other scheme, a method based on static pressure equilibrium is used: the cathode works in a closed vacuum chamber (a hard tube), and the outgassing rate is calculated from the pressure difference in the chamber before and after the operation of the cathode. In both schemes, the outgassing rate is analyzed from real-time pressure evolution data measured with a magnetron gauge. The outgassing rates of the carbon fiber array cathode and the velvet cathode are 7.3 ± 0.4 and 85 ± 5 neutrals/electron in the first scheme, and 9 ± 0.5 and 98 ± 7 neutrals/electron in the second scheme. Both schemes show that the outgassing rate of the carbon fiber array cathode is an order of magnitude smaller than that of the velvet cathode under similar conditions, which makes the carbon fiber array cathode a promising replacement for the velvet cathode in magnetically insulated transmission line oscillators and relativistic magnetrons.
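The static-equilibrium estimate can be sketched numerically: the total gas released follows from the pressure rise via the ideal gas law, and is normalized by the total number of emitted electrons. All input numbers below are illustrative assumptions, not values from the paper:

```python
K_B = 1.380649e-23        # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def outgassing_per_electron(delta_p_pa, volume_m3, temp_k,
                            charge_per_pulse_c, n_pulses):
    """Neutrals released per emitted electron, inferred from the pressure
    rise delta_p_pa in a closed chamber of volume volume_m3 after n_pulses
    shots, each emitting charge_per_pulse_c coulombs."""
    neutrals = delta_p_pa * volume_m3 / (K_B * temp_k)    # ideal gas law N = PV/kT
    electrons = n_pulses * charge_per_pulse_c / E_CHARGE  # total emitted electrons
    return neutrals / electrons
```

The estimate scales linearly with the measured pressure rise, so the relative uncertainty of the gauge propagates directly into the quoted outgassing rates.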
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2017-12-01
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
Multivariate optimum interpolation of surface pressure and winds over oceans
NASA Technical Reports Server (NTRS)
Bloom, S. C.
1984-01-01
The observations of surface pressure are quite sparse over oceanic areas. An effort to improve the analysis of surface pressure over oceans through the development of a multivariate surface analysis scheme which makes use of surface pressure and wind data is discussed. Although the present research used ship winds, future versions of this analysis scheme could utilize winds from additional sources, such as satellite scatterometer data.
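A generic single-step optimum-interpolation (OI) analysis of the kind underlying such multivariate schemes can be sketched as follows. This is the textbook statistical-interpolation formula, not the paper's specific pressure-wind operator:

```python
import numpy as np

def oi_analysis(xb, B, H, R, y):
    """One optimum-interpolation analysis step:
        xa = xb + B H^T (H B H^T + R)^(-1) (y - H xb),
    where xb is the background state, B its error covariance, H the
    observation operator, R the observation error covariance, y the
    observations. Cross-covariances in B spread a pressure observation's
    influence onto wind variables (the multivariate aspect)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y - H @ xb)
```

With a nonzero off-diagonal term in B, observing only the first state component also updates the second, which is how sparse ship pressure data can constrain the wind analysis.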
The Design and Analysis of the Hydraulic-pressure Seal of the Engine Box
NASA Astrophysics Data System (ADS)
Chen, Zhenya; Shen, Xingquan; Xin, Zhijie; Guo, Tingting; Liao, Kewei
2017-12-01
According to the sealing requirements of the engine casing, a three-dimensional solid model of the engine box was established using NX software. Two seal-pressing schemes were designed based on an analysis of the characteristics of the case structure: one locates the seal with two pins on one side, and the other uses a cylinder to clamp and fasten it. The reasons the former scheme has a lower cost are clarified. The forces and deformation of the former scheme were analyzed with finite element analysis software and NX, and the results show that the pressing scheme can meet the actual needs of the program. The composition and basic principles of the manual-pressure and hydraulic systems are illustrated, and the feasibility of the sealing scheme was verified by experiment, providing a reference for future hydrostatic pressure test programs.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of the node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options, with the lowest complexity and second-order discretization errors.
On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with weighted LSQ method provides accurate gradients. Defect correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme that offers low complexity, second-order discretization errors, and fast convergence.
NASA Astrophysics Data System (ADS)
Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven
2008-03-01
Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates PET acquisition mode, reconstruction method and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images, in an anthropomorphic phantom. The scheme accounts for partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogenous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch of the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer drawn ROIs, scaled tumor-background ratios (TBRs) more accurately represented actual TBRs than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
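The partial-volume correction idea can be sketched in one dimension: simulate a homogeneous lesion of the ROI's shape, blur it with the scanner PSF, take the resulting recovery coefficient, and rescale the measured SUV. This is a simplified stand-in for the paper's 3-D scheme, with an assumed Gaussian PSF:

```python
import numpy as np

def recovery_coefficient(lesion_mask, psf):
    """Fraction of the true uptake recovered after PSF blurring: a unit
    lesion is convolved with the normalized PSF and averaged over the ROI."""
    blurred = np.convolve(lesion_mask, psf / psf.sum(), mode="same")
    return blurred[lesion_mask > 0].mean()

def corrected_suv(measured_suv, lesion_mask, psf):
    """Partial-volume-corrected SUV: measured value divided by the
    recovery coefficient estimated from the CT-drawn lesion shape."""
    return measured_suv / recovery_coefficient(lesion_mask, psf)
```

For lesions small relative to the PSF width the recovery coefficient drops well below 1, so the corrected SUV is substantially larger than the measured one, consistent with the abstract's observation that the correction matters most for small lesions.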
Bian, Tianjian; Gao, Jie; Zhang, Chuang; ...
2017-12-10
In September 2012, Chinese scientists proposed a Circular Electron Positron Collider (CEPC) in China at 240 GeV center-of-mass energy for Higgs studies. The booster provides 120 GeV electron and positron beams to the CEPC collider for top-up injection at 0.1 Hz. The design of the full-energy booster ring of the CEPC is a challenge: the ejected beam energy is 120 GeV and the injected beam energy is 6 GeV. In this paper we describe two alternative schemes, the wiggler bend scheme and the normal bend scheme. For the wiggler bend scheme, we propose to operate the booster ring as a large wiggler at low energy and as a normal ring at high energy to avoid the problem of very low dipole magnet fields. For the normal bend scheme, we implement orbit correction to correct for the earth's magnetic field.
Pressure activated interconnection of micro transfer printed components
NASA Astrophysics Data System (ADS)
Prevatte, Carl; Guven, Ibrahim; Ghosal, Kanchan; Gomez, David; Moore, Tanya; Bonafede, Salvatore; Raymond, Brook; Trindade, António Jose; Fecioru, Alin; Kneeburg, David; Meitl, Matthew A.; Bower, Christopher A.
2016-05-01
Micro transfer printing and other forms of micro assembly deterministically produce heterogeneously integrated systems of miniaturized components on non-native substrates. Most micro assembled systems include electrical interconnections to the miniaturized components, typically accomplished by metal wires formed on the non-native substrate after the assembly operation. An alternative scheme establishing interconnections during the assembly operation is a cost-effective manufacturing method for producing heterogeneous microsystems, and facilitates the repair of integrated microsystems, such as displays, by ex post facto addition of components to correct defects after system-level tests. This letter describes pressure-concentrating conductor structures formed on silicon (1 0 0) wafers to establish connections to preexisting conductive traces on glass and plastic substrates during micro transfer printing with an elastomer stamp. The pressure concentrators penetrate a polymer layer to form the connection, and reflow of the polymer layer bonds the components securely to the target substrate. The experimental yield of series-connected test systems with >1000 electrical connections demonstrates the suitability of the process for manufacturing, and robustness of the test systems against exposure to thermal shock, damp heat, and mechanical flexure shows reliability of the resulting bonds.
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-07
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
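For reference, the Boys-Bernardi function counterpoise arithmetic that the proposed scheme generalizes to many-body clusters reduces, in the two-body case, to simple energy differences; a minimal sketch with placeholder energies (not computed values):

```python
def counterpoise_interaction(e_dimer_ab, e_a_in_ab, e_b_in_ab):
    """Counterpoise-corrected interaction energy: each monomer is evaluated
    in the full dimer (ghost-augmented) basis, so the basis set superposition
    error cancels in the difference."""
    return e_dimer_ab - e_a_in_ab - e_b_in_ab

def bsse_estimate(e_a_in_ab, e_a_in_a, e_b_in_ab, e_b_in_b):
    """BSSE estimate: the (artificial) energy lowering each monomer gains by
    borrowing the partner's basis functions; negative by construction."""
    return (e_a_in_ab - e_a_in_a) + (e_b_in_ab - e_b_in_b)
```

The cost of the exact counterpoise treatment grows combinatorially with the number of subunits, which is precisely what motivates the approximate site-site and second-order Valiron-Mayer corrections studied in the paper.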
NASA Astrophysics Data System (ADS)
Wang, Shoucheng; Huang, Guoqing; Wu, Xin
2018-02-01
In this paper, we survey the effect of dissipative forces, including radiation pressure, Poynting–Robertson drag, and solar wind drag, on the motion of dust grains with negligible mass that are subjected to the gravities of the Sun and Jupiter moving in circular orbits. The effect of the dissipative parameter on the locations of the five Lagrangian equilibrium points is estimated analytically. The instability of the triangular equilibrium point L4 caused by the drag forces is also shown analytically. In this case, the Jacobi constant varies with time, but its integral invariant relation still allows the application of the conventional fourth-order Runge–Kutta algorithm combined with the velocity scaling manifold correction scheme. The velocity-only correction method significantly suppresses the artificial dissipation and the rapid growth of trajectory errors seen in the uncorrected integration. The stability time of an orbit, regardless of whether it is chaotic in the conservative problem, is appreciably longer in the corrected case than in the uncorrected case when the dissipative forces are included. Although the artificial dissipation is ruled out, the drag dissipation leads to an escape of grains. Numerical evidence also demonstrates that more orbits near the triangular equilibrium point L4 escape as the integration time increases.
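The velocity-only manifold correction can be demonstrated on a conservative toy problem: after each RK4 step for a unit harmonic oscillator, the speed is rescaled so the energy integral is exactly preserved. This is a sketch of the scheme class, not the paper's restricted three-body setup:

```python
import numpy as np

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_with_velocity_scaling(y0, h, n, energy):
    """Integrate a unit harmonic oscillator y = (x, v) with RK4, applying the
    velocity-only manifold correction after each step: |v| is rescaled so
    that the energy integral (v^2 + x^2)/2 equals the target exactly."""
    f = lambda y: np.array([y[1], -y[0]])
    y = np.array(y0, float)
    for _ in range(n):
        y = rk4_step(f, y, h)
        v2_target = 2.0 * energy - y[0] ** 2
        if v2_target > 0 and y[1] != 0:
            y[1] *= np.sqrt(v2_target) / abs(y[1])  # rescale |v|, keep its sign
    return y
```

Without the rescaling, RK4 slowly dissipates energy on this problem; with it, the energy error stays at rounding level for arbitrarily long runs, which is the "suppressed artificial dissipation" effect the abstract describes.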
Boore, D.M.; Stephens, C.D.; Joyner, W.B.
2002-01-01
Residual displacements for large earthquakes can sometimes be determined from recordings on modern digital instruments, but baseline offsets of unknown origin make it difficult in many cases to do so. To recover the residual displacement, we suggest tailoring a correction scheme by studying the character of the velocity obtained by integration of the zeroth-order-corrected acceleration, and then checking whether the residual displacements are stable when the various parameters of the particular correction scheme are varied. For many seismological and engineering purposes, however, the residual displacements are of lesser importance than ground motions at periods less than about 20 sec. These ground motions are often recoverable with simple baseline correction and low-cut filtering. In this largely empirical study, we illustrate the consequences of various correction schemes, drawing primarily from digital recordings of the 1999 Hector Mine, California, earthquake. We show that with simple processing the displacement waveforms for this event are very similar for stations separated by as much as 20 km. We also show that a strong pulse on the transverse component was radiated from the Hector Mine earthquake and propagated with little distortion to distances exceeding 170 km; this pulse leads to large response spectral amplitudes around 10 sec.
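The zeroth-order baseline correction mentioned above, the starting point of the schemes compared in the study, amounts to subtracting the pre-event mean of the acceleration before double integration. A generic sketch (not the authors' processing code):

```python
import numpy as np

def baseline_correct_and_integrate(acc, dt, pre_event_samples):
    """Zeroth-order baseline correction: subtract the pre-event mean of the
    acceleration, then integrate (trapezoidal rule) to velocity and
    displacement. Residual baseline offsets that survive this step are what
    the tailored correction schemes in the study are designed to remove."""
    acc = acc - acc[:pre_event_samples].mean()
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)))
    disp = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt)))
    return vel, disp
```

A constant offset in acceleration, if left uncorrected, grows linearly in velocity and quadratically in displacement, which is why even tiny baseline errors dominate the long-period displacement record.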
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Z.; Ching, W.Y.
Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy-dependent and k-dependent GW correction scheme to the orthogonalized linear-combination-of-atomic-orbitals-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Projection methods for incompressible flow problems with WENO finite difference schemes
NASA Astrophysics Data System (ADS)
de Frutos, Javier; John, Volker; Novo, Julia
2016-03-01
Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes for the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, become also obvious.
NASA Technical Reports Server (NTRS)
Schlesinger, R. E.
1985-01-01
The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.
NASA Astrophysics Data System (ADS)
Whalen, Daniel; Norman, Michael L.
2006-02-01
Radiation hydrodynamical transport of ionization fronts (I-fronts) in the next generation of cosmological reionization simulations holds the promise of predicting UV escape fractions from first principles as well as investigating the role of photoionization in feedback processes and structure formation. We present a multistep integration scheme for radiative transfer and hydrodynamics for accurate propagation of I-fronts and ionized flows from a point source in cosmological simulations. The algorithm is a photon-conserving method that correctly tracks the position of I-fronts at much lower resolutions than nonconservative techniques. The method applies direct hierarchical updates to the ionic species, bypassing the need for the costly matrix solutions required by implicit methods while retaining sufficient accuracy to capture the true evolution of the fronts. We review the physics of ionization fronts in power-law density gradients, whose analytical solutions provide excellent validation tests for radiation coupling schemes. The advantages and potential drawbacks of direct and implicit schemes are also considered, with particular focus on problem time-stepping, which if not properly implemented can lead to morphologically plausible I-front behavior that nonetheless departs from theory. We also examine the effect of radiation pressure from very luminous central sources on the evolution of I-fronts and flows.
Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations
NASA Astrophysics Data System (ADS)
Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Rieben, R.; Tomov, V.
2018-03-01
We present a new predictor-corrector approach to enforcing local maximum principles in piecewise-linear finite element schemes for the compressible Euler equations. The new element-based limiting strategy is suitable for continuous and discontinuous Galerkin methods alike. In contrast to synchronized limiting techniques for systems of conservation laws, we constrain the density, momentum, and total energy in a sequential manner which guarantees positivity preservation for the pressure and internal energy. After the density limiting step, the total energy and momentum gradients are adjusted to incorporate the irreversible effect of density changes. Antidiffusive corrections to bounds-compatible low-order approximations are limited to satisfy inequality constraints for the specific total and kinetic energy. An accuracy-preserving smoothness indicator is introduced to gradually adjust lower bounds for the element-based correction factors. The employed smoothness criterion is based on a Hessian determinant test for the density. A numerical study is performed for test problems with smooth and discontinuous solutions.
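The bounds-enforcing limiting that such schemes rely on can be sketched for a single scalar on a 1D periodic mesh. This is a generic Zalesak-style flux-corrected-transport limiter, not the paper's element-based sequential algorithm; the 1D setting and names are assumptions.

```python
import numpy as np

def fct_limit(u_low, f, u_min, u_max):
    """Zalesak-style flux limiting: scale antidiffusive face fluxes f[i]
    (flux across face i+1/2, from cell i into cell i+1) so the corrected
    solution stays within [u_min, u_max] in every cell. 1D periodic mesh."""
    fm = np.roll(f, 1)                      # flux across face i-1/2
    # sums of incoming (P+) and outgoing (P-) antidiffusive contributions
    P_plus = np.maximum(fm, 0.0) - np.minimum(f, 0.0)
    P_minus = np.maximum(f, 0.0) - np.minimum(fm, 0.0)
    Q_plus = u_max - u_low                  # room to grow
    Q_minus = u_low - u_min                 # room to shrink
    R_plus = np.where(P_plus > 0,
                      np.minimum(1.0, Q_plus / np.maximum(P_plus, 1e-300)), 1.0)
    R_minus = np.where(P_minus > 0,
                       np.minimum(1.0, Q_minus / np.maximum(P_minus, 1e-300)), 1.0)
    # face i+1/2 connects cells i and i+1: take the stricter of the two sides
    alpha = np.where(f >= 0,
                     np.minimum(np.roll(R_plus, -1), R_minus),
                     np.minimum(R_plus, np.roll(R_minus, -1)))
    f_lim = alpha * f
    return u_low + np.roll(f_lim, 1) - f_lim
```

The sequential strategy of the paper applies constraints of this kind to density first, then to the adjusted momentum and energy, so that pressure and internal energy remain positive.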
Electronic properties of copper aluminate examined by three theoretical approaches
NASA Astrophysics Data System (ADS)
Christensen, Niels; Svane, Axel
2010-03-01
Electronic properties of 3R-CuAlO2 are derived vs. pressure from ab initio band structure calculations within the local-density approximation (LDA), the LDA+U scheme, as well as the quasiparticle self-consistent GW approximation (QSGW, van Schilfgaarde, Kotani, and Faleev). The LDA underestimates the gap and places the Cu-3d states at too high energies. An effective U value, 8.2 eV, can be selected so that LDA+U lowers the 3d states to match XPS data and such that the lowest gap agrees rather well with optical absorption experiments. The electrical field gradient (EFG) on Cu is in error when calculated within the LDA. The agreement with experiment can be improved by LDA+U, but a larger U, 13.5 eV, is needed for full adjustment. QSGW yields the correct Cu-EFG and, when electron-hole correlations are included, also correct band gaps. The QSGW and LDA band gap deformation potential values differ significantly.
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively identify areas where no reliable motion vectors are available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
Numerical experiments on the accuracy of ENO and modified ENO schemes
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1990-01-01
Further numerical experiments are made assessing an accuracy degeneracy phenomenon. A modified essentially non-oscillatory (ENO) scheme is proposed, which recovers the correct order of accuracy for all the test problems with smooth initial conditions and gives results comparable with the original ENO schemes for discontinuous problems.
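The adaptive-stencil mechanism at the heart of ENO schemes can be sketched as follows: starting from the point of interest, the stencil is grown one point at a time toward the side with the smaller divided difference, so it avoids crossing discontinuities. This is the generic textbook construction, not the specific modification of the paper; names are assumptions.

```python
import numpy as np

def divided_diff(x, f, lo, hi):
    """Newton divided difference f[x_lo, ..., x_hi], computed recursively."""
    if lo == hi:
        return f[lo]
    return (divided_diff(x, f, lo + 1, hi)
            - divided_diff(x, f, lo, hi - 1)) / (x[hi] - x[lo])

def eno_stencil(x, f, i, k):
    """Return the k-point ENO stencil containing point i: at each step,
    extend the current stencil left or right, whichever keeps the
    divided difference (a smoothness measure) smaller."""
    left = i
    for m in range(1, k):
        dd_left = abs(divided_diff(x, f, left - 1, left + m - 1))
        dd_right = abs(divided_diff(x, f, left, left + m))
        if dd_left < dd_right:
            left -= 1
    return list(range(left, left + k))
```

Near a jump the selected stencil stays on the smooth side, which is what suppresses spurious oscillations.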
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
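The PPM inner modulation can be sketched in a few lines: each group of log2(M) coded bits selects one of M time slots in which the pulse is placed, and a hard-decision Poisson-channel receiver picks the slot with the largest photon count. This is a generic PPM sketch; the MSB-first slot convention and function names are assumptions, not details of the scheme described above.

```python
import numpy as np

def bits_to_ppm(bits, M):
    """Map coded bits to M-ary PPM frames: one pulsed slot per symbol."""
    m = int(np.log2(M))
    groups = np.asarray(bits).reshape(-1, m)
    idx = groups @ (1 << np.arange(m)[::-1])   # MSB-first slot index
    frames = np.zeros((len(idx), M), dtype=int)
    frames[np.arange(len(idx)), idx] = 1
    return frames

def ppm_detect(counts):
    """Hard-decision detection on a Poisson channel: the maximum-count
    slot is the maximum-likelihood pulsed slot (signal adds to background)."""
    return np.argmax(counts, axis=1)
```

In the actual LDPC-PPM receiver, soft slot likelihoods rather than hard decisions would be passed to the iterative decoder.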
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Bias correction of daily satellite precipitation data using genetic algorithm
NASA Astrophysics Data System (ADS)
Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.
2018-05-01
Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process is intended to reduce the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results are evaluated across different seasons and elevation levels. The experiments reveal that the scheme robustly reduces the bias in variance (around 100% reduction) and leads to reductions in the first- and second-quantile biases. However, the bias in the third quantile is reduced only during dry months. Across elevation levels, the performance of the bias correction differs significantly only in the skewness indicator.
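The nonlinear power transformation at the core of such a scheme maps a satellite value x to a*x**b, with (a, b) chosen so the transformed series reproduces the observed moments. A minimal sketch, using a deterministic grid search in place of the paper's genetic algorithm (the search strategy and the mean/std objective here are simplifications):

```python
import numpy as np

def fit_power_transform(sat, obs, b_grid=np.linspace(0.2, 3.0, 281)):
    """Fit y = a * x**b so the transformed satellite series matches the
    observed mean exactly and the observed std as closely as possible.
    Grid search stands in for the genetic algorithm of the paper."""
    best, best_err = (1.0, 1.0), np.inf
    for b in b_grid:
        xb = sat ** b
        a = obs.mean() / xb.mean()           # match the mean exactly
        err = abs(a * xb.std() - obs.std())  # then minimize the std mismatch
        if err < best_err:
            best, best_err = (a, b), err
    return best
```

A genetic algorithm explores the same (a, b) space but with crossover and mutation, which scales better when the objective involves more moments or quantiles.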
NASA Technical Reports Server (NTRS)
Schuster, David M.; Panda, Jayanta; Ross, James C.; Roozeboom, Nettie H.; Burnside, Nathan J.; Ngo, Christina L.; Kumagai, Hiro; Sellers, Marvin; Powell, Jessica M.; Sekula, Martin K.;
2016-01-01
This NESC assessment examined the accuracy of estimating buffet loads on in-line launch vehicles without booster attachments using sparse unsteady pressure measurements. The buffet loads computed using sparse sensor data were compared with estimates derived using measurements with much higher spatial resolution. The current method for estimating launch vehicle buffet loads is through wind tunnel testing of models with approximately 400 unsteady pressure transducers. Even with this relatively large number of sensors, the coverage can be insufficient to provide reliable integrated unsteady loads on vehicles. In general, sparse sensor spacing requires the use of coherence-length-based corrections in the azimuthal and axial directions to integrate the unsteady pressures and obtain reasonable estimates of the buffet loads. Coherence corrections have been used to estimate buffet loads for a variety of launch vehicles with the assumption that the methodology yields reasonably conservative loads. For the Space Launch System (SLS), the first estimates of buffet loads exceeded the limits of the vehicle structure, so additional tests with higher sensor density were conducted to better define the buffet loads and possibly avoid expensive modifications to the vehicle design. Without the additional tests and improvements to the coherence-length analysis methods, there would have been significant impacts to the vehicle weight, cost, and schedule. If the load estimates turn out to be too low, there is significant risk of structural failure of the vehicle. This assessment used a combination of unsteady pressure-sensitive paint (uPSP), unsteady pressure transducers, and a dynamic force and moment balance to investigate the integration schemes used with limited unsteady pressure data by comparing them with direct integration of extremely dense fluctuating pressure measurements.
A further outcome of the assessment was an evaluation of the potential of the emerging uPSP technique in a production test environment for future launch vehicles. The results show that modifications to the current technique can improve the accuracy of buffet estimates. More importantly, the uPSP worked remarkably well and, with improvements to the frequency response, sensitivity, and productivity, will provide an enhanced method for measuring wind tunnel buffet forcing functions (BFFs).
Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f
NASA Astrophysics Data System (ADS)
Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi
2018-03-01
We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h in various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the MS-bar scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the MS-bar renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h → 4f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and are ready for application.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a three-dimensional, pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second- and fourth-order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties, such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve, agreed reasonably well with the measurements. A parametric study on the effects of grid resolution, turbulence model, inlet boundary condition, and difference scheme for the convective terms was performed. The results showed that grid resolution had a strong influence on the accuracy of the base flowfield prediction.
Incompressible spectral-element method: Derivation of equations
NASA Technical Reports Server (NTRS)
Deanna, Russell G.
1993-01-01
A fractional-step splitting scheme breaks the full Navier-Stokes equations into explicit and implicit portions amenable to the calculus of variations. Beginning with the functional forms of the Poisson and Helmholtz equations, we substitute finite expansion series for the dependent variables and derive the matrix equations for the unknown expansion coefficients. This method employs a new splitting scheme which differs from conventional three-step (nonlinear, pressure, viscous) schemes. The nonlinear step appears in the conventional, explicit manner; the difference occurs in the pressure step. Instead of solving for the pressure gradient using the nonlinear velocity, we add the viscous portion of the Navier-Stokes equation from the previous time step to the velocity before solving for the pressure gradient. By combining this 'predicted' pressure gradient with the nonlinear velocity in an explicit term, and the Crank-Nicolson method for the viscous terms, we develop a Helmholtz equation for the final velocity.
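In symbols, a generic version of the splitting just described reads as follows (the notation is assumed, not taken from the paper: ν is the kinematic viscosity, N(u) the nonlinear advection term, and the pressure step folds in the previous viscous term to form the 'predicted' gradient):

```latex
\begin{aligned}
&\text{1. Nonlinear step:} && u^{*} = u^{n} - \Delta t\, N(u^{n}),\\
&\text{2. Pressure step:} && \nabla^{2} p^{n+1} = \frac{1}{\Delta t}\,
  \nabla\cdot\bigl(u^{*} + \Delta t\,\nu \nabla^{2} u^{n}\bigr), \qquad
  u^{**} = u^{*} - \Delta t\, \nabla p^{n+1},\\
&\text{3. Viscous step:} &&
  \Bigl(I - \tfrac{\nu \Delta t}{2}\,\nabla^{2}\Bigr)\, u^{n+1}
  = u^{**} + \tfrac{\nu \Delta t}{2}\,\nabla^{2} u^{n}.
\end{aligned}
```

The Crank-Nicolson treatment in step 3 is exactly what turns the final implicit solve into a Helmholtz equation for u^{n+1}.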
NASA Astrophysics Data System (ADS)
Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo
2017-01-01
We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.
Analytical and numerical analysis of frictional damage in quasi brittle materials
NASA Astrophysics Data System (ADS)
Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.
2016-07-01
Frictional sliding and crack growth are two main dissipation processes in quasi brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation while the crack growth induces a material damage. The main difficulty of modeling is to consider the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed. But there are so far no analytical solutions even for simple loading paths for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupling nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient to guarantee a systematic numerical convergence.
Accurate thermoelastic tensor and acoustic velocities of NaCl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids of any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
NASA Astrophysics Data System (ADS)
Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun
2015-07-01
An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between a low-frequency image and the defocused image. The NSCT decomposes an image so that detail information resides at different scales and in different directions in the bandpass subband coefficients. In order to correctly select the prefused bandpass directional coefficients, we introduce a multiscale curvature measure, which not only inherits the advantages of windows of different sizes but also correctly recognizes the focused pixels from the source images; we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.
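As a point of reference for such fusion rules, the standard baseline against which curvature-based selection competes is the 'choose-max' rule for bandpass coefficients plus averaging for the lowpass band. A minimal sketch (a generic baseline, not the paper's method; names are assumptions):

```python
import numpy as np

def fuse_bandpass(c1, c2):
    """Baseline 'choose-max' fusion of two bandpass subbands: at each
    position keep the coefficient of larger magnitude, on the premise
    that in-focus detail produces stronger bandpass responses."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_lowpass(l1, l2):
    """Simple averaging of the lowpass approximations (the paper instead
    selects lowpass coefficients via a defocus-model-based principle)."""
    return 0.5 * (l1 + l2)
```

Window-based measures such as the multiscale curvature of the paper refine the per-pixel choose-max decision by pooling focus evidence over neighborhoods of several sizes.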
An Experimental Investigation of Unsteady Surface Pressure on an Airfoil in Turbulence
NASA Technical Reports Server (NTRS)
Mish, Patrick F.; Devenport, William J.
2003-01-01
Measurements of fluctuating surface pressure were made on a NACA 0015 airfoil immersed in grid-generated turbulence. The airfoil model has a 2-ft chord and spans the 6-ft Virginia Tech Stability Wind Tunnel test section. Two grids were used to investigate the effects of turbulence length scale on the surface pressure response: a large grid producing turbulence with an integral scale 13% of the chord, and a smaller grid producing turbulence with an integral scale 1.3% of the chord. Measurements were performed at angles of attack, alpha, from 0 to 20 deg. An array of microphones mounted subsurface was used to measure the unsteady surface pressure. The goal of this measurement was to characterize the effects of angle of attack on the inviscid response. Lift spectra calculated from pressure measurements at each angle of attack revealed two distinct interaction regions: for reduced frequencies omega_r = omega b / U_infinity < 10, a reduction in unsteady lift of up to 7 decibels (dB) occurs as the angle of attack is increased, while an increase occurs for omega_r > 10. The reduction in unsteady lift at low omega_r with increasing angle of attack is a result that has never before been shown either experimentally or theoretically. The source of the reduction in lift spectral level appears to be closely related to the distortion of inflow turbulence, based on analysis of surface-pressure spanwise correlation length scales. Furthermore, while the distortion of the inflow appears to be critical in this experiment, this effect does not seem to be significant in flows with a larger integral scale (relative to the chord), based on the previous experimental work of McKeough, suggesting that the airfoil's size relative to the inflow integral scale is critical in defining how the airfoil will respond under variation of angle of attack.
A prediction scheme is developed that correctly accounts for the effects of distortion when the inflow integral scale is small relative to the airfoil chord. This scheme utilizes Rapid Distortion Theory to account for the distortion of the inflow with the distortion field modeled using a circular cylinder.
Quantitative NO-LIF imaging in high-pressure flames
NASA Astrophysics Data System (ADS)
Bessler, W. G.; Schulz, C.; Lee, T.; Shin, D.-I.; Hofmann, M.; Jeffries, J. B.; Wolfrum, J.; Hanson, R. K.
2002-07-01
Planar laser-induced fluorescence (PLIF) images of NO concentration are reported in premixed laminar flames from 1 to 60 bar, exciting the A-X(0,0) band. The influence of O2 interference and gas composition, the variation with local temperature, and the effect of laser and signal attenuation by UV light absorption are investigated. Despite the choice of a NO excitation and detection scheme with minimal O2-LIF contribution, this interference produces errors of up to 25% in a slightly lean 60 bar flame. The overall dependence of the inferred NO number density on temperature in the relevant range (1200-2500 K) is low (<±15%) because different effects cancel. The attenuation of laser and signal light by the combustion products CO2 and H2O is frequently neglected, yet such absorption yields errors of up to 40% in our experiment despite the small scale (8 mm flame diameter). Understanding the dynamic range of each of these corrections provides guidance for minimizing errors in single-shot imaging experiments at high pressure.
NASA Technical Reports Server (NTRS)
Geisenheyner, Robert M.; Berdysz, Joseph J.
1947-01-01
An investigation to determine the performance and operational characteristics of the TG-100A gas turbine-propeller engine was conducted in the Cleveland altitude wind tunnel. As part of this investigation, the combustion-chamber performance was determined at pressure altitudes from 5000 to 35,000 feet, compressor-inlet ram-pressure ratios of 1.00 and 1.09, and engine speeds from 8000 to 13,000 rpm. Combustion-chamber performance is presented as a function of corrected engine speed and corrected horsepower. For the range of corrected engine speeds investigated, over-all total-pressure-loss ratio, cycle efficiency, and the fractional loss in cycle efficiency resulting from pressure losses in the combustion chambers were unaffected by a change in altitude or compressor-inlet ram-pressure ratio. The scatter of combustion-efficiency data tended to obscure any effect of altitude or ram-pressure ratio. For the range of corrected horsepowers investigated, the total-pressure-loss ratio and the fractional loss in cycle efficiency resulting from pressure losses in the combustion chambers decreased with an increase in corrected horsepower at a constant corrected engine speed. The combustion efficiency remained constant for the range of corrected horsepowers investigated at all corrected engine speeds.
Security and Correctness Analysis on Privacy-Preserving k-Means Clustering Schemes
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
Due to the fast development of the Internet and related IT technologies, it has become increasingly easy to access large amounts of data. k-means clustering is a powerful and frequently used technique in data mining, and many research papers on privacy-preserving k-means clustering have been published. In this paper, we analyze existing privacy-preserving k-means clustering schemes based on cryptographic techniques. We show that these schemes can cause privacy breaches and cannot output correct results, due to faults in their protocol constructions. Furthermore, we analyze our own proposal as an option to remedy these problems, although it still leaks intermediate information during the computation.
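For reference, the computation that such privacy-preserving protocols carry out jointly over partitioned data is the ordinary Lloyd iteration of k-means. A plain, non-private sketch:

```python
import numpy as np

def kmeans_step(X, centers):
    """One Lloyd iteration: assign each point to its nearest center,
    then recompute each center as the mean of its assigned points.
    Privacy-preserving variants compute exactly this on shared or
    encrypted data without revealing the individual points."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    new_centers = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
        for k in range(len(centers))
    ])
    return new_centers, labels
```

The protocol faults analyzed in the paper concern how the assignment and averaging steps are shared between parties, not the iteration itself.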
40 CFR 146.64 - Corrective action for wells in the area of review.
Code of Federal Regulations, 2012 CFR
2012-07-01
... requiring corrective action other than pressure limitations shall include a compliance schedule requiring... require observance of appropriate pressure limitations under paragraph (d)(3) until all other corrective... have been taken. (3) The Director may require pressure limitations in lieu of plugging. If pressure...
NASA Technical Reports Server (NTRS)
Geisenheyner, Robert M.; Berdysz, Joseph J.
1948-01-01
An investigation to determine the performance and operational characteristics of an axial-flow gas turbine-propeller engine was conducted in the Cleveland altitude wind tunnel. As part of this investigation, the combustion-chamber performance was determined at pressure altitudes from 5000 to 35,000 feet, compressor-inlet ram-pressure ratios of 1.00 and 1.09, and engine speeds from 8000 to 13,000 rpm. Combustion-chamber performance is presented as a function of corrected engine speed and corrected horsepower. For the range of corrected engine speeds investigated, overall total-pressure-loss ratio, cycle efficiency, and the fractional loss in cycle efficiency resulting from pressure losses in the combustion chambers were unaffected by a change in altitude or compressor-inlet ram-pressure ratio. For the range of corrected horsepowers investigated, the total-pressure-loss ratio and the fractional loss in cycle efficiency resulting from pressure losses in the combustion chambers decreased with an increase in corrected horsepower at a constant corrected engine speed. The combustion efficiency remained constant for the range of corrected horsepowers investigated at all corrected engine speeds.
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-01-01
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. 
While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol−1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning. PMID:24320250
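The flavor of such analytical corrections can be illustrated by their leading term (illustrative only; the actual scheme of this work decomposes the correction into several PB-derived contributions plus a discrete-solvent term): for a ligand of net charge Q_L in a cubic box of edge L filled with solvent of relative permittivity ε_s, removing the spurious interaction of the charge with its periodic images contributes, to leading order, a Wigner-lattice term

```latex
\Delta G_{\text{net}} \;\approx\; -\,\xi_{\text{EW}}\,
\frac{Q_L^{2}}{8\pi \varepsilon_0 \varepsilon_s L},
\qquad \xi_{\text{EW}} \approx -2.837297 ,
```

where ξ_EW is the Wigner constant of the simple cubic lattice. This term decays only as 1/L, which is why raw charging free energies remain box-size dependent, and it must be supplemented by the protein-shape and solvent terms that the PB calculations provide.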
Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H
2013-11-14
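The 1/L box-size dependence that the analytical scheme exploits can be illustrated with the leading-order lattice-sum self-interaction of a net charge in a cubic box. This is a textbook Wigner-lattice estimate only, not the paper's full PB-based correction, and the solvent permittivity value is an assumed illustration:

```python
XI_EW = -2.837297   # Wigner constant for a cubic lattice-sum box
K_COUL = 138.935    # q^2/(4*pi*eps0) in kJ mol^-1 nm e^-2

def periodicity_self_energy(q_e, box_nm, eps_s=78.4):
    """Leading-order self-interaction (kJ/mol) of a net charge q_e (in e)
    with its periodic images in a cubic box of edge box_nm, screened by an
    assumed solvent permittivity eps_s; vanishes as the box grows."""
    return XI_EW * K_COUL * q_e ** 2 / (2.0 * eps_s * box_nm)

dg_small = periodicity_self_energy(1.0, 7.42)    # smallest box edge in the study
dg_large = periodicity_self_energy(1.0, 11.02)   # largest box edge in the study
```

In high-permittivity solvent this term alone is small; the much larger effects reported above arise because the low-dielectric protein cavity undermines the solvent screening, which is what the PB-based schemes capture.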
NASA Astrophysics Data System (ADS)
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-11-01
Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.
Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin
2005-03-01
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c0 2^(-c1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like Cameraman, at low rates of around 0.15 bpp.
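The prune half of such an R-D prune-and-join scheme can be sketched with a Lagrangian cost J = D + lambda*R on a binary tree over a 1-D signal. The degree-0 segment models and the bit counts below are illustrative assumptions, and the join step and optimal bit allocation of the paper are omitted:

```python
import numpy as np

def fit_segment(seg):
    """Degree-0 polynomial model of a segment; distortion is the SSE."""
    c = seg.mean()
    return c, float(((seg - c) ** 2).sum())

def prune(x, lo, hi, lam, model_bits=8):
    """R-D optimal pruned binary tree over x[lo:hi], minimizing D + lam*R.
    Returns (cost, leaves) with leaves as (lo, hi, model) tuples."""
    c, d = fit_segment(x[lo:hi])
    leaf_cost = d + lam * model_bits
    if hi - lo <= 2:                      # too short to split further
        return leaf_cost, [(lo, hi, c)]
    mid = (lo + hi) // 2
    cl, ll = prune(x, lo, mid, lam, model_bits)
    cr, lr = prune(x, mid, hi, lam, model_bits)
    split_cost = cl + cr + lam            # one extra bit flags the split
    if leaf_cost <= split_cost:           # prune: keep the parent as a leaf
        return leaf_cost, [(lo, hi, c)]
    return split_cost, ll + lr

x = np.array([0.0] * 8 + [1.0] * 8)       # a piecewise-constant toy signal
cost, leaves = prune(x, 0, len(x), lam=0.1)
```

On this toy signal the pruning recovers exactly the two constant pieces.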
Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Brown, James W.; Evans, Robert H.
1988-01-01
The radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols has been determined with an exact multiple scattering code to improve the analysis of Nimbus-7 CZCS imagery. It is shown that the single scattering approximation normally used to compute this radiance can result in errors of up to 5 percent for small and moderate solar zenith angles. A scheme to include the effect of variations in the surface pressure in the exact computation of the Rayleigh radiance is discussed. The results of an application of these computations to CZCS imagery suggest that accurate atmospheric corrections can be obtained for solar zenith angles at least as large as 65 deg.
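The surface-pressure effect enters through the Rayleigh optical thickness, which to a good approximation scales linearly with surface pressure. This standard relation (with an illustrative optical-thickness value, not one taken from the paper) is simply:

```python
def rayleigh_tau(tau_std, p_hpa, p_std=1013.25):
    """Rayleigh optical thickness at surface pressure p_hpa (hPa), scaled
    linearly from its standard-atmosphere value tau_std."""
    return tau_std * p_hpa / p_std

tau_hi = rayleigh_tau(0.0973, 1030.0)   # 0.0973 is an illustrative 550 nm value
tau_lo = rayleigh_tau(0.0973, 990.0)
```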
A Navier-Stokes Solution of Hull-Ring Wing-Thruster Interaction
NASA Technical Reports Server (NTRS)
Yang, C.-I.; Hartwich, P.; Sundaram, P.
1991-01-01
Navier-Stokes simulations of high Reynolds number flow around an axisymmetric body supported in a water tunnel were made. The numerical method is based on a finite-differencing high resolution second-order accurate implicit upwind scheme. Four different configurations were investigated: (1) the bare body; (2) the body with an operating propeller; (3) the body with a ring wing; and (4) the body with a ring wing and an operating propeller. Pressure and velocity components near the stern region were obtained computationally and are shown to compare favorably with the experimental data. The method correctly predicts the existence and extent of stern flow separation for the bare body and the absence of flow separation for the three other configurations with ring wing and/or propeller.
NASA Technical Reports Server (NTRS)
Swanson, R. Charles; Radespiel, Rolf; Mccormick, V. Edward
1989-01-01
The two-dimensional (2-D) and three-dimensional Navier-Stokes equations are solved for flow over a NAE CAST-10 airfoil model. Recently developed finite-volume codes that apply a multistage time stepping scheme in conjunction with steady state acceleration techniques are used to solve the equations. Two-dimensional results are shown for flow conditions uncorrected and corrected for wind tunnel wall interference effects. Predicted surface pressures from 3-D simulations are compared with those from 2-D calculations. The focus of the 3-D computations is the influence of the sidewall boundary layers. Topological features of the 3-D flow fields are indicated. Lift and drag results are compared with experimental measurements.
NASA Technical Reports Server (NTRS)
Cooper, Clayton S.; Laurendeau, Normand M.; Hicks, Yolanda R. (Technical Monitor)
2000-01-01
Lean direct-injection (LDI) spray flames offer the possibility of reducing NOx emissions from gas turbines by rapid mixing of the liquid fuel and air so as to drive the flame structure toward partially-premixed conditions. We consider the technical approaches required to utilize laser-induced fluorescence methods for quantitatively measuring NO concentrations in high-pressure LDI spray flames. In the progression from atmospheric to high-pressure measurements, the LIF method requires a shift from the saturated to the linear regime of fluorescence measurements. As such, we discuss quantitative, spatially resolved laser-saturated fluorescence (LSF), linear laser-induced fluorescence (LIF), and planar laser-induced fluorescence (PLIF) measurements of NO concentration in LDI spray flames. Spatially-resolved LIF measurements of NO concentration (ppm) are reported for preheated, LDI spray flames at pressures of two to five atmospheres. The spray is produced by a hollow-cone, pressure-atomized nozzle supplied with liquid heptane. NO is excited via the Q2(26.5) transition of the γ(0,0) band. Detection is performed in a two nanometer region centered on the γ(0,1) band. A complete scheme is developed by which quantitative NO concentrations in high-pressure LDI spray flames can be measured by applying linear LIF. NO is doped into the reactants and convected through the flame with no apparent destruction, thus allowing a NO fluorescence calibration to be taken inside the flame environment. The in-situ calibration scheme is validated by comparisons to a reference flame. Quantitative NO profiles are presented and analyzed so as to better understand the operation of lean-direct injectors for gas turbine combustors. Moreover, parametric studies are provided for variations in pressure, air-preheat temperature, and equivalence ratio. Similar parametric studies are performed for lean, premixed-prevaporized flames to permit comparisons to those for LDI flames. 
Finally, PLIF is expanded to high pressure in an effort to quantify the detected fluorescence image for LDI flames. Success is achieved by correcting the PLIF calibration via a single-point LIF measurement. This procedure removes the influence of any preferential background that occurs in the PLIF detection window. In general, both the LIF and PLIF measurements verify that the LDI strategy could be used to reduce NOx emissions in future gas turbine combustors.
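The single-point correction of the PLIF calibration can be sketched as rescaling the raw fluorescence image so that one pixel matches the quantitative LIF value measured there; the arrays and values below are hypothetical:

```python
import numpy as np

def calibrate_plif(image, point_ij, no_ppm_at_point):
    """Rescale a raw PLIF image (arbitrary counts) so that the pixel at
    point_ij agrees with a quantitative single-point LIF measurement (ppm)."""
    scale = no_ppm_at_point / image[point_ij]
    return image * scale

raw = np.array([[2.0, 4.0], [8.0, 16.0]])     # made-up fluorescence counts
ppm_map = calibrate_plif(raw, (0, 1), 20.0)   # LIF gives 20 ppm at that pixel
```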
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
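The structure these schemes exploit is visible directly in the Kraus operators of the amplitude damping channel: the damping operator K1 maps |1> to |0> with probability gamma, so a damping event can be flagged (detected) rather than corrected. A minimal numerical check:

```python
import numpy as np

def damping_kraus(gamma):
    """Kraus operators of the single-qubit amplitude damping channel."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])  # no-decay branch
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])        # |1> decays to |0>
    return K0, K1

gamma = 0.1
K0, K1 = damping_kraus(gamma)
completeness = K0.conj().T @ K0 + K1.conj().T @ K1   # should be the identity
one = np.array([0.0, 1.0])
p_damp = one @ (K1.conj().T @ K1) @ one              # damping probability of |1>
```

Note also that K0 perturbs |1> only at order gamma, which is the lower-rate effective phase error on undamped qubits mentioned above.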
NASA Technical Reports Server (NTRS)
Vermote, E.; Elsaleous, N.; Kaufman, Y. J.; Dutton, E.
1994-01-01
An operational stratospheric correction scheme used after the Mount Pinatubo (Philippines) eruption (June 1991) is presented. The stratospheric aerosol distribution is assumed to vary only with latitude. Every 9 days the latitudinal distribution of the optical thickness is computed by inverting radiances observed in the NOAA AVHRR channel 1 (0.63 micrometers) and channel 2 (0.83 micrometers) over the Pacific Ocean. This radiance data set is used to check the validity of the model used for inversion by checking the consistency of the optical thickness deduced from each channel, as well as the optical thickness deduced from different scattering angles. Using the optical thickness profile previously computed and a radiative transfer code assuming a Lambertian boundary condition, each pixel of channels 1 and 2 is corrected prior to computation of the NDVI (Normalized Difference Vegetation Index). Comparison between corrected and uncorrected NDVI composites, and composites from years prior to the Pinatubo eruption (1989 to 1990), shows the necessity and the accuracy of the operational correction scheme.
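The NDVI computed from the corrected channels follows the standard definition; a sketch with illustrative reflectances:

```python
def ndvi(ch1, ch2):
    """Normalized Difference Vegetation Index from AVHRR channel 1
    (visible, 0.63 um) and channel 2 (near-infrared, 0.83 um) reflectances."""
    return (ch2 - ch1) / (ch2 + ch1)

v = ndvi(0.05, 0.30)   # illustrative vegetated-surface reflectances
```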
NASA Technical Reports Server (NTRS)
Vila, Daniel; deGoncalves, Luis Gustavo; Toll, David L.; Rozante, Jose Roberto
2008-01-01
This paper describes a comprehensive assessment of a new high-resolution, high-quality gauge-satellite-based analysis of daily precipitation over continental South America during 2004. The methodology is based on a combination of additive and multiplicative bias correction schemes chosen to yield the lowest bias when compared with the observed values. Inter-comparison and cross-validation tests have been carried out for the control algorithm (the TMPA real-time algorithm) and different merging schemes: additive bias correction (ADD), ratio bias correction (RAT) and the TMPA research version, for months belonging to different seasons and for different network densities. All compared merging schemes produce better results than the control algorithm, but when finer temporal (daily) and spatial scale (regional networks) gauge datasets are included in the analysis, the improvement is remarkable. The Combined Scheme (CoSch) consistently presents the best performance among the five techniques. This is also true when a degraded daily gauge network is used instead of the full dataset. The technique appears to be a suitable tool for producing real-time, high-resolution, high-quality gauge-satellite-based analyses of daily precipitation over land in regional domains.
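The two bias-correction ideas can be sketched in their simplest, domain-mean form; the paper's schemes operate on gridded local biases, and the arrays below are made up:

```python
import numpy as np

def additive_correction(sat_field, sat_at_gauges, gauges):
    """ADD: shift the satellite field by the mean gauge-minus-satellite bias."""
    return sat_field + (gauges.mean() - sat_at_gauges.mean())

def ratio_correction(sat_field, sat_at_gauges, gauges):
    """RAT: rescale the satellite field by the mean gauge/satellite ratio."""
    return sat_field * (gauges.mean() / sat_at_gauges.mean())

sat = np.array([2.0, 4.0, 6.0])      # satellite estimates at the gauge sites
obs = np.array([3.0, 5.0, 7.0])      # collocated gauge observations
add = additive_correction(sat, sat, obs)
rat = ratio_correction(sat, sat, obs)
```

Either correction makes the mean of the adjusted field match the gauge mean at the gauge sites, which is the zero-bias condition the paper targets.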
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svane, A.; Trygg, J.; Johansson, B.
1997-09-01
Electronic-structure calculations of elemental praseodymium are presented. Several approximations are used to describe the Pr f electrons. It is found that the low-pressure, trivalent phase is well described using either the self-interaction corrected (SIC) local-spin-density (LSD) approximation or the generalized-gradient approximation (GGA) with spin and orbital polarization (OP). In the SIC-LSD approach the Pr f electrons are treated explicitly as localized with a localization energy given by the self-interaction of the f orbital. In the GGA+OP scheme the f-electron localization is described by the onset of spin and orbital polarization, the energetics of which is described by the spin-moment formation energy and a term proportional to the total orbital moment, L_z^2. The high-pressure phase is well described with the f electrons treated as band electrons, in either the LSD or the GGA approximations, of which the latter describes more accurately the experimental equation of state. The calculated pressure of the transition from localized to delocalized behavior is 280 kbar in the SIC-LSD approximation and 156 kbar in the GGA+OP approach, both comparing favorably with the experimentally observed transition pressure of 210 kbar. © 1997 The American Physical Society
NASA Astrophysics Data System (ADS)
Somogyi, Gábor; Trócsányi, Zoltán
2008-08-01
In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thachuk, M.; McCourt, F.R.W.
1991-09-15
A series of centrifugal sudden (CS) and infinite-order sudden (IOS) approximations together with their corrected versions, respectively the corrected centrifugal sudden (CCS) and corrected infinite-order sudden (CIOS) approximations, originally introduced by McLenithan and Secrest (J. Chem. Phys. 80, 2480 (1987)), have been compared with the close-coupled (CC) method for the N2-He interaction. This extends previous work using the H2-He system (J. Chem. Phys. 93, 3931 (1990)) to an interaction which is more anisotropic and more classical in nature. A set of eleven energy dependent cross sections, including both relaxation and production types, has been calculated using the LF- and LA-labeling schemes for the CS approximation, as well as the KI-, KF-, KA-, and KM-labeling schemes for the IOS approximation. The latter scheme is defined as KM = K = max(k_j, k_j_I). Further, a number of temperature dependent cross sections formed from thermal averages of the above set have also been compared at 100 and 200 K. These comparisons have shown that the CS approximation produced accurate results for relaxation type cross sections regardless of the L-labeling scheme chosen, but inaccurate results for production type cross sections. Further, except for one particular cross section, the CCS approximation did not generally improve the accuracy of the CS results using either the LF- or LA-labeling schemes. The accuracy of the IOS results varies greatly between the cross sections, with the most accurate values given by the KM-labeling scheme. The CIOS approximation generally increases the accuracy of the corresponding IOS results but does not completely eliminate the errors associated with them.
Random access to mobile networks with advanced error correction
NASA Technical Reports Server (NTRS)
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen; reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
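The rate-compatibility property that makes such incremental redundancy possible is that each lower-rate puncturing pattern transmits a superset of the bits of every higher-rate one, so after a failed decoding attempt only the additional bits need be sent. The patterns below are illustrative, not a published RCPC family:

```python
# 1 = coded bit transmitted, 0 = punctured (one puncturing period shown)
patterns = {
    "high": [1, 0, 0, 0, 1, 0, 0, 0],
    "mid":  [1, 0, 1, 0, 1, 0, 1, 0],
    "low":  [1, 1, 1, 1, 1, 1, 1, 1],
}

def is_rate_compatible(higher, lower):
    """Every bit sent at the higher code rate is also sent at the lower one."""
    return all(l >= h for h, l in zip(higher, lower))
```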
Numerical Investigation of a Model Scramjet Combustor Using DDES
NASA Astrophysics Data System (ADS)
Shin, Junsu; Sung, Hong-Gye
2017-04-01
Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), a hybrid scheme combining a Reynolds-averaged Navier-Stokes scheme with a large eddy simulation. The three-dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed. This code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for spatial discretization. In addition, a 4th order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German aerospace center, were considered. The profiles of the lower wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. Also, the mixing efficiency and total pressure recovery factor were provided in order to inspect the performance of the combustor.
Development of a Blood Pressure Measurement Instrument with Active Cuff Pressure Control Schemes.
Kuo, Chung-Hsien; Wu, Chun-Ju; Chou, Hung-Chyun; Chen, Guan-Ting; Kuo, Yu-Cheng
2017-01-01
This paper presents an oscillometric blood pressure (BP) measurement approach based on active control schemes of the cuff pressure. Compared with conventional electronic BP instruments, the novelty of the proposed BP measurement approach is to utilize a variable volume chamber which actively and stably alters the cuff pressure during inflating or deflating cycles. The variable volume chamber is operated with a closed-loop pressure control scheme, and it is activated by controlling the piston position of a single-acting cylinder driven by a screw motor. Therefore, the variable volume chamber can significantly eliminate the air turbulence disturbance during the air injection stage when compared to an air pump mechanism. Furthermore, the proposed active BP measurement approach is capable of measuring BP characteristics, including systolic blood pressure (SBP) and diastolic blood pressure (DBP), during the inflating cycle. Two modes, air injection measurement (AIM) and accurate dual-way measurement (ADM), were proposed. According to results from experiments with healthy subjects, AIM reduced the measurement time by 34.21% and ADM by 15.78% when compared to a commercial BP monitor. Furthermore, the ADM performed much more consistently (i.e., with less standard deviation) in the measurements when compared to a commercial BP monitor.
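The oscillometric estimation step itself can be sketched with the classical fixed-ratio criterion on the oscillation envelope. The characteristic ratios 0.55/0.85 are commonly cited textbook values, the synthetic envelope is made up, and the instrument described above uses its own processing:

```python
import numpy as np

def fixed_ratio_bp(cuff_p, envelope, rs=0.55, rd=0.85):
    """MAP at the envelope peak; SBP/DBP where the envelope falls to the
    characteristic fractions rs/rd of the peak on either side.
    cuff_p is assumed ascending, as during an inflating cycle."""
    i_map = int(np.argmax(envelope))
    emax = envelope[i_map]
    above = np.where(envelope[i_map:] <= rs * emax)[0]
    sbp = cuff_p[i_map + above[0]]        # first sample above MAP below rs
    below = np.where(envelope[:i_map] <= rd * emax)[0]
    dbp = cuff_p[below[-1]]               # last sample below MAP below rd
    return cuff_p[i_map], sbp, dbp

cuff_p = np.arange(40.0, 181.0)                      # mmHg, inflating cycle
envelope = np.exp(-((cuff_p - 90.0) / 25.0) ** 2)    # synthetic envelope
map_, sbp, dbp = fixed_ratio_bp(cuff_p, envelope)
```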
Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction
NASA Astrophysics Data System (ADS)
Fukushima, H.; Toratani, M.
1997-07-01
The paper first demonstrates the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance, especially in the shorter wavelength region. This suggests the presence of spectrally dependent absorption which was disregarded in past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol that relates aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm is developed. Then, as a modification to a standard CZCS atmospheric correction algorithm (NASA standard algorithm), a scheme which estimates pixel-wise aerosol optical thickness, and in turn ωA, is proposed. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of the aerosol scattering phase function. The scheme is combined with the standard algorithm, performing atmospheric correction just the same as the standard version with a fixed Angstrom exponent except where the presence of Asian dust aerosol is detected by a lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene with parameter values of the spectral dependency of ωA first determined statistically and then optimized for selected pixels. Analysis suggests that the parameter values depend on the Angstrom exponent assumed for the standard algorithm, which at the same time defines the spatial extent of the area to which the Asian dust scheme is applied. The algorithm was also tested on a Saharan dust scene, showing the relevance of the scheme but with a different parameter setting. 
Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates most compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.
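The spectral ratio driving the scheme maps to the Angstrom exponent through the standard power-law definition (a sketch; the paper's relation for ωA itself is empirical):

```python
import math

def angstrom_exponent(tau_550, tau_670):
    """Angstrom exponent alpha implied by tau(lambda) ~ lambda**(-alpha),
    evaluated from the 550/670 nm aerosol optical thickness pair."""
    return -math.log(tau_550 / tau_670) / math.log(550.0 / 670.0)

alpha = angstrom_exponent(0.20, 0.18)   # illustrative optical thicknesses
```

A lowered value of alpha is exactly the trigger the scheme uses to detect the dust.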
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appalakondaiah, S.; Vaitheeswaran, G., E-mail: gvaithee@gmail.com; Lebègue, S.
The effects of pressure on the structural and vibrational properties of the layered molecular crystal 1,1-diamino-2,2-dinitroethylene (FOX-7) are explored by first principles calculations. We observe significant changes in the calculated structural properties with different corrections for treating van der Waals interactions to Density Functional Theory (DFT), as compared with standard DFT functionals. In particular, the calculated ground state lattice parameters, volume and bulk modulus obtained with Grimme's scheme are found to agree well with experiments. The calculated vibrational frequencies demonstrate the pressure dependence of the intra- and inter-molecular interactions in FOX-7. In addition, we also found a significant increment in the N–H...O hydrogen bond strength under compression. This is explained by the change in bond lengths between nitrogen, hydrogen, and oxygen atoms, as well as calculated IR spectra under pressure. Finally, the computed band gap is about 2.3 eV with the generalized gradient approximation, and is enhanced to 5.1 eV with the GW approximation, which reveals the importance of performing quasiparticle calculations in high energy density materials.
NASA Technical Reports Server (NTRS)
Troccoli, Alberto; Rienecker, Michele M.; Keppenne, Christian L.; Johnson, Gregory C.
2003-01-01
The NASA Seasonal-to-Interannual Prediction Project (NSIPP) has developed an ocean data assimilation system to initialize the quasi-isopycnal ocean model used in our experimental coupled-model forecast system. Initial tests of the system have focused on the assimilation of temperature profiles in an optimal interpolation framework. It is now recognized that correction of temperature only often introduces spurious water masses. The resulting density distribution can be statically unstable and also have a detrimental impact on the velocity distribution. Several simple schemes have been developed to try to correct these deficiencies. Here the salinity field is corrected by using a scheme which assumes that the temperature-salinity relationship of the model background is preserved during the assimilation. The scheme was first introduced for a z-level model by Troccoli and Haines (1999). A large set of subsurface observations of salinity and temperature is used to cross-validate two data assimilation experiments run for the 6-year period 1993-1998. In these two experiments only subsurface temperature observations are used, but in one case the salinity field is also updated whenever temperature observations are available.
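The salinity update that preserves the background T-S relationship can be sketched as mapping the analyzed temperature back onto the background T-S curve. The profiles below are made up, the interpolation is piecewise-linear, and np.interp requires monotonic temperature:

```python
import numpy as np

def ts_preserving_salinity(t_bg, s_bg, t_analysis):
    """Salinity consistent with the background T-S curve at the analyzed
    temperatures, so the assimilation does not create spurious water masses."""
    order = np.argsort(t_bg)
    return np.interp(t_analysis, t_bg[order], s_bg[order])

t_bg = np.array([5.0, 10.0, 15.0])       # hypothetical background profile
s_bg = np.array([34.0, 34.5, 35.0])
s_new = ts_preserving_salinity(t_bg, s_bg, np.array([12.5]))
```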
Revised Chapman-Enskog analysis for a class of forcing schemes in the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Li, Q.; Zhou, P.; Yan, H. J.
2016-10-01
In the lattice Boltzmann (LB) method, the forcing scheme, which is used to incorporate an external or internal force into the LB equation, plays an important role. It determines whether the force of the system is correctly implemented in an LB model and affects the numerical accuracy. In this paper we aim to clarify a critical issue about the Chapman-Enskog analysis for a class of forcing schemes in the LB method in which the velocity in the equilibrium density distribution function is given by u = Σ_α e_α f_α / ρ, while the actual fluid velocity is defined as û = u + δt F/(2ρ). It is shown that the usual Chapman-Enskog analysis for this class of forcing schemes should be revised so as to derive the actual macroscopic equations recovered from these forcing schemes. Three forcing schemes belonging to the above class are analyzed, among which Wagner's forcing scheme [A. J. Wagner, Phys. Rev. E 74, 056703 (2006), 10.1103/PhysRevE.74.056703] is shown to be capable of reproducing the correct macroscopic equations. The theoretical analyses are examined and demonstrated with two numerical tests, including the simulation of Womersley flow and the modeling of flat and circular interfaces by the pseudopotential multiphase LB model.
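The distinction between the bare moment velocity and the half-force-shifted actual velocity is easy to make concrete on a D2Q9 lattice (a minimal numerical check, not a full LB solver):

```python
import numpy as np

# D2Q9 lattice velocities
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def actual_velocity(f, F, dt=1.0):
    """Bare moment velocity u = sum_a e_a f_a / rho and the half-force-shifted
    actual fluid velocity u_hat = u + dt*F/(2*rho)."""
    rho = f.sum()
    u = (e * f[:, None]).sum(axis=0) / rho
    return u, u + dt * np.asarray(F) / (2.0 * rho)

f = np.full(9, 1.0 / 9.0)                 # uniform populations: rho = 1, u = 0
u, u_hat = actual_velocity(f, (1e-3, 0.0))
```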
A Note on Multigrid Theory for Non-nested Grids and/or Quadrature
NASA Technical Reports Server (NTRS)
Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.
1996-01-01
We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.
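A minimal instance of a two-level correction scheme, here for the 1-D Poisson equation with nested grids and a re-discretized coarse operator (so it does not exercise the non-nested case the theory above covers), illustrates the smooth-restrict-correct structure:

```python
import numpy as np

def poisson_matrix(n):
    """1-D Poisson operator, Dirichlet BCs on the unit interval, n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h ** 2

def jacobi(A, u, f, iters=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps."""
    D = np.diag(A)
    for _ in range(iters):
        u = u + w * (f - A @ u) / D
    return u

def restrict(r):
    """Full-weighting restriction from n = 2m+1 fine points to m coarse points."""
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e, n_fine):
    """Linear interpolation of the coarse correction back to the fine grid."""
    ef = np.zeros(n_fine)
    ef[1:-1:2] = e
    ef[0:-2:2] += 0.5 * e
    ef[2::2] += 0.5 * e
    return ef

def two_grid(A, u, f):
    """One two-level correction cycle: smooth, restrict residual, solve coarse
    problem exactly, prolong and correct, smooth again."""
    n = len(f)
    u = jacobi(A, u, f)                    # pre-smooth
    rc = restrict(f - A @ u)               # coarse-grid residual
    Ac = poisson_matrix((n - 1) // 2)      # re-discretized coarse operator
    ec = np.linalg.solve(Ac, rc)           # exact coarse solve
    u = u + prolong(ec, n)                 # coarse-grid correction
    return jacobi(A, u, f)                 # post-smooth
```

A single cycle already reduces the residual substantially on this model problem, which is the behavior the correction-scheme theory quantifies.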
Shang, Zhehai; Lee, Zhongping; Dong, Qiang; Wei, Jianwei
2017-09-01
Self-shading associated with a skylight-blocked approach (SBA) system for the measurement of water-leaving radiance (Lw) and its correction [Appl. Opt. 52, 1693 (2013)] is characterized by Monte Carlo simulations, and it is found that this error is in a range of ∼1%-20% under most water properties and solar positions. A model for estimating this shading error is further developed, and eventually a scheme to correct this error based on the shaded measurements is proposed and evaluated. It is found that the shade-corrected value in the visible domain is within 3% of the true value, which thus indicates that we can obtain not only high precision but also high accuracy Lw in the field with the SBA scheme.
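Given a modeled fractional shading error, the correction itself is a one-line rescaling. The multiplicative error form and the values below are assumptions for illustration; the paper develops the actual model for estimating the error:

```python
def correct_shading(lw_shaded, eps):
    """Recover water-leaving radiance from a shaded measurement, assuming the
    shade removes the fraction eps of the true signal: lw_shaded = (1 - eps) * Lw."""
    return lw_shaded / (1.0 - eps)

lw_true = 2.5                        # hypothetical radiance
lw_meas = (1.0 - 0.12) * lw_true     # 12% shading error, mid-range of 1%-20%
recovered = correct_shading(lw_meas, 0.12)
```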
An Orbit And Dispersion Correction Scheme for the PEP II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Donald, M.; Shoaee, H.
2011-09-01
To achieve optimum luminosity in a storage ring it is vital to control the residual vertical dispersion. In the original PEP storage ring, a scheme to control the residual dispersion function was implemented using the ring orbit as the controlling element; the 'best' orbit does not necessarily give the lowest vertical dispersion. A similar scheme has been implemented in both the on-line control code and the simulation code LEGO. The method involves finding the response matrices (sensitivity of the orbit/dispersion at each Beam-Position-Monitor (BPM) to each orbit corrector) and solving in a least squares sense for minimum orbit, dispersion function, or both. The optimum solution is usually a subset of the full least squares solution. A scheme for simultaneously correcting the orbits and dispersion has been implemented in the simulation code and on-line control system for PEP-II. The scheme is based on the eigenvector decomposition method. An important ingredient of the scheme is to choose the optimum eigenvectors that minimize the orbit, dispersion and corrector strength. Simulations indicate this to be a very effective way to control the vertical residual dispersion.
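The least-squares machinery described above can be sketched with a truncated-SVD solve; the sign convention and names are illustrative assumptions, with mode truncation standing in for "choosing the optimum eigenvectors":

```python
import numpy as np

def corrector_strengths(R, d, n_modes):
    """Least-squares corrector strengths from a response matrix.

    R : (m, n) response of m BPM readings (orbit, possibly stacked with
        weighted dispersion) to n orbit correctors; d : (m,) measured
        deviations to cancel. Keeping only the n_modes largest singular
    values limits corrector strength, mimicking the use of a subset of
    the full least-squares solution."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_modes] = 1.0 / s[:n_modes]
    theta = -Vt.T @ (s_inv * (U.T @ d))   # minimize ||R @ theta + d||
    return theta
```

Truncating modes trades residual orbit/dispersion against corrector strength, which is the balance the eigenvector-selection step tunes.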
Experiments with a three-dimensional statistical objective analysis scheme using FGGE data
NASA Technical Reports Server (NTRS)
Baker, Wayman E.; Bloom, Stephen C.; Woollen, John S.; Nestler, Mark S.; Brin, Eugenia
1987-01-01
A three-dimensional (3D), multivariate, statistical objective analysis scheme (referred to as optimum interpolation or OI) has been developed for use in numerical weather prediction studies with the FGGE data. Some novel aspects of the present scheme include: (1) a multivariate surface analysis over the oceans, which employs an Ekman balance instead of the usual geostrophic relationship, to model the pressure-wind error cross correlations, and (2) the capability to use an error correlation function which is geographically dependent. A series of 4-day data assimilation experiments are conducted to examine the importance of some of the key features of the OI in terms of their effects on forecast skill, as well as to compare the forecast skill using the OI with that utilizing a successive correction method (SCM) of analysis developed earlier. For the three cases examined, the forecast skill is found to be rather insensitive to varying the error correlation function geographically. However, significant differences are noted between forecasts from a two-dimensional (2D) version of the OI and those from the 3D OI, with the 3D OI forecasts exhibiting better forecast skill. The 3D OI forecasts are also more accurate than those from the SCM initial conditions. The 3D OI with the multivariate oceanic surface analysis was found to produce forecasts which were slightly more accurate, on the average, than a univariate version.
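The core multivariate OI update can be sketched as follows (a generic statement of optimum interpolation, not the specific operational code; the matrix names are conventional):

```python
import numpy as np

def oi_analysis(xb, y, H, B, R):
    """One optimum-interpolation (OI) update: xa = xb + K (y - H xb),
    with gain K = B H^T (H B H^T + R)^{-1}. B and R are the background
    and observation error covariances; in a real scheme B is built from
    the (possibly geographically dependent) error correlation function."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)
```

With equal background and observation error variances the analysis falls halfway between background and observation, as expected.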
NASA Astrophysics Data System (ADS)
Bessler, Wolfgang G.; Schulz, Christof; Lee, Tonghun; Jeffries, Jay B.; Hanson, Ronald K.
2002-06-01
Three different high-pressure flame measurement strategies for NO laser-induced fluorescence (LIF) with A-X (0,0) excitation have been studied previously with computational simulations and experiments in flames up to 15 bars. Interference from O2 LIF is a significant problem in lean flames for NO LIF measurements, and pressure broadening and quenching lead to increased interference with increased pressure. We investigate the NO LIF signal strength, the interference by hot molecular oxygen, and the temperature dependence of the three previous schemes and of two newly chosen excitation schemes with wavelength-resolved LIF measurements in premixed methane/air flames at pressures between 1 and 60 bars and a range of fuel/air ratios. In slightly lean flames with an equivalence ratio of 0.83 at 60 bars, the contribution of O2 LIF to the NO LIF signal varies between 8% and 29% for the previous schemes. The O2 interference is best suppressed with excitation at 226.03 nm.
NASA Astrophysics Data System (ADS)
Zhou, X.; Beljaars, A.; Wang, Y.; Huang, B.; Lin, C.; Chen, Y.; Wu, H.
2017-09-01
Weather Research and Forecasting (WRF) simulations with different selections of subgrid orographic drag over the Tibetan Plateau have been evaluated against observations and ERA-Interim reanalysis. Results show that the subgrid orographic drag schemes, especially the turbulent orographic form drag (TOFD) scheme, efficiently reduce the 10 m wind speed bias and RMS error with respect to station measurements. With the combination of the gravity wave, flow blocking and TOFD schemes, wind speed is simulated more realistically than with the individual schemes alone. Improvements are also seen in the 2 m air temperature and surface pressure. The gravity wave drag, flow blocking drag, and TOFD schemes combined have the smallest station mean bias (-2.05°C in 2 m air temperature and 1.27 hPa in surface pressure) and RMS error (3.59°C in 2 m air temperature and 2.37 hPa in surface pressure). Meanwhile, the TOFD scheme contributes more to the improvements than the gravity wave drag and flow blocking schemes. The improvements are more pronounced at low levels of the atmosphere than at high levels due to the stronger drag enhancement on the low-level flow. The reduced near-surface cold bias and high-pressure bias over the Tibetan Plateau are the result of changes in the low-level wind components associated with the geostrophic balance. The enhanced drag directly weakens the westerlies but also enhances the ageostrophic flow, in this case reducing the northerlies and enhancing the southerlies, which bring less cold air from the north and more warm air across the Himalayan ranges from South Asia into the interior Tibetan Plateau.
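The station-mean bias and RMS error scores quoted above are straightforward to compute; a minimal sketch:

```python
import numpy as np

def bias_rmse(model, obs):
    """Station-mean bias and RMS error of simulated versus observed
    values, as used to score the 2 m temperature and surface pressure."""
    d = np.asarray(model, dtype=float) - np.asarray(obs, dtype=float)
    return d.mean(), np.sqrt((d ** 2).mean())
```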
A functional supervised learning approach to the study of blood pressure data.
Papayiannis, Georgios I; Giakoumakis, Emmanuel A; Manios, Efstathios D; Moulopoulos, Spyros D; Stamatelopoulos, Kimon S; Toumanidis, Savvas T; Zakopoulos, Nikolaos A; Yannacopoulos, Athanasios N
2018-04-15
In this work, a functional supervised learning scheme is proposed for the classification of subjects into normotensive and hypertensive groups, using solely the 24-hour blood pressure data, relying on the concepts of Fréchet mean and Fréchet variance for appropriate deformable functional models for the blood pressure data. The schemes are trained on real clinical data, and their performance was assessed and found to be very satisfactory. Copyright © 2017 John Wiley & Sons, Ltd.
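Under the plain L2 metric (ignoring the deformation component of the actual functional models, so this is only a hedged toy version), the Fréchet mean of sampled curves reduces to the pointwise average, and classification assigns a profile to the nearer class mean:

```python
import numpy as np

def frechet_mean_l2(curves):
    """Frechet mean of sampled curves under the L2 metric:
    reduces to the pointwise average."""
    return np.mean(curves, axis=0)

def classify(curve, mean_normo, mean_hyper):
    """Assign a sampled 24 h blood-pressure profile to the class whose
    Frechet mean it is closest to in L2 distance."""
    d0 = np.linalg.norm(curve - mean_normo)
    d1 = np.linalg.norm(curve - mean_hyper)
    return "normotensive" if d0 <= d1 else "hypertensive"
```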
Comparative Study of High-Order Positivity-Preserving WENO Schemes
NASA Technical Reports Server (NTRS)
Kotov, D. V.; Yee, H. C.; Sjogreen, B.
2014-01-01
In gas dynamics and magnetohydrodynamics flows, physically, the density ρ and the pressure p should both be positive. In a standard conservative numerical scheme, however, the computed internal energy is obtained by subtracting the kinetic energy from the total energy, resulting in a computed p that may be negative. Examples are problems in which the dominant energy is kinetic. Negative ρ may often emerge in computing blast waves. In such situations the computed eigenvalues of the Jacobian will become imaginary. Consequently, the initial value problem for the linearized system will be ill posed. This explains why failure to preserve positivity of density or pressure may cause blow-ups of the numerical algorithm. Ad hoc numerical strategies that modify the computed negative density and/or pressure to be positive are neither a conservative cure nor a stable solution. Conservative positivity-preserving schemes are more appropriate for such flow problems. The ideas of Zhang & Shu (2012) and Hu et al. (2012) precisely address the aforementioned issue. Zhang & Shu constructed a new conservative positivity-preserving procedure to preserve positive density and pressure for high-order Weighted Essentially Non-Oscillatory (WENO) schemes using the Lax-Friedrichs flux (WENO/LLF). In general, WENO/LLF is too dissipative for flows such as turbulence with strong shocks computed in direct numerical simulations (DNS) and large eddy simulations (LES). The new conservative positivity-preserving procedure proposed in Hu et al. (2012) can be used with any high-order shock-capturing scheme, including high-order WENO schemes using Roe's flux (WENO/Roe). The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular, the more difficult 3D Noh and Sedov problems are considered.
These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube that is related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature and high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case as well as for a case consisting of two species and one reaction. This EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.
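The positivity problem and a Zhang-Shu-type remedy can be sketched in a few lines; the scaling function below is a generic linear limiter toward the (assumed positive) cell average, not the exact procedure of either cited paper:

```python
def pressure(U, gamma=1.4):
    """Pressure from 1D conserved variables U = (rho, rho*u, E):
    p = (gamma - 1) * (E - 0.5 * (rho*u)**2 / rho). When the kinetic
    energy dominates E, subtracting it can drive the computed p negative."""
    rho, mom, E = U
    return (gamma - 1.0) * (E - 0.5 * mom ** 2 / rho)

def positivity_scale(q_point, q_avg, eps=1e-13):
    """Zhang-Shu-type scaling factor t for shrinking a point value toward
    the cell average: the limited value is q_avg + t*(q_point - q_avg),
    conservative because the cell average itself is never modified."""
    if q_point >= eps:
        return 1.0
    return (q_avg - eps) / (q_avg - q_point)
```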
Local bounds preserving stabilization for continuous Galerkin discretization of hyperbolic systems
NASA Astrophysics Data System (ADS)
Mabuza, Sibusiso; Shadid, John N.; Kuzmin, Dmitri
2018-05-01
The objective of this paper is to present a local bounds preserving stabilized finite element scheme for hyperbolic systems on unstructured meshes based on continuous Galerkin (CG) discretization in space. A CG semi-discrete scheme with low order artificial dissipation that satisfies the local extremum diminishing (LED) condition for systems is used to discretize a system of conservation equations in space. The low order artificial diffusion is based on approximate Riemann solvers for hyperbolic conservation laws. In this case we consider both Rusanov and Roe artificial diffusion operators. In the Rusanov case, two designs are considered, a nodal based diffusion operator and a local projection stabilization operator. The result is a discretization that is LED and has first order convergence behavior. To achieve high resolution, limited antidiffusion is added back to the semi-discrete form where the limiter is constructed from a linearity preserving local projection stabilization operator. The procedure follows the algebraic flux correction procedure usually used in flux corrected transport algorithms. To further deal with phase errors (or terracing) common in FCT type methods, high order background dissipation is added to the antidiffusive correction. The resulting stabilized semi-discrete scheme can be discretized in time using a wide variety of time integrators. Numerical examples involving nonlinear scalar Burgers equation, and several shock hydrodynamics simulations for the Euler system are considered to demonstrate the performance of the method. For time discretization, Crank-Nicolson scheme and backward Euler scheme are utilized.
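A single flux-corrected transport step of the kind referenced above can be sketched for 1D periodic advection with a Zalesak-type limiter (a textbook FCT instance, not the authors' CG/AFC discretization; all names are illustrative):

```python
import numpy as np

def fct_step(u, c):
    """One FCT step for 1D periodic advection (CFL number 0 < c <= 1):
    low-order upwind update plus Zalesak-limited antidiffusion toward
    Lax-Wendroff, keeping local bounds set by the low-order solution."""
    um = np.roll(u, 1)                    # u_{i-1}
    low = u - c * (u - um)                # first-order upwind (LED)
    a = 0.5 * c * (1.0 - c) * (u - um)    # antidiffusive flux at face i-1/2
    u_max = np.maximum.reduce([np.roll(low, 1), low, np.roll(low, -1)])
    u_min = np.minimum.reduce([np.roll(low, 1), low, np.roll(low, -1)])
    ap = np.roll(a, -1)                   # flux at face i+1/2
    P_plus = np.maximum(a, 0.0) - np.minimum(ap, 0.0)    # inflow to node i
    P_minus = np.maximum(ap, 0.0) - np.minimum(a, 0.0)   # outflow from node i
    Q_plus = u_max - low
    Q_minus = low - u_min
    with np.errstate(divide="ignore", invalid="ignore"):
        R_plus = np.where(P_plus > 0, np.minimum(1.0, Q_plus / P_plus), 0.0)
        R_minus = np.where(P_minus > 0, np.minimum(1.0, Q_minus / P_minus), 0.0)
    Rp_m1 = np.roll(R_plus, 1)
    Rm_m1 = np.roll(R_minus, 1)
    # limiter for face i-1/2: positive flux feeds node i, drains node i-1
    C = np.where(a >= 0, np.minimum(R_plus, Rm_m1), np.minimum(R_minus, Rp_m1))
    a_lim = C * a
    return low - (np.roll(a_lim, -1) - a_lim)
```

On a step profile the limiter clips the antidiffusion at the feet of the step, so the update stays within the local bounds while remaining conservative.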
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com; Talon, Laurent, E-mail: talon@fast.u-psud.fr; Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr
The present contribution focuses on the accuracy of reflection-type boundary conditions in the Stokes–Brinkman–Darcy modeling of porous flows solved with the lattice Boltzmann method (LBM), which we operate with the two-relaxation-time (TRT) collision and the Brinkman-force based scheme (BF), called BF-TRT scheme. In parallel, we compare it with the Stokes–Brinkman–Darcy linear finite element method (FEM) where the Dirichlet boundary conditions are enforced on grid vertices. In bulk, both BF-TRT and FEM share the same defect: in their discretization a correction to the modeled Brinkman equation appears, given by the discrete Laplacian of the velocity-proportional resistance force. This correction modifies the effective Brinkman viscosity, playing a crucial role in the triggering of spurious oscillations in the bulk solution. While the exact form of this defect is available in lattice-aligned, straight or diagonal, flows; in arbitrary flow/lattice orientations its approximation is constructed. At boundaries, we verify that such a Brinkman viscosity correction has an even more harmful impact. Already at the first order, it shifts the location of the no-slip wall condition supported by traditional LBM boundary schemes, such as the bounce-back rule. For that reason, this work develops a new class of boundary schemes to prescribe the Dirichlet velocity condition at an arbitrary wall/boundary-node distance and that supports a higher order accuracy in the accommodation of the TRT-Brinkman solutions. For their modeling, we consider the standard BF scheme and its improved version, called IBF; this latter is generalized in this work to suppress or to reduce the viscosity correction in arbitrarily oriented flows. Our framework extends the one- and two-point families of linear and parabolic link-wise boundary schemes, respectively called B-LI and B-MLI, which avoid the interference of the Brinkman viscosity correction in their closure relations. 
The performance of LBM and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.
Studies of Methane Counterflow Flames at Low Pressures
NASA Astrophysics Data System (ADS)
Burrell, Robert Roe
Methane is the smallest hydrocarbon molecule, the fuel most widely studied in fundamental flame structure studies, and a major component of natural gas. Despite many decades of research into the fundamental chemical kinetics involved in methane oxidation, ongoing advancements in research suggest that more progress can be made. Though practical combustors of industrial and commercial significance operate at high pressures and turbulent flow conditions, fundamental understanding of combustion chemistry in flames is more readily obtained for low pressure and laminar flow conditions. Measurements were performed from 1 to 0.1 atmospheres for premixed methane/air and non-premixed methane-nitrogen/oxygen flames in a counterflow. Comparative modeling with quasi-one-dimensional strained flame codes revealed bias-induced errors in measured velocities of up to 8% at 0.1 atmospheres due to tracer particle phase velocity slip in the low-density reacting gas flow. To address this, a numerically-assisted correction scheme consisting of direct simulation of the particle phase dynamics in counterflow was implemented. Addition of reactions describing the prompt dissociation of formyl radicals to an otherwise unmodified USC Mech II kinetic model was found to enhance computed flame reactivity and substantially improve the predictive capability of computed results for measurements at the lowest pressures studied. Yet, the same modifications lead to overprediction of flame data at 1 atmosphere, where results from the unmodified USC Mech II kinetic mechanism agreed well with ambient pressure flame data. The apparent failure of a single kinetic model to capture pressure dependence in methane flames motivates continued skepticism regarding the current understanding of pressure dependence in kinetic models, even for the simplest fuels.
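The origin of the tracer slip bias can be illustrated with linear Stokes drag (a toy model, not the direct particle-phase simulation used in the correction scheme): at low pressure the particle response time tau_p grows through the Cunningham slip correction, so the particle velocity lags the gas velocity it is supposed to report.

```python
import numpy as np

def tracer_velocity(u_gas, t, tau_p, v0=0.0):
    """Particle velocity response to a gas-velocity history under linear
    Stokes drag, dv/dt = (u_gas(t) - v)/tau_p, integrated with explicit
    Euler. Larger tau_p (low-pressure, slip-corrected) means a larger
    lag between the tracer and the gas."""
    v = np.empty_like(t)
    v[0] = v0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        v[i] = v[i - 1] + dt * (u_gas[i - 1] - v[i - 1]) / tau_p
    return v
```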
Short-range second order screened exchange correction to RPA correlation energies
NASA Astrophysics Data System (ADS)
Beuerle, Matthias; Ochsenfeld, Christian
2017-11-01
Direct random phase approximation (RPA) correlation energies have become increasingly popular as a post-Kohn-Sham correction, due to significant improvements over DFT calculations for properties such as long-range dispersion effects, which are problematic in conventional density functional theory. On the other hand, RPA still has various weaknesses, such as unsatisfactory results for non-isogyric processes. This can in parts be attributed to the self-correlation present in RPA correlation energies, leading to significant self-interaction errors. Therefore a variety of schemes have been devised to include exchange in the calculation of RPA correlation energies in order to correct this shortcoming. One of the most popular RPA plus exchange schemes is the second order screened exchange (SOSEX) correction. RPA + SOSEX delivers more accurate absolute correlation energies and also improves upon RPA for non-isogyric processes. On the other hand, RPA + SOSEX barrier heights are worse than those obtained from plain RPA calculations. To combine the benefits of RPA correlation energies and the SOSEX correction, we introduce a short-range RPA + SOSEX correction. Proof of concept calculations and benchmarks showing the advantages of our method are presented.
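The range separation underlying a short-range correction can be illustrated with the standard erf/erfc split of the Coulomb kernel (a generic illustration only; the actual manipulation of the SOSEX kernel is more involved):

```python
import math

def coulomb_split(r, mu):
    """Range separation of the Coulomb kernel used in short-range
    correction schemes: 1/r = erf(mu*r)/r (long range) + erfc(mu*r)/r
    (short range). A short-range SOSEX-type correction applies the
    exchange contribution only to the erfc part."""
    lr = math.erf(mu * r) / r
    sr = math.erfc(mu * r) / r
    return sr, lr
```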
Lee, Tian-Fu; Liu, Chuan-Ming
2013-06-01
A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme to resolve a weakness of the authentication scheme of Wei et al., in which off-line password guessing attacks cannot be resisted. This investigation shows that Zhu's improved scheme has faults: the authentication process cannot execute correctly, and the scheme is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses of the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.
Apparatus for controlling air/fuel ratio for internal combustion engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, K.; Mizuno, T.
1986-07-08
This patent describes an apparatus for controlling the air-fuel ratio of an air-fuel mixture to be supplied to an internal combustion engine having an intake passage, an exhaust passage, and an exhaust gas recirculation passage for recirculating exhaust gases in the exhaust passage to the intake passage therethrough. The apparatus consists of: (a) means for sensing rotational speed of the engine; (b) means for sensing intake pressure in the intake passage; (c) means for sensing atmospheric pressure; (d) means for enabling and disabling exhaust gas recirculation through the exhaust gas recirculation passage in accordance with operating condition of the engine; (e) means for determining the required amount of fuel in accordance with the sensed rotational speed and the sensed intake pressure; (f) means for determining, when the exhaust gas recirculation is enabled, a first correction value in accordance with the sensed rotational speed, the sensed intake pressure and the sensed atmospheric pressure, the first correction value being used for correcting the fuel amount so as to compensate for the decrease of fuel due to the performance of exhaust gas recirculation and also to compensate for the change in atmospheric pressure; (g) means for determining, when the exhaust gas recirculation is disabled, a second correction value in accordance with the atmospheric pressure, the second correction value being used so as to compensate for the change in atmospheric pressure; (h) means for correcting the required amount of fuel by the first correction value and the second correction value when the exhaust gas recirculation is enabled and disabled respectively; and (i) means for supplying the engine with the corrected amount of fuel.
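The correction logic reads, in sketch form (multiplicative corrections are an assumption here; the patent only specifies that the fuel amount is "corrected" by the values):

```python
def corrected_fuel(base_fuel, k1, k2, egr_enabled):
    """Apply the patent's branching: with EGR enabled, correct the
    required fuel by the first correction value k1 (a function of speed,
    intake pressure and atmospheric pressure); with EGR disabled, by the
    second correction value k2 (a function of atmospheric pressure only).
    Multiplicative application is an illustrative assumption."""
    return base_fuel * (k1 if egr_enabled else k2)
```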
A hybrid-drive nonisobaric-ignition scheme for inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, X. T., E-mail: xthe@iapcm.ac.cn; Center for Applied Physics and Technology, HEDPS, Peking University, Beijing 100871; IFSA Collaborative Innovation Center of MoE, Shanghai Jiao-Tong University, Shanghai 200240
A new hybrid-drive (HD) nonisobaric ignition scheme of inertial confinement fusion (ICF) is proposed, in which a HD pressure to drive implosion dynamics increases via increasing density rather than temperature in the conventional indirect drive (ID) and direct drive (DD) approaches. In this HD (combination of ID and DD) scheme, an assembled target of a spherical hohlraum and a layered deuterium-tritium capsule inside is used. The ID lasers first drive the shock to perform a spherical symmetry implosion and produce a large-scale corona plasma. Then, the DD lasers, whose critical surface in ID corona plasma is far from the radiation ablation front, drive a supersonic electron thermal wave, which slows down to a high-pressure electron compression wave, like a snowplow, piling up the corona plasma into high density and forming a HD pressurized plateau with a large width. The HD pressure is several times the conventional ID and DD ablation pressure and launches an enhanced precursor shock and a continuous compression wave, which give rise to the HD capsule implosion dynamics at a large implosion velocity. The hydrodynamic instabilities at imploding capsule interfaces are suppressed, and the continuous HD compression wave provides sufficient pdV work to the hotspot, resulting in the HD nonisobaric ignition. The ignition condition and target design based on this scheme are given theoretically and by numerical simulations. It is shown that the novel scheme can significantly suppress the implosion asymmetry and hydrodynamic instabilities of the current isobaric hotspot ignition design, and a high-gain ICF is promising.
NASA Technical Reports Server (NTRS)
Chen, Y. S.
1986-01-01
In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
Ustinov, E A; Do, D D
2012-08-21
We present for the first time in the literature a new scheme of kinetic Monte Carlo method applied on a grand canonical ensemble, which we call hereafter GC-kMC. It was shown recently that the kinetic Monte Carlo (kMC) scheme is a very effective tool for the analysis of equilibrium systems. It had been applied in a canonical ensemble to describe vapor-liquid equilibrium of argon over a wide range of temperatures, gas adsorption on a graphite open surface and in graphitic slit pores. However, in spite of the conformity of canonical and grand canonical ensembles, the latter is more relevant in the correct description of open systems; for example, the hysteresis loop observed in adsorption of gases in pores under sub-critical conditions can only be described with a grand canonical ensemble. Therefore, the present paper is aimed at an extension of the kMC to open systems. The developed GC-kMC was proved to be consistent with the results obtained with the canonical kMC (C-kMC) for argon adsorption on a graphite surface at 77 K and in graphitic slit pores at 87.3 K. We showed that in slit micropores the hexagonal packing in the layers adjacent to the pore walls is observed at high loadings even at temperatures above the triple point of the bulk phase. The potential and applicability of the GC-kMC are further shown with the correct description of the heat of adsorption and the pressure tensor of the adsorbed phase.
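A toy grand canonical Monte Carlo for a non-interacting lattice gas shows the fixed-chemical-potential bookkeeping (Metropolis rather than kinetic MC, so only the open-ensemble aspect of GC-kMC is illustrated; all parameters are illustrative):

```python
import math
import random

def gc_lattice_gas(mu, T, n_sites=500, n_steps=100000, seed=1):
    """Grand canonical MC for a non-interacting lattice gas (k_B = 1):
    insertion/deletion moves accepted with Metropolis probabilities
    min(1, lam) and min(1, 1/lam), lam = exp(mu/T). The mean occupancy
    should approach the Langmuir form lam / (1 + lam)."""
    random.seed(seed)
    lam = math.exp(mu / T)
    occ = [False] * n_sites
    n = 0
    total = 0
    for _ in range(n_steps):
        i = random.randrange(n_sites)
        if occ[i]:                    # attempt deletion
            if random.random() < min(1.0, 1.0 / lam):
                occ[i] = False
                n -= 1
        else:                         # attempt insertion
            if random.random() < min(1.0, lam):
                occ[i] = True
                n += 1
        total += n
    return total / (n_steps * n_sites)
```

At mu = 0 the exact mean occupancy is 1/2, which the sampler reproduces to within statistical error.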
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Fish, Jacob; Waisman, Haim
Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB accelerates a multigrid scheme by an additional coarse grid correction that filters out slowly converging modes. This correction requires a potentially costly eigen calculation. This paper considers reusing previously computed eigenspace information. The GGB? scheme enriches the prolongation operator with new eigenvectors while the modified method (MGGB) selectively reuses the same prolongation. Both methods use the criteria of principal angles between subspaces spanned between the previous and current prolongation operators. Numerical examples clearly indicate significant time savings in particular for the MGGB scheme.
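The principal-angle criterion can be sketched directly (standard linear algebra, not the GGB code itself):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spans of A and B:
    orthonormalize with QR, then the singular values of Qa^T Qb are the
    cosines of the angles. Small angles suggest the previously computed
    eigenspace is still a good fit and can be reused; large angles
    suggest recomputing it."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.arccos(s)
```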
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism (gradual passage opening, wall friction and leakage) for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.
Analysis of BJ493 diesel engine lubrication system properties
NASA Astrophysics Data System (ADS)
Liu, F.
2017-12-01
The BJ493ZLQ4A diesel engine design is based on the primary model BJ493ZLQ3, whose exhaust level is upgraded to the national GB5 standard through an improved design of the combustion and injection systems. Given the resulting changes in the diesel lubrication system, its properties are analyzed in this paper. According to the structures, technical parameters and indices of the lubrication system, a model of the BJ493ZLQ4A diesel engine lubrication system was constructed using the Flowmaster flow simulation software. The properties of the lubrication system, such as the oil flow rate and pressure at different rotational speeds, were analyzed for schemes involving large- and small-scale oil filters. The calculated values of the main oil channel pressure are in good agreement with the experimental results, which verifies the feasibility of the proposed model. The calculation results show that the main oil channel pressure and maximum oil flow rate for the large-scale oil filter scheme satisfy the design requirements, while the small-scale scheme yields too low a main oil channel pressure. Therefore, the application of small-scale oil filters is hazardous, and the large-scale scheme is recommended.
Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram
2014-06-01
Remote user authentication is desirable for a Telecare Medicine Information System (TMIS) to ensure the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that his scheme is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to remove them. We analyze Yan et al.'s scheme and identify that it is vulnerable to an off-line password guessing attack and does not protect anonymity. Moreover, in their scheme, the login and password change phases are inefficient at detecting incorrect input, and the inefficiency of the password change phase can cause a denial-of-service attack. Further, we design an improved scheme for TMIS with the aim of eliminating the drawbacks of Yan et al.'s scheme.
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitation of sensor nodes. Network coding increases the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if the error occurs on all the links of the network, our scheme can still correct errors successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Using social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods of L1 optimization and exploiting the social characteristic coordinate with each other and can correct propagated errors whose fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
Spronck, Bart; Delhaas, Tammo; Butlin, Mark; Reesink, Koen D; Avolio, Alberto P
2018-03-01
Pulse wave velocity (PWV), a marker of arterial stiffness, is known to change instantaneously with changes in blood pressure. In this mini-review, we discuss two main approaches for handling the blood pressure dependence of PWV: (1) converting PWV into a pressure-independent index, and (2) correcting PWV per se for the pressure dependence. Under option 1, we focus on the cardio-ankle vascular index (CAVI). CAVI is essentially a form of stiffness index β: CAVI is estimated for a (heart-to-ankle) trajectory, whereas β is estimated for a single artery from pressure and diameter measurements. Stiffness index β, and therefore also CAVI, have been shown to theoretically exhibit a slight residual blood pressure dependence due to the use of diastolic blood pressure instead of a fixed reference blood pressure. Additionally, CAVI exhibits pressure dependence due to the use of an estimated derivative of the pressure-diameter relationship. In this mini-review, we address CAVI's blood pressure dependence theoretically, but also statistically. Furthermore, we review corrected indices (CAVI0 and β0) that theoretically do not show a residual blood pressure dependence. Under option 2, three ways of correcting PWV are reviewed: (1) using an exponential relationship between pressure and cross-sectional area, (2) statistical model adjustment, and (3) reference values or a rule of thumb. Method 2 requires a population study to characterise the statistical model, and method 3 requires a representative reference study. Given these limitations, method 1 seems preferable for correcting PWV per se for its blood pressure dependence. In summary, several options are available to handle the blood pressure dependence of PWV. If a blood pressure-independent index is sought, CAVI0 is theoretically preferable over CAVI. If correcting PWV per se is required, an exponential pressure-area relationship provides the user with a method to correct PWV on an individual basis.
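As a hedged numerical companion to the indices discussed above: the sketch below computes an unscaled CAVI-like stiffness index via the Bramwell-Hill relation and the corrected index CAVI0, assuming the commonly quoted form CAVI0 = 2ρ·PWV²/Pd − ln(Pd/Pref) with a reference pressure of 100 mmHg. The proprietary scale coefficients of commercial CAVI devices are deliberately omitted, so the numbers are illustrative only.

```python
import math

RHO_BLOOD = 1050.0    # blood density [kg/m^3]
MMHG_TO_PA = 133.322  # pressure unit conversion

def cavi(pwv, p_sys, p_dia):
    """Unscaled CAVI-like index (Bramwell-Hill form of stiffness index
    beta); pwv in m/s, pressures in mmHg. Device scale coefficients
    are omitted (an intentional simplification)."""
    ps, pd = p_sys * MMHG_TO_PA, p_dia * MMHG_TO_PA
    return (2.0 * RHO_BLOOD / (ps - pd)) * math.log(ps / pd) * pwv**2

def cavi0(pwv, p_dia, p_ref=100.0):
    """Pressure-corrected index CAVI0 referenced to p_ref (mmHg);
    form assumed from the corrected-index literature."""
    pd = p_dia * MMHG_TO_PA
    return 2.0 * RHO_BLOOD * pwv**2 / pd - math.log(pd / (p_ref * MMHG_TO_PA))
```

Both indices grow with PWV, but only CAVI0 removes the dependence on the diastolic working point by referencing a fixed pressure.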
Simplification of a dust emission scheme and comparison with data
NASA Astrophysics Data System (ADS)
Shao, Yaping
2004-05-01
A simplification of a dust emission scheme is proposed, which takes into account saltation bombardment and aggregate disintegration. The central statement of the scheme is that dust emission is proportional to the streamwise saltation flux, with a proportionality that depends on soil texture and soil plastic pressure p. For small p values (loose soils), the dust emission rate is proportional to u*^4 (u* is the friction velocity), but not necessarily so in general. The dust emission predictions of the scheme are compared with several data sets published in the literature. The comparison enables the estimation of a model parameter and of the soil plastic pressure for various soils. While more data are needed for further verification, a general guideline for choosing model parameters is recommended.
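The proportionality argument can be sketched numerically. With an Owen-type streamwise saltation flux Q ∝ u*^3 above threshold, a dust flux proportional to Q scales as u*^3 when the proportionality factor is constant; the extra factor of u* that yields the u*^4 behaviour for loose soils is folded here into `alpha`, a hypothetical constant lump for the soil-texture and plastic-pressure dependence, not Shao's exact parametrisation.

```python
def saltation_flux(u_star, u_star_t, c=1.0, rho=1.2, g=9.81):
    """Owen-type streamwise saltation flux, Q ~ u*^3 above the
    threshold friction velocity u_star_t; zero below threshold."""
    if u_star <= u_star_t:
        return 0.0
    return c * rho / g * u_star**3 * (1.0 - (u_star_t / u_star)**2)

def dust_emission(u_star, u_star_t, alpha):
    """Dust flux proportional to the saltation flux; alpha is a
    hypothetical placeholder for the texture/plastic-pressure
    dependence of the proportionality."""
    return alpha * saltation_flux(u_star, u_star_t)
```

Doubling u* far above threshold multiplies Q by roughly eight, which is the cubic scaling the emission inherits before the additional u* dependence is applied.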
Weak Galerkin method for the Biot’s consolidation model
Hu, Xiaozhe; Mu, Lin; Ye, Xiu
2017-08-23
In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. The WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation of the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.
Quantum annealing correction with minor embedding
NASA Astrophysics Data System (ADS)
Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.
2015-10-01
Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
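Minor embedding represents each logical qubit by a chain of several physical qubits, so every readout must be decoded back to logical values. The paper decodes via an energy-minimization technique; as a simpler, hedged illustration of the decoding step only, the sketch below uses plain majority voting over each chain, with ties broken toward +1 (an arbitrary choice, not the paper's rule).

```python
def majority_vote_decode(physical_spins, embedding):
    """Decode logical spins from a minor-embedded annealer sample.

    physical_spins: dict mapping physical qubit -> +1/-1
    embedding: dict mapping logical qubit -> list of physical qubits

    Majority vote per chain; ties resolved to +1. This is a
    simplified stand-in for the energy-minimization decoder."""
    logical = {}
    for lq, chain in embedding.items():
        total = sum(physical_spins[q] for q in chain)
        logical[lq] = 1 if total >= 0 else -1
    return logical
```

For example, a broken chain [+1, +1, -1] decodes to +1, reflecting the majority of its physical qubits.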
Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc
2015-10-01
Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality, as it allows noninvasive monitoring of fluorescence-targeted tumors located below the tissue surface. Drawbacks of this technique are the background fluorescence, which decreases the contrast, and absorption heterogeneities, which lead to misinterpretations of fluorescence concentrations. We propose a correction technique based on a laser line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. We then use the finding that there is a relationship between the excitation intensity profile and the background fluorescence profile to predict the amount of signal to subtract from the fluorescence images to obtain a better contrast. As the light absorption information is contained in both the fluorescence and excitation images, this method also allows us to correct for the effects of absorption heterogeneities. This technique has been validated in simulations and experimentally. Fluorescent inclusions are observed in several configurations at depths ranging from 1 mm to 1 cm. Results obtained with this technique are compared with those obtained with a classical wide-field detection scheme for contrast enhancement, and with a fluorescence-to-excitation ratio approach for absorption correction.
NASA Astrophysics Data System (ADS)
Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe
2017-12-01
A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to ensure numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks, vortical structures and complex geometries.
NASA Technical Reports Server (NTRS)
Allmaras, S. R.
1986-01-01
The Wall-Pressure Signature Method for correcting low-speed wind tunnel data to free-air conditions has been revised and improved for two-dimensional tests of bluff bodies. The method uses experimentally measured tunnel wall pressures to approximately reconstruct the flow field about the body with potential sources and sinks. With the use of these sources and sinks, the measured drag and tunnel dynamic pressure are corrected for blockage effects. Good agreement is obtained with simpler methods for cases in which the blockage corrections were about 10% of the nominal drag values.
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
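The convective/pressure separation at the heart of the advection upstream splitting method (AUSM) can be made concrete. The sketch below follows the commonly published form of the scheme for the 1-D Euler equations of a perfect gas: split Mach-number polynomials build an interface Mach number that upwinds the convective flux, while split pressure polynomials assemble the interface pressure. It is a minimal sketch, not the authors' production code.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (perfect gas)

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
    """AUSM interface flux for the 1-D Euler equations.
    Returns the (mass, momentum, energy) flux vector."""
    aL = np.sqrt(GAMMA * pL / rhoL)
    aR = np.sqrt(GAMMA * pR / rhoR)
    ML, MR = uL / aL, uR / aR

    # Split Mach numbers (Van Leer-type polynomials for |M| <= 1)
    Mp = 0.25 * (ML + 1.0)**2 if abs(ML) <= 1 else 0.5 * (ML + abs(ML))
    Mm = -0.25 * (MR - 1.0)**2 if abs(MR) <= 1 else 0.5 * (MR - abs(MR))
    M_half = Mp + Mm

    # Split pressures
    pp = (0.25 * pL * (ML + 1.0)**2 * (2.0 - ML) if abs(ML) <= 1
          else 0.5 * pL * (ML + abs(ML)) / ML)
    pm = (0.25 * pR * (MR - 1.0)**2 * (2.0 + MR) if abs(MR) <= 1
          else 0.5 * pR * (MR - abs(MR)) / MR)
    p_half = pp + pm

    # Convective flux vectors, advected by the interface Mach number
    HL = aL**2 / (GAMMA - 1.0) + 0.5 * uL**2  # total enthalpy
    HR = aR**2 / (GAMMA - 1.0) + 0.5 * uR**2
    phiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
    phiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
    conv = M_half * (phiL if M_half >= 0.0 else phiR)

    return conv + np.array([0.0, p_half, 0.0])
```

For a uniform resting state the flux reduces to (0, p, 0), and for uniform supersonic flow it reduces to the exact upwind Euler flux, which illustrates how the pressure term carries the acoustic part while the convective term is purely upwinded.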
Multi-dimensional upwinding-based implicit LES for the vorticity transport equations
NASA Astrophysics Data System (ADS)
Foti, Daniel; Duraisamy, Karthik
2017-11-01
Complex turbulent flows such as rotorcraft and wind turbine wakes are characterized by the presence of strong coherent structures that can be compactly described by vorticity variables. The vorticity-velocity formulation of the incompressible Navier-Stokes equations is employed to increase numerical efficiency. Compared to the traditional velocity-pressure formulation, high order numerical methods and sub-grid scale models for the vorticity transport equation (VTE) have not been fully investigated. Consistent treatment of the convection and stretching terms also needs to be addressed. Our belief is that, by carefully designing sharp gradient-capturing numerical schemes, coherent structures can be more efficiently captured using the vorticity-velocity formulation. In this work, a multidimensional upwind approach for the VTE is developed using the generalized Riemann problem-based scheme devised by Parish et al. (Computers & Fluids, 2016). The algorithm obtains high resolution by augmenting the upwind fluxes with transverse and normal direction corrections. The approach is investigated with several canonical vortex-dominated flows including isolated and interacting vortices and turbulent flows. The capability of the technique to represent sub-grid scale effects is also assessed. This work was funded by a Navy contract titled "Turbulence Modelling Across Disparate Length Scales for Naval Computational Fluid Dynamics Applications," through Continuum Dynamics, Inc.
NASA Technical Reports Server (NTRS)
Lombard, C. K.
1982-01-01
A conservative flux difference splitting is presented for the hyperbolic systems of gasdynamics. The stable, robust method is suitable for wide application in a variety of schemes, explicit or implicit, iterative or direct, for marching in either time or space. The splitting is modeled on the local quasi-one-dimensional characteristics system for multi-dimensional flow, similar to Chakravarthy's nonconservative split coefficient matrix method; but, as a result of maintaining global conservation, the method is able to capture sharp shocks correctly. The embedded characteristics formulation is cast in a primitive variable, the volumetric internal energy (rather than the pressure), that is effective for treating real as well as perfect gases. Finally, the relationship of the splitting to characteristics boundary conditions is discussed, and the associated conservative matrix formulation for a computed blown-wall boundary condition is developed as an example. The theoretical development employs and extends Roe's notion of constructing stable upwind difference formulae by sending split, simple, one-sided flux difference pieces to appropriate mesh sites. The developments are also believed to have the potential for aiding in the analysis of both existing and new conservative difference schemes.
NASA Technical Reports Server (NTRS)
Dobrzynski, W.
1984-01-01
Amiet's correction scheme for sound wave transmission through shear layers is extended to incorporate the additional effects of different temperatures in the flow field and in the surrounding medium at rest. Within a parameter regime typical of acoustic measurements in wind tunnels, amplitude and angle corrections are calculated and plotted systematically to provide a data base for the test engineer.
Brady's Geothermal Field - Analysis of Pressure Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, David
*This submission provides corrections to GDR Submissions 844 and 845* Poroelastic Tomography (PoroTomo) by Adjoint Inverse Modeling of Data from Hydrology. The 3 *csv files containing pressure data are corrected versions of the pressure dataset found in Submission 844. The dataset has been corrected in the sense that the atmospheric pressure has been subtracted from the total pressure measured in the well. Also, the transducers used at wells 56A-1 and SP-2 are sensitive to surface temperature fluctuations; these temperature effects have been removed from the corrected datasets. The 4th *csv file contains a corrected version of the pumping data found in Submission 845. The data have been corrected in the sense that data from several wells used during the PoroTomo deployment pumping tests, which were not included in the original dataset, have been added. In addition, several other minor changes have been made to the pumping records due to flow rate instrument calibration issues that were discovered.
Rotman, Oren Moshe; Weiss, Dar; Zaretsky, Uri; Shitzer, Avraham; Einav, Shmuel
2015-09-18
High-accuracy differential pressure measurements are required in various biomedical and medical applications, such as fluid-dynamic test systems or the cath-lab. Differential pressure measurements using fluid-filled catheters are relatively inexpensive, yet may be subject to common mode pressure (CMP) errors, which can significantly reduce the measurement accuracy. Recently, a novel correction method for high-accuracy differential pressure measurements was presented and shown to effectively remove CMP distortions from measurements acquired in rigid tubes. The purpose of the present study was to test the feasibility of this correction method inside compliant tubes, which effectively simulate arteries. Two tubes with varying compliance were tested under dynamic flow and pressure conditions to cover the physiological range of radial distensibility in coronary arteries. A third, compliant model with a 70% stenosis severity was additionally tested. Differential pressure measurements were acquired over a 3 cm tube length using a fluid-filled double-lumen catheter and were corrected using the proposed CMP correction method. Validation of the corrected differential pressure signals was performed by comparison to differential pressure recordings taken via a direct connection to the compliant tubes, and by comparison to predicted differential pressure readings of matching fluid-structure interaction (FSI) computational simulations. The results show excellent agreement between the experimentally acquired and computationally determined differential pressure signals. This validates the application of the CMP correction method in compliant tubes of the physiological range for up to an intermediate stenosis severity of 70%.
Identification of Unexpressed Premises and Argumentation Schemes by Students in Secondary School.
ERIC Educational Resources Information Center
van Eemeren, Frans H.; And Others
1995-01-01
Reports on exploratory empirical investigations of the performance of Dutch secondary education students in identifying unexpressed premises and argumentation schemes. Finds that, in the absence of any disambiguating contextual information, unexpressed major premises and non-syllogistic premises are more often correctly identified than…
10 CFR 205.322 - Contents of application.
Code of Federal Regulations, 2010 CFR
2010-01-01
... relay protection scheme, including equipment and proposed functional devices; (v) After receipt of the... as insulation medium pressurizing or forced cooling; and (C) cathodic protection scheme. Technical...
10 CFR 205.322 - Contents of application.
Code of Federal Regulations, 2011 CFR
2011-01-01
... relay protection scheme, including equipment and proposed functional devices; (v) After receipt of the... as insulation medium pressurizing or forced cooling; and (C) cathodic protection scheme. Technical...
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 µm). The sensor-measured radiance at this wavelength is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud top pressure is known to within ±100 hPa. With the Rayleigh correction, the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX), conducted near the Azores in June 1992, and compare these results to corresponding retrievals obtained using 0.88 µm. These results provide an example of the Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a nonabsorbing near-infrared wavelength (0.88 µm) in retrieving cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 µm. Hence successful global retrievals of cloud optical thickness should remove Rayleigh scattering effects when using reflectance measurements at 0.66 µm.
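The iterative structure of such a correction can be sketched generically: retrieve a first-guess optical thickness from the uncorrected signal, subtract the Rayleigh contribution implied by that guess, and re-retrieve until the estimate settles. The functions `rayleigh_of_tau` and `retrieve_tau` below are hypothetical placeholders standing in for the single-scattering Rayleigh estimate and the lookup-table retrieval used in practice.

```python
def correct_rayleigh(r_measured, rayleigh_of_tau, retrieve_tau, n_iter=5):
    """Iteratively strip a Rayleigh contribution from a measured
    reflectance before retrieving cloud optical thickness.

    rayleigh_of_tau(tau) -> Rayleigh reflectance estimate (placeholder)
    retrieve_tau(r)      -> optical thickness from cloud-only
                            reflectance (placeholder)"""
    tau = retrieve_tau(r_measured)  # first guess: no correction
    for _ in range(n_iter):
        r_cloud = r_measured - rayleigh_of_tau(tau)  # subtract estimate
        tau = retrieve_tau(r_cloud)
    return tau
```

With a toy forward model the fixed-point iteration converges to the true optical thickness in a few passes, mirroring how the published scheme reduces retrieval errors to a few percent.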
NASA Astrophysics Data System (ADS)
Gotti, Riccardo; Prevedelli, Marco; Kassi, Samir; Marangoni, Marco; Romanini, Daniele
2018-02-01
We apply a feed-forward frequency control scheme to establish a phase-coherent link from an optical frequency comb to a distributed feedback (DFB) diode laser: This allows us to exploit the full laser tuning range (up to 1 THz) with the linewidth and frequency accuracy of the comb modes. The approach relies on the combination of an RF single-sideband modulator (SSM) and of an electro-optical SSM, providing a correction bandwidth in excess of 10 MHz and a comb-referenced RF-driven agile tuning over several GHz. As a demonstration, we obtain a 0.3 THz cavity ring-down scan of the low-pressure methane absorption spectrum. The spectral resolution is 100 kHz, limited by the self-referenced comb, starting from a DFB diode linewidth of 3 MHz. To illustrate the spectral resolution, we obtain saturation dips for the 2ν3 R(6) methane multiplet at μbar pressure. Repeated measurements of the Lamb-dip positions provide a statistical uncertainty in the kHz range.
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds-number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and with the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite-volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation are conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third-order accuracy for low- and high-Reynolds-number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more wall time than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level-set domain through min/max flow. This approach is combined with a curvature-based strand-shortening strategy to qualitatively improve strand grid mesh quality.
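The summation-by-parts (SBP) property used in the strand direction can be made concrete with the classical second-order SBP first-derivative operator; the construction below is a generic textbook example for illustration, not the specific high-order operator developed in the thesis.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Build the classical second-order summation-by-parts first-derivative
    operator D = H^{-1} Q on n points with spacing h.  The SBP property is
    Q + Q^T = diag(-1, 0, ..., 0, 1), which mimics integration by parts
    discretely and underpins provable stability."""
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])   # diagonal norm (quadrature)
    Q = np.zeros((n, n))
    for i in range(1, n - 1):                          # interior central differences
        Q[i, i - 1], Q[i, i + 1] = -0.5, 0.5
    Q[0, 0], Q[0, 1] = -0.5, 0.5                       # one-sided boundary closures
    Q[-1, -2], Q[-1, -1] = -0.5, 0.5
    return np.linalg.solve(H, Q), H, Q
```

Applied to a linear function, the operator differentiates exactly, including at the boundary rows.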
Calibration of 3-D wind measurements on a single engine research aircraft
NASA Astrophysics Data System (ADS)
Mallaun, C.; Giez, A.; Baumann, R.
2015-02-01
An innovative calibration method for the wind speed measurement using a boom-mounted Rosemount model 858 AJ air velocity probe is introduced. The method is demonstrated for a sensor system installed on a medium-size research aircraft used for measurements in the atmospheric boundary layer. The method comprises a series of coordinated flight manoeuvres to directly estimate the aerodynamic influences on the probe and to calculate the measurement uncertainties. The introduction of a differential Global Positioning System (DGPS) combined with a high-accuracy inertial reference system (IRS) has brought major advances to airborne measurement techniques. The exact determination of geometrical height allows the use of the pressure signal as an independent parameter. Furthermore, the exact height information and the stepwise calibration process lead to maximum accuracy. The results show a measurement uncertainty of 0.1 hPa for the aerodynamic influence on the dynamic and static pressures. The applied parametrisation does not require any height dependencies or time shifts. After extensive flight tests, a correction for the flow angles (attack and sideslip angles) was found, which is necessary for a successful wind calculation. A new method is demonstrated to correct for the aerodynamic influence on the sideslip angle. For the three-dimensional (3-D) wind vector (at 100 Hz resolution), a novel error propagation scheme is tested, which determines the measurement uncertainties to be 0.3 m s-1 for the horizontal and 0.2 m s-1 for the vertical wind components.
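The error-propagation idea behind the quoted wind uncertainties can be sketched with first-order Gaussian propagation: the wind vector is the difference between the inertial ground-speed vector and the aerodynamic true-airspeed vector, so independent component uncertainties add in quadrature. This is a generic sketch; the input values are illustrative, not the certified uncertainties of the Mallaun et al. system.

```python
import math

def wind_component_uncertainty(sigma_gs, sigma_tas):
    """First-order propagation of independent 1-sigma uncertainties of a
    ground-speed component (from DGPS/IRS) and a true-airspeed component
    (from the probe) into the derived wind component,
    wind = v_ground - v_air  =>  sigma_wind = sqrt(sigma_gs^2 + sigma_tas^2)."""
    return math.sqrt(sigma_gs ** 2 + sigma_tas ** 2)
```

For example, a precise inertial solution paired with a larger aerodynamic uncertainty leaves the wind uncertainty dominated by the aerodynamic term.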
Analysis of an ABE Scheme with Verifiable Outsourced Decryption.
Liao, Yongjian; He, Yichuan; Li, Fagen; Jiang, Shaoquan; Zhou, Shijie
2018-01-10
Attribute-based encryption (ABE) is a popular cryptographic technology to protect the security of users' data in cloud computing. In order to reduce its decryption cost, outsourcing the decryption of ciphertexts is an available method, which enables users to outsource a large number of decryption operations to the cloud service provider. To guarantee the correctness of transformed ciphertexts computed by the cloud server via the outsourced decryption, it is necessary to check the correctness of the outsourced decryption to ensure security for the data of users. Recently, Li et al. proposed an ABE scheme with full verifiability of outsourced decryption (ABE-VOD) for authorized and unauthorized users, which can simultaneously check the correctness of the transformed ciphertext for both of them. However, in this paper we show that their ABE-VOD scheme does not achieve the results they claimed, such as finding all invalid ciphertexts, or checking the correctness of the transformed ciphertext for the authorized user via checking it for the unauthorized user. We first construct invalid ciphertexts that pass the validity check in the decryption algorithm, which means their "verify-then-decrypt" technique fails. Next, we show that the method of checking the validity of the outsourced decryption for authorized users via checking it for unauthorized users is not always correct: there exist invalid ciphertexts that pass the validity check for the unauthorized user but fail it for the authorized user.
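The verify-then-decrypt pattern under attack can be illustrated with a toy hash-commitment scheme. This sketch is purely pedagogical: the XOR "cipher", the helper names, and the commitment design are invented for illustration and are not the Li et al. ABE-VOD construction.

```python
import hashlib

def encrypt_with_commitment(message: bytes, key: bytes):
    """Toy stand-in for ABE with outsourced decryption: alongside a trivially
    XOR-'encrypted' payload (message must fit in one 32-byte pad), store a
    hash commitment to the plaintext so the user can later verify the
    transformed ciphertext returned by the cloud."""
    pad = hashlib.sha256(key).digest()
    ct = bytes(m ^ p for m, p in zip(message, pad))
    commitment = hashlib.sha256(message).hexdigest()
    return ct, commitment

def verify_then_decrypt(ct: bytes, commitment: str, key: bytes):
    """Recover the plaintext, then accept it only if it matches the stored
    commitment -- the 'verify-then-decrypt' idea the paper scrutinises."""
    pad = hashlib.sha256(key).digest()
    m = bytes(c ^ p for c, p in zip(ct, pad))
    if hashlib.sha256(m).hexdigest() != commitment:
        raise ValueError("outsourced decryption failed verification")
    return m
```

The paper's contribution is precisely that, in the real ABE-VOD scheme, checks of this kind can be passed by carefully crafted invalid ciphertexts.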
NASA Astrophysics Data System (ADS)
Patel, Jitendra Kumar; Natarajan, Ganesh
2017-12-01
We discuss the development and assessment of a robust numerical algorithm for simulating multiphase flows with complex interfaces and high density ratios on arbitrary polygonal meshes. The algorithm combines the volume-of-fluid method with an incremental projection approach for incompressible multiphase flows in a novel hybrid staggered/non-staggered framework. The key principles that characterise the algorithm are the consistent treatment of discrete mass and momentum transport and the similar discretisation of force terms appearing in the momentum equation. The former is achieved by invoking identical schemes for convective transport of volume fraction and momentum in the respective discrete equations while the latter is realised by representing the gravity and surface tension terms as gradients of suitable scalars which are then discretised in identical fashion resulting in a balanced formulation. The hybrid staggered/non-staggered framework employed herein solves for the scalar normal momentum at the cell faces, while the volume fraction is computed at the cell centroids. This is shown to naturally lead to similar terms for pressure and its correction in the momentum and pressure correction equations respectively, which are again treated discretely in a similar manner. We show that spurious currents that corrupt the solution may arise both from an unbalanced formulation where forces (gravity and surface tension) are discretised in dissimilar manner and from an inconsistent approach where different schemes are used to convect the mass and momentum, with the latter prominent in flows which are convection-dominant with high density ratios. Interestingly, the inconsistent approach is shown to perform as well as the consistent approach even for high density ratio flows in some cases while it exhibits anomalous behaviour for other scenarios, even at low density ratios. 
Using a plethora of test problems of increasing complexity, we conclusively demonstrate that the consistent transport and balanced force treatment results in a numerically stable solution procedure and physically consistent results. The algorithm proposed in this study qualifies as a robust approach to simulate multiphase flows with high density ratios on unstructured meshes and may be realised in existing flow solvers with relative ease.
Comparative Study on High-Order Positivity-preserving WENO Schemes
NASA Technical Reports Server (NTRS)
Kotov, D. V.; Yee, H. C.; Sjogreen, B.
2013-01-01
In gas dynamics and magnetohydrodynamics flows, physically, the density and the pressure p should both be positive. In a standard conservative numerical scheme, however, the computed internal energy is obtained by subtracting the kinetic energy from the total energy, resulting in a computed p that may be negative. Examples are problems in which the dominant energy is kinetic. Negative pressure may often emerge in computing blast waves. In such situations the computed eigenvalues of the Jacobian will become imaginary. Consequently, the initial value problem for the linearized system will be ill posed. This explains why failure of preserving positivity of density or pressure may cause blow-ups of the numerical algorithm. The ad hoc numerical strategies which modify the computed negative density and/or the computed negative pressure to be positive are neither a conservative cure nor a stable solution. Conservative positivity-preserving schemes are more appropriate for such flow problems. The ideas of Zhang & Shu (2012) and Hu et al. (2012) precisely address the aforementioned issue. Zhang & Shu constructed a new conservative positivity-preserving procedure to preserve positive density and pressure for high-order WENO schemes with the Lax-Friedrichs flux (WENO/LLF). In general, WENO/LLF is too dissipative for flows such as turbulence with strong shocks computed in direct numerical simulations (DNS) and large eddy simulations (LES). The new conservative positivity-preserving procedure proposed in Hu et al. (2012) can be used with any high-order shock-capturing scheme, including high-order WENO schemes using Roe's flux (WENO/Roe). The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular the more difficult 3D Noh and Sedov problems are considered. 
These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube that is related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature and high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case (LeVeque & Yee 1990) as well as for the case consisting of two species and one reaction (Wang et al. 2012). For further information concerning this issue see (LeVeque & Yee 1990; Griffiths et al. 1992; Lafon & Yee 1996; Yee et al. 2012). This EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.
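The failure mode described above, negative computed pressure from subtracting kinetic from total energy, is easy to reproduce in one line. The sketch below shows the standard pressure recovery for the 1-D Euler equations; it illustrates the cancellation the positivity-preserving limiters guard against, and is not the WENO scheme itself.

```python
def pressure_from_conserved(rho, mom, E, gamma=1.4):
    """Recover pressure from conserved variables of the 1-D Euler equations:
    p = (gamma - 1) * (E - 0.5 * mom^2 / rho).
    When the flow is kinetically dominated, E and the kinetic energy are
    nearly equal, and the subtraction can leave a tiny or (after numerical
    error) negative p, making the linearized system ill posed."""
    return (gamma - 1.0) * (E - 0.5 * mom * mom / rho)
```

For a kinetically dominated state the physically small pressure emerges as the difference of two large, nearly equal numbers.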
Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini
2017-01-01
For the case of approximation of convection-diffusion equations using piecewise affine continuous finite elements a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties we provide a full stability and error analysis, which, in the diffusion dominated regime, shows existence, uniqueness and optimal convergence. Then the algebraic flux correction method is recalled and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.
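The limiting idea behind algebraic flux correction can be shown in its simplest form with the classical minmod limiter; the paper's edge-based nonlinear diffusion and its particular flux limiters are considerably more elaborate, so this is only a generic illustration of the concept.

```python
def minmod(a, b):
    """Classical minmod limiter: return the argument of smaller magnitude
    when a and b share a sign, else 0.  Limiting antidiffusive corrections
    this way is what enforces a discrete maximum principle in flux-corrected
    schemes."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b
```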
NASA Astrophysics Data System (ADS)
Oyama, Takuro; Ikabata, Yasuhiro; Seino, Junji; Nakai, Hiromi
2017-07-01
This Letter proposes a density functional treatment based on the two-component relativistic scheme at the infinite-order Douglas-Kroll-Hess (IODKH) level. The exchange-correlation energy and potential are calculated using the electron density based on the picture-change corrected density operator transformed by the IODKH method. Numerical assessments indicated that the picture-change uncorrected density functional terms generate significant errors, on the order of hartree for heavy atoms. The present scheme was found to reproduce the energetics in the four-component treatment with high accuracy.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Leading-Color Fully Differential Two-Loop Soft Corrections to QCD Dipole Showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dulat, Falko; Höche, Stefan; Prestel, Stefan
We compute the next-to-leading order corrections to soft-gluon radiation differentially in the one-emission phase space. We show that their contribution to the evolution of color dipoles can be obtained in a modified subtraction scheme, such that both one- and two-emission terms are amenable to Monte Carlo integration. The two-loop cusp anomalous dimension is recovered naturally upon integration over the full phase space. We present two independent implementations of the new algorithm in the event generators Pythia and Sherpa, and we compare the resulting fully differential simulation to the CMW scheme.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second- and fourth-order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties, such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve, agreed reasonably well with the measurements. Parametric studies on the effects of grid resolution, turbulence model, inlet boundary condition, and difference scheme for the convective terms have been performed. The results showed that grid resolution and turbulence model are the two primary factors that influence the accuracy of the base flowfield prediction.
Development of a Pressure Sensitive Paint System with Correction for Temperature Variation
NASA Technical Reports Server (NTRS)
Simmons, Kantis A.
1995-01-01
Pressure Sensitive Paint (PSP) is known to provide a global image of pressure over a model surface. However, improvements in its accuracy and reliability are needed. Several factors contribute to the inaccuracy of PSP. One major factor is that luminescence is temperature dependent. To correct the luminescence of the pressure sensing component for changes in temperature, a temperature sensitive luminophore incorporated in the paint allows the user to measure both pressure and temperature simultaneously on the surface of a model. Magnesium Octaethylporphine (MgOEP) was used as a temperature sensing luminophore, with the pressure sensing luminophore, Platinum Octaethylporphine (PtOEP), to correct for temperature variations in model surface pressure measurements.
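PSP data reduction is commonly expressed through a Stern-Volmer calibration, I_ref/I = A(T) + B(T)·p/p_ref, whose coefficients drift with temperature; the co-painted temperature luminophore (MgOEP) supplies the surface temperature needed to evaluate them. The sketch below inverts such a calibration with an assumed linear temperature dependence; all coefficient values are made-up placeholders, not calibration data from this work.

```python
def psp_pressure(i_ratio, temp_c, a0=0.13, a1=0.001, b0=0.87, b1=0.002,
                 p_ref=101.3):
    """Invert a Stern-Volmer calibration I_ref/I = A(T) + B(T) * p/p_ref
    for surface pressure (kPa), where A and B vary linearly with the
    temperature (deg C) measured by the temperature-sensing luminophore.
    Coefficients here are illustrative placeholders only."""
    A = a0 + a1 * temp_c
    B = b0 + b1 * temp_c
    return p_ref * (i_ratio - A) / B
```

Without the temperature term, a hot spot on the model would masquerade as a pressure change, which is exactly the error the two-luminophore paint corrects.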
NASA Astrophysics Data System (ADS)
Yunguo, Gao
1996-12-01
A scheme is presented for positioning the 4000 optical fibres of the LAMOST telescope. It adopts swing rods that are adjusted in parallel and simultaneously by many small tables. Problems such as the positioning accuracy of the optical fibres, the time required to readjust all 4000 fibres, and error correction have been considered in the scheme. The structure has no blind area.
Performance analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
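The quantity bounded in both reports, the probability of undetected error for a detection-only code on a binary symmetric channel, has the closed form P_ud = Σ_w A_w p^w (1−p)^(n−w) over the nonzero codeword weights A_w. The sketch below evaluates it for the small (7,4) Hamming code, whose weight distribution is known exactly; the concatenated schemes in the reports use much longer codes, but the formula is the same.

```python
def p_undetected(p, weights=None, n=7):
    """Probability that a binary symmetric channel with crossover
    probability p produces an error pattern equal to a nonzero codeword
    (hence undetected), for a linear code used purely for detection.
    Default weights are the nonzero weight distribution of the (7,4)
    Hamming code: 7 codewords of weight 3, 7 of weight 4, 1 of weight 7."""
    if weights is None:
        weights = {3: 7, 4: 7, 7: 1}
    return sum(a * p ** w * (1 - p) ** (n - w) for w, a in weights.items())
```

At p = 1/2 every pattern is equally likely, so P_ud is simply (number of nonzero codewords)/2^n = 15/128.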
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2013-07-25
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system explicitly considers the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions, as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
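The projection idea underlying the PPE can be sketched on a 1-D staggered grid: solve lap(p) = (ρ/Δt) div(v*) and subtract (Δt/ρ) grad(p) from the face velocities, leaving the interior divergence-free. This grid form is only an analogue for illustration; the paper's contribution is a particle (SPH) discretisation of the same equation, not this sketch.

```python
def project_1d(v, dx, dt, rho=1.0, iters=2000):
    """One pressure-projection step on a 1-D staggered grid: v holds n+1
    face velocities around n pressure cells.  Solve the pressure Poisson
    equation by Jacobi iteration (zero-pressure ghost cells), then correct
    the interior face velocities; boundary faces are held fixed."""
    n = len(v) - 1
    rhs = [rho / dt * (v[i + 1] - v[i]) / dx for i in range(n)]  # (rho/dt) div v*
    p = [0.0] * n
    for _ in range(iters):
        new = p[:]
        for i in range(n):
            left = p[i - 1] if i > 0 else 0.0
            right = p[i + 1] if i < n - 1 else 0.0
            new[i] = 0.5 * (left + right - dx * dx * rhs[i])
        p = new
    out = v[:]
    for i in range(1, n):                       # correct interior faces only
        out[i] = v[i] - dt / rho * (p[i] - p[i - 1]) / dx
    return out, p
```

After the correction, cells whose faces were both updated carry (numerically) zero divergence, which is exactly what the PPE enforces.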
Long distance quantum communication using quantum error correction
NASA Technical Reports Server (NTRS)
Gingrich, R. M.; Lee, H.; Dowling, J. P.
2004-01-01
We describe a quantum error correction scheme that can increase the effective absorption length of the communication channel. This device can play the role of a quantum transponder when placed in series, or a cyclic quantum memory when inserted in an optical loop.
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and the non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. 
The present study constitutes a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
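For contrast with the ensemble resampling approach, the kind of univariate bias correction the abstract criticises can be sketched as empirical quantile mapping: each model value is replaced by the observed value at the same empirical quantile, which corrects marginal distributions but ignores multivariate structure. A textbook sketch, not the study's code.

```python
import bisect

def quantile_map(model_vals, model_sorted, obs_sorted):
    """Empirical quantile-mapping bias correction.  model_sorted and
    obs_sorted are sorted calibration samples from the model and from
    observations; each value in model_vals is mapped to the observed value
    at its own empirical quantile.  Applied per variable, this can break
    physical consistency between variables -- the drawback motivating the
    paper's resampling scheme."""
    out = []
    m, n = len(model_sorted), len(obs_sorted)
    for v in model_vals:
        q = bisect.bisect_left(model_sorted, v) / m   # empirical quantile of v
        out.append(obs_sorted[min(int(q * n), n - 1)])
    return out
```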
Murmur intensity in adult dogs with pulmonic and subaortic stenosis reflects disease severity.
Caivano, D; Dickson, D; Martin, M; Rishniw, M
2018-03-01
The aims of this study were to determine whether murmur intensity in adult dogs with pulmonic stenosis or subaortic stenosis reflects echocardiographic disease severity and to determine whether a six-level murmur grading scheme provides clinical advantages over a four-level scheme. In this retrospective multi-investigator study on adult dogs with pulmonic stenosis or subaortic stenosis, murmur intensity was compared to echocardiographically determined pressure gradient across the affected valve. Disease severity, based on pressure gradients, was assessed between sequential murmur grades to identify redundancy in classification. A simplified four-level murmur intensity classification scheme ('soft', 'moderate', 'loud', 'palpable') was evaluated. In total, 284 dogs (153 with pulmonic stenosis, 131 with subaortic stenosis) were included; 55 dogs had soft, 59 had moderate, 72 had loud and 98 had palpable murmurs. 95 dogs had mild stenosis, 46 had moderate stenosis, and 143 had severe stenosis. No dogs with soft murmurs of either pulmonic or subaortic stenosis had transvalvular pressure gradients greater than 50 mmHg. Dogs with loud or palpable murmurs mostly, but not always, had severe stenosis. Stenosis severity increased with increasing murmur intensity. The traditional six-level murmur grading scheme provided no additional clinical information than the four-level descriptive murmur grading scheme. A simplified descriptive four-level murmur grading scheme differentiated stenosis severity without loss of clinical information, compared to the traditional six-level scheme. Soft murmurs in dogs with pulmonic or subaortic stenosis are strongly indicative of mild lesions. Loud or palpable murmurs are strongly suggestive of severe stenosis. © 2017 British Small Animal Veterinary Association.
Correction of Pressure Drop in Steam and Water System in Performance Test of Boiler
NASA Astrophysics Data System (ADS)
Liu, Jinglong; Zhao, Xianqiao; Hou, Fanjun; Wu, Xiaowu; Wang, Feng; Hu, Zhihong; Yang, Xinsen
2018-01-01
The steam and water pressure drop is one of the most important characteristics in the boiler performance test. Because the measuring points are not located at the guaranteed positions and the test conditions fluctuate, the measured pressure drop of the steam and water system deviates both with measuring-point position and with the operating parameters of the test run. To obtain an accurate pressure drop of the steam and water system, corresponding corrections must be applied. This paper introduces the correction method for the steam and water pressure drop in the boiler performance test.
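One generic form such a correction can take is a similarity scaling of the measured friction pressure drop from the test condition to the rated (guaranteed) condition, assuming the drop varies as (mass flow)² times mean specific volume. This is an illustrative assumption only; the standard-mandated procedure the paper describes includes further terms (e.g. static head and measuring-point position).

```python
def correct_pressure_drop(dp_meas, mdot_test, mdot_rated, v_test, v_rated):
    """Scale a measured friction pressure drop (any pressure unit) from
    test to rated conditions under the similarity assumption
    dp ~ mdot^2 * v (mdot: mass flow, v: mean specific volume).
    A generic sketch, not the full standardised correction."""
    return dp_meas * (mdot_rated / mdot_test) ** 2 * (v_rated / v_test)
```

Doubling the mass flow at constant specific volume thus quadruples the corrected drop.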
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... Toughness Requirements for Protection Against Pressurized Thermal Shock Events; Correcting Amendment AGENCY... Commission (NRC) is revising its regulations to add a table that was inadvertently omitted in a correction... toughness requirements for protection against pressurized thermal shock (PTS) events for pressurized water...
NASA Astrophysics Data System (ADS)
Yu, Tianxu; Rose, William I.; Prata, A. J.
2002-08-01
Volcanic ash in volcanic clouds can be mapped in two dimensions using two-band thermal infrared data available from meteorological satellites. Wen and Rose [1994] developed an algorithm that allows retrieval of the effective particle size, the optical depth of the volcanic cloud, and the mass of fine ash in the cloud. Both the mapping and the retrieval scheme are less accurate in the humid tropical atmosphere. In this study we devised and tested a scheme for atmospheric correction of volcanic ash mapping and retrievals. The scheme utilizes infrared (IR) brightness temperature (BT) information in two infrared channels (both between 10 and 12.5 μm) and the brightness temperature differences (BTD) to estimate the amount of BTD shift caused by lower tropospheric water vapor. It is supported by the moderate resolution transmission (MODTRAN) analysis. The discrimination of volcanic clouds in the new scheme also uses both BT and BTD data but corrects for the effects of the water vapor. The new scheme is demonstrated and compared with the old scheme using two well-documented examples: (1) the 18 August 1992 volcanic cloud of Crater Peak, Mount Spurr, Alaska, and (2) the 26 December 1997 volcanic cloud from Soufriere Hills, Montserrat. The Spurr example represents a relatively "dry" subarctic atmospheric condition. The new scheme sees a volcanic cloud that is about 50% larger than the old. The mean optical depth and effective radii of cloud particles are lower by 22% and 9%, and the fine ash mass in the cloud is 14% higher. The Montserrat cloud is much smaller than Spurr and is more sensitive to atmospheric moisture. It also was located in a moist tropical atmosphere. For the Montserrat example the new scheme shows larger differences, with the area of the volcanic cloud being about 5.5 times larger, the optical depth and effective radii of particles lower by 56% and 28%, and the total fine particle mass in the cloud increased by 53%. 
The new scheme can be automated and can contribute to more accurate remote volcanic ash detection. More tests are needed to find the best way to estimate the water vapor effects in real time.
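The two-band "split window" discrimination at the heart of both the old and new schemes reduces to a sign test on the brightness temperature difference, with the new scheme shifting the threshold by an estimate of the lower-tropospheric moisture effect. The sketch below shows only this thresholding logic; the shift value is a stand-in for the MODTRAN-supported estimate, and all numbers are illustrative.

```python
def ash_flag(bt11, bt12, threshold=0.0, water_vapor_shift=0.0):
    """Split-window volcanic ash test: silicate ash makes the ~11-um
    brightness temperature lower than the ~12-um one (negative BTD),
    while water/ice clouds give positive BTD.  water_vapor_shift raises
    the decision threshold to undo the positive BTD bias that lower
    tropospheric moisture imposes, as in the corrected scheme."""
    btd = bt11 - bt12
    return btd < threshold + water_vapor_shift
```

A moist tropical atmosphere pushes BTD positive, so without the shift term genuinely ashy pixels near the cloud edge are missed, which is why the corrected scheme sees larger clouds.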
Lefave, Melissa; Harrell, Brad; Wright, Molly
2016-06-01
The purpose of this project was to assess the ability of anesthesiologists, nurse anesthetists, and registered nurses to correctly identify the anatomic landmarks for cricoid pressure and to apply the correct amount of force. The project used an educational intervention with a one-group pretest-posttest design. Participants demonstrated cricoid pressure on a laryngotracheal model. After an educational intervention video, participants were asked to repeat cricoid pressure on the model. Participants with a nurse anesthesia background applied more appropriate force at pretest than other participants; however, posttest results, while improved, showed no significant difference among providers. Participant identification of the correct anatomy of the cricoid cartilage and application of correct force were significantly improved after education. This study revealed that, before the educational intervention, participants lacked knowledge of correct cricoid anatomy and pressure as well as the ability to apply correct force to the laryngotracheal model. The intervention used in this study proved successful in educating health care providers. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
2014-02-01
Processors would otherwise sit idle waiting for the wavefront to reach them; to overcome this, Reeve et al. (2001) developed a scheme in analogy to the red-black Gauss-Seidel iterative method. Parallelization of the SIMPLE iterative scheme with the SIP solver used a red-black ordering similar to the red-black Gauss-Seidel scheme for pressure-velocity coupling; the result is a slowing of the convergence of the outer iterations.
Analysis of composite ablators using massively parallel computation
NASA Technical Reports Server (NTRS)
Shia, David
1995-01-01
In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions, and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. 
The gas storage term is included in the explicit pressure calculation of both problems. Results from the ablative composite plate problem are compared with previous numerical results that did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term; however, the through-thickness pressure and stress distributions, and the extent of chemical reactions, differ from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius-type rate equations and (2) pressure-dependent Arrhenius-type rate equations. The numerical results are compared to experimental results, and the pressure-dependent model captures the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. A good speedup is found on the CM-5: for 32 CPUs, the speedup is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and of the effective parallelization of the algorithm. There also appears to be an optimum number of CPUs to use for a given problem.
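The hybrid strategy described above, choosing between explicit and implicit time stepping by a critical time scale, can be sketched for a model 1D diffusion equation. This is a minimal illustration, not the paper's ablator model; the grid, coefficients, and Dirichlet boundaries are assumptions:

```python
import numpy as np

def step_diffusion(u, alpha, dx, dt):
    """Advance du/dt = alpha * d2u/dx2 by one step, picking the method
    from the explicit stability limit dt_crit = dx**2 / (2 * alpha)."""
    n = len(u)
    r = alpha * dt / dx**2
    if r <= 0.5:
        # explicit FTCS is stable here: cheap and easy to parallelize
        un = u.copy()
        un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
        return un, "explicit"
    # implicit backward Euler: unconditionally stable but needs a solve
    A = np.eye(n) * (1 + 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
    A[0, :] = 0.0; A[0, 0] = 1.0      # fixed (Dirichlet) boundaries
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, u), "implicit"
```

When the physics forces a time step below the stability limit anyway, the explicit branch is taken for free; otherwise the implicit solve avoids the tiny steps.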
Real gas flow parameters for NASA Langley 22-inch Mach 20 helium tunnel
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1992-01-01
A computational procedure was developed which can be used to determine the flow properties in hypersonic helium wind tunnels in which real gas behavior is significant. In this procedure, a three-coefficient virial equation of state and the assumption of isentropic nozzle flow are employed to determine the tunnel reservoir, nozzle, throat, freestream, and post-normal shock conditions. This method was applied to a range of conditions which encompasses the operational capabilities of the LaRC 22-Inch Mach 20 Helium Tunnel. Results are presented graphically in the form of real gas correction factors which can be applied to perfect gas calculations. Important thermodynamic properties of helium are also plotted versus pressure and temperature. The computational scheme used to determine the real-helium flow parameters was incorporated into a FORTRAN code which is discussed.
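The kind of truncated (three-coefficient) virial evaluation used in such a procedure can be sketched as follows; the compressibility factor Z is itself the real-gas correction factor to the perfect-gas pressure. The virial coefficients below are illustrative placeholders, not the report's fitted helium values:

```python
def virial_pressure(rho, T, R=2077.1, B=11.5e-3, C=0.12e-6):
    """Pressure from a truncated virial equation of state,
    p = rho * R * T * (1 + B*rho + C*rho**2).
    R is the specific gas constant of helium [J/(kg K)];
    B and C are illustrative virial coefficients, not fitted values.
    Returns (pressure, Z); Z is the real-gas correction factor."""
    Z = 1.0 + B * rho + C * rho**2      # compressibility factor
    return rho * R * T * Z, Z
```

Dividing any real-gas pressure by rho*R*T recovers Z, which is how correction factors to perfect-gas calculations can be tabulated.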
An architecture for rapid prototyping of control schemes for artificial ventricles.
Ficola, Antonio; Pagnottelli, Stefano; Valigi, Paolo; Zoppitelli, Maurizio
2004-01-01
This paper presents an experimental system aimed at rapid prototyping of feedback control schemes for ventricular assist devices, and artificial ventricles in general. The system comprises a classical mock circulatory system, an actuated bellows-based ventricle chamber, and a software architecture for control-scheme implementation and for the acquisition, visualization, and storage of experimental data. Several experiments have been carried out, showing good performance of ventricular pressure tracking control schemes.
Unstructured grids for sonic-boom analysis
NASA Technical Reports Server (NTRS)
Fouladi, Kamran
1993-01-01
A fast and efficient unstructured grid scheme is evaluated for sonic-boom applications. The scheme is used to predict the near-field pressure signatures of a body of revolution at several body lengths below the configuration, and those results are compared with experimental data. The introduction of the 'sonic-boom grid topology' to this scheme makes it well suited for sonic-boom applications, thus providing an alternative to conventional multiblock structured grid schemes.
Erratum: 2-Bromo-1-(4-methyl-phen-yl)-3-phenyl-prop-2-en-1-one. Corrigendum.
Fun, Hoong-Kun; Jebas, Samuel Robinson; Patil, P S; Karthikeyan, M S; Dharmaprakash, S M
2008-11-13
The chemical name in the title and the scheme of the paper by Fun, Jebas, Patil, Karthikeyan & Dharmaprakash [Acta Cryst. (2008), E64, o1559] are corrected. [This corrects the article DOI: 10.1107/S1600536808022289.]
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects quantum states against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application
Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang
2018-01-01
Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
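The adaptive robust idea, distrusting measurements whose innovations are statistically too large, can be illustrated with a scalar toy filter (identity dynamics, chi-square gate). This is a sketch of the general mechanism, not the paper's ARKF or its hybrid-correction scheme:

```python
def arkf_update(x, P, z, H=1.0, R=0.5, Q=0.01, gate=3.84):
    """One predict/update cycle of a scalar Kalman filter with a
    simple adaptive-robust step: if the squared innovation exceeds
    the gate (95% chi-square bound for 1 dof), inflate the
    measurement-noise variance R so the outlier is down-weighted."""
    P = P + Q                     # predict (identity dynamics)
    nu = z - H * x                # innovation
    S = H * P * H + R
    if nu**2 / S > gate:          # robust step: distrust the outlier
        R = R * nu**2 / S         # inflate measurement noise
        S = H * P * H + R
    K = P * H / S
    return x + K * nu, (1 - K * H) * P
```

A consistent measurement pulls the estimate strongly; an outlier such as a corrupted DVL velocity is largely ignored instead of dragging the state away.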
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
NASA Astrophysics Data System (ADS)
Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas
1992-07-01
Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate this problem by the deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
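The pattern-domain deconvolution can be sketched in 1D: iterate the dose along the negative gradient of the squared exposure error under a known point spread function. This is a toy with an assumed single-Gaussian PSF; real proximity-effect correction is two-dimensional with a forward- plus backscatter PSF:

```python
import numpy as np

def correct_dose(target, psf, iters=200, lr=0.5):
    """Steepest-descent deconvolution: find a dose d such that the
    exposure psf * d matches the target. The gradient of
    0.5 * ||psf * d - target||**2 is the flipped PSF convolved with
    the residual; doses are clipped to stay non-negative."""
    d = target.astype(float).copy()
    psf_flip = psf[::-1]
    for _ in range(iters):
        resid = np.convolve(d, psf, mode="same") - target
        d -= lr * np.convolve(resid, psf_flip, mode="same")
        np.clip(d, 0.0, None, out=d)
    return d
```

After the iteration, exposing the corrected dose through the PSF reproduces the target pattern more closely than exposing the pattern itself would.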
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-01-01
Hyperentanglement is an effective quantum resource for quantum communication networks owing to its high capacity, its low loss rate, and its unusual ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication. PMID:26861681
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural, real-world scenes as we see them every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images by designing an efficient transform that reduces the redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
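The flavor of the hybrid prediction step can be sketched with plain global disparity compensation: predict the right image from the shifted left image and keep only the residual, which reconstructs losslessly. The luminance correction and optimized predictor of the actual scheme are omitted, and the function names are illustrative:

```python
import numpy as np

def stereo_encode(left, right, max_disp=8):
    """Prediction step of a lifting-style stereo codec sketch: search
    for the best global horizontal disparity, then store the left
    image, the prediction residual, and the disparity."""
    best_d, best_err = 0, float("inf")
    for d in range(max_disp + 1):
        err = np.abs(np.roll(left, -d, axis=1) - right).mean()
        if err < best_err:
            best_d, best_err = d, err
    residual = right - np.roll(left, -best_d, axis=1)
    return left, residual, best_d          # what would be entropy-coded

def stereo_decode(left, residual, d):
    """Inverse step: reconstructs the right image exactly."""
    return np.roll(left, -d, axis=1) + residual
```

Because the residual carries far less energy than the right image itself, it compresses much better, which is where the joint-coding gain comes from.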
Investigation of television transmission using adaptive delta modulation principles
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1976-01-01
The results are presented of a study on the use of the delta modulator as a digital encoder of television signals. The computer simulation of different delta modulators was studied in order to find a satisfactory delta modulator. After finding a suitable delta modulator algorithm via computer simulation, the results were analyzed and then implemented in hardware to study its ability to encode real time motion pictures from an NTSC format television camera. The effects of channel errors on the delta modulated video signal were tested along with several error correction algorithms via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real time motion pictures. Delta modulators were investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real time frame to frame encoding scheme which required the assembly of fourteen, 131,000 bit long shift registers as well as a high speed delta modulator. The other schemes involved the computer simulation of two dimensional delta modulator algorithms.
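A Jayant-style adaptive delta modulator of the general kind studied here can be sketched in a few lines: one bit per sample, with the step size growing on runs of identical bits (slope overload) and shrinking when the bits alternate (granular noise). The adaptation constants are assumptions, not those of the reported ECL hardware:

```python
def adm_encode(x, step0=0.1, step_min=0.01, step_max=1.0):
    """Encode samples to one bit each; the decoder mirrors the same
    step-size adaptation, so only the bits need to be transmitted."""
    bits, est, step, last = [], 0.0, step0, 1
    for s in x:
        b = 1 if s >= est else -1
        step = min(step_max, step * 1.5) if b == last else max(step_min, step / 1.5)
        est += b * step
        bits.append(b)
        last = b
    return bits

def adm_decode(bits, step0=0.1, step_min=0.01, step_max=1.0):
    """Rebuild the staircase approximation from the bit stream."""
    out, est, step, last = [], 0.0, step0, 1
    for b in bits:
        step = min(step_max, step * 1.5) if b == last else max(step_min, step / 1.5)
        est += b * step
        out.append(est)
        last = b
    return out
```

A channel bit error desynchronizes the mirrored step adaptation, which is why the error-correction schemes discussed above matter so much for delta-modulated video.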
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Lan, E-mail: chenglanster@gmail.com; Stopkowicz, Stella, E-mail: stella.stopkowicz@kjemi.uio.no; Gauss, Jürgen, E-mail: gauss@uni-mainz.de
A perturbative approach to compute second-order spin-orbit (SO) corrections to a spin-free Dirac-Coulomb Hartree-Fock (SFDC-HF) calculation is suggested. The proposed scheme treats the difference between the DC and SFDC Hamiltonians as a perturbation and exploits analytic second-derivative techniques. In addition, a cost-effective scheme for incorporating relativistic effects in high-accuracy calculations is suggested, consisting of an SFDC coupled-cluster treatment augmented by perturbative SO corrections obtained at the HF level. Benchmark calculations for the hydrogen halides HX, X = F-At, as well as the coinage-metal fluorides CuF, AgF, and AuF demonstrate the accuracy of the proposed perturbative treatment of SO effects on energies and electrical properties in comparison with the more rigorous full DC treatment. Furthermore, we present, as an application of our scheme, results for the electrical properties of AuF and XeAuF.
Maly, Friedrich E; Fried, Roman; Spannagl, Michael
2014-01-01
INSTAND e.V. has provided Molecular Genetics Multi-Analyte EQA schemes since 2006. EQA participation and performance were assessed from 2006 - 2012. From 2006 to 2012, the number of analytes in the Multi-Analyte EQA schemes rose from 17 to 53. Total number of results returned rose from 168 in January 2006 to 824 in August 2012. The overall error rate was 1.40 +/- 0.84% (mean +/- SD, N = 24 EQA dates). From 2006 to 2012, no analyte was reported 100% correctly. Individual participant performance was analysed for one common analyte, Lactase (LCT) T-13910C. From 2006 to 2012, 114 laboratories participated in this EQA. Of these, 10 laboratories (8.8%) reported at least one wrong result during the whole observation period. All laboratories reported correct results after their failure incident. In spite of the low overall error rate, EQA will continue to be important for Molecular Genetics.
NASA Astrophysics Data System (ADS)
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme that can estimate the time-varying noise variance by using prior information from the watermark symbols, thereby improving the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings an improvement of about 0.25 dB in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error-correction performance and decoding efficiency.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
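The control logic above (inner code for correction and detection, outer code for detection only, retransmission on either failure) can be sketched with toy codes: a (3,1) repetition inner code and a 1-bit XOR outer checksum stand in for the much stronger block codes analyzed in the report:

```python
import random

def xor_checksum(bits):
    c = 0
    for b in bits:
        c ^= b
    return c

def transmit(bits, p_err, rng):
    """Concatenated error-control sketch: append an outer checksum,
    apply a (3,1) repetition inner code (majority vote corrects one
    error per triple), and retransmit until the outer check passes."""
    frame = bits + [xor_checksum(bits)]                    # outer code
    coded = [b for bit in frame for b in (bit, bit, bit)]  # inner code
    attempts = 0
    while True:
        attempts += 1
        recv = [b ^ (1 if rng.random() < p_err else 0) for b in coded]
        decoded = [1 if sum(recv[i:i + 3]) >= 2 else 0
                   for i in range(0, len(recv), 3)]        # majority vote
        data, check = decoded[:-1], decoded[-1]
        if xor_checksum(data) == check:                    # outer accepts
            return data, attempts
```

The toy checksum misses an even number of residual bit errors, which is exactly the "probability of undetected error" the report derives for real codes.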
Extraction of Xenon Using Enriching Reflux Pressure Swing Adsorption
2010-09-01
collection scheme aimed at preconcentrating xenon without the use of any form of cooling. The collection scheme utilizes activated charcoal (AC), a... collection efficiency for a given trap size. For a given isothermal system, it can be seen that if adsorption occurs at high pressure, where capacity is... activated charcoal at room temperature. These results are presented below and show that these early tests appear very promising and that useful quantities
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
NASA Astrophysics Data System (ADS)
Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe
2018-02-01
A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.
Symmetric weak ternary quantum homomorphic encryption schemes
NASA Astrophysics Data System (ADS)
Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao
2016-03-01
Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, the two-qutrit QHE scheme for the generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability of p_k = 1/3^(3n), so the schemes can well protect the privacy of users' data. Moreover, these schemes can be integrated into the future quantum remote server architecture, and thus the computational security of users' private quantum information can be protected in a distributed computing environment.
Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng
2013-06-01
The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system safeguards patients' electronic patient records (EPRs) and helps health care workers and medical personnel rapidly make correct clinical decisions. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only has a lower computational cost and does not require verifier tables for storing users' secrets, but also solves the security problems of previous schemes and withstands possible attacks.
NASA Astrophysics Data System (ADS)
Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming
2015-03-01
The primary objective of this study is to improve deterministic high-resolution forecasts of the rainfall caused by severe storms by merging an extrapolation radar-based scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), in forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times of less than 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected ones. Moreover, optimally merging the results using the hyperbolic tangent weight scheme further improved the forecast accuracy and made the forecasts more stable.
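A hyperbolic-tangent weighting of this kind can be sketched directly: the blend hands over smoothly from the radar extrapolation at short lead times to the NWP forecast at longer ones. The handover midpoint and scale below are illustrative tuning constants, not the paper's values:

```python
import math

def merge_forecasts(extrap, nwp, lead_min, t_mid=35.0, t_scale=10.0):
    """Blend two rainfall forecasts for one pixel at a given lead time
    (minutes); the NWP weight rises smoothly from ~0 to ~1 around t_mid,
    mirroring where extrapolation skill gives way to model skill."""
    w_nwp = 0.5 * (1.0 + math.tanh((lead_min - t_mid) / t_scale))
    return (1.0 - w_nwp) * extrap + w_nwp * nwp
```

The smooth weight avoids the discontinuity a hard switch between the two forecast sources would introduce at the handover time.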
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
Elucidation of molecular kinetic schemes from macroscopic traces using system identification
González-Maeso, Javier; Sealfon, Stuart C.; Galocha-Iragüen, Belén; Brezina, Vladimir
2017-01-01
Overall cellular responses to biologically-relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. 
Our results indicate that our methodology can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems. PMID:28192423
1992-01-01
multiversioning scheme for this purpose was presented in [9]. The scheme guarantees that high level methods would read down object states at lower levels that...order given by fork-stamp, and terminated writing versions with timestamp WStamp. Such a history is needed to implement the multiversioning scheme...recovery protocol for multiversion schedulers and show that this protocol is both correct and secure. The behavior of the recovery protocol depends
NASA Technical Reports Server (NTRS)
Huynh, H. T.; Wang, Z. J.; Vincent, P. E.
2013-01-01
Popular high-order schemes with compact stencils for Computational Fluid Dynamics (CFD) include Discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV) methods. The recently proposed Flux Reconstruction (FR) approach or Correction Procedure using Reconstruction (CPR) is based on a differential formulation and provides a unifying framework for these high-order schemes. Here we present a brief review of recent developments for the FR/CPR schemes as well as some pacing items.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor
2013-04-01
We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.
NASA Astrophysics Data System (ADS)
Chakraborty, Swarnendu Kumar; Goswami, Rajat Subhra; Bhunia, Chandan Tilak; Bhunia, Abhinandan
2016-06-01
The aggressive packet combining (APC) scheme is well established in the literature, and several modifications have been studied earlier for improving throughput. In this paper, three new modifications of APC are proposed. The performance of the proposed modified APC is studied by simulation and is reported here. A hybrid scheme is also proposed for achieving higher throughput, and the disjoint factor of conventional APC is compared with that of the proposed schemes.
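Basic aggressive packet combining can be sketched as a bitwise majority vote over the stored erroneous copies of a packet; the three proposed modifications and the disjoint-factor analysis are not reproduced in this toy:

```python
def apc_combine(copies):
    """Combine several received copies of the same packet by bitwise
    majority vote; with independent errors, the combined packet is
    correct wherever fewer than half of the copies err on that bit."""
    n = len(copies)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*copies)]
```

Because each copy would have been discarded by plain ARQ, combining them recovers throughput that retransmission-only schemes waste.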
Numerical simulation of large-scale ocean-atmosphere coupling and the ocean's role in climate
NASA Technical Reports Server (NTRS)
Gates, W. L.
1983-01-01
The problem of reducing model generated sigma coordinate data to pressure levels is considered. A mass consistent scheme for performing budget analyses is proposed, wherein variables interpolated to a given pressure level are weighted according to the mass between a nominal pressure level above and either a nominal pressure level below or the Earth's surface, whichever is closer. The method is applied to the atmospheric energy cycle as simulated by the OSU two level atmospheric general circulation model. The results are more realistic than sigma coordinate analyses with respect to eddy decomposition, and are in agreement with the sigma coordinate evaluation of the numerical energy sink. Comparison with less sophisticated budget schemes indicates superiority locally, but not globally.
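The mass weighting described above can be sketched as follows: the mass credited to a nominal pressure level extends from the level above down to the lower nominal level or to the Earth's surface, whichever is closer. Pressures are in hPa, g is omitted since it cancels in weighted averages, and the function is an illustrative reading of the scheme rather than the paper's code:

```python
def mass_weight(p_above, p_below, p_surface):
    """Mass (per unit area, in pressure units) assigned to a nominal
    pressure level: the layer from the nominal level above down to
    min(level below, surface pressure). Zero if the nominal level
    itself lies below the surface (inside topography)."""
    bottom = min(p_below, p_surface)
    return max(bottom - p_above, 0.0)
```

Weighting interpolated variables by these masses keeps budget integrals consistent with the model's own sigma-coordinate mass, which is the point of the scheme.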
NASA Astrophysics Data System (ADS)
Bessler, Wolfgang G.; Schulz, Christof; Lee, Tonghun; Jeffries, Jay B.; Hanson, Ronald K.
2003-04-01
A-X(0,1) excitation is a promising new approach for NO laser-induced fluorescence (LIF) diagnostics at elevated pressures and temperatures. We present what to our knowledge are the first detailed spectroscopic investigations within this excitation band using wavelength-resolved LIF measurements in premixed methane/air flames at pressures between 1 and 60 bar and a range of fuel/air ratios. Interference from O2 LIF is a significant problem in lean flames for NO LIF measurements, and pressure broadening and quenching lead to increased interference with increased pressure. Three different excitation schemes are identified that maximize NO/O2 LIF signal ratios, thereby minimizing the O2 interference. The NO LIF signal strength, interference by hot molecular oxygen, and temperature dependence of the three schemes are investigated.
NASA Technical Reports Server (NTRS)
Lang, Stephen E.; Tao, Wei-Kuo; Chern, Jiun-Dar; Wu, Di; Li, Xiaowen
2015-01-01
Numerous cloud microphysical schemes designed for cloud and mesoscale models are currently in use, ranging from simple bulk to multi-moment, multi-class to explicit bin schemes. This study details the benefits of adding a 4th ice class (hail) to an already improved 3-class ice bulk microphysics scheme developed for the Goddard Cumulus Ensemble model based on Rutledge and Hobbs (1983, 1984). Besides the addition and modification of several hail processes from Lin et al. (1983), further modifications were made to the 3-ice processes, including allowing greater ice supersaturation and mitigating spurious evaporation/sublimation in the saturation adjustment scheme, allowing graupel/hail to become snow via vapor growth and hail to become graupel via riming, and the inclusion of a rain evaporation correction and vapor diffusivity factor. The improved 3-ice snow/graupel size-mapping schemes were adjusted to be more stable at higher mixing ratios and to increase the aggregation effect for snow. A snow density mapping was also added. The new scheme was applied to an intense continental squall line and a weaker, loosely organized continental case using three different hail intercepts. Peak simulated reflectivities agree well with radar for both the intense and the weaker case and were better than earlier 3-ice versions when using a moderate and large intercept for hail, respectively. Simulated reflectivity distributions versus height were also improved versus radar in both cases compared to earlier 3-ice versions. The bin-based rain evaporation correction affected the squall line case more but did not change the overall agreement in reflectivity distributions.
Physical oceanography from satellites: Currents and the slope of the sea surface
NASA Technical Reports Server (NTRS)
Sturges, W.
1974-01-01
A global scheme using satellite altimetry in conjunction with thermometry techniques provides for more accurate determinations of first order leveling networks by overcoming discrepancies between ocean leveling and land leveling methods. The high noise content in altimetry signals requires filtering or correction for tides, etc., as well as carefully planned sampling schemes.
Displacement data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, W. Steven; Venkataramani, Shankar; Mariano, Arthur J.
We show that modifying a Bayesian data assimilation scheme by incorporating kinematically-consistent displacement corrections produces a scheme that is demonstrably better at estimating partially observed state vectors in a setting where feature information is important. While the displacement transformation is generic, here we implement it within an ensemble Kalman Filter framework and demonstrate its effectiveness in tracking stochastically perturbed vortices.
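A minimal stochastic ensemble Kalman filter analysis step, the generic framework into which the paper's displacement correction plugs, can be sketched as below. The displacement transformation itself is not reproduced here, and all names are illustrative:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_var, rng):
    """One stochastic ensemble Kalman filter analysis step (sketch).

    ensemble: (n_members, n_state) state vectors; H: linear observation
    operator (n_obs, n_state); y: observation vector (n_obs,); obs_var:
    scalar observation-error variance. This is the plain EnKF, without
    the kinematically consistent displacement correction of the paper.
    """
    X = ensemble
    n = X.shape[0]
    A = X - X.mean(axis=0)                     # state anomalies
    HX = X @ H.T                               # ensemble in observation space
    HA = HX - HX.mean(axis=0)
    P_hh = HA.T @ HA / (n - 1) + obs_var * np.eye(len(y))
    P_xh = A.T @ HA / (n - 1)
    K = P_xh @ np.linalg.inv(P_hh)             # Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(y)))
    return X + (y_pert - HX) @ K.T             # analysis ensemble
```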
40 CFR 144.55 - Corrective action.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the case of Class II wells operating over the fracture pressure of the injection formation, all known wells within the area of review penetrating formations affected by the increase in pressure. For such... injection until all required corrective action has been taken. (3) Injection pressure limitation. The...
77 FR 2928 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-20
... pressure hose and electrical wiring of the green electrical motor pump (EMP). This proposed AD would... for correct condition and installation of hydraulic pressure hoses, electrical conduits, feeder cables... detect and correct chafing of hydraulic pressure hoses and electrical wiring of the green EMPs, which in...
NNLO QCD corrections to associated WH production and H → bb̄ decay
NASA Astrophysics Data System (ADS)
Caola, Fabrizio; Luisoni, Gionata; Melnikov, Kirill; Röntsch, Raoul
2018-04-01
We present a computation of the next-to-next-to-leading-order (NNLO) QCD corrections to the production of a Higgs boson in association with a W boson at the LHC and the subsequent decay of the Higgs boson into a bb̄ pair, treating the b quarks as massless. We consider various kinematic distributions and find significant corrections to observables that resolve the Higgs decay products. We also find that a cut on the transverse momentum of the W boson, important for experimental analyses, may have a significant impact on kinematic distributions and radiative corrections. We show that some of these effects can be adequately described by simulating QCD radiation in Higgs boson decays to b quarks using parton showers. We also describe contributions to Higgs decay to a bb̄ pair that first appear at NNLO and that were not considered in previous fully differential computations. The calculation of NNLO QCD corrections to production and decay sub-processes is carried out within the nested soft-collinear subtraction scheme presented by some of us earlier this year. We demonstrate that this subtraction scheme performs very well, allowing a computation of the coefficient of the second-order QCD corrections at the level of a few per mill.
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
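The voxel-wise, subtraction-based correction from two independent acquisitions can be sketched as follows. This follows the general subtraction idea described in the abstract, not the paper's exact WLLS pipeline; the function name and the per-voxel estimator are assumptions:

```python
import numpy as np

def noise_corrected_power(s1, s2):
    """Voxel-wise, subtraction-based noise correction (illustrative sketch).

    s1, s2: two independent magnitude acquisitions of the same image. The
    half squared difference of the two measurements estimates the noise
    variance per voxel, which is then subtracted from the mean squared
    signal (for Rician noise, E[M^2] = A^2 + 2*sigma^2).
    """
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    sigma2 = 0.5 * (s1 - s2) ** 2               # per-voxel noise-variance estimate
    m2 = 0.5 * (s1 ** 2 + s2 ** 2)              # mean squared magnitude
    a2 = np.clip(m2 - 2.0 * sigma2, 0.0, None)  # remove the Rician noise floor
    return np.sqrt(a2)                          # bias-reduced signal amplitude
```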
Weighted divergence correction scheme and its fast implementation
NASA Astrophysics Data System (ADS)
Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun
2017-05-01
Forcing experimental volumetric velocity fields to satisfy mass conservation principles has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, termed the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS and WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves a better performance than DCS in improving some flow statistics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyachenko, Sergey A.; Zlotnik, Anatoly; Korotkevich, Alexander O.
Here, we develop an operator splitting method to simulate flows of isothermal compressible natural gas over transmission pipelines. The method solves a system of nonlinear hyperbolic partial differential equations (PDEs) of hydrodynamic type for mass flow and pressure on a metric graph, where turbulent losses of momentum are modeled by phenomenological Darcy-Weisbach friction. Mass flow balance is maintained through the boundary conditions at the network nodes, where natural gas is injected or withdrawn from the system. Gas flow through the network is controlled by compressors boosting pressure at the inlet of the adjacent pipe. Our operator splitting numerical scheme is unconditionally stable and second-order accurate in space and time. The scheme is explicit, and it is formulated to work with general networks with loops. We test the scheme over a range of regimes and network configurations, also comparing its performance with that of two other state-of-the-art implicit schemes.
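The phenomenological Darcy-Weisbach friction term can be written as a momentum loss per unit pipe length. A one-line sketch follows; the sign convention and nondimensionalization here are generic assumptions and may differ from the paper's:

```python
def darcy_weisbach_momentum_loss(rho, velocity, diameter, friction_factor):
    """Darcy-Weisbach momentum loss per unit pipe length (sketch).

    rho: gas density [kg/m^3]; velocity: flow velocity [m/s];
    diameter: pipe diameter [m]; friction_factor: dimensionless Darcy
    friction factor. The product velocity * abs(velocity) keeps the loss
    opposed to the flow direction for either flow sign.
    """
    return -friction_factor * rho * velocity * abs(velocity) / (2.0 * diameter)
```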
The refractive index in electron microscopy and the errors of its approximations.
Lentzen, M
2017-05-01
In numerical calculations for electron diffraction, a simplified form of the electron-optical refractive index, linear in the electric potential, is often used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second- and higher-order corrections thus determined have, however, a large error compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers.
Higgs boson decay into b-quarks at NNLO accuracy
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán
2015-04-01
We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.
Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J
2017-06-01
This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces.
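The idea of a spectral correction factor can be made concrete as the ratio of the reference dose (fluence folded with conversion coefficients) to the instrument reading (fluence folded with the calibrated response). A schematic two-bin sketch, not the paper's tabulated factors:

```python
import numpy as np

def spectral_correction_factor(fluence, h_coeff, response):
    """Field-specific correction factor for a moderated dose meter (sketch).

    fluence: binned neutron fluence spectrum; h_coeff: fluence-to-dose
    conversion coefficients per bin; response: the calibrated dose
    response of the instrument per unit fluence in each bin. The factor
    is the reference dose divided by the instrument reading, so it
    exceeds 1 when the meter underestimates high-energy contributions.
    """
    reference = float(np.sum(np.asarray(fluence) * np.asarray(h_coeff)))
    reading = float(np.sum(np.asarray(fluence) * np.asarray(response)))
    return reference / reading
```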
3D early embryogenesis image filtering by nonlinear partial differential equations.
Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O
2010-08-01
We present nonlinear diffusion equations, numerical schemes to solve them, and their application to filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal of identifying the optimal filtering method and its parameters. In large-scale applications dealing with the analysis of 3D+time embryogenesis images, an important objective is the correct detection of the number and position of cell nuclei yielding the spatio-temporal cell lineage tree of embryogenesis. Filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. the finite volume method in space and a semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared first using the mean Hausdorff distance between a gold standard and different isosurfaces of the original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original and filtered data is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of edge-preserving nonlinear diffusion filtering for this type of data and leads to the optimal filtering parameters for the studied models and numerical schemes. Further comparisons address the ability to split very close objects that are artificially connected due to acquisition errors intrinsic to the physics of LSM. In all studied aspects, the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) turned out to have the best performance.
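The edge-stopping idea behind Perona-Malik diffusion can be illustrated with one explicit 2-D time step. The paper uses a regularized model with a semi-implicit finite-volume discretization; this explicit 4-neighbour version is only a sketch, with illustrative parameter values:

```python
import numpy as np

def perona_malik_step(img, dt=0.1, kappa=10.0):
    """One explicit step of Perona-Malik anisotropic diffusion (sketch).

    The diffusivity g = exp(-(|grad|/kappa)^2) is small across strong
    edges, so noise is smoothed while object boundaries (e.g. nuclei
    edges) are preserved. Explicit stepping requires a small dt.
    """
    p = np.pad(img, 1, mode="edge")            # replicate boundary values
    dN = p[:-2, 1:-1] - img                    # differences to 4 neighbours
    dS = p[2:, 1:-1] - img
    dW = p[1:-1, :-2] - img
    dE = p[1:-1, 2:] - img

    def g(d):                                  # edge-stopping diffusivity
        return np.exp(-(d / kappa) ** 2)

    return img + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
```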
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme
NASA Astrophysics Data System (ADS)
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres
2007-10-01
A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure coordinate equations, constituting a modified Miller-Pearce-White model, in a hybrid-coordinate framework. A neutral background is subtracted in the initial continuous dynamics, yielding modified equations for geopotential, temperature and logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time step are derived. A disclosure scheme is presented, which results in an uncoupled diagnostic system, consisting of a 3-D Poisson equation for omega velocity and a 2-D Helmholtz equation for the logarithmic pressure fluctuation. The model is discretized to create a non-hydrostatic extension to the numerical weather prediction model HIRLAM. The discretization schemes, trajectory computation algorithms and interpolation routines, as well as the physical parametrization package, are maintained from the parent hydrostatic HIRLAM. For the stability investigation, the derived SISL model is linearized with respect to the initial, thermally non-equilibrium resting state. Explicit residuals of the linear model prove to be sensitive to the relative departures of temperature and static stability from the reference state. Based on the stability study, the semi-implicit term in the vertical momentum equation is replaced by an implicit term, which increases the stability of the model.
Fracture Sustainability Pressure, Temperature, Differential Pressure, and Aperture Closure Data
Tim Kneafsey
2016-09-30
In these data sets, the experiment time, actual date and time, room temperature, sample temperature, upstream and downstream pressures (measured independently), corrected differential pressure (measured independently and corrected for offset and room temperature), and an indication of aperture closure by a linear variable differential transformer are presented. An indication of the sample is in the file name and in the first line of data.
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2016-03-01
Context. Many problems in astrophysics feature flows which are close to hydrostatic equilibrium. However, standard numerical schemes for compressible hydrodynamics may be deficient in approximating this stationary state, where the pressure gradient is nearly balanced by gravitational forces. Aims: We aim to develop a second-order well-balanced scheme for the Euler equations. The scheme is designed to mimic a discrete version of the hydrostatic balance. It therefore can resolve a discrete hydrostatic equilibrium exactly (up to machine precision) and propagate perturbations, on top of this equilibrium, very accurately. Methods: A local second-order hydrostatic equilibrium preserving pressure reconstruction is developed. Combined with a standard central gravitational source term discretization and numerical fluxes that resolve stationary contact discontinuities exactly, the well-balanced property is achieved. Results: The resulting well-balanced scheme is robust and simple enough to be very easily implemented within any existing computer code that solves the compressible hydrodynamics equations with explicit or implicit time stepping. We demonstrate the performance of the well-balanced scheme for several astrophysically relevant applications: wave propagation in stellar atmospheres, a toy model for core-collapse supernovae, convection in carbon shell burning, and a realistic proto-neutron star.
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid, six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton-Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
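The JFNK idea can be illustrated with SciPy's `newton_krylov`, which approximates Jacobian-vector products by finite differences of the residual so no Jacobian matrix is ever formed. The residual below is an invented toy system, not the two-fluid six-equation model:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Jacobian-free Newton-Krylov on a toy nonlinear system (sketch).
# The solver only ever calls residual(x); Jacobian action is estimated
# matrix-free, which is the property that makes JFNK attractive for
# large coupled systems like two-phase flow models.

def residual(x):
    # decoupled cubic equations x_i^3 + x_i - 2 = 0, exact root x_i = 1
    return x ** 3 + x - 2.0

x0 = 0.5 * np.ones(5)                      # initial guess
sol = newton_krylov(residual, x0, f_tol=1e-10)
```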
Quantum gambling using two nonorthogonal states
NASA Astrophysics Data System (ADS)
Hwang, Won Young; Ahn, Doyeol; Hwang, Sung Woo
2001-12-01
We give a (remote) quantum-gambling scheme that makes use of the fact that quantum nonorthogonal states cannot be distinguished with certainty. In the proposed scheme, two participants Alice and Bob can be regarded as playing a game of making guesses on identities of quantum states that are in one of two given nonorthogonal states: if Bob makes a correct (an incorrect) guess on the identity of a quantum state that Alice has sent, he wins (loses). It is shown that the proposed scheme is secure against the nonentanglement attack. It can also be shown heuristically that the scheme is secure in the case of the entanglement attack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaeger, J.
1983-07-14
While correcting the dispersion function in the SLC north arc, it turned out that backleg windings (BLW) acting horizontally as well as BLW acting vertically have to be used. In the latter case the question arose of the best representation, for the computer code TURTLE, of a defocusing magnet with excited BLW acting in the vertical plane. Two different schemes, the 14.-scheme and the 20.-scheme, were studied, and the TURTLE output for one ray through such a magnet was compared with the numerical solution of the equation of motion; only terms of first order have been taken into account.
Palmprint Based Multidimensional Fuzzy Vault Scheme
Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding
2014-01-01
Fuzzy vault scheme (FVS) is one of the most popular biometric cryptosystems for biometric template protection. However, the error correcting code (ECC) proposed in FVS is not appropriate for dealing with real-valued biometric intraclass variances. In this paper, we propose a multidimensional fuzzy vault scheme (MDFVS) in which a general subspace error-tolerant mechanism is designed and embedded into FVS to handle intraclass variances. Palmprint is one of the most important biometrics; to protect palmprint templates, a palmprint-based MDFVS implementation is also presented. Experimental results show that the proposed scheme not only deals with intraclass variances effectively but also maintains accuracy while enhancing security. PMID:24892094
Gauge-independent renormalization of the N2HDM
NASA Astrophysics Data System (ADS)
Krause, Marcel; López-Val, David; Mühlleitner, Margarete; Santos, Rui
2017-12-01
The Next-to-Minimal 2-Higgs-Doublet Model (N2HDM) is an interesting benchmark model for a Higgs sector consisting of two complex doublet and one real singlet fields. Like the Next-to-Minimal Supersymmetric extension (NMSSM), it features light Higgs bosons that could have escaped discovery due to their singlet admixture. Thereby, the model allows for various different Higgs-to-Higgs decay modes. For the correct determination of the allowed parameter space, the correct interpretation of the LHC Higgs data, and the possible distinction of beyond-the-Standard-Model Higgs sectors, higher-order corrections to the Higgs boson observables are crucial. This requires not only their computation but also the development of a suitable renormalization scheme. In this paper we have worked out the renormalization of the complete N2HDM and provide a scheme for the gauge-independent renormalization of the mixing angles. We discuss the renormalization of the Z_2 soft-breaking parameter m_12^2 and the singlet vacuum expectation value v_S. Both enter the Higgs self-couplings relevant for Higgs-to-Higgs decays. We apply our renormalization scheme to different sample processes, such as Higgs decays into Z bosons and decays into a lighter Higgs pair. Our results show that the corrections may be sizable and have to be taken into account for reliable predictions.
Well-balanced Schemes for Gravitationally Stratified Media
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2015-10-01
We present a well-balanced scheme for the Euler equations with gravitation. The scheme is capable of maintaining exactly (up to machine precision) a discrete hydrostatic equilibrium without any assumption on a thermodynamic variable such as specific entropy or temperature. The well-balanced scheme is based on a local hydrostatic pressure reconstruction. Moreover, it is computationally efficient and can be incorporated into any existing algorithm in a straightforward manner. The presented scheme improves over standard ones especially when flows close to a hydrostatic equilibrium have to be simulated. The performance of the well-balanced scheme is demonstrated on an astrophysically relevant application: a toy model for core-collapse supernovae.
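The local hydrostatic pressure reconstruction can be illustrated in one dimension. This simplified sketch (invented names, 1-D, cell-constant density) shows why an exact discrete equilibrium yields identical left/right face states, which is the well-balanced property:

```python
import numpy as np

def hydrostatic_face_pressures(p_cell, rho_cell, g, dx):
    """Local hydrostatic pressure reconstruction at cell faces (sketch).

    Within each cell the pressure is extrapolated to the two faces
    assuming local hydrostatic balance dp/dx = -rho*g (x increasing
    against gravity). On an exactly hydrostatic discrete state, the right
    face of cell i matches the left face of cell i+1, so the numerical
    flux sees no jump and the equilibrium is preserved to machine
    precision. Simplified 1-D version, not the paper's exact formulation.
    """
    p_lo = p_cell + 0.5 * dx * rho_cell * g    # face toward gravity (higher p)
    p_hi = p_cell - 0.5 * dx * rho_cell * g    # face away from gravity
    return p_lo, p_hi
```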
Elevation correction factor for absolute pressure measurements
NASA Technical Reports Server (NTRS)
Panek, Joseph W.; Sorrells, Mark R.
1996-01-01
With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient that is inversely proportional to height within the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
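The correction for the air column inside the interface tube can be sketched with the ideal gas law. The constants are standard, but the function name and the sign convention for the elevation difference are illustrative assumptions, not from the report:

```python
def elevation_pressure_correction(p_sensor_pa, delta_h_m, temp_k=293.15):
    """Correct for the air column between sensor and pressure tap (sketch).

    The tube's column of air makes the pressure at its bottom higher than
    at the top: dp = rho * g * dh, with rho = p / (R_air * T) for air as
    an ideal gas, so higher-pressure tubes see larger absolute errors.
    delta_h_m > 0 means the tap is above the sensor, so the tap pressure
    is the sensor reading minus the column weight.
    """
    R_AIR = 287.05          # J/(kg K), specific gas constant of dry air
    G = 9.80665             # m/s^2, standard gravity
    rho = p_sensor_pa / (R_AIR * temp_k)       # air density in the tube
    return p_sensor_pa - rho * G * delta_h_m   # pressure at the tap
```

At atmospheric pressure a 10 m elevation difference shifts the reading by roughly 0.1%, which matters once the measurement system itself is accurate to a few hundredths of a percent.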
An analysis of the sliding pressure start-up of SCWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, F.; Yang, J.; Li, H.
In this paper, the preliminary sliding pressure start-up system and scheme of the supercritical water-cooled reactor of CGNPC (CGN-SCWR) are proposed. Thermal-hydraulic behavior during the start-up procedures was analyzed in detail by employing the advanced reactor subchannel analysis software ATHAS. The maximum cladding temperature (MCT) and core power of the fuel assembly during the whole start-up process were investigated comparatively. The results show that the recommended start-up scheme meets the design requirements from the thermal-hydraulic perspective. (authors)
Scheme for teleportation of quantum states onto a mechanical resonator.
Mancini, Stefano; Vitali, David; Tombesi, Paolo
2003-04-04
We propose an experimentally feasible scheme to teleport an unknown quantum state onto the vibrational degree of freedom of a macroscopic mirror. The quantum channel between the two parties is established by exploiting radiation pressure effects.
NASA Astrophysics Data System (ADS)
Bullimore, Blaise
2014-10-01
Management of anthropogenic activities that cause pressure on estuarine wildlife and biodiversity is beset by a wide range of challenges. Some, such as the differing environmental and socio-economic objectives and conflicting views and priorities, are common to many estuaries; others are site specific. The Carmarthen Bay and Estuaries European Marine Site encompasses four estuaries of European wildlife and conservation importance and considerable socio-economic value. The estuaries and their wildlife are subject to a range of pressures and threats and the statutory authorities responsible for management in and adjacent to the Site have developed a management scheme to address these. Preparation of the management scheme included an assessment of human activities known to occur in and adjacent to the Site for their potential to cause a threat to the designated habitats and species features, and identified actions the management authorities need to take to minimise or eliminate pressures and threats. To deliver the scheme the partner authorities need to accept the requirement for management actions and work together to achieve them. The Welsh Government also needs to work with these authorities because it is responsible for management of many of the most important pressure-causing activities. However, the absence of statutory obligations for partnership working has proved an impediment to successful management.
Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry
NASA Astrophysics Data System (ADS)
Yue, L.; Hsu, T. J.
2017-12-01
Direct numerical simulation (DNS) is regarded as a powerful tool in the investigation of turbulent flow featuring a wide range of temporal and spatial scales. With the application of coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created aiming at simulating flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, the discrete Fourier expansion was adopted in the streamwise and spanwise directions, enforcing the periodic boundary condition in both directions. The Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, assuming no-slip conditions on the top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms, dealiased with the 2/3 rule, were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity was calculated in the physical domain by solving the resulting linear equation directly. However, the extra terms introduced by the coordinate transformation impose a strict limitation on the time step, and an iteration method was applied to overcome this restriction in the correction step for pressure by solving the Helmholtz equation. The numerical solver is written in the object-oriented C++ programming language, utilizing the Armadillo linear algebra library for matrix computation. Several benchmarking cases in laminar and turbulent flow were carried out to verify/validate the numerical model, and very good agreement is achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
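The 2/3-rule dealiasing mentioned above amounts to zeroing the upper third of Fourier modes before evaluating quadratic products. A 1-D illustrative sketch (the model applies the rule in its two periodic directions):

```python
import numpy as np

def dealias_23(field_hat):
    """Apply the 2/3 dealiasing rule to a 1-D spectrum (sketch).

    Zeroing modes with |k| >= n/3 before forming quadratic (advection)
    products removes the aliasing error of a pseudo-spectral scheme,
    since products of the retained modes cannot alias back onto them.
    """
    n = field_hat.size
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    return field_hat * (np.abs(k) < n / 3.0)
```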
Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels
NASA Technical Reports Server (NTRS)
Moher, Michael L.; Lodge, John H.
1990-01-01
A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the in-phase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.
Long-range analysis of density fitting in extended systems
NASA Astrophysics Data System (ADS)
Varga, Štefan
The density fitting scheme is analyzed for the Coulomb problem in extended systems from the point of view of the correctness of its long-range behavior. We show that for the correct cancellation of divergent long-range Coulomb terms it is crucial for the density fitting scheme to reproduce the overlap matrix exactly. It is demonstrated that, of all possible fitting metric choices, the Coulomb metric is the only one which inherently preserves the overlap matrix for infinite systems with translational periodicity. Moreover, we show that with a small additional effort any non-Coulomb metric fit can be made overlap-preserving as well. The problem is analyzed for both ordinary and Poisson basis set choices.
Classification of ring artifacts for their effective removal using type adaptive correction schemes.
Anas, Emran Mohammad Abu; Lee, Soo Yeol; Hasan, Kamrul
2011-06-01
High resolution tomographic images acquired with a digital X-ray detector are often degraded by so-called ring artifacts. In this paper, a detailed analysis including the classification, detection and correction of these ring artifacts is presented. At first, a novel idea for classifying rings into two categories, namely type I and type II rings, is proposed based on their statistical characteristics. Defective detector elements and dusty scintillator screens result in type I rings, while mis-calibrated detector elements lead to type II rings. Unlike conventional approaches, we emphasize separate detection and correction schemes for each type of ring for their effective removal. For the detection of type I rings, the histogram of the responses of the detector elements is used, and a modified fast image inpainting algorithm is adopted to correct the responses of the defective pixels. On the other hand, to detect type II rings, a simple filtering scheme based on the fast Fourier transform (FFT) is first presented to smooth the sum curve derived from the type I ring-corrected projection data. The difference between the sum curve and its smoothed version is then used to detect their positions. Then, to remove the constant bias suffered by the responses of the mis-calibrated detector elements with view angle, an estimated dc shift is subtracted from them. The performance of the proposed algorithm is evaluated using real micro-CT images and is compared with three recently reported algorithms. Simulation results demonstrate superior performance of the proposed technique as compared to the techniques reported in the literature.
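The type II detection step described above (FFT smoothing of the column-sum curve, then a dc-shift subtraction) can be sketched as follows; the cutoff fraction and detection threshold are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def smooth_sum_curve(sum_curve, keep_frac=0.05):
    """Low-pass smooth the column-sum curve with an FFT cutoff,
    keeping only the lowest `keep_frac` fraction of frequencies."""
    n = sum_curve.size
    spec = np.fft.rfft(sum_curve)
    cutoff = max(1, int(keep_frac * spec.size))
    spec[cutoff:] = 0.0
    return np.fft.irfft(spec, n)

def correct_type2(sinogram, threshold=3.0):
    """Locate mis-calibrated detector columns where the sum curve
    deviates from its smoothed version by more than `threshold`,
    then subtract the estimated constant (dc) bias per column."""
    s = sinogram.sum(axis=0)
    diff = s - smooth_sum_curve(s)
    bad = np.abs(diff) > threshold
    out = sinogram.astype(float).copy()
    out[:, bad] -= diff[bad] / sinogram.shape[0]
    return out
```

A column with a constant per-view offset shows up as a spike in `diff` and is flattened back onto the smoothed curve.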
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
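The benefit of cascading can be seen with a deliberately simple stand-in: a rate-1/5 repetition inner code on a binary symmetric channel, which leaves a far cleaner channel for the outer code to finish (illustrative only; the schemes studied here use far more powerful inner codes with Reed-Solomon outer codes):

```python
import random

def bsc(bits, eps, rng):
    """Binary symmetric channel: flip each bit with probability eps."""
    return [b ^ (rng.random() < eps) for b in bits]

def repeat_encode(bits, n=5):
    return [b for b in bits for _ in range(n)]

def repeat_decode(bits, n=5):
    """Majority vote over each block of n repeated bits."""
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

rng = random.Random(42)
msg = [rng.randrange(2) for _ in range(20000)]
eps = 0.1                                 # raw channel bit error rate
raw = bsc(msg, eps, rng)                  # uncoded transmission
coded = repeat_decode(bsc(repeat_encode(msg), eps, rng))
raw_ber = sum(a != b for a, b in zip(msg, raw)) / len(msg)
inner_ber = sum(a != b for a, b in zip(msg, coded)) / len(msg)
# The inner code leaves far fewer errors for the outer code to clean up.
assert inner_ber < raw_ber
```

At eps = 0.1 the majority-vote inner decoder already drops the residual error rate by roughly an order of magnitude, which is the regime where a Reed-Solomon outer code becomes extremely effective.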
Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a specified target pressure distribution. The method uses a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FL030 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.
Monitoring of intracranial compliance: correction for a change in body position.
Raabe, A; Czosnyka, M; Piper, I; Seifert, V
1999-01-01
The objectives of our study were: 1. to investigate whether intracranial compliance changes with body position; 2. to test whether the pressure-volume index (PVI) calculation is affected by different body positions; 3. to define the optimal parameter to correct PVI for changes in body position; and 4. to investigate the physiological meaning of the constant term (P0) in the model of the intracranial volume-pressure relationship. Thirteen patients were included in this study. All patients were subjected to 2 to 3 different body positions. In each position, either classic bolus injection was performed for measurement of intracranial compliance and calculation of PVI, or the new Spiegelberg compliance monitor was used to calculate PVI continuously. Four different models were used for calculating the constant pressure term P0 and the P0-corrected PVI values. The pressure-volume index not corrected for the constant term P0 significantly decreased with elevation of the patient's head (r = 0.70, p < 0.0001). In contrast, the volume-pressure response and ICP pulse amplitude did not change with position. Using the constant term P0 to correct the PVI, we found no changes between the different body positions. Our results suggest that during variation in body position there is no change in intracranial compliance but a change in hydrostatic offset pressure, which shifts the volume-pressure curve along the pressure axis without affecting its shape. PVI measurements should either be performed only with the patient in the 0 degree recumbent position, or the PVI calculation should be corrected for the hydrostatic difference between the level of the ICP transducer and the hydrostatic indifference point of the craniospinal system, close to the third thoracic vertebra.
Multi-Objective Memetic Search for Robust Motion and Distortion Correction in Diffusion MRI.
Hering, Jan; Wolf, Ivo; Maier-Hein, Klaus H
2016-10-01
Effective image-based artifact correction is an essential step in the analysis of diffusion MR images. Many current approaches are based on retrospective registration, which becomes challenging in the realm of high b-values and low signal-to-noise ratio, rendering the corresponding correction schemes more and more ineffective. We propose a novel registration scheme based on memetic search optimization that allows for simultaneous exploitation of different signal intensity relationships between the images, leading to more robust registration results. We demonstrate the increased robustness and efficacy of our method on simulated as well as in vivo datasets. In contrast to state-of-the-art methods, the median target registration error (TRE) stayed below the voxel size even for high b-values (3000 s/mm^2 and higher) and low SNR conditions. We also demonstrate the increased precision in diffusion-derived quantities by evaluating Neurite Orientation Dispersion and Density Imaging (NODDI) derived measures on an in vivo dataset with severe motion artifacts. These promising results will potentially inspire further studies on metaheuristic optimization in diffusion MRI artifact correction and image registration in general.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.
2015-11-14
Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised, as evidenced by numerical benchmark data.
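For reference, long-range corrected functionals of the kind discussed here are conventionally built on a range separation of the Coulomb operator (standard form; the range-separation parameter omega and the assignment of the two ranges follow common usage and are not necessarily the exact convention of this work):

```latex
\frac{1}{r}
  = \underbrace{\frac{1 - \operatorname{erf}(\omega r)}{r}}_{\text{short range: (semi)local DFT exchange}}
  + \underbrace{\frac{\operatorname{erf}(\omega r)}{r}}_{\text{long range: Hartree--Fock-like exact exchange}}
```

The short-range part is treated with the (semi)local functional, while the long-range part carries the nonlocal exchange that restores the correct asymptotics.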
Self-Correcting Electronically-Scanned Pressure Sensor
NASA Technical Reports Server (NTRS)
Gross, C.; Basta, T.
1982-01-01
High-data-rate sensor automatically corrects for temperature variations. Multichannel, self-correcting pressure sensor can be used in wind tunnels, aircraft, process controllers and automobiles. Offers data rates approaching 100,000 measurements per second with inaccuracies due to temperature shifts held below 0.25 percent (nominal) of full scale over a temperature span of 55 degrees C.
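The per-channel temperature compensation such a sensor performs might look roughly like the following (a loose illustration, not the sensor's actual algorithm; the polynomial form and all coefficient values are assumptions):

```python
def correct_pressure(raw, temp, coeffs=(0.0, 1.0, 0.0), tcal=25.0):
    """Apply a per-channel polynomial temperature correction.

    raw    : uncorrected pressure reading
    temp   : channel temperature in deg C
    coeffs : (offset, gain, gain drift per deg C) from calibration;
             hypothetical values, real units are device-specific
    tcal   : calibration temperature in deg C
    """
    offset, gain, drift = coeffs
    return offset + raw * (gain + drift * (temp - tcal))
```

Each of the sensor's channels would carry its own calibration tuple, applied at measurement rate.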
A controlled variation scheme for convection treatment in pressure-based algorithm
NASA Technical Reports Server (NTRS)
Shyy, Wei; Thakur, Siddharth; Tucker, Kevin
1993-01-01
Convection effect and source terms are two primary sources of difficulties in computing turbulent reacting flows typically encountered in propulsion devices. The present work intends to elucidate the individual as well as the collective roles of convection and source terms in the fluid flow equations, and to devise appropriate treatments and implementations to improve our current capability of predicting such flows. A controlled variation scheme (CVS) has been under development in the context of a pressure-based algorithm, which has the characteristics of adaptively regulating the amount of numerical diffusivity, relative to central difference scheme, according to the variation in local flow field. Both the basic concepts and a pragmatic assessment will be presented to highlight the status of this work.
A prototype of volume-controlled tidal liquid ventilator using independent piston pumps.
Robert, Raymond; Micheau, Philippe; Cyr, Stéphane; Lesur, Olivier; Praud, Jean-Paul; Walti, Hervé
2006-01-01
Liquid ventilation using perfluorochemicals (PFC) offers clear theoretical advantages over gas ventilation, such as decreased lung damage, recruitment of collapsed lung regions, and lavage of inflammatory debris. We present a total liquid ventilator designed to ventilate patients with completely filled lungs with a tidal volume of PFC liquid. The two independent piston pumps are volume controlled and pressure limited. Measurable pumping errors are corrected by a programmed supervisor module, which modifies the inserted or withdrawn volume. Pump independence also allows easy functional residual capacity modifications during ventilation. The bubble gas exchanger is divided into two sections such that the PFC exiting the lungs is not in contact with the PFC entering the lungs. The heating system is incorporated into the metallic base of the gas exchanger, and a heat-sink-type condenser is placed on top of the exchanger to retrieve PFC vapors. The prototype was tested on 5 healthy term newborn lambs (<5 days old). The results demonstrate the efficiency and safety of the prototype in maintaining adequate gas exchange, normal acid-base equilibrium, and cardiovascular stability during a short, 2-hour course of total liquid ventilation. Airway pressure, lung volume, and ventilation scheme were maintained in the targeted range.
NASA Astrophysics Data System (ADS)
Badziak, J.; Krousky, E.; Kucharik, M.; Liska, R.
2016-03-01
Generation of strong shock waves for the production of Mbar or Gbar pressures is a topic of high relevance for contemporary research in various domains, including inertial confinement fusion, laboratory astrophysics, planetology and material science. The pressures in the multi-Mbar range can be produced by the shocks generated using chemical explosions, light-gas guns, Z-pinch machines or lasers. Higher pressures, in the sub-Gbar or Gbar range are attainable only with nuclear explosions or laser-based methods. Unfortunately, due to the low efficiency of energy conversion from a laser to the shock (below a few percent), multi-kJ, multi-beam lasers are needed to produce such pressures with these methods. Here, we propose and investigate a novel scheme for generating high-pressure shocks which is much more efficient than the laser-based schemes known so far. In the proposed scheme, the shock is generated in a dense target by the impact of a fast projectile driven by the laser-induced cavity pressure acceleration (LICPA) mechanism. Using two-dimensional hydrodynamic simulations and the measurements performed at the kilojoule PALS laser facility it is shown that in the LICPA-driven collider the laser-to-shock energy conversion efficiency can reach a very high value ~ 15-20 % and, as a result, the shock pressure ~ 0.5-1 Gbar can be produced using lasers of energy <= 0.5 kJ. On the other hand, the pressures in the multi-Mbar range could be produced in this collider with low-energy (~ 10 J) lasers available on the market. It would open up the possibility of conducting research in high energy-density science also in small, university-class laboratories.
Research Topics on Cluttered Environments Interrogation and Propagation
2014-11-04
This project studied wave propagation in random and complex media, with specific applications associated with imaging and communication through a cluttered medium. Results on the fourth moment of the wave field were used to analyze wavefront correction schemes and to obtain novel imaging and communication schemes.
Challenges of constructing salt cavern gas storage in China
NASA Astrophysics Data System (ADS)
Xia, Yan; Yuan, Guangjie; Ban, Fansheng; Zhuang, Xiaoqian; Li, Jingcui
2017-11-01
After more than ten years of research and engineering practice in salt cavern gas storage, the engineering technology of geology, drilling, leaching, completion, operation and monitoring systems has been established. With the rapid growth of domestic consumption of natural gas, demand for underground gas storage is increasing. Because high-quality rock salt resources at depths of about 1000 m are relatively scarce, future salt cavern gas storages will be built in deep rock salt. According to the current domestic conventional construction technical scheme, construction in deep salt formations will face many problems caused by depth and complex geological conditions, such as increased circulating pressure, tubing blockage, deformation failure and higher completion risk. Considering these difficulties, the differences between the current technical scheme and the construction schemes of twin wells and big holes are analyzed, and the results show that the twin-well and big-hole schemes have obvious advantages in reducing the circulating pressure loss, tubing blockage and failure risk; they can serve as alternative schemes to solve the technical difficulties of constructing salt cavern gas storages in deep rock salt.
NASA Astrophysics Data System (ADS)
Rahman, Syazila; Yusoff, Mohd. Zamri; Hasini, Hasril
2012-06-01
This paper describes a comparison between the cell-centered scheme and the cell-vertex scheme in the calculation of high speed compressible flow properties. The calculation is carried out using Computational Fluid Dynamics (CFD), in which the mass, momentum and energy equations are solved simultaneously over the flow domain. The geometry under investigation consists of a Binnie and Green convergent-divergent nozzle, and a structured mesh is implemented throughout the flow domain. The finite volume CFD solver employs a second-order accurate central differencing scheme for spatial discretization. In addition, second-order accurate cell-vertex finite volume spatial discretization is also introduced in this case for comparison. Multi-stage Runge-Kutta time integration is implemented for solving the set of non-linear governing equations with variables stored at the vertices. Artificial dissipation uses second- and fourth-order terms with a pressure switch to detect changes in the pressure gradient. This is important to control the solution stability and capture shock discontinuities. The results are compared with experimental measurements and good agreement is obtained for both cases.
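The blended second/fourth-order dissipation with a pressure switch is commonly associated with Jameson-Schmidt-Turkel (JST) schemes; a 1-D sketch under that assumption follows (the coefficients k2 and k4 are typical textbook values, not this paper's settings):

```python
import numpy as np

def pressure_switch(p):
    """Normalized second difference of pressure: large near shocks."""
    p = np.asarray(p, dtype=float)
    num = np.abs(p[2:] - 2.0 * p[1:-1] + p[:-2])
    den = p[2:] + 2.0 * p[1:-1] + p[:-2]
    nu = np.zeros_like(p)
    nu[1:-1] = num / den
    return nu

def jst_dissipation(u, p, k2=0.5, k4=1.0 / 32.0):
    """Second-order dissipation switches on near shocks; fourth-order
    background dissipation switches off there to avoid oscillations."""
    nu = pressure_switch(p)
    eps2 = k2 * np.maximum(nu[:-1], nu[1:])      # switch at cell interfaces
    eps4 = np.maximum(0.0, k4 - eps2)
    du = np.diff(u)                               # first differences
    d3u = np.zeros_like(du)
    d3u[1:-1] = u[3:] - 3.0 * u[2:-1] + 3.0 * u[1:-2] - u[:-3]
    flux = eps2 * du - eps4 * d3u                 # dissipative interface flux
    return np.diff(flux, prepend=0.0, append=0.0)
```

In smooth regions the switch is near zero and only the weak fourth-order term acts; across a shock the second-order term dominates.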
Study on advanced information processing system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Liu, Jyh-Charn
1992-01-01
Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs--with a low hardware overhead--can be used to reduce not only the need of memory realignment, but also the time required to realign channel memories in case, albeit rare, such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
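The "conventional coding techniques" by which CEMs correct memory errors can be illustrated with a single-error-correcting Hamming(7,4) code (an illustrative sketch, not the code actually used in the ALS FTP):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single
# bit error in the 7-bit word is located by the syndrome and flipped.
def hamming74_encode(d):                     # d = [d0, d1, d2, d3]
    p0 = d[0] ^ d[1] ^ d[3]                  # parity over positions 1,3,5,7
    p1 = d[0] ^ d[2] ^ d[3]                  # parity over positions 2,3,6,7
    p2 = d[1] ^ d[2] ^ d[3]                  # parity over positions 4,5,6,7
    return [p0, p1, d[0], p2, d[1], d[2], d[3]]   # 1-based positions 1..7

def hamming74_correct(w):
    w = list(w)
    s = 0
    for i, bit in enumerate(w, start=1):     # syndrome = XOR of 1-based
        if bit:                              # positions holding a 1
            s ^= i
    if s:                                    # nonzero syndrome = error position
        w[s - 1] ^= 1
    return [w[2], w[4], w[5], w[6]]          # extract the data bits
```

Applied word-by-word to channel memory, such a code corrects any single-bit error in place, which is exactly the masking behavior the abstract attributes to CEMs.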
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is a convolutional code. This paper presents the performance of OFDM using the Space-Time Block Code (STBC) diversity technique with QAM modulation and code rate 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in the OFDM system. Achieving a BER of 10^-3 requires an SNR of 10 dB in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme likewise requires 10 dB to achieve a BER of 10^-3. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to the 4×4 MIMO-OFDM scheme improves performance to 0 dB for the same BER. This demonstrates a power saving of 3 dB over the 4×4 MIMO-OFDM system without coding, a 7 dB saving over 2×2 MIMO-OFDM, and significant power savings over the SISO-OFDM system.
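BER-versus-Eb/No curves of this kind are typically produced by Monte Carlo simulation; a much simpler stand-in than the paper's STBC/QAM system, uncoded BPSK over AWGN, shows the method:

```python
import numpy as np

def ber_bpsk_awgn(ebno_db, nbits=200000, seed=1):
    """Monte Carlo BER of uncoded BPSK over AWGN at a given Eb/No (dB)."""
    rng = np.random.default_rng(seed)
    ebno = 10.0 ** (ebno_db / 10.0)
    bits = rng.integers(0, 2, nbits)
    tx = 1.0 - 2.0 * bits                         # map 0 -> +1, 1 -> -1
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebno)), nbits)
    rx_bits = (tx + noise < 0).astype(int)        # hard decision at 0
    return np.mean(bits != rx_bits)
```

Sweeping `ebno_db` and plotting the returned BER yields the familiar waterfall curve against which coded and MIMO schemes are compared.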
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
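The density-matrix regularization that this work improves upon is conventionally written as rho -> rho + epsilon * exp(-rho/epsilon); a minimal NumPy sketch of inverting a near-singular density matrix this way (the value of epsilon is illustrative):

```python
import numpy as np

def regularized_inverse(rho, eps=1e-8):
    """Invert a (near-singular) Hermitian density matrix using the
    conventional MCTDH regularization rho -> rho + eps*exp(-rho/eps),
    applied to the eigenvalues (natural populations)."""
    w, v = np.linalg.eigh(rho)
    w_reg = w + eps * np.exp(-w / eps)   # lifts zero populations to ~eps
    return (v / w_reg) @ v.conj().T
```

Populations far above eps are essentially untouched, while unoccupied natural orbitals get a finite inverse of order 1/eps, which is the sensitivity to eps that the new coefficient-tensor regularization reduces.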
Zou, Shiyang; Sanz, Cristina; Balint-Kurti, Gabriel G
2008-09-28
We present an analytic scheme for designing laser pulses to manipulate the field-free molecular alignment of a homonuclear diatomic molecule. The scheme is based on the use of a generalized pulse-area theorem and makes use of pulses constructed around two-photon resonant frequencies. In the proposed scheme, the populations and relative phases of the rovibrational states of the molecule are independently controlled utilizing changes in the laser intensity and in the carrier-envelope phase difference, respectively. This allows us to create the correct coherent superposition of rovibrational states needed to achieve optimal molecular alignment. The validity and efficiency of the scheme are demonstrated by explicit application to the H₂ molecule. The analytically designed laser pulses are tested by exact numerical solutions of the time-dependent Schrödinger equation including laser-molecule interactions to all orders of the field strength. The design of a sequence of pulses to further enhance molecular alignment is also discussed and tested. It is found that the rotating wave approximation used in the analytic design of the laser pulses leads to small errors in the prediction of the relative phase of the rotational states. It is further shown how these errors may be easily corrected.
The effect of interference on delta modulation encoded video signals
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1979-01-01
The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Computer simulations of different delta modulators were carried out in order to find a satisfactory design. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and the algorithm was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error correction algorithms were tested via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. The final area of investigation concerned finding delta modulators which could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit shift registers as well as a high speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.
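A linear delta modulator of the kind simulated in such studies can be sketched in a few lines (an idealized single-integrator design; the study's adaptive variants and error-correction logic are not shown):

```python
def delta_encode(samples, step=0.1):
    """Linear delta modulator: emit 1 if the input is above the tracked
    estimate, else 0; the estimate then moves by +/- step."""
    est, bits = 0.0, []
    for x in samples:
        bit = 1 if x > est else 0
        est += step if bit else -step
        bits.append(bit)
    return bits

def delta_decode(bits, step=0.1):
    """Reconstruct by integrating the same +/- step staircase."""
    est, out = 0.0, []
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return out
```

As long as the signal slope stays below one step per sample (no slope overload), the decoded staircase tracks the input to within about one step, which is the granular noise the study's channel-error experiments ride on top of.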
Computational design of the basic dynamical processes of the UCLA general circulation model
NASA Technical Reports Server (NTRS)
Arakawa, A.; Lamb, V. R.
1977-01-01
The 12-layer UCLA general circulation model encompassing the troposphere and stratosphere (and a superjacent 'sponge layer') is described. The prognostic variables are: surface pressure; horizontal velocity, temperature, water vapor and ozone in each layer; planetary boundary layer (PBL) depth; temperature, moisture and momentum discontinuities at the PBL top; ground temperature and water storage; and mass of snow on the ground. The selection of spatial finite-difference schemes for homogeneous incompressible flow (with and without a free surface) and for nonlinear two-dimensional nondivergent flow, enstrophy-conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time differencing schemes is discussed.
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels and the bandwidth/spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the characterized configuration differs from the instrument configuration in flight given the harsh space environment and harmful launch phase. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and are propagated to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of the ISRF knowledge error and spectral calibration on Level-1 products and its propagation to Level-2 retrieved chlorophyll fluorescence has been analyzed. A spectral recalibration scheme implemented at Level-2 reduces the impact of Level-1 errors to below a 10% error in retrieved fluorescence within the oxygen absorption bands, enhancing the quality of the retrieved products. The work presented here shows how the minimization of spectral calibration errors requires an effort both in laboratory characterization and in the implementation of specific algorithms at Level-2.
Measurement of attenuation coefficients of the fundamental and second harmonic waves in water
NASA Astrophysics Data System (ADS)
Zhang, Shuzeng; Jeong, Hyunjo; Cho, Sungjong; Li, Xiongbing
2016-02-01
Attenuation corrections in nonlinear acoustics play an important role in the study of nonlinear fluids, biomedical imaging, and solid material characterization. The measurement of attenuation coefficients in a nonlinear regime is not easy because they depend on the source pressure and require accurate diffraction corrections. In this work, the attenuation coefficients of the fundamental and second harmonic waves arising from absorption in water are measured in nonlinear ultrasonic experiments. Based on the quasilinear theory of the KZK equation, the nonlinear sound field equations are derived and the diffraction correction terms are extracted. The measured sound pressure amplitudes are first adjusted for diffraction corrections in order to reduce the impact of diffraction on the measurement of attenuation coefficients. The attenuation coefficients of the fundamental and second harmonics are then calculated from a nonlinear least squares curve-fitting of the experimental data. The results show that attenuation coefficients in a nonlinear condition depend on both frequency and source pressure, much unlike the linear regime. At relatively low drive pressures, the attenuation coefficients increase linearly with frequency; at high drive pressures, however, they exhibit nonlinear growth. As the diffraction corrections are obtained based on the quasilinear theory, it is important to use an appropriate source pressure for accurate attenuation measurements.
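The curve-fitting step can be illustrated on synthetic, diffraction-corrected data; for a purely exponential decay the fit reduces to linear least squares in log amplitude (a simplification of the paper's full nonlinear fit):

```python
import numpy as np

def fit_attenuation(z, amp):
    """Least-squares estimate of alpha from A(z) = A0 * exp(-alpha * z),
    fitted linearly in log amplitude."""
    slope, intercept = np.polyfit(z, np.log(amp), 1)
    return -slope, np.exp(intercept)        # (alpha, A0)

z = np.linspace(0.05, 0.5, 10)              # propagation distances (m)
alpha_true, a0_true = 2.3, 1.0              # synthetic ground truth (Np/m)
amp = a0_true * np.exp(-alpha_true * z)     # diffraction-corrected amplitudes
alpha, a0 = fit_attenuation(z, amp)
```

With noisy measurements the same fit returns the maximum-likelihood slope in log space; the paper's point is that `alpha` extracted this way varies with the source pressure in the nonlinear regime.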
Nyilasy, Gergely; Lei, Jing; Nagpal, Anish; Tan, Joseph
2016-08-01
The purpose of the present study was to examine the effects of food label nutrition colouring schemes, in interaction with food category healthiness, on consumers' perceptions of food healthiness. Three streams of colour theory (colour attention, colour association and colour approach-avoidance), in interaction with heuristic processing theory, provide consonant predictions and explanations for the underlying psychological processes. A 2 (food category healthiness: healthy v. unhealthy)×3 (food label nutrient colouring schemes: healthy=green, unhealthy=red (HGUR) v. healthy=red, unhealthy=green (HRUG) v. no colour (control)) between-subjects design was used. The research setting was a randomised controlled experiment using varying formats of food packages and nutritional information colouring. Respondents (n=196) were sourced from a national consumer panel in the USA. The findings suggest that, for healthy foods, the nutritional colouring schemes reduced perceived healthiness, irrespective of which nutrients were coloured red or green (healthiness(control)=4·86; healthiness(HGUR)=4·10; healthiness(HRUG)=3·70). In contrast, for unhealthy foods, there was no significant difference in perceptions of food healthiness when comparing the different colouring schemes against the control. The results add an important qualification to the common belief that colour coding can enhance the correct interpretation of nutrition information, and suggest that this practice may not necessarily support healthier food choices in all situations.
Methodes d'optimisation des parametres 2D du reflecteur dans un reacteur a eau pressurisee
NASA Astrophysics Data System (ADS)
Clerc, Thomas
Representing a third of the reactors in operation, the Pressurized Water Reactor (PWR) is today the most widely used reactor design in the world. This technology equips all 19 EDF power plants. PWRs fall into the category of thermal reactors, because it is mainly thermal neutrons that contribute to the fission reaction. Pressurized light water serves both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium, slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. Its purpose is to protect the vessel from radiation, and also to slow down the neutrons and reflect them back into the core. Since neutrons drive the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. The neutron behavior is governed by the transport equation, which is very complex to solve numerically and requires very long calculation times. This is why the core codes used in this study solve simplified equations to approximate the neutron behavior in the core within an acceptable calculation time. In particular, we focus our study on the diffusion equation and on approximated transport equations, such as the SPN or SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes a significant tilt in the neutron flux at the core/reflector interface. This is why it is very important to accurately model the reflector, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized in two energy groups and if the diffusion equation is used, and it leads to the calculation of a homogeneous reflector.
The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution obtained with an APOLLO2 calculation based on the Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P1-corrected macroscopic total cross-sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones, corresponding to its physical structure, so there are six control variables for the optimization algorithms. [special characters omitted]. Our computational schemes are thus able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators. The optimization reduces the discrepancies between the power distribution computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not allow a proper description of its physical structure near the core/reflector interface.
Moreover, the fissile assemblies are modeled in infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.).
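The optimization strategy described above, adjusting zone-wise reflector coefficients to minimize the discrepancy between a computed and a reference power distribution, can be sketched as follows. This is a minimal illustration in which a random linear map stands in for the diffusion/SPN/SN core solver and for the APOLLO2 (MOC) reference; the six control variables mimic the six reflector zones.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
A = rng.random((12, 6))                    # toy response of 12 power nodes

def core_power(c):
    """Toy forward model standing in for the diffusion/SPN/SN core solver."""
    return A @ c

c_true = np.array([1.2, 0.9, 1.1, 1.0, 0.8, 1.3])   # 6 zone-wise coefficients
P_ref = core_power(c_true)                 # plays the role of the MOC reference

def residual(c):
    """Discrepancy between the core-code power and the reference power."""
    return core_power(c) - P_ref

sol = least_squares(residual, x0=np.ones(6))
```

In the real problem the forward model is an expensive core calculation and the residual is weighted by data-assimilation covariances, but the control-variable structure is the same.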
Study on micro-water measurement method based on SF6 insulation equipment in high altitude area
NASA Astrophysics Data System (ADS)
Zhang, Han; Liu, Yajin; Yan, Jun; Liu, Zhijian; Yan, Yongfei
2018-06-01
Moisture content is an important indicator of the insulation and arc extinguishing performance of SF6 insulated electrical equipment. Research shows that moisture measurements are strongly influenced by the pressure at altitude, and that applying the pressure correction and the temperature correction in a different order yields different results. This paper therefore studies the pressure and temperature environment of moisture testing of SF6 gas insulated equipment in the power industry. First, the PVT characteristics of pure SF6 gas and of water vapor are analyzed and the necessity of pressure correction is established. The Pitzer-Veli equation for SF6 gas and the Pitzer-Veli equation for water are then combined to fit a PVT equation of state for SF6-H2O suitable for the electric power industry, from which a correction formula for moisture measurement in SF6 gas is derived. Finally, experiments on SF6 electrical equipment are used to optimize and verify the calibration formula, demonstrating the applicability and effectiveness of the correction formula.
Correlated environmental corrections in TOPEX/POSEIDON, with a note on ionospheric accuracy
NASA Technical Reports Server (NTRS)
Zlotnicki, V.
1994-01-01
Estimates of the effectiveness of an altimetric correction, and interpretation of sea level variability as a response to atmospheric forcing, both depend upon assuming that residual errors in altimetric corrections are uncorrelated among themselves and with residual sea level, or knowing the correlations. Not surprisingly, many corrections are highly correlated since they involve atmospheric properties and the ocean surface's response to them. The full corrections (including their geographically varying time mean values) show correlations between electromagnetic bias (mostly the height of wind waves) and either atmospheric pressure or water vapor of -40%, and between atmospheric pressure and water vapor of 28%. In the more commonly used collinear differences (after removal of the geographically varying time mean), atmospheric pressure and wave height show a -30% correlation, atmospheric pressure and water vapor a -10% correlation, both pressure and water vapor a 7% correlation with residual sea level, and a bit surprisingly, ionospheric electron content and wave height a 15% correlation. Only the ocean tide is totally uncorrelated with other corrections or residual sea level. The effectiveness of three ionospheric corrections (TOPEX dual-frequency, a smoothed version of the TOPEX dual-frequency, and Doppler orbitography and radiopositioning integrated by satellite (DORIS)) is also evaluated in terms of their reduction in variance of residual sea level. Smooth (90-200 km along-track) versions of the dual-frequency altimeter ionosphere perform best both globally and within 20 deg in latitude from the equator. The noise variance in the 1/s TOPEX ionospheric samples is approximately (11 mm) squared, about the same as the noise in the DORIS-based correction; however, the latter has its error over scales of order 10(exp 3) km. Within 20 deg of the equator, the DORIS-based correction adds (14 mm) squared to the residual sea level variance.
A single-stage flux-corrected transport algorithm for high-order finite-volume methods
Chaplin, Christopher; Colella, Phillip
2017-05-08
We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.
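For readers unfamiliar with flux-corrected transport, the basic idea can be illustrated with a textbook one-dimensional Boris-Book/Zalesak step (not the fourth-order finite-volume scheme of this paper): a monotone low-order upwind flux is corrected by a limited antidiffusive flux so that no new extrema are created.

```python
import numpy as np

def fct_step(u, nu):
    """One FCT step for u_t + a u_x = 0 (a > 0) on a periodic grid, CFL nu.
    Fluxes are pre-scaled by dt/dx, so the update is u - (F_right - F_left)."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    f_lo = nu * u                                # upwind (low-order) flux
    f_hi = nu * (u + 0.5 * (1.0 - nu) * (up1 - u))   # Lax-Wendroff (high-order)
    u_td = u - (f_lo - np.roll(f_lo, 1))         # transported low-order solution
    a_fl = f_hi - f_lo                           # antidiffusive flux at face i+1/2

    # Zalesak limiter: bound each cell by its low-order neighborhood
    u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    a_in = np.roll(a_fl, 1)                      # flux through left face of cell i
    p_plus = np.maximum(a_in, 0.0) - np.minimum(a_fl, 0.0)
    p_minus = np.maximum(a_fl, 0.0) - np.minimum(a_in, 0.0)
    r_plus = np.where(p_plus > 1e-15,
                      np.minimum(1.0, (u_max - u_td) / np.maximum(p_plus, 1e-15)), 0.0)
    r_minus = np.where(p_minus > 1e-15,
                       np.minimum(1.0, (u_td - u_min) / np.maximum(p_minus, 1e-15)), 0.0)
    c = np.where(a_fl >= 0.0,
                 np.minimum(np.roll(r_plus, -1), r_minus),
                 np.minimum(r_plus, np.roll(r_minus, -1)))
    limited = c * a_fl
    return u_td - (limited - np.roll(limited, 1))

# advect a square wave one step at CFL 0.5
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
u1 = fct_step(u0, 0.5)
```

The limited update is conservative (the corrections telescope) and keeps the discontinuous profile inside its original bounds, which is the property the paper's limiter preserves at fourth order.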
NASA Astrophysics Data System (ADS)
Wu, Fenxiang; Xu, Yi; Yu, Linpeng; Yang, Xiaojun; Li, Wenkai; Lu, Jun; Leng, Yuxin
2016-11-01
Pulse front distortion (PFD) is mainly induced by chromatic aberration in femtosecond high-peak-power laser systems; it temporally distorts the pulse in the focus and therefore decreases the peak intensity. A novel measurement scheme is proposed to directly measure the PFD of ultra-intense ultrashort laser pulses, which not only works without a separately prepared reference pulse, but also greatly reduces the size of the optical elements required for the measurement. The PFD measured in an experimental 200 TW/27 fs laser system is in good agreement with the calculated result, which demonstrates the validity and feasibility of the method. In addition, a simple compensation scheme based on the combination of a concave lens and a parabolic lens is designed and proposed to correct the PFD. Theoretical calculation shows that the PFD of the above experimental laser system can be almost completely corrected by using this compensator with proper parameters.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in a wireless sensor network (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics.
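The kind of fading-channel BER analysis underlying the comparison can be sketched as follows. This is an illustrative Monte Carlo estimate for uncoded coherent BPSK over Nakagami-m fading (m = 1 reduces to Rayleigh); the LDPC coding and the C-MIMO cooperation of the paper are not modeled.

```python
import numpy as np

def ber_bpsk_nakagami(m, snr_db, n_bits=200_000, seed=0):
    """Estimate BER of coherent BPSK with Nakagami-m amplitude fading
    (E[h^2] = 1), assuming perfect channel knowledge at the receiver."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)                  # average Es/N0
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0                           # BPSK symbols +/-1
    # Nakagami-m amplitude: sqrt of a Gamma(m, 1/m) power
    h = np.sqrt(rng.gamma(shape=m, scale=1.0 / m, size=n_bits))
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * snr)), n_bits)
    y = h * s + noise                              # h > 0, so sign detection works
    return np.mean((y > 0).astype(int) != bits)

ber_0 = ber_bpsk_nakagami(1.0, 0.0)                # Rayleigh, 0 dB
ber_10 = ber_bpsk_nakagami(1.0, 10.0)              # Rayleigh, 10 dB
```

For m = 1 the estimates should track the closed-form Rayleigh result 0.5*(1 - sqrt(g/(1+g))), roughly 0.146 at 0 dB and 0.023 at 10 dB; larger m (milder fading) lowers the BER at a given SNR.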
Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data
NASA Technical Reports Server (NTRS)
Deschamps, P.-Y.; Frouin, R.
1997-01-01
The investigation focuses on two key issues in satellite ocean color remote sensing, namely the presence of whitecaps on the sea surface and the validity of the aerosol models selected for the atmospheric correction of SeaWiFS data. Experiments were designed and conducted at the Scripps Institution of Oceanography to measure the optical properties of whitecaps and to study the aerosol optical properties in a typical mid-latitude coastal environment. CIMEL Electronique sunphotometers, now integrated in the AERONET network, were also deployed permanently in Bermuda and in Lanai, calibration/validation sites for SeaWiFS and MODIS. Original results were obtained on the spectral reflectance of whitecaps and on the choice of aerosol models for atmospheric correction schemes and the type of measurements that should be made to verify those schemes. Bio-optical algorithms to remotely sense primary productivity from space were also evaluated, as well as current algorithms to estimate PAR at the earth's surface.
NASA Astrophysics Data System (ADS)
Jackson, Thomas L.; Sridharan, Prashanth; Zhang, Ju; Balachandar, S.
2015-11-01
In this work we present axisymmetric numerical simulations of shock propagating in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. The numerical method is a finite-volume based solver on a Cartesian grid, which allows for multi-material interfaces and shocks. To preserve particle mass and volume, a novel constraint reinitialization scheme is introduced. We compute the unsteady drag coefficient as a function of post-shock pressure, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. Using this information, we also present a simplified point-particle force model that can be used for mesoscale simulations.
Nanowire growth kinetics in aberration corrected environmental transmission electron microscopy
Chou, Yi-Chia; Panciera, Federico; Reuter, Mark C.; ...
2016-03-15
Here, we visualize atomic level dynamics during Si nanowire growth using aberration corrected environmental transmission electron microscopy, and compare with lower pressure results from ultra-high vacuum microscopy. We discuss the importance of higher pressure observations for understanding growth mechanisms and describe protocols to minimize effects of the higher pressure background gas.
Role of relativity in high-pressure phase transitions of thallium.
Kotmool, Komsilp; Chakraborty, Sudip; Bovornratanaraks, Thiti; Ahuja, Rajeev
2017-02-20
We demonstrate the relativistic effects in high-pressure phase transitions of the heavy element thallium. The known first phase transition from h.c.p. to f.c.c. is initially investigated at various relativistic levels and with several exchange-correlation functionals as implemented in the FPLO method, as well as with a scalar relativistic scheme within the PAW formalism. The electronic structure calculations are interpreted from the perspective of energetic stability and the electronic density of states. The fully relativistic (FR) scheme within L(S)DA agrees best with the experimental results, giving a transition pressure of 3 GPa. The s-p hybridization and the valence-core overlap of the 6s and 5d states are the primary reasons behind the occurrence of the f.c.c. phase. A recently proposed phase, a body-centered tetragonal (b.c.t.) phase, is confirmed as a small distortion of the f.c.c. phase. We have also predicted a reversible b.c.t. → f.c.c. phase transition at 800 GPa. This finding suggests that almost all the III-A elements (Ga, In and Tl) exhibit the b.c.t. → f.c.c. phase transition at extremely high pressure.
Wang, Qian; Hisatomi, Takashi; Suzuki, Yohichi; Pan, Zhenhua; Seo, Jeongsuk; Katayama, Masao; Minegishi, Tsutomu; Nishiyama, Hiroshi; Takata, Tsuyoshi; Seki, Kazuhiko; Kudo, Akihiko; Yamada, Taro; Domen, Kazunari
2017-02-01
Development of sunlight-driven water splitting systems with high efficiency, scalability, and cost-competitiveness is a central issue for mass production of solar hydrogen as a renewable and storable energy carrier. Photocatalyst sheets comprising a particulate hydrogen evolution photocatalyst (HEP) and an oxygen evolution photocatalyst (OEP) embedded in a conductive thin film can realize efficient and scalable solar hydrogen production using Z-scheme water splitting. However, the use of expensive precious metal thin films that also promote reverse reactions is a major obstacle to developing a cost-effective process at ambient pressure. In this study, we present a standalone particulate photocatalyst sheet based on an earth-abundant, relatively inert, and conductive carbon film for efficient Z-scheme water splitting at ambient pressure. A SrTiO3:La,Rh/C/BiVO4:Mo sheet is shown to achieve unassisted pure-water (pH 6.8) splitting with a solar-to-hydrogen energy conversion efficiency (STH) of 1.2% at 331 K and 10 kPa, while retaining 80% of this efficiency at 91 kPa. The STH value of 1.0% is the highest among Z-scheme pure water splitting systems operating at ambient pressure. The working mechanism of the photocatalyst sheet is discussed on the basis of band diagram simulation. In addition, the photocatalyst sheet split pure water more efficiently than conventional powder suspension systems and photoelectrochemical parallel cells because H+ and OH- concentration overpotentials and an IR drop between the HEP and OEP were effectively suppressed. The proposed carbon-based photocatalyst sheet, which can be used at ambient pressure, is an important alternative to (photo)electrochemical systems for practical solar hydrogen production.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Mamalakis, Antonis; Deidda, Roberto; Marrocu, Marino
2015-04-01
To improve the skill of Global Climate Models (GCMs) and Regional Climate Models (RCMs) in reproducing the statistics of rainfall at the basin level and at hydrologically relevant temporal scales (e.g. daily), two types of statistical approaches have been suggested. One is the statistical correction of climate model rainfall outputs using historical series of precipitation. The other is the use of stochastic models of rainfall to conditionally simulate precipitation series, based on large-scale atmospheric predictors produced by climate models (e.g. geopotential height, relative vorticity, divergence, mean sea level pressure). The latter approach, usually referred to as statistical rainfall downscaling, aims at reproducing the statistical character of rainfall, while accounting for the effects of large-scale atmospheric circulation (and, therefore, climate forcing) on rainfall statistics. While promising, statistical rainfall downscaling has not attracted much attention in recent years, since the suggested approaches involved complex (i.e. subjective or computationally intensive) identification procedures of the local weather, in addition to demonstrating limited success in reproducing several statistical features of rainfall, such as seasonal variations, the distributions of dry and wet spell lengths, the distribution of the mean rainfall intensity inside wet periods, and the distribution of rainfall extremes. In an effort to remedy those shortcomings, Langousis and Kaleris (2014) developed a statistical framework for simulation of daily rainfall intensities conditional on upper-air variables, which accurately reproduces the statistical character of rainfall at multiple time scales.
Here, we study the relative performance of: a) quantile-quantile (Q-Q) correction of climate model rainfall products, and b) the statistical downscaling scheme of Langousis and Kaleris (2014), in reproducing the statistical structure of rainfall, as well as rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, i.e. the Flumendosa catchment, using climate model rainfall and atmospheric data from the ENSEMBLES project (http://ensembleseu.metoffice.com). In doing so, we split the historical rainfall record of mean areal precipitation (MAP) in 15-year calibration and 45-year validation periods, and compare the historical rainfall statistics to those obtained from: a) Q-Q corrected climate model rainfall products, and b) synthetic rainfall series generated by the suggested downscaling scheme. To our knowledge, this is the first time that climate model rainfall and statistically downscaled precipitation are compared to catchment-averaged MAP at a daily resolution. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the climate model used and the length of the calibration period. This is particularly the case for the yearly rainfall maxima, where direct statistical correction of climate model rainfall outputs shows increased sensitivity to the length of the calibration period and the climate model used. The robustness of the suggested downscaling scheme in modeling rainfall extremes at a daily resolution, is a notable feature that can effectively be used to assess hydrologic risk at a regional level under changing climatic conditions. 
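The Q-Q correction of approach (a) can be sketched as empirical quantile mapping. This is a generic illustration, not the processing chain used with the ENSEMBLES data; the quantile grid and the toy rainfall series are assumptions.

```python
import numpy as np

def qq_correct(model_values, model_cal, obs_cal, n_q=101):
    """Empirical quantile-quantile correction: each model value is replaced by
    the observed value at the same empirical quantile, with the mapping
    learned on the calibration period."""
    q = np.linspace(0.0, 1.0, n_q)
    model_q = np.quantile(model_cal, q)
    obs_q = np.quantile(obs_cal, q)
    return np.interp(model_values, model_q, obs_q)

# toy check: if the model systematically doubles rainfall, the mapping halves it
obs_cal = np.linspace(0.0, 10.0, 1000)     # "observed" calibration rainfall
model_cal = 2.0 * obs_cal                  # biased "model" rainfall
corrected = qq_correct(np.array([4.0, 10.0]), model_cal, obs_cal)
```

The sensitivity noted in the abstract arises here through `model_cal` and `obs_cal`: with a short calibration period the empirical tail quantiles are poorly sampled, so corrected extremes depend strongly on the calibration window.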
Acknowledgments The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State. CRS4 highly acknowledges the contribution of the Sardinian regional authorities.
Fuel cell flooding detection and correction
DiPierno Bosco, Andrew; Fronk, Matthew Howard
2000-08-15
Method and apparatus for monitoring H2-O2 PEM fuel cells to detect and correct flooding. The pressure drop across a given H2 or O2 flow field is monitored and compared to predetermined thresholds of unacceptability. If the pressure drop exceeds a threshold of unacceptability, corrective measures are automatically initiated.
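A minimal sketch of the monitoring logic, with illustrative names and threshold values (the patent determines the thresholds per flow field and operating condition):

```python
def check_flooding(pressure_drop_kpa, threshold_kpa):
    """Return True (corrective measures needed) when the measured flow-field
    pressure drop exceeds the predetermined threshold of unacceptability."""
    return pressure_drop_kpa > threshold_kpa

# (measured drop, threshold) pairs for two hypothetical flow-field readings
readings = [(12.0, 20.0), (25.0, 20.0)]
flags = [check_flooding(dp, th) for dp, th in readings]
```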
Threshold quantum secret sharing based on single qubit
NASA Astrophysics Data System (ADS)
Lu, Changbin; Miao, Fuyou; Meng, Keju; Yu, Yue
2018-03-01
Based on a unitary phase shift operation on a single qubit in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, the entangle-and-measure attack, and participant attacks such as the entanglement swapping attack. Moreover, it is easier to realize physically and more practical in applications than related schemes. By the method in our scheme, new (t, n)-QSS schemes can be easily constructed using other classical (t, n) secret sharing schemes.
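The classical Shamir component that supplies the private value can be sketched as follows; the quantum part (phase-shift operations and decoy photons) is not modeled, and the prime modulus is an illustrative choice.

```python
import random

P = 2**61 - 1   # a Mersenne prime, large enough for this demo

def make_shares(secret, t, n, seed=42):
    """Split `secret` into n shares; any t of them reconstruct it.
    The secret is the constant term of a random degree-(t-1) polynomial."""
    rng = random.Random(seed)
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, 3, 5)
recovered = reconstruct(shares[:3])
```

Any t = 3 of the 5 shares reconstruct the secret, while fewer reveal nothing; in the proposed scheme this reconstruction correctness is what validates the quantum secret recovery.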
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
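The decay-correction idea can be illustrated with a first-order model. This is a generic sketch assuming the charge decay acts as an RC high-pass with a known time constant tau; the patented method instead derives its correction factors from the estimated motoring pressure and the measured vibration signal itself.

```python
import numpy as np

def correct_charge_decay(y, dt, tau):
    """Undo first-order charge decay: if the sensor output obeys
    y' = x' - y/tau, the true signal is x(t) = y(t) + (1/tau) * int_0^t y ds.
    A left Riemann sum matches the forward-Euler decay model below."""
    integral = np.concatenate(([0.0], np.cumsum(y[:-1]))) * dt
    return y + integral / tau

# simulate a decayed signal with the matching discrete (forward Euler) model
dt, tau = 1e-4, 0.05
t = np.arange(0.0, 0.2, dt)
x = np.sin(2 * np.pi * 25 * t) ** 2        # stand-in "cylinder pressure" trace
y = np.empty_like(x)
y[0] = x[0]
for n in range(len(x) - 1):
    y[n + 1] = y[n] + (x[n + 1] - x[n]) - (dt / tau) * y[n]

x_rec = correct_charge_decay(y, dt, tau)
```

With the simulation and correction discretized consistently, the original trace is recovered essentially exactly, which is the role the charge-decay correction factor plays in the reconstruction.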
Correction of Dynamic Characteristics of SAR Cryogenic GTE on Consumption of Gasified Fuel
NASA Astrophysics Data System (ADS)
Bukin, V. A.; Gimadiev, A. G.; Gangisetty, G.
2018-01-01
When the gas turbine engines (GTE) NK-88 (for liquid hydrogen) and NK-89 (for liquefied natural gas) were developed, a system built around a turbo-pump unit was improved and proven that works without direct regulation of the cryogenic fuel flow: the fuel is supplied by a centrifugal pump of the turbo-pump unit (TPU) commanded from the "kerosene" system. This type of automatic control system (SAR) partially "neutralizes" the delay caused by gasification of the fuel. It requires no measurements in the cryogenic medium, and a failure of the centrifugal cryogenic pump does not lead to engine failure. On the other hand, a system without direct regulation of the cryogenic fuel flow has complex internal dynamic connections, whose properties are determined by the characteristics of the constituent units and assemblies, and it is difficult to accurately maintain the maximum boundary level and the minimum fuel consumption under booster pressure changes. Direct regulation of the cryogenic fuel consumption (prior to gasification) is the preferred solution, since for traditional liquid and gaseous fuels this is the main and proven method. A scheme is proposed for correcting the dynamic characteristics of a single-loop SAR GTE controlling liquefied cryogenic fuel consumption, with a flow-rate correction applied in the gasified state, which ensures dynamic properties of the system no worse than those of the NK-88 and NK-89 engines.
On the security of two remote user authentication schemes for telecare medical information systems.
Kim, Kee-Won; Lee, Jae-Dong
2014-05-01
Telecare medical information systems (TMISs) support convenient and rapid health-care services. A secure and efficient authentication scheme for a TMIS safeguards patients' electronic patient records (EPRs) and helps health-care workers and medical personnel make correct clinical decisions rapidly. Recently, Kumari et al. proposed a password-based user authentication scheme using smart cards for TMISs, and claimed that the proposed scheme could resist various malicious attacks. However, we point out that their scheme is still vulnerable to lost smart card attacks and cannot provide forward secrecy. Subsequently, Das and Goswami proposed a secure and efficient uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. They simulated their scheme for formal security verification using the widely accepted automated validation of Internet security protocols and applications (AVISPA) tool to ensure that it is secure against passive and active attacks. However, we show that their scheme is also vulnerable to smart card loss attacks and cannot provide the forward secrecy property. The proposed cryptanalysis discourages any practical use of the two schemes under investigation and reveals some subtleties and challenges in designing this type of scheme.
Two-out-of-two color matching based visual cryptography schemes.
Machizaud, Jacques; Fournel, Thierry
2012-09-24
Visual cryptography, which consists in sharing a secret message between transparencies, has been extended to color prints. In this paper, we propose a new visual cryptography scheme based on color matching. The stacked printed media reveal a uniformly colored message decoded by the human visual system. In contrast with previous color visual cryptography schemes, the proposed one enables images to be shared without pixel expansion and a forgery to be detected, as the color of the message is kept secret. In order to correctly print the colors on the media and to increase the security of the scheme, we use spectral models developed for color reproduction that describe printed colors from an optical point of view.
Galilean invariant resummation schemes of cosmological perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peloso, Marco; Pietroni, Massimo, E-mail: peloso@physics.umn.edu, E-mail: massimo.pietroni@unipr.it
2017-01-01
Many of the methods proposed so far to go beyond Standard Perturbation Theory break invariance under time-dependent boosts (denoted here as extended Galilean Invariance, or GI). This gives rise to spurious large scale effects which spoil the small scale predictions of these approximation schemes. By using consistency relations we derive fully non-perturbative constraints that GI imposes on correlation functions. We then introduce a method to quantify the amount of GI breaking of a given scheme, and to correct it by properly tailored counterterms. Finally, we formulate resummation schemes which are manifestly GI, discuss their general features, and implement them in the so-called Time-Flow, or TRG, equations.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
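The simultaneous-update idea can be illustrated on a toy problem. The sketch below uses an invented scalar state equation and objective (not the paper's aerodynamic formulation): one sweep of the state solver and one design step are interleaved, instead of fully converging the flow solution for every design update.

```python
# Toy "simultaneous update" optimization: the state u and design parameter p
# evolve together. Hypothetical model: state equation u = 0.5*u + p (whose
# converged state is u = 2p) and objective J(u) = (u - 1)^2, so the optimum
# is p = 0.5, u = 1.

def simultaneous_update(p=0.0, u=0.0, lr=0.1, iters=200):
    for _ in range(iters):
        u = 0.5 * u + p                  # one fixed-point sweep of the "flow" solver
        dJ_dp = 2.0 * (u - 1.0) * 2.0    # chain rule with du/dp = 2 at convergence
        p -= lr * dJ_dp                  # design step using the unconverged state
    return u, p

u, p = simultaneous_update()
print(round(u, 4), round(p, 4))  # converges to u = 1, p = 0.5
```

The design step uses a gradient evaluated with the current, not-yet-converged state, which is precisely what makes the combined iteration cheaper than an inner-outer loop.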
Ionization cross section, pressure shift and isotope shift measurements of osmium
NASA Astrophysics Data System (ADS)
Hirayama, Yoshikazu; Mukai, Momo; Watanabe, Yutaka; Oyaizu, Michihiro; Ahmed, Murad; Kakiguchi, Yutaka; Kimura, Sota; Miyatake, Hiroari; Schury, Peter; Wada, Michiharu; Jeong, Sun-Chan
2017-11-01
In-gas-cell laser resonance ionization spectroscopy of neutral osmium atoms was performed with the use of a two-color two-step laser resonance ionization technique. Saturation curves for the ionization scheme were measured, and the ionization cross section was experimentally determined by solving the rate equations for the ground, intermediate and ionization-continuum populations. The pressure shift and pressure broadening in the resonance spectra of the excitation transition were measured. The electronic factor F247 for the transition λ1 = 247.7583 nm to the intermediate state was deduced from the measured isotope shifts of the stable isotopes 188,189,190,192Os. The efficient ionization scheme, pressure shift, nuclear isotope shift and F247 are expected to be useful for applications of laser ion sources to unstable nuclei and for nuclear spectroscopy based on laser ionization techniques.
NASA Astrophysics Data System (ADS)
Pasten Zapata, Ernesto; Moggridge, Helen; Jones, Julie; Widmann, Martin
2017-04-01
Run-of-the-river (ROR) hydropower schemes are expected to be significantly affected by climate change, as they rely on the availability of river flow to generate energy. As temperature and precipitation change in the future, the hydrological cycle will also change. Climate models based on complex physical atmospheric interactions have therefore been developed to simulate future climate scenarios for given atmospheric greenhouse gas concentrations. These scenarios are classified according to Representative Concentration Pathways (RCPs), defined by the greenhouse gas concentrations. This study evaluates possible scenarios for selected ROR hydropower schemes within the UK, considering three different RCPs: 2.6, 4.5 and 8.5 W/m2 by 2100 relative to pre-industrial values. The study sites cover different climate, land cover, topographic and hydropower scheme characteristics representative of the UK's heterogeneity. Precipitation and temperature outputs from state-of-the-art regional climate models (RCMs) from the Euro-CORDEX project are used as input to a HEC-HMS hydrological model to simulate the future river flow available. Both uncorrected and bias-corrected RCM simulations are analyzed. The results provide insight into the possible effects of climate change on power generation from the ROR hydropower schemes under the different RCP scenarios and contrast the results obtained from uncorrected and bias-corrected RCMs. This analysis can aid adaptation to climate change as well as the planning of future ROR schemes in the region.
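A common way to bias-correct RCM output against observations is empirical quantile mapping. The sketch below is illustrative only (synthetic distributions; the abstract does not say which correction method the project used): model values are mapped through the model's empirical CDF onto the observed one.

```python
import numpy as np

# Empirical quantile-mapping sketch for bias-correcting climate-model output.
# The "observations" and "model output" here are synthetic gamma samples with
# a deliberate multiplicative and additive bias injected into the model.
rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 3.0, size=5000)                 # pseudo-observed precipitation
model = 1.5 * rng.gamma(2.0, 3.0, size=5000) + 1.0   # biased model output

q = np.linspace(0.0, 1.0, 101)
model_q = np.quantile(model, q)   # empirical quantiles of the model
obs_q = np.quantile(obs, q)       # matching observed quantiles

def bias_correct(x):
    # Map each model value to the observed value with the same quantile rank.
    return np.interp(x, model_q, obs_q)

corrected = bias_correct(model)
print(round(np.quantile(corrected, 0.5) - np.quantile(obs, 0.5), 3))  # near-zero median bias
```

By construction the corrected series reproduces the observed distribution quantile by quantile, which is why the method is popular for precipitation input to hydrological models.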
Validation of a new reference depletion calculation for thermal reactors
NASA Astrophysics Data System (ADS)
Canbakan, Axel
Resonance self-shielding calculations are an essential component of a deterministic lattice-code calculation. Even though their aim is to correct the cross-section deviations, they introduce a non-negligible error in evaluated parameters such as the flux. Until now, French studies of light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase of computing capacities, this method is reaching its limits in precision and can be replaced by a subgroup method. Originally used for fast-neutron-reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to provide a validation of the subgroup method that is as precise as possible, first without burnup and then with an isotopic depletion study. Users interested in implementing a subgroup method in their scheme for pressurized water reactors can then rely on this thesis to justify their modeling choices. Moreover, other parameters are validated in order to propose a new reference scheme with fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a method-of-characteristics (MOC) flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, the most suitable energy discretization for a subgroup method is determined. Finally, other techniques, such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation, are studied. A DRAGON5 scheme is also validated, as it offers interesting features: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared with 361 groups for APOLLO2). There are two reasons to use this code. The first is to offer DRAGON5 users a new reference lattice scheme for pressurized water reactors.
The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and the use of a MOC-based flux calculation in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is at least as precise as a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in each geometrical region gives better results than MOC with a constant-source model, and a moderator discretization is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2D full-core calculations; (4) SHEM295 is viable with the Subgroup Projection Method in DRAGON5.
Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy
NASA Technical Reports Server (NTRS)
Edwards, Lawrence G.; Haberbusch, Mark
1993-01-01
The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperatures and pressures. Significant liquid level measurement errors were found to occur due to the changes in the fluids' dielectric constants that develop over the operating temperature and pressure ranges of the cryogenic storage tanks. The level measurement inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated based on the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric constant corrected liquid levels agreed within 0.5 percent of the temperature profile estimated liquid level. The uncorrected dielectric constant capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 fluids due to temperature and pressure effects on the dielectric constants over the tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived. The improved accuracy by use of the correction factors is experimentally verified by comparison with liquid levels derived from fluid temperature profiles.
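The correction-factor idea can be sketched with the usual coaxial-probe relation. All numbers below are illustrative, not the paper's calibration: for a probe of active length L, the measured capacitance is C = C_air·(1 + (eps_r − 1)·h/L), so the level fraction follows from C once eps_r is known at the actual fluid temperature and pressure.

```python
# Dielectric-correction sketch for a coaxial capacitance level probe
# (illustrative values, not the paper's calibration data).

def level_fraction(C, C_air, eps_r):
    """Liquid level as a fraction of probe length from measured capacitance."""
    return (C / C_air - 1.0) / (eps_r - 1.0)

C_air = 100.0      # pF, probe fully in vapor (hypothetical)
eps_cal = 1.454    # dielectric constant assumed at calibration
eps_true = 1.430   # value at the actual tank temperature/pressure (illustrative)

h_true = 0.60                                        # true fill fraction
C_meas = C_air * (1.0 + (eps_true - 1.0) * h_true)   # what the probe reads

uncorrected = level_fraction(C_meas, C_air, eps_cal)  # fixed calibration value
corrected = level_fraction(C_meas, C_air, eps_true)   # temperature/pressure-corrected
print(round(uncorrected, 3), round(corrected, 3))
```

Even the small dielectric-constant shift used here moves the uncorrected reading by a few percent of full scale, which matches the order of the errors reported in the abstract.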
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin
2009-07-01
This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology for this system. An image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in computer simulation. The SPGD algorithm is run at seven zoom positions to calculate the optimized surface shape of the deformable mirror. A comparison of the performance of the corrected system with that of the baseline system shows AO technology to be a good way of correcting the dynamic residual aberrations in conformal optical design.
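The SPGD (stochastic parallel gradient descent) update itself is simple to sketch. The toy metric below stands in for the image-sharpness measure, and all names are illustrative rather than the paper's MATLAB/Code V interface: perturb all mirror coefficients in parallel, probe the metric on both sides, and step along the measured difference.

```python
import numpy as np

# Minimal SPGD sketch: maximize a toy "sharpness" metric that peaks when the
# mirror coefficients u exactly cancel a hypothetical aberration vector.
rng = np.random.default_rng(0)
aberration = np.array([0.8, -0.3, 0.5, 0.1])

def sharpness(u):
    # Toy metric: largest (zero) when u = -aberration.
    return -np.sum((u + aberration) ** 2)

u = np.zeros(4)
gain, delta = 1.0, 0.1
for _ in range(500):
    du = delta * rng.choice([-1.0, 1.0], size=u.size)  # parallel random perturbation
    dJ = sharpness(u + du) - sharpness(u - du)         # two-sided metric probe
    u = u + gain * dJ * du                             # SPGD update

print(np.round(u, 2))  # approaches -aberration
```

The appeal of SPGD in this setting is that it needs only metric evaluations, never an explicit gradient of the optical model, which is why it pairs well with a black-box ray-trace code.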
NASA Astrophysics Data System (ADS)
Morbec, Juliana M.; Kratzer, Peter
2017-01-01
Using first-principles calculations based on density-functional theory (DFT), we investigated the effects of the van der Waals (vdW) interactions on the structural and electronic properties of anthracene and pentacene adsorbed on the Ag(111) surface. We found that the inclusion of vdW corrections strongly affects the binding of both anthracene/Ag(111) and pentacene/Ag(111), yielding adsorption heights and energies more consistent with the experimental results than standard DFT calculations with generalized gradient approximation (GGA). For anthracene/Ag(111) the effect of the vdW interactions is even more dramatic: we found that "pure" DFT-GGA calculations (without including vdW corrections) result in preference for a tilted configuration, in contrast to the experimental observations of flat-lying adsorption; including vdW corrections, on the other hand, alters the binding geometry of anthracene/Ag(111), favoring the flat configuration. The electronic structure obtained using a self-consistent vdW scheme was found to be nearly indistinguishable from the conventional DFT electronic structure once the correct vdW geometry is employed for these physisorbed systems. Moreover, we show that a vdW correction scheme based on a hybrid functional DFT calculation (HSE) results in an improved description of the highest occupied molecular level of the adsorbed molecules.
Operator splitting method for simulation of dynamic flows in natural gas pipeline networks
Dyachenko, Sergey A.; Zlotnik, Anatoly; Korotkevich, Alexander O.; ...
2017-09-19
Here, we develop an operator splitting method to simulate flows of isothermal compressible natural gas over transmission pipelines. The method solves a system of nonlinear hyperbolic partial differential equations (PDEs) of hydrodynamic type for mass flow and pressure on a metric graph, where turbulent losses of momentum are modeled by phenomenological Darcy-Weisbach friction. Mass flow balance is maintained through the boundary conditions at the network nodes, where natural gas is injected into or withdrawn from the system. Gas flow through the network is controlled by compressors boosting pressure at the inlet of the adjacent pipe. Our operator splitting numerical scheme is unconditionally stable and second-order accurate in space and time. The scheme is explicit, and it is formulated to work with general networks with loops. We test the scheme over a range of regimes and network configurations, and also compare its performance with that of two other state-of-the-art implicit schemes.
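The second-order accuracy of a symmetric (Strang) splitting can be demonstrated on a small stand-alone example. The sketch below uses a linear ODE with two non-commuting matrix operators rather than the paper's hyperbolic PDEs on a graph, so it illustrates only the splitting structure, not the pipeline physics.

```python
import numpy as np
from scipy.linalg import expm

# Strang splitting sketch: advance du/dt = (A + B) u by a half-step of A,
# a full step of B, and another half-step of A. A and B do not commute, so
# the splitting error is nonzero and should shrink as O(h^2).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.5, 0.0], [0.2, -0.3]])
u0 = np.array([1.0, 0.0])
T = 1.0

def strang(n):
    h = T / n
    step = expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2)  # one split step
    u = u0.copy()
    for _ in range(n):
        u = step @ u
    return u

exact = expm((A + B) * T) @ u0
e1 = np.linalg.norm(strang(50) - exact)
e2 = np.linalg.norm(strang(100) - exact)
print(e1 / e2)  # close to 4, i.e. second-order convergence in time
```

Halving the step size cuts the error by about a factor of four, the signature of a second-order-in-time scheme like the one described above.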
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust radial basis function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, this comes at the cost of introducing errors into the parameterization, since the exact displacement of all surface points is no longer recovered. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. In addition, an inverse pressure design is executed on the Onera-M6 wing, and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
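The core RBF mesh-movement step can be sketched with SciPy's off-the-shelf interpolator. This is a generic 2D illustration, not the thesis' implementation: displacements prescribed at surface control points define a smooth interpolant that is then evaluated at the interior (volume) nodes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# RBF mesh-movement sketch: surface-point displacements drive a smooth
# deformation of the whole mesh. Keeping only a reduced subset of surface
# points speeds this up but, as noted above, drops exactness at the omitted
# points; the kept control points are still matched exactly.
rng = np.random.default_rng(1)
surface = rng.uniform(-1, 1, size=(40, 2))            # retained surface control points
disp = np.column_stack([0.1 * surface[:, 0] ** 2,     # prescribed x-displacements
                        0.05 * np.sin(surface[:, 1])])  # prescribed y-displacements

rbf = RBFInterpolator(surface, disp, kernel="thin_plate_spline")

volume = rng.uniform(-1, 1, size=(200, 2))            # interior mesh nodes
moved = volume + rbf(volume)                          # deformed volume mesh

# The interpolant reproduces the prescribed displacements at the control
# points it was built from.
print(np.abs(rbf(surface) - disp).max())
```

In a real optimization loop the same interpolant weights are reused for every interior node, which is what makes the reduced-point variant cheap.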
A velocity-correction projection method based immersed boundary method for incompressible flows
NASA Astrophysics Data System (ADS)
Cai, Shanggui
2014-11-01
In the present work we propose a novel direct-forcing immersed boundary method based on the velocity-correction projection method of J. L. Guermond and J. Shen [Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method is considered a dual class of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly, without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. This work was supported by the China Scholarship Council.
NASA Astrophysics Data System (ADS)
Alappattu, Denny P.; Wang, Qing; Yamaguchi, Ryan; Lind, Richard J.; Reynolds, Mike; Christman, Adam J.
2017-08-01
The sea surface temperature (SST) relevant to air-sea interaction studies is the temperature immediately adjacent to the air, referred to as the skin SST. Generally, SST measurements from ships and buoys are taken at depths varying from several centimeters to 5 m below the surface. These measurements, known as bulk SST, can differ from the skin SST by up to O(1°C). Shipboard bulk and skin SST measurements were made during the Coupled Air-Sea Processes and Electromagnetic ducting Research east coast field campaign (CASPER-East). An Infrared SST Autonomous Radiometer (ISAR) recorded the skin SST, while R/V Sharp's Surface Mapping System (SMS) provided the bulk SST from 1 m water depth. Since the ISAR is sensitive to sea spray and rain, skin SST data are missing in those conditions. The SMS measurement, however, is less affected by adverse weather and provided continuous bulk SST measurements. It is therefore desirable to correct the bulk SST to obtain a good representation of the skin SST, which is the objective of this research. The bulk-skin SST difference has been examined with respect to meteorological factors associated with the cool skin and diurnal warm layers. Strong influences of wind speed, diurnal effects, and net longwave radiation flux on the temperature difference are observed. A three-step scheme is established to correct first for the wind effect, then for diurnal variability, and finally for the dependency on net longwave radiation flux. The scheme is tested and compared to existing correction schemes. This method is able to effectively compensate for multiple factors acting to modify bulk SST measurements over the range of conditions experienced during CASPER-East.
NASA Astrophysics Data System (ADS)
Bejaoui, Najoua
Pressurized water reactors (PWRs) form the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses remain unresolved, given the geometric complexity of the core, such as the analysis of the neutron flux's behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross sections for the assemblies, condensed to a reduced number of groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level; this step is called the full-core (or whole-core) calculation. This decoupling of the two calculation steps is the origin of methodological biases, particularly at the core-reflector interface: the periodicity hypothesis used to calculate the cross-section libraries becomes less pertinent for assemblies adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps slow down neutrons leaving the reactor and return them to the core. This effect leads to two fission peaks in fuel assemblies located at the core-reflector interface, the fission rate increasing due to the greater proportion of re-entrant neutrons. This change in the neutron spectrum reaches deep inside the fuel located on the periphery of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes into account the environment of the peripheral assemblies and generates equivalent neutronic properties for the reflector.
This scheme is tested on a core without control mechanisms, charged with fresh fuel. The results of this study show that the explicit representation of the reflector and the calculation of the peripheral assembly with our advanced scheme correct the energy spectrum at the core interface and increase the peripheral power by up to 12% compared with the reference scheme.
Application of wavelet multi-resolution analysis for correction of seismic acceleration records
NASA Astrophysics Data System (ADS)
Ansari, Anooshiravan; Noorzad, Assadollah; Zare, Mehdi
2007-12-01
During an earthquake, many stations record the ground motion, but only a few of the records can be corrected using conventional high-pass and low-pass filtering methods; the others are identified as highly contaminated by noise and, as a result, useless. There are two major problems associated with these noisy records. First, since the signal-to-noise ratio (S/N) is low, it is not possible to discriminate between the original signal and the noise in either the frequency domain or the time domain; consequently, the noise cannot be canceled out using conventional filtering methods. The second problem is the non-stationary character of the noise: in many cases the characteristics of the noise vary over time, and in these situations frequency-domain correction schemes cannot be applied. When correcting acceleration signals contaminated with high-level non-stationary noise, an important question is whether the state of the noise can be estimated in different bands of time and frequency. Wavelet multi-resolution analysis decomposes a signal into different time-frequency components and, together with a suitable criterion for identifying the noise within each component, provides the mathematical tool required for the correction of highly noisy acceleration records. In this paper, the characteristics of wavelet de-noising procedures are examined through the correction of selected real and synthetic acceleration time histories. It is concluded that this method provides a very flexible and efficient tool for the correction of very noisy and non-stationary records of ground acceleration. In addition, a two-step correction scheme is proposed for long-period correction of the acceleration records. This method has the advantage of producing stable results for the displacement time history and the response spectrum.
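The multi-resolution denoising idea can be sketched in pure NumPy with a Haar wavelet. This is only a minimal illustration (Haar basis, hard thresholding, synthetic signal), not the authors' procedure: decompose, threshold the detail coefficients level by level with a per-level noise estimate, and reconstruct.

```python
import numpy as np

# Haar multi-resolution denoising sketch. Thresholding each level with its
# own noise estimate (median absolute deviation) is what lets the approach
# adapt to noise whose strength varies across time-frequency bands.

def haar_forward(x, levels):
    coeffs, approx = [], x.astype(float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # detail
        coeffs.append(d)
        approx = a
    return approx, coeffs

def haar_inverse(approx, coeffs):
    for d in reversed(coeffs):
        out = np.empty(2 * approx.size)
        out[0::2] = (approx + d) / np.sqrt(2)
        out[1::2] = (approx - d) / np.sqrt(2)
        approx = out
    return approx

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 4 * t)                   # synthetic "ground motion"
noisy = clean + 0.3 * rng.standard_normal(t.size)

approx, details = haar_forward(noisy, levels=4)
details = [np.where(np.abs(d) > 3 * np.median(np.abs(d)) / 0.6745, d, 0.0)
           for d in details]                        # per-level hard threshold
denoised = haar_inverse(approx, details)

print(np.std(noisy - clean) / np.std(denoised - clean))  # reduction factor > 1
```

Because the transform is orthonormal, an untouched decomposition reconstructs the record exactly; only the thresholded detail bands change.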
Mishra, Dheerendra; Mukhopadhyay, Sourav; Kumari, Saru; Khan, Muhammad Khurram; Chaturvedi, Ankita
2014-05-01
Telecare medicine information systems (TMIS) provide a platform to deliver clinical services door to door. The technological advances in mobile computing are enhancing the quality of healthcare, and a user can access these services using a mobile device. However, the user and the Telecare system communicate via public channels in these online services, which increases the security risk. Therefore, it must be ensured that only an authorized user accesses the system and that the user is interacting with the correct system. Mutual authentication provides a way to achieve this. Existing schemes are either vulnerable to attacks or have a high computational cost, whereas a scalable authentication scheme for mobile devices should be both secure and efficient. Recently, Awasthi and Srivastava presented a biometric-based authentication scheme for TMIS with nonces. Their scheme requires only the computation of hash and XOR functions and thus fits TMIS. However, we observe that Awasthi and Srivastava's scheme does not achieve an efficient password change phase. Moreover, their scheme does not resist off-line password guessing attacks. We therefore propose an improvement of Awasthi and Srivastava's scheme that removes these drawbacks.
A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function
Odelu, Vanga; Goswami, Adrijit
2014-01-01
In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and, for mutual authentication, the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using passwords, biometrics, and smart cards have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we propose a new robust and effective password-based remote user authentication scheme using a smart card. Our scheme is efficient because it uses only efficient one-way hash functions and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform a simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently and correctly supports the password change phase, always performed locally without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overhead, security, and the features it provides. PMID:24892078
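The flavor of hash-and-XOR mutual authentication can be sketched in a few lines. The toy round below is illustrative only, not the paper's protocol: it omits the smart-card registration and password details, and all identifiers are invented. Each side proves knowledge of a shared secret by answering the other's nonce.

```python
import hashlib, os

# Toy nonce-based mutual authentication using only hashing and XOR, in the
# spirit of the schemes discussed above (an illustrative sketch, not the
# paper's protocol).
H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

secret = H(b"server-master-key", b"user-id")   # shared during registration

# User -> server: nonce N1 masked with the shared secret, plus a proof.
n1 = os.urandom(32)
m1 = (xor(n1, secret), H(secret, n1))

# Server recovers N1, checks the proof, and answers with its own nonce N2.
n1_rec = xor(m1[0], secret)
assert m1[1] == H(secret, n1_rec)              # server authenticates the user
n2 = os.urandom(32)
m2 = (xor(n2, secret), H(secret, n2, n1_rec))

# User verifies the server's reply; both sides then derive a session key.
n2_rec = xor(m2[0], secret)
assert m2[1] == H(secret, n2_rec, n1)          # user authenticates the server
session_key = H(secret, n1, n2_rec)
print(len(session_key))  # 32-byte session key
```

Only hash evaluations and XORs are needed per round, which is why this family of schemes is attractive for smart cards and mobile devices.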
A robust and effective smart-card-based remote user authentication mechanism using hash function.
Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit
2014-01-01
Mechanical Extraction of Power From Ocean Currents and Tides
NASA Technical Reports Server (NTRS)
Jones, Jack; Chao, Yi
2010-01-01
A proposed scheme for generating electric power from rivers and from ocean currents, tides, and waves is intended to offer economic and environmental advantages over prior such schemes, some of which are at various stages of implementation, others of which have not yet advanced beyond the concept stage. This scheme would be less environmentally objectionable than prior schemes that involve the use of dams to block rivers and tidal flows. This scheme would also not entail the high maintenance costs of other proposed schemes that call for submerged electric generators and cables, which would be subject to degradation by marine growth and corrosion. A basic power-generation system according to the scheme now proposed would not include any submerged electrical equipment. The submerged portion of the system would include an all-mechanical turbine/pump unit that would superficially resemble a large land-based wind turbine (see figure). The turbine axis would turn slowly as it captured energy from the local river flow, ocean current, tidal flow, or flow from an ocean-wave device. The turbine axis would drive a pump through a gearbox to generate an enclosed flow of water, hydraulic fluid, or other suitable fluid at a relatively high pressure [typically approximately 500 psi (3.4 MPa)]. The pressurized fluid could be piped to an onshore or offshore facility, above the ocean surface, where it would be used to drive a turbine that, in turn, would drive an electric generator. The fluid could be recirculated between the submerged unit and the power-generation facility in a closed flow system; alternatively, if the fluid were seawater, it could be taken in from the ocean at the submerged turbine/pump unit and discharged back into the ocean from the power-generation facility.
Another alternative would be to use the pressurized flow to charge an elevated reservoir or other pumped-storage facility, from which fluid could later be released to drive a turbine/generator unit at a time of high power demand. Multiple submerged turbine/pump units could be positioned across a channel to extract more power than could be extracted by a single unit. In that case, the pressurized flows in their output pipes would be combined, via check valves, into a wider pipe that would deliver the combined flow to a power-generating or pumped-storage facility.
NASA Technical Reports Server (NTRS)
Bhat, Thonse R. S.; Baty, Roy S.; Morris, Philip J.
1990-01-01
The shock structure in non-circular supersonic jets is predicted using a linear model. This model includes the effects of the finite thickness of the mixing layer and the turbulence in the jet shear layer. A numerical solution is obtained using a conformal mapping grid generation scheme with a hybrid pseudo-spectral discretization method. The uniform pressure perturbation at the jet exit is approximated by a Fourier-Mathieu series. The pressure at downstream locations is obtained from an eigenfunction expansion that is matched to the pressure perturbation at the jet exit. Results are presented for a circular jet and for an elliptic jet of aspect ratio 2.0. Comparisons are made with experimental data.
Lépy, M-C; Altzitzoglou, T; Anagnostakis, M J; Capogni, M; Ceccatelli, A; De Felice, P; Djurasevic, M; Dryak, P; Fazio, A; Ferreux, L; Giampaoli, A; Han, J B; Hurtado, S; Kandic, A; Kanisch, G; Karfopoulos, K L; Klemola, S; Kovar, P; Laubenstein, M; Lee, J H; Lee, J M; Lee, K B; Pierre, S; Carvalhal, G; Sima, O; Tao, Chau Van; Thanh, Tran Thien; Vidmar, T; Vukanac, I; Yang, M J
2012-09-01
The second part of an intercomparison of the coincidence summing correction methods is presented. This exercise concerned three volume sources, filled with liquid radioactive solution. The same experimental spectra, decay scheme and photon emission intensities were used by all the participants. The results were expressed as coincidence summing corrective factors for several energies of (152)Eu and (134)Cs, and different source-to-detector distances. They are presented and discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
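The single-bit correction that such configuration-memory scrubbing relies on can be illustrated with a small software model. The extended Hamming (SECDED) toy below is illustrative only: the Virtex-5QV uses its own built-in frame ECC, not this code.

```python
# Sketch of single-bit error correction with an extended Hamming (SECDED)
# code, modeling one "configuration memory" upset and its repair.

def hamming_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword plus overall parity."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7
    bits.append(bits[0] ^ bits[1] ^ bits[2] ^ bits[3] ^ bits[4] ^ bits[5] ^ bits[6])
    return bits                      # last bit: overall parity (double-error detect)

def hamming_correct(bits):
    """Locate and flip a single-bit error; return the 4 corrected data bits."""
    syndrome = 0
    for pos in range(1, 8):          # XOR the positions of all set bits
        if bits[pos - 1]:
            syndrome ^= pos
    if syndrome:                     # nonzero syndrome points at the bad bit
        bits[syndrome - 1] ^= 1
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d))

word = hamming_encode(0b1011)
word[4] ^= 1                         # inject a single-event upset
print(hamming_correct(word) == 0b1011)  # error located and corrected
```

Larger multi-bit upsets defeat single-bit correction, which is why the software described above falls back to reloading whole corrupted frames in that case.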
A high-resolution Godunov method for compressible multi-material flow on overlapping grids
NASA Astrophysics Data System (ADS)
Banks, J. W.; Schwendeman, D. W.; Kapila, A. K.; Henshaw, W. D.
2007-04-01
A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform-pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on the Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.
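A first-order Godunov step is easy to show in one dimension. The sketch below solves the inviscid Burgers equation with the exact Riemann flux; it is a minimal stand-in for the multi-material solver above, with no AMR, overlapping grids, or energy correction.

```python
import numpy as np

# First-order Godunov sketch for u_t + (u^2/2)_x = 0: exact Riemann flux at
# each cell face, conservative update, CFL-limited time step.

def godunov_flux(uL, uR):
    """Exact Godunov flux for the convex flux f(u) = u^2/2."""
    f = lambda u: 0.5 * u * u
    flux = np.where(uL <= uR,
                    np.minimum(f(uL), f(uR)),   # min of f over the fan
                    np.maximum(f(uL), f(uR)))   # shock case
    # Transonic rarefaction (uL < 0 < uR): the sonic-point flux f(0) = 0.
    return np.where((uL < 0) & (uR > 0), 0.0, flux)

N, T = 400, 0.3
x = np.linspace(0, 1, N, endpoint=False)
dx = 1.0 / N
u = np.where(x < 0.5, 1.0, 0.0)                 # Riemann initial data
t = 0.0
while t < T:
    dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)  # CFL condition
    F = godunov_flux(u, np.roll(u, -1))          # flux at right faces (periodic)
    u = u - dt / dx * (F - np.roll(F, 1))        # conservative update
    t += dt

# The shock from x = 0.5 travels at speed 1/2, so it sits near x = 0.65 at T = 0.3.
shock = x[(x > 0.4) & (u < 0.5)][0]
print(round(shock, 2))  # near the exact shock position 0.65
```

The captured shock stays sharp within a few cells and the update conserves the cell average exactly, the two properties the high-resolution method above builds on before adding its interface energy correction.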
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrarily high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
High resolution schemes and the entropy condition
NASA Technical Reports Server (NTRS)
Osher, S.; Chakravarthy, S.
1983-01-01
A systematic procedure is presented for constructing semidiscrete, second-order accurate, variation-diminishing, five-point bandwidth approximations to scalar conservation laws. These schemes are constructed to also satisfy a single discrete entropy inequality. Thus, in the convex flux case, convergence to the unique physically correct solution is proven. For hyperbolic systems of conservation laws, this construction is used formally to extend the first author's first-order accurate scheme, and it is shown (under some minor technical hypotheses) that limit solutions satisfy an entropy inequality. Results concerning discrete shocks, a maximum principle, and maximal order of accuracy are obtained. Numerical applications are also presented.
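A minimal sketch of the variation-diminishing idea described above, using a minmod-limited upwind scheme for linear advection; this is an illustrative stand-in under stated assumptions (forward-Euler time stepping, periodic grid), not the authors' specific five-point construction.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_step(u, dx, dt):
    """One forward-Euler step of a minmod-limited upwind scheme for the
    linear advection equation u_t + u_x = 0 on a periodic grid."""
    slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1)) / dx
    # reconstruct the value at each cell's right face; wind blows left-to-right
    flux = u + 0.5 * dx * slope
    return u - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # square pulse: two extrema
u = u0.copy()
dx = x[1] - x[0]
for _ in range(100):
    u = tvd_step(u, dx, dt=0.4 * dx)  # CFL 0.4 keeps the scheme TVD

def total_variation(v):
    return np.abs(np.diff(np.append(v, v[0]))).sum()

print(total_variation(u) <= total_variation(u0) + 1e-12)  # no new extrema
```

The total variation of the evolved pulse never exceeds that of the initial data, which is the discrete statement that maxima do not increase and minima do not decrease.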
Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.
Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing
2009-06-01
Methods of image encryption based on the fractional Fourier transform have an inherent flaw in security. We show that these schemes suffer from a key-multiplicity deficiency: for a given group of encryption keys, many other groups of keys also decrypt the encrypted image correctly. In some schemes, such as the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)], several factors contribute to this deficiency. A modified method is proposed to avoid all of these deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heid, Matthias; Luetkenhaus, Norbert
2006-05-15
We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.
NASA Astrophysics Data System (ADS)
Schmitteckert, Peter
2018-04-01
We present an infinite lattice density matrix renormalization group sweeping procedure which can be used as a replacement for the standard infinite lattice blocking schemes. Although the scheme is generally applicable to any system, its main advantages are the correct representation of commensurability issues and the treatment of degenerate systems. As an example we apply the method to a spin chain featuring a highly degenerate ground-state space where the new sweeping scheme provides an increase in performance as well as accuracy by many orders of magnitude compared to a recently published work.
Secure Wake-Up Scheme for WBANs
NASA Astrophysics Data System (ADS)
Liu, Jing-Wei; Ameen, Moshaddique Al; Kwak, Kyung-Sup
Network lifetime, and hence device lifetime, is one of the fundamental metrics in wireless body area networks (WBANs). To prolong it, especially for implanted sensors, each node must conserve its energy as much as possible. While a variety of wake-up/sleep mechanisms have been proposed, the wake-up radio potentially serves as a vehicle for introducing vulnerabilities and attacks into a WBAN, eventually resulting in its malfunction. In this paper, we propose a novel secure wake-up scheme in which a wake-up authentication code (WAC) is employed to ensure that a BAN Node (BN) is woken up by the correct BAN Network Controller (BNC) rather than by unintended users or malicious attackers. The scheme is implemented with a two-radio architecture. We show that our scheme provides higher security while consuming less energy than existing schemes.
Comparison of Several Dissipation Algorithms for Central Difference Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Radespiel, R.; Turkel, E.
1997-01-01
Several algorithms for introducing artificial dissipation into a central difference approximation to the Euler and Navier Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical results are also compared with either theoretical solutions or experimental data. For transonic airfoil flows the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme.
[Hemodynamic changes in hypoglycemic shock].
Gutiérrez, C; Piza, R; Chousleb, A; Hidalgo, M A; Ortigosa, J L
1977-01-01
Severe hypoglycemia may be present in seriously ill patients; if it is not corrected promptly, a series of neuroendocrine mechanisms takes place aimed at correcting the metabolic alterations. These mechanisms can produce hemodynamic alterations as well. Nine mongrel dogs were studied with continuous recording of blood pressure, central venous pressure, cardiac frequency, respiratory frequency, electrocardiogram, and first derivative (dP/dt). Six dogs received crystalline (fast-acting) insulin intravenously (group 1). After hemodynamic changes were recorded, hypoglycemia was corrected with 50 percent glucose solution. Complementary insulin doses were administered to three dogs (group 2); in this group hypoglycemia was not corrected. In group 1, during hypoglycemia there was an increase in blood pressure, central venous pressure, cardiac frequency, respiratory frequency, and dP/dt, and changes in the QT interval and T wave on the EKG; these changes were partially reversible after hypoglycemia was corrected. The above-mentioned alterations persisted in group 2; breathing became irregular and respiratory arrest supervened. It can be inferred that the hemodynamic response to hypoglycemia is predominantly adrenergic. The roles of catecholamines, glucocorticoids, glucagon, insulin, and cyclic AMP in the metabolic and hemodynamic alterations consecutive to hypoglycemia are discussed.
Coarse-grained modeling of polyethylene melts: Effect on dynamics
Peters, Brandon L.; Salerno, K. Michael; Agrawal, Anupriya; ...
2017-05-23
The distinctive viscoelastic behavior of polymers results from a coupled interplay of motion on multiple length and time scales. Capturing the broad time and length scales of polymer motion remains a challenge. Using polyethylene (PE) as a model macromolecule, we construct coarse-grained (CG) models of PE with three to six methyl groups per CG bead and probe two critical aspects of the technique: pressure corrections required after iterative Boltzmann inversion (IBI) to generate CG potentials that match the pressure of reference fully atomistic melt simulations, and the transferability of CG potentials across temperatures. While IBI produces nonbonded pair potentials that give excellent agreement between the atomistic and CG pair correlation functions, the resulting pressure for the CG models is large compared with the pressure of the atomistic system. We find that correcting the potential to match the reference pressure leads to nonbonded interactions with much deeper minima and a slightly smaller effective bead diameter. However, simulations with potentials generated by IBI and pressure-corrected IBI result in similar mean-square displacements (MSDs) and stress autocorrelation functions G(t) for PE melts. While the time rescaling factor required to match CG and atomistic models is the same for pressure- and non-pressure-corrected CG models, it strongly depends on temperature. Furthermore, transferability was investigated by comparing the MSDs and stress autocorrelation functions for potentials developed at different temperatures.
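The IBI update and a tail-type pressure correction can be sketched as follows. The linear-ramp form is the commonly used correction, but the ramp amplitude, the toy radial distribution functions, and the reduced units are all illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

kT = 1.0  # reduced units; every number below is illustrative

def ibi_update(U, g_target, g_current):
    """One IBI step: U_(k+1)(r) = U_k(r) + kT * ln(g_k(r) / g_target(r))."""
    eps = 1e-12
    return U + kT * np.log(np.clip(g_current, eps, None) /
                           np.clip(g_target, eps, None))

def pressure_correct(U, r, r_cut, p_sim, p_ref, amp=0.1 * kT):
    """Linear-ramp tail correction dU(r) = A * (1 - r / r_cut).  The sign of
    A pushes the simulated pressure toward the reference: an attractive
    (negative) tail lowers pressure, a repulsive one raises it."""
    A = -amp if p_sim > p_ref else amp
    return U + A * np.clip(1.0 - r / r_cut, 0.0, None)

r = np.linspace(0.5, 2.5, 201)
g_ref = np.exp(-((r - 1.0) ** 2) / 0.05) + 1.0  # toy target RDF
g_sim = np.exp(-((r - 1.1) ** 2) / 0.05) + 1.0  # toy CG-simulated RDF
U0 = np.zeros_like(r)
U1 = ibi_update(U0, g_ref, g_sim)                              # match structure
U2 = pressure_correct(U1, r, r_cut=2.5, p_sim=5.0, p_ref=1.0)  # then pressure
```

With the CG pressure too high, the correction adds an attractive ramp and deepens the potential, consistent with the deeper minima reported above.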
Ordering policy for stock-dependent demand rate under progressive payment scheme: a comment
NASA Astrophysics Data System (ADS)
Glock, Christoph H.; Ries, Jörg M.; Schwindl, Kurt
2015-04-01
In a recent paper, Soni and Shah developed a model for finding the optimal ordering policy for a retailer facing stock-dependent demand and a supplier offering a progressive payment scheme. In this comment, we correct several errors in the formulation of the models of Soni and Shah and modify some assumptions to increase the model's applicability. Numerical examples illustrate the benefits of our modifications.
An RFID solution for enhancing inpatient medication safety with real-time verifiable grouping-proof.
Chen, Yu-Yi; Tsai, Meng-Lin
2014-01-01
The occurrence of a medication error can threaten patient safety. The medication administration process is complex and cumbersome, and nursing staff are prone to error when they are tired. Proper information technology (IT) can assist the nurse in correct medication administration. We review a recent proposal for a leading-edge solution that enhances inpatient medication safety by using RFID technology. The proof mechanism is the kernel concept in their design and is worth studying in order to develop a well-designed grouping-proof scheme. Other RFID grouping-proof protocols could similarly be applied to administering physician orders. In this paper, we improve on the weaknesses of previous works and develop a reading-order-independent RFID grouping-proof scheme. In our scheme, tags are queried and verified under the direct control of the authorized reader without connecting to the back-end database server. Immediate verification makes this application more portable and efficient, and critical security issues are analyzed using a threat model. Our scheme is suitable for the safe drug administration scenario and the drug package scenario in a hospital environment, enhancing inpatient medication safety. It automatically checks for the correct drug unit-dose and appropriate inpatient treatments. Copyright © 2013. Published by Elsevier Ireland Ltd.
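The idea of a reading-order-independent grouping proof can be sketched as below. This HMAC-with-XOR-aggregation construction is a simplified stand-in, not the protocol proposed in the paper, and XOR aggregation alone is known to be cryptographically weak; it only illustrates how per-tag responses to one challenge can be combined into a proof that does not depend on the order in which tags are read.

```python
import hmac, hashlib, os

def tag_response(tag_key: bytes, challenge: bytes) -> bytes:
    """Hypothetical per-tag MAC over the reader's challenge."""
    return hmac.new(tag_key, challenge, hashlib.sha256).digest()

def grouping_proof(tag_keys, challenge):
    """XOR-combine per-tag MACs so the proof is independent of read order."""
    proof = bytes(32)
    for k in tag_keys:
        proof = bytes(a ^ b for a, b in zip(proof, tag_response(k, challenge)))
    return proof

def verify(tag_keys, challenge, proof):
    return hmac.compare_digest(grouping_proof(tag_keys, challenge), proof)

keys = [os.urandom(16) for _ in range(3)]  # e.g., drug-unit tags in one package
c = os.urandom(16)                         # fresh challenge from the reader
p = grouping_proof(keys, c)
print(verify(keys, c, p))                   # True
print(verify(list(reversed(keys)), c, p))   # True: reading-order independent
print(verify(keys[:2], c, p))               # False: a tag is missing
```

A missing or substituted tag changes the aggregate, so an incomplete drug package fails verification immediately at the reader.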
Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.
Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin
2012-06-10
The digital pixel driving scheme makes the organic light-emitting diode (OLED) microdisplays more immune to the pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in full digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grow dramatically. This paper will discuss the digital driving ability to achieve kilogray-levels for megapixel displays. The optimal scan strategy is proposed for creating ultra high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay, with 4096 gray levels, is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in the 0.35 μm 3.3 V-6 V dual voltage one polysilicon layer, four metal layers (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show the gray level linearity of the correction schemes for the optimal scan strategy is acceptable by the human eye.
On Formulations of Discontinuous Galerkin and Related Methods for Conservation Laws
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2014-01-01
A formulation for the discontinuous Galerkin (DG) method that leads to solutions using the differential form of the equation (as opposed to the standard integral form) is presented. The formulation includes (a) a derivative calculation that involves only data within each cell with no data interaction among cells, and (b) for each cell, corrections to this derivative that deal with the jumps in fluxes at the cell boundaries and allow data across cells to interact. The derivative with no interaction is obtained by a projection, but for nodal-type methods, evaluating this derivative by interpolation at the nodal points is more economical. The corrections are derived using the approximate (Dirac) delta functions. The formulation results in a family of schemes: different approximate delta functions give rise to different methods. It is shown that the current formulation is essentially equivalent to the flux reconstruction (FR) formulation. Due to the use of approximate delta functions, an energy stability proof simpler than that of Vincent, Castonguay, and Jameson (2011) for a family of schemes is derived. Accuracy and stability of resulting schemes are discussed via Fourier analyses. Similar to FR, the current formulation provides a unifying framework for high-order methods by recovering the DG, spectral difference (SD), and spectral volume (SV) schemes. It also yields stable, accurate, and economical methods.
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
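The air-mass-dependent tuning described above can be sketched as a linear regression of observed-minus-computed radiance departures on a few predictors. The predictors, coefficients, and noise level below are invented for this sketch and are not the scheme's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic departures: observed-minus-computed radiances with an air-mass
# dependent bias; the two predictors (a layer-thickness proxy and a
# scan-angle proxy) are hypothetical
n = 500
X = np.column_stack([np.ones(n),               # constant (global) bias term
                     rng.normal(size=n),       # thickness-like predictor
                     rng.normal(size=n)])      # scan-angle-like predictor
beta_true = np.array([0.8, 0.3, -0.2])
departures = X @ beta_true + rng.normal(scale=0.1, size=n)

# fit the bias model by least squares and subtract it ("tuning" the radiances)
beta_hat, *_ = np.linalg.lstsq(X, departures, rcond=None)
corrected = departures - X @ beta_hat
print(abs(departures.mean()) > 0.5, abs(corrected.mean()) < 1e-8)
```

After subtracting the fitted bias model, the departures are unbiased by construction, which is the property an assimilation system assumes of its observational errors.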
Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.
Shafiey, Hassan; Gan, Xinjun; Waxman, David
2017-11-01
To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
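The "standard" resetting approach described above can be sketched with the Euler method applied to a stochastic differential equation; the square-root diffusion used here is an illustrative choice with a natural boundary at zero, not one of the paper's test cases.

```python
import numpy as np

def euler_with_reset(x0, drift, sigma, dt, n_steps, rng, lower=0.0):
    """'Standard' Euler treatment of a natural boundary: any update that
    lands in the forbidden region x < lower is reset to the boundary.
    (This is the naive scheme the paper shows introduces a spurious force.)"""
    x = x0
    for _ in range(n_steps):
        x += drift(x) * dt + sigma(x) * np.sqrt(dt) * rng.normal()
        if x < lower:
            x = lower  # naive reset
    return x

# toy square-root diffusion dX = (1 - X) dt + sqrt(X) dW, boundary at X = 0
rng = np.random.default_rng(1)
final = [euler_with_reset(0.05,
                          drift=lambda x: 1.0 - x,
                          sigma=lambda x: np.sqrt(max(x, 0.0)),
                          dt=1e-3, n_steps=2000, rng=rng)
         for _ in range(200)]
print(min(final) >= 0.0)  # resetting keeps every trajectory in-bounds
```

The reset keeps trajectories in the allowed region, but, as the paper argues, it biases the dynamics near the boundary; their corrected scheme modifies this step so that an exact property of the diffusion is respected.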
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. Because the parameters depend on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (Rm) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. The RMSE of susceptibility maps calculated with a spatial-domain algorithm was smallest for Rm between 6 and 10 mm and f between 0 and 0.01 mm^-1, and for maps calculated with a Fourier-domain algorithm for Rm between 10 and 15 mm and f between 0 and 0.0091 mm^-1. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
Prediction of fluctuating pressure environments associated with plume-induced separated flow fields
NASA Technical Reports Server (NTRS)
Plotkin, K. J.
1973-01-01
The separated flow environment induced by underexpanded rocket plumes during boost phase of rocket vehicles has been investigated. A simple semi-empirical model for predicting the extent of separation was developed. This model offers considerable computational economy as compared to other schemes reported in the literature, and has been shown to be in good agreement with limited flight data. The unsteady pressure field in plume-induced separated regions was investigated. It was found that fluctuations differed from those for a rigid flare only at low frequencies. The major difference between plume-induced separation and flare-induced separation was shown to be an increase in shock oscillation distance for the plume case. The prediction schemes were applied to PRR shuttle launch configuration. It was found that fluctuating pressures from plume-induced separation are not as severe as for other fluctuating environments at the critical flight condition of maximum dynamic pressure.
A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)
NASA Astrophysics Data System (ADS)
Zhang, H.; Tian, X.
2017-12-01
The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (B) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is widely used to correct errors sequentially from large to small scales. However, introducing the multigrid technique into four-dimensional variational data assimilation is not easy, due to its strong dependence on the adjoint model, which has extremely high computational costs in code development, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, an advanced four-dimensional ensemble-variational method that can be applied without invoking the adjoint model. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, doubling this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting (ARW) model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.
Park, Seok Chan; Kim, Minjung; Noh, Jaegeun; Chung, Hoeil; Woo, Youngah; Lee, Jonghwa; Kemper, Mark S
2007-06-12
The concentration of acetaminophen in a turbid pharmaceutical suspension has been measured successfully using Raman spectroscopy. The spectrometer was equipped with a large-spot probe which enabled the coverage of a representative area during sampling. This wide area illumination (WAI) scheme (coverage area 28.3 mm²) for Raman data collection proved to be more reliable for the compositional determination of these pharmaceutical suspensions, especially when the samples were turbid. The reproducibility of measurement using the WAI scheme was compared to that of a conventional small-spot scheme which employed a much smaller illumination area (about 100 µm spot size). A layer of isobutyric anhydride was placed in front of the sample vials to correct for the variation in Raman intensity due to fluctuation of the laser power. Corrections were accomplished using the isolated carbonyl band of isobutyric anhydride. The acetaminophen concentrations of prediction samples were accurately estimated using a partial least squares (PLS) calibration model. The prediction accuracy was maintained even with changes in laser power. It was noted that the prediction performance was somewhat degraded for turbid suspensions with high acetaminophen contents. When comparing the results of reproducibility obtained with the WAI scheme and those obtained using the conventional scheme, it was concluded that the quantitative determination of the active pharmaceutical ingredient (API) in turbid suspensions is much improved when employing a larger laser coverage area. This is presumably due to the improvement in representative sampling.
Convergence Analysis of Triangular MAC Schemes for Two Dimensional Stokes Equations
Wang, Ming; Zhong, Lin
2015-01-01
In this paper, we consider the use of H(div) elements in the velocity–pressure formulation to discretize Stokes equations in two dimensions. We address the error estimate of the element pair RT0–P0, which is known to be suboptimal, and render the error estimate optimal by the symmetry of the grids and by the superconvergence result of the Lagrange interpolant. By enlarging RT0 such that it becomes a modified BDM-type element, we develop a new discretization BDM1b–P0. We, therefore, generalize the classical MAC scheme on rectangular grids to triangular grids and retain all the desirable properties of the MAC scheme: exact divergence-free, solver-friendly, and local conservation of physical quantities. Further, we prove that the proposed discretization BDM1b–P0 achieves the optimal convergence rate for both velocity and pressure on general quasi-uniform grids, and one and a half order convergence rate for the vorticity and a recovered pressure. We demonstrate the validity of theories developed here by numerical experiments. PMID:26041948
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2014-07-01
This paper examines the potential of different algorithms, based on the Kalman filtering approach, for assimilating near-surface observations into a one-dimensional Richards equation governing soil water flow. Our specific objectives are: (i) to compare the efficiency of different Kalman filter algorithms in retrieving matric pressure head profiles when they are implemented with different numerical schemes of the Richards equation; (ii) to evaluate the performance of these algorithms when nonlinearities arise from the observation equation, i.e., when surface soil water content observations are assimilated to retrieve matric pressure head values. The study is based on a synthetic simulation of an evaporation process from a homogeneous soil column. The first objective is achieved by implementing a standard Kalman filter (SKF) algorithm with both an explicit finite difference scheme (EX) and a Crank-Nicolson (CN) linear finite difference scheme of the Richards equation. The unscented Kalman filter (UKF) and ensemble Kalman filter (EnKF) are applied to handle the nonlinearity of a backward Euler finite difference scheme. To accomplish the second objective, an analogous framework is applied, except that the SKF is replaced by the extended Kalman filter (EKF) in combination with a CN numerical scheme, so as to handle the nonlinearity of the observation equation. While the EX scheme is computationally too inefficient to be implemented in an operational assimilation scheme, the retrieval algorithm implemented with a CN scheme is found to be computationally more feasible and accurate than those implemented with the backward Euler scheme, at least for the examined one-dimensional problem. The UKF appears to be as feasible as the EnKF when one has to handle nonlinear numerical schemes or additional nonlinearities arising from the observation equation, at least for systems of small dimensionality such as the one examined in this study.
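A stochastic EnKF analysis step of the kind used in such retrieval experiments can be sketched as follows. The state dimension, ensemble size, observation operator, and all numbers are invented for this sketch and are not the paper's configuration.

```python
import numpy as np

def enkf_update(ens, obs, obs_op, obs_var, rng):
    """Stochastic EnKF analysis step: the Kalman gain is estimated from the
    ensemble, and each member assimilates a perturbed copy of the observation.
    obs_op maps a state vector into observation space."""
    n_ens = ens.shape[1]
    Hx = np.stack([obs_op(ens[:, i]) for i in range(n_ens)], axis=1)
    A = ens - ens.mean(axis=1, keepdims=True)
    HA = Hx - Hx.mean(axis=1, keepdims=True)
    Pxh = A @ HA.T / (n_ens - 1)                 # state-obs covariance
    Phh = HA @ HA.T / (n_ens - 1)                # obs-obs covariance
    K = Pxh @ np.linalg.inv(Phh + obs_var * np.eye(Hx.shape[0]))
    perturbed = obs[:, None] + np.sqrt(obs_var) * rng.normal(size=Hx.shape)
    return ens + K @ (perturbed - Hx)

# toy setting: 10-node matric-pressure-head profile, only the surface node
# observed
rng = np.random.default_rng(2)
prior = rng.normal(loc=-1.0, scale=0.5, size=(10, 50))
obs = np.array([-0.2])
post = enkf_update(prior, obs, obs_op=lambda x: x[:1], obs_var=0.01, rng=rng)
print(abs(post[0].mean() - obs[0]) < abs(prior[0].mean() - obs[0]))
```

Because the gain is built from ensemble statistics, no adjoint or tangent-linear model is needed, which is why the EnKF (and the UKF) can wrap a nonlinear numerical scheme directly.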
Schroeder, Elizabeth C; Rosenberg, Alexander J; Hilgenkamp, Thessa I M; White, Daniel W; Baynard, Tracy; Fernhall, Bo
2017-12-01
To evaluate changes in arterial stiffness with positional change and whether the stiffness changes are due to hydrostatic pressure alone or whether physiological changes in vasoconstriction of the conduit arteries play a role in the modulation of arterial stiffness. The upper bodies of thirty participants (15 male, 24 ± 4 years) were positioned at 0, 45, and 72° angles. Pulse wave velocity (PWV), cardio-ankle vascular index, carotid beta-stiffness index, carotid blood pressure (cBP), and carotid diameters were measured at each position. A gravitational height correction was determined using the vertical fluid column distance (mmHg) between the heart and the carotid artery. Carotid beta-stiffness was calibrated using three methods: non-height-corrected cBP of each position, height-corrected cBP of each position, and height-corrected cBP of the supine position (theoretical model). Low-frequency systolic blood pressure variability (LFSAP) was analyzed as a marker of sympathetic activity. PWV and cardio-ankle vascular index increased with position (P < 0.05). Carotid beta-stiffness did not increase when not corrected for hydrostatic pressure. Arterial stiffness indices based on Method 2 were not different from Method 3 (P = 0.65). LFSAP increased in more upright positions (P < 0.05) but diastolic diameter relative to diastolic pressure did not (P > 0.05). Arterial stiffness increases with a more upright body position. Carotid beta-stiffness needs to be calibrated accounting for the hydrostatic effects of gravity if measured in a seated position. It is unclear why PWV increased, as this increase was independent of blood pressure. The absence of a difference between Methods 2 and 3 presumably indicates that the beta-stiffness increases are only pressure dependent, despite the increase in vascular sympathetic modulation.
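The gravitational height correction described above amounts to converting the vertical heart-to-carotid blood column into mmHg and folding it into the measured cBP. A minimal sketch; the nominal blood density and the sign convention are our assumptions, not values stated in the study:

```python
RHO_BLOOD = 1060.0    # kg/m^3, nominal blood density (assumed value)
G = 9.81              # m/s^2, gravitational acceleration
PA_PER_MMHG = 133.322  # pascals per mmHg

def hydrostatic_mmhg(height_m):
    # Pressure of a vertical blood column of the given height, in mmHg.
    return RHO_BLOOD * G * height_m / PA_PER_MMHG

def height_corrected_cbp(cbp_mmhg, heart_to_carotid_m):
    # When the carotid measurement site sits above heart level, the
    # measured cBP understates heart-level pressure by the hydrostatic
    # column; adding the column back is the height correction
    # (sign convention assumed here).
    return cbp_mmhg + hydrostatic_mmhg(heart_to_carotid_m)
```

For a typical seated heart-to-carotid distance of about 25 cm, the correction is on the order of 19-20 mmHg, which is why uncorrected seated beta-stiffness values differ so markedly from supine ones.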
Kim, Jung Hyeun; Mulholland, George W.; Kukuck, Scott R.; Pui, David Y. H.
2005-01-01
The slip correction factor has been investigated at reduced pressures and high Knudsen number using polystyrene latex (PSL) particles. Nano-differential mobility analyzers (NDMA) were used in determining the slip correction factor by measuring the electrical mobility of 100.7 nm, 269 nm, and 19.90 nm particles as a function of pressure. The aerosol was generated via electrospray to avoid multiplets for the 19.90 nm particles and to reduce the contaminant residue on the particle surface. System pressure was varied down to 8.27 kPa, enabling slip correction measurements for Knudsen numbers as large as 83. A condensation particle counter was modified for low pressure application. The slip correction factor obtained for the three particle sizes is fitted well by the equation: C = 1 + Kn (α + β exp(−γ/Kn)), with α = 1.165, β = 0.483, and γ = 0.997. The first quantitative uncertainty analysis for slip correction measurements was carried out. The expanded relative uncertainty (95 % confidence interval) in measuring slip correction factor was about 2 % for the 100.7 nm SRM particles, about 3 % for the 19.90 nm PSL particles, and about 2.5 % for the 269 nm SRM particles. The major sources of uncertainty are the diameter of particles, the geometric constant associated with NDMA, and the voltage. PMID:27308102
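The fitted slip correction expression can be evaluated directly. A small sketch using the constants reported above, together with the usual definition Kn = 2λ/d of the Knudsen number (the latter is standard usage, not restated in the abstract):

```python
import math

# Constants fitted in this study: C = 1 + Kn * (alpha + beta * exp(-gamma/Kn))
ALPHA, BETA, GAMMA = 1.165, 0.483, 0.997

def slip_correction(kn):
    # Cunningham-type slip correction factor at Knudsen number Kn.
    return 1.0 + kn * (ALPHA + BETA * math.exp(-GAMMA / kn))

def knudsen_number(mean_free_path, diameter):
    # Kn = 2*lambda/d for a particle of the given diameter (same units).
    return 2.0 * mean_free_path / diameter
```

At the largest Knudsen number reached in the study (Kn ≈ 83) the factor is roughly 137, i.e. the drag on the particle is reduced by more than two orders of magnitude relative to continuum Stokes drag.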
New optimization scheme to obtain interaction potentials for oxide glasses
NASA Astrophysics Data System (ADS)
Sundararaman, Siddharth; Huang, Liping; Ispas, Simona; Kob, Walter
2018-05-01
We propose a new scheme to parameterize effective potentials that can be used to simulate atomic systems such as oxide glasses. As input data for the optimization, we use the radial distribution functions of the liquid and the vibrational density of state of the glass, both obtained from ab initio simulations, as well as experimental data on the pressure dependence of the density of the glass. For the case of silica, we find that this new scheme facilitates finding pair potentials that are significantly more accurate than the previous ones even if the functional form is the same, thus demonstrating that even simple two-body potentials can be superior to more complex three-body potentials. We have tested the new potential by calculating the pressure dependence of the elastic moduli and found a good agreement with the corresponding experimental data.
A massively parallel computational approach to coupled thermoelastic/porous gas flow problems
NASA Technical Reports Server (NTRS)
Shia, David; Mcmanus, Hugh L.
1995-01-01
A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
NASA Astrophysics Data System (ADS)
Kleinböhl, Armin; Friedson, A. James; Schofield, John T.
2017-01-01
The remote sounding of infrared emission from planetary atmospheres using limb-viewing geometry is a powerful technique for deriving vertical profiles of structure and composition on a global scale. Compared with nadir viewing, limb geometry provides enhanced vertical resolution and greater sensitivity to atmospheric constituents. However, standard limb profile retrieval techniques assume spherical symmetry and are vulnerable to biases produced by horizontal gradients in atmospheric parameters. We present a scheme for the correction of horizontal gradients in profile retrievals from limb observations of the martian atmosphere. It characterizes horizontal gradients in temperature, pressure, and aerosol extinction along the line-of-sight of a limb view through neighboring measurements, and represents these gradients by means of two-dimensional radiative transfer in the forward model of the retrieval. The scheme is applied to limb emission measurements from the Mars Climate Sounder instrument on Mars Reconnaissance Orbiter. Retrieval simulations using data from numerical models indicate that biases of up to 10 K in the winter polar region, obtained with standard retrievals using spherical symmetry, are reduced to about 2 K in most locations by the retrieval with two-dimensional radiative transfer. Retrievals from Mars atmospheric measurements suggest that the two-dimensional radiative transfer greatly reduces biases in temperature and aerosol opacity caused by observational geometry, predominantly in the polar winter regions.
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
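The inter-grid transfer operators described above (direct injection for variables, full weighting for residuals, bilinear interpolation for the coarse-grid correction) can be sketched for a 2-D node-centered grid. This is a generic NumPy illustration of those operators, not FMG3D's Fortran 90 code:

```python
import numpy as np

def restrict_inject(u_fine):
    # Direct injection: each coarse node takes the coincident fine value.
    return u_fine[::2, ::2]

def restrict_full_weighting(r_fine):
    # Full weighting of residuals on interior nodes: 9-point stencil with
    # weights 1/4 (center), 1/8 (edge neighbors), 1/16 (corner neighbors).
    r = r_fine
    rc = restrict_inject(r).copy()
    rc[1:-1, 1:-1] = (
        0.25 * r[2:-2:2, 2:-2:2]
        + 0.125 * (r[1:-3:2, 2:-2:2] + r[3:-1:2, 2:-2:2]
                   + r[2:-2:2, 1:-3:2] + r[2:-2:2, 3:-1:2])
        + 0.0625 * (r[1:-3:2, 1:-3:2] + r[1:-3:2, 3:-1:2]
                    + r[3:-1:2, 1:-3:2] + r[3:-1:2, 3:-1:2])
    )
    return rc

def prolong_bilinear(u_coarse):
    # Bilinear interpolation of the coarse-grid correction to the fine grid.
    n = u_coarse.shape[0]
    uf = np.zeros((2 * n - 1, 2 * n - 1))
    uf[::2, ::2] = u_coarse                                  # coincident nodes
    uf[1::2, ::2] = 0.5 * (uf[:-2:2, ::2] + uf[2::2, ::2])   # odd rows
    uf[::2, 1::2] = 0.5 * (uf[::2, :-2:2] + uf[::2, 2::2])   # odd columns
    uf[1::2, 1::2] = 0.25 * (uf[:-2:2, :-2:2] + uf[:-2:2, 2::2]
                             + uf[2::2, :-2:2] + uf[2::2, 2::2])
    return uf
```

Bilinear prolongation reproduces linear fields exactly, and the full-weighting stencil preserves constants, which are the basic consistency checks for such transfer operators.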
The Root Cause of the Overheating Problem
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
2017-01-01
Previously we identified the receding flow, in which two fluid streams recede from each other, as an open numerical problem, because all well-known numerical fluxes give an anomalous temperature rise, hence the name overheating problem. This phenomenon, although presented in several textbooks and many previous publications, has scarcely been satisfactorily addressed, and the root cause of the overheating problem has not been well understood. We found that this temperature rise was solely connected to entropy rise and proposed to use the method of characteristics to eradicate the problem. However, the root cause of the entropy production was still unclear. In the present study, we identify the cause of this problem: the entropy rise is rooted in the pressure flux in a finite volume formulation and is implanted at the first time step. It is found to be theoretically inevitable for all existing numerical flux schemes used in the finite volume setting, as confirmed by numerical tests. This difficulty cannot be eliminated by manipulating the time step, grid size, spatial accuracy, etc., although the rate of overheating depends on the flux scheme used. Finally, we incorporate the entropy transport equation, in place of the energy equation, to ensure preservation of entropy, thus correcting this temperature anomaly. Its applicability is demonstrated for some relevant 1D and 2D problems. The present study thus validates that the entropy generated ab initio is the genesis of the overheating problem.
Orżanowski, Tomasz
2016-01-01
This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm that is easy to implement in hardware. The proposed NUC algorithm is based on the linear correction scheme, with a useful method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction uses the change in pixel response between the actual operating conditions and the reference conditions, determined by means of a shutter, to compensate for the temporal drift of the pixel offsets. Moreover, it also removes any optical shading effect from the output image. To demonstrate the efficiency of the proposed NUC algorithm, test results for a microbolometer IRFPA are presented.
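The shutter-based offset update described above can be sketched as follows; the table names and the exact update rule are our reading of the abstract, not the paper's specification:

```python
import numpy as np

def nuc_apply(raw, gain, offset):
    # Linear (two-point) nonuniformity correction with per-pixel
    # gain and offset tables.
    return gain * raw + offset

def update_offsets(offset_ref, shutter_ref, shutter_now, gain):
    # Shutter-based drift compensation: the change of each pixel's
    # response to the closed shutter between the reference and the
    # current operating conditions is folded into the offset table,
    # so slow offset drift (and optics shading) is removed without
    # a full two-point recalibration.
    return offset_ref - gain * (shutter_now - shutter_ref)
```

With this rule, any additive drift common to the scene frame and the shutter frame cancels exactly in the corrected output.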
Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A
2009-11-07
Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original raw data using a three-step correction procedure that works directly with each detector element. Computation times are minimized by implementing the correction process entirely on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the raw data domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e., air, soft tissue, and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat-detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of the metal artifacts caused by the implants (deviations of CT values after correction compared to measurements without metallic inserts were typically reduced to below 20 HU, and differences in image noise to below 5 HU) and no significant resolution losses, even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images of the knee, head, and spine regions were used to investigate the effectiveness and applicability of the method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction and 46.2 s for the final corrected image, compared to 114.1 s and 355.1 s on central processing units (CPUs)).
NASA Technical Reports Server (NTRS)
Li, Xiaowen; Tao, Wei-Kuo; Khain, Alexander P.; Simpson, Joanne; Johnson, Daniel E.
2009-01-01
Part I of this paper compares two simulations, one using a bulk and the other a detailed bin microphysical scheme, of a long-lasting, continental mesoscale convective system with leading convection and trailing stratiform region. Diagnostic studies and sensitivity tests are carried out in Part II to explain the simulated contrasts in the spatial and temporal variations by the two microphysical schemes and to understand the interactions between cloud microphysics and storm dynamics. It is found that the fixed raindrop size distribution in the bulk scheme artificially enhances the rain evaporation rate and produces a stronger near-surface cool pool compared with the bin simulation. In the bulk simulation, the cool pool circulation dominates the near-surface environmental wind shear, in contrast to the near-balance between cool pool and wind shear in the bin simulation. This is the main reason for the contrasting quasi-steady states simulated in Part I. Sensitivity tests also show that the large amounts of fast-falling hail produced in the original bulk scheme not only result in a narrow trailing stratiform region but also act to further exacerbate the strong cool pool simulated in the bulk parameterization. An empirical formula for a correction factor, r(q_r) = 0.11 q_r^(-1.27) + 0.98, is developed to correct the overestimation of rain evaporation in the bulk model, where r is the ratio of the rain evaporation rate between the bulk and bin simulations and q_r (g/kg) is the rain mixing ratio. This formula offers a practical fix for the rain evaporation parameterization in the simple bulk scheme.
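The empirical correction factor can be applied directly. A small sketch; the use of division to undo the bulk overestimation is our reading of the abstract, since r is defined as the bulk-to-bin ratio:

```python
def evap_correction_factor(q_r):
    # Empirical bulk-to-bin rain-evaporation ratio from the paper:
    # r(q_r) = 0.11 * q_r**(-1.27) + 0.98, with q_r in g/kg (q_r > 0).
    return 0.11 * q_r ** (-1.27) + 0.98

def corrected_evaporation(bulk_rate, q_r):
    # Dividing the bulk-scheme rate by r(q_r) rescales it toward the
    # bin result (our reading of how the correction is meant to apply).
    return bulk_rate / evap_correction_factor(q_r)
```

Note that r grows rapidly at small rain mixing ratios, so the bulk scheme's overestimation of evaporation is worst in light rain, consistent with the fixed-raindrop-size-distribution argument above.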
A Constrained Scheme for High Precision Downward Continuation of Potential Field Data
NASA Astrophysics Data System (ADS)
Wang, Jun; Meng, Xiaohong; Zhou, Zhiwen
2018-04-01
To further improve the accuracy of the downward continuation of potential field data, we present a novel constrained scheme combining the ideas of truncated Taylor series expansion, principal component analysis, iterative continuation, and prior constraints. In the scheme, the initial downward continued field on the target plane is obtained from the original measured field using the truncated Taylor series expansion method. If the original field has a particularly low signal-to-noise ratio, principal component analysis is utilized to suppress the influence of noise. Then, the downward continued field is upward continued to the plane of the prior information. If the prior information is on the target plane, it is upward continued over a short distance to get the updated prior information. Next, the difference between the calculated field and the updated prior information is computed. A cosine attenuation function is adopted to define the scope of the constraint and the corresponding modification term. Afterward, the downward continued field on the target plane is corrected by adding the modification term. The correction process is iteratively repeated until the difference meets the convergence condition. The accuracy of the proposed constrained scheme is tested on synthetic data with and without noise. Numerous model tests demonstrate that downward continuation using the constrained strategy yields more precise results than downward continuation methods without constraints and is relatively insensitive to noise, even for downward continuation over a large distance. Finally, the proposed scheme is applied to real magnetic data collected over the Dapai polymetallic deposit in Fujian Province, South China. This practical application also indicates the superiority of the presented scheme.
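The upward-continuation operator and the basic iterative downward-continuation loop underlying such schemes can be sketched spectrally. This simplified illustration omits the Taylor-series initialization, PCA denoising, cosine attenuation, and prior constraint that distinguish the paper's scheme:

```python
import numpy as np

def upward_continue(field, dz, dx=1.0):
    # Spectral upward continuation by dz > 0: each Fourier mode of the
    # gridded field is attenuated by exp(-|k| dz).
    ky = 2.0 * np.pi * np.fft.fftfreq(field.shape[0], dx)
    kx = 2.0 * np.pi * np.fft.fftfreq(field.shape[1], dx)
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * dz)))

def downward_continue_iterative(field, dz, n_iter=100, dx=1.0):
    # Basic iterative downward continuation: adjust the estimate on the
    # lower plane until its upward continuation reproduces the measured
    # field. (The paper's scheme builds its constraints on top of this
    # general idea; here noise amplification is left uncontrolled.)
    est = field.copy()
    for _ in range(n_iter):
        est += field - upward_continue(est, dz, dx)
    return est
```

For noise-free, band-limited data this loop converges to the true lower-plane field; with noisy data the high-wavenumber amplification is exactly what constraints such as the paper's are designed to tame.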
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
Ensuring correct rollback recovery in distributed shared memory systems
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. Kent
1995-01-01
Distributed shared memory (DSM) implemented on a cluster of workstations is an increasingly attractive platform for executing parallel scientific applications. Checkpointing and rollback techniques can be used in such a system to allow the computation to progress in spite of the temporary failure of one or more processing nodes. This paper presents the design of an independent checkpointing method for DSM that takes advantage of DSM's specific properties to reduce error-free and rollback overhead. The scheme reduces the dependencies that need to be considered for correct rollback to those resulting from transfers of pages. Furthermore, in-transit messages can be recovered without the use of logging. We extend the scheme to a DSM implementation using lazy release consistency, where the frequency of dependencies is further reduced.
Presumptive identification of streptococci with a new test system.
Facklam, R R; Thacker, L G; Fox, B; Eriquez, L
1982-01-01
A test is described that could replace bacitracin susceptibility for presumptive identification of group A streptococci, as well as 6.5% NaCl agar tolerance for presumptive identification of enterococcal streptococci. The L-pyrrolidonyl-beta-naphthylamide test, based on hydrolysis of pyrrolidonyl-beta-naphthylamide, was used in conjunction with the CAMP and bile-esculin tests to presumptively identify the streptococci. Among the beta-hemolytic streptococci, 98% of 50 group A strains, 98% of 46 group B strains, and 100% of 70 strains that were not group A, B, or D were correctly identified by the new presumptive test scheme. Among the non-beta-hemolytic streptococci, 96% of 74 group D enterococcal, 100% of 30 group D nonenterococcal, and 82% of 112 viridans strains were correctly identified by the new presumptive test scheme. PMID:7050157
Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia
2018-01-01
Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
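The idea of measuring an encoding scheme against the channel it actually faces can be illustrated with the textbook capacity of a binary symmetric channel. This is a generic sketch, not the paper's "average channel capacity" definition, and the code parameters are placeholders:

```python
import math

def h2(p):
    # Binary entropy function in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p):
    # Shannon capacity (bits per channel use) of a binary symmetric
    # channel with crossover probability p: a simple yardstick against
    # which the rate of an error-correcting code can be compared.
    return 1.0 - h2(p)

def max_correctable_fraction(n, t):
    # A t-error-correcting block code of length n tolerates up to a
    # fraction t/n of flipped bits per codeword.
    return t / n
```

A code whose rate exceeds the channel capacity at the observed bit-flip rate cannot decode reliably, which is the kind of mismatch between Reed-Solomon coding and nonlocally distributed speckle noise that an average-capacity metric makes quantitative.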
Continuous light absorption photometer for long-term studies
NASA Astrophysics Data System (ADS)
Ogren, John A.; Wendell, Jim; Andrews, Elisabeth; Sheridan, Patrick J.
2017-12-01
A new photometer is described for continuous determination of the aerosol light absorption coefficient, optimized for long-term studies of the climate-forcing properties of aerosols. Measurements of the light attenuation coefficient are made at blue, green, and red wavelengths, with a detection limit of 0.02 Mm-1 and a precision of 4 % for hourly averages. The uncertainty of the light absorption coefficient is primarily determined by the uncertainty of the correction scheme commonly used to convert the measured light attenuation to light absorption coefficient and ranges from about 20 % at sites with high loadings of strongly absorbing aerosols up to 100 % or more at sites with low loadings of weakly absorbing aerosols. Much lower uncertainties (ca. 40 %) for the latter case can be achieved with an advanced correction scheme.
Second derivatives for approximate spin projection methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Implement a Sub-grid Turbulent Orographic Form Drag in WRF and its application to Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zhou, X.; Yang, K.; Wang, Y.; Huang, B.
2017-12-01
Sub-grid-scale orographic variation exerts turbulent form drag on atmospheric flows. The Weather Research and Forecasting model (WRF) includes a turbulent orographic form drag (TOFD) scheme that adds the stress to the surface layer. In this study, another TOFD scheme has been incorporated into WRF3.7, which exerts an exponentially decaying drag on each model layer. To investigate the effect of the new scheme, WRF with both the old and the new scheme was used to simulate the climate over the complex terrain of the Tibetan Plateau. The two schemes were evaluated in terms of the direct impact (on wind) and the indirect impact (on air temperature, surface pressure, and precipitation). In both winter and summer, the new TOFD scheme reduces the mean bias in the surface wind and clearly reduces the root mean square errors (RMSEs) in comparison with station measurements (Figure 1). Meanwhile, the 2-m air temperature and surface pressure are also improved (Figure 2), owing to stronger northward transport of warm air across the southern boundary of the Tibetan Plateau in winter. The 2-m air temperature is hardly improved in summer, but the precipitation improvement is more obvious, with reduced mean bias and RMSEs. This is due to the weakening of the low-level water vapor flux crossing the Himalayan Mountains from South Asia under the new scheme.
NASA Astrophysics Data System (ADS)
Minotti, Luca; Savaré, Giuseppe
2018-02-01
We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
Bio-inspired adaptive feedback error learning architecture for motor control.
Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo
2012-10-01
This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this representation to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. We then show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. Finally, we evaluate how the scheme scales for simulated plants with a high number of degrees of freedom (7 DOFs).
Deans, Zandra C; Tull, Justyna; Beighton, Gemma; Abbs, Stephen; Robinson, David O; Butler, Rachel
2011-11-01
Laboratories are increasingly required to perform molecular tests for the detection of mutations in the KRAS gene in metastatic colorectal cancers to allow better clinical management and more effective treatment for these patients. KRAS mutation status predicts a patient's likely response to the monoclonal antibody cetuximab. To provide a high standard of service, these laboratories require external quality assessment (EQA) to monitor the level of laboratory output and measure the performance of the laboratory against other service providers. National External Quality Assurance Services for Molecular Genetics provided a pilot EQA scheme for KRAS molecular analysis in metastatic colorectal cancers during 2009. Very few genotyping errors were reported by participating laboratories; however, the reporting nomenclature of the genotyping results varied considerably between laboratories. The pilot EQA scheme highlighted the need for continuing EQA in this field which will assess the laboratories' ability not only to obtain accurate, reliable results but also to interpret them safely and correctly ensuring that the referring clinician has the correct information to make the best clinical therapeutic decision for their patient.
An O(Nm(sup 2)) Plane Solver for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.
1999-01-01
A hierarchical multigrid algorithm for efficient steady solutions to the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect correction implicit equation. Multigrid analyses which include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line-implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles to attain convergence is dependent on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous flow simulations on highly stretched meshes.
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT) because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results. Then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous backgrounds are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results and the detail preservation and noise suppression of L1 regularization.
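The two per-iteration correction steps described above, DCT-domain filtering of the intermediate result followed by an L1 sparsity step, can be sketched as follows. The low-pass mask and threshold value are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(x, keep=0.25):
    # Keep only the lowest-frequency fraction `keep` of DCT coefficients
    # along each axis; a crude stand-in for the paper's 3-D DCT filtering
    # of the intermediate reconstruction.
    c = dctn(x, norm="ortho")
    mask = np.zeros_like(c)
    mask[tuple(slice(0, max(1, int(keep * s))) for s in x.shape)] = 1.0
    return idctn(c * mask, norm="ortho")

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm: shrinks small entries to zero,
    # enforcing the sparsity constraint on the update.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def correction_pass(volume, keep=0.25, lam=0.01):
    # One filtering-plus-sparsity correction applied to an intermediate
    # volume estimate between Tikhonov iterations.
    return soft_threshold(dct_lowpass(volume, keep), lam)
```

The low-pass step suppresses high-frequency noise in the intermediate estimate, while the soft threshold damps the smooth, low-amplitude background while leaving strong sparse targets largely intact.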
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Nguyen, H. T.; Tran, H.
2018-03-01
In this work, we show that precise predictions of the shapes of H2O rovibrational lines broadened by N2, over a wide pressure range, can be made using simulations corrected by a single measurement. For that, we use the partially-correlated speed-dependent Keilson-Storer (pcsdKS) model, whose parameters are deduced from molecular dynamics simulations and semi-classical calculations. This model takes into account collision-induced velocity-change effects, the speed dependences of the collisional line width and shift, and the correlation between velocity and internal-state changes. For each considered transition, the model is corrected by using a parameter deduced from its broadening coefficient measured at a single pressure. The corrected-pcsdKS model is then used to simulate spectra over a wide pressure range. Direct comparisons of the corrected-pcsdKS calculated and measured spectra of 5 rovibrational lines of H2O for various pressures, from 0.1 to 1.2 atm, show very good agreement. The maximum differences are in most cases well below 1%, much smaller than the residuals obtained when fitting the measurements with the Voigt line shape. This shows that the present procedure can be used to predict H2O line shapes for various pressure conditions, and thus that the simulated spectra can be used to deduce refined line-shape parameters to complete spectroscopic databases in the absence of relevant experimental values.
Using farmers' attitude and social pressures to design voluntary Bluetongue vaccination strategies.
Sok, J; Hogeveen, H; Elbers, A R W; Oude Lansink, A G J M
2016-10-01
Understanding the context and drivers of farmers' decision-making is critical to designing successful voluntary disease control interventions. This study uses a questionnaire based on the Reasoned Action Approach framework to assess the determinants of farmers' intention to participate in a hypothetical reactive vaccination scheme against Bluetongue. Results suggest that farmers' attitude and social pressures best explained intention. A mix of policy instruments can be used in a complementary way to motivate voluntary vaccination, based on the finding that participation is influenced by both internal and external motivation. Next to informational and incentive-based instruments, social pressures, which stem from different types of perceived norms, can spur farmers' vaccination behaviour and serve as catalysts in voluntary vaccination schemes. Copyright © 2016 Elsevier B.V. All rights reserved.
Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks
NASA Astrophysics Data System (ADS)
Shimizu, Kaoru; Imoto, Nobuyuki
2002-03-01
This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes with a short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and better tolerant of bit-flip errors.
Intelligent Power Swing Detection Scheme to Prevent False Relay Tripping Using S-Transform
NASA Astrophysics Data System (ADS)
Mohamad, Nor Z.; Abidin, Ahmad F.; Musirin, Ismail
2014-06-01
Distance relays are equipped with an out-of-step tripping scheme to ensure correct relay operation during power swings. The out-of-step condition results from an unstable power swing; handling it requires proper detection of the power swing to initiate a tripping signal, followed by separation of the unstable part from the rest of the power system. Distinguishing unstable swings from stable swings poses a challenging task. This paper presents an intelligent approach to power swing detection based on the S-Transform signal processing tool. The proposed scheme uses S-Transform features of the active power at the distance relay measurement point. It is demonstrated that the proposed scheme is able to detect unstable swings and discriminate them from stable swings occurring in the system. To ascertain the validity of the proposed scheme, simulations were carried out on the IEEE 39-bus system, and its performance was compared with a wavelet transform-based power swing detection scheme.
Viscous pressure correction in the irrotational flow outside Prandtl's boundary layer
NASA Astrophysics Data System (ADS)
Joseph, Daniel; Wang, Jing
2004-11-01
We argue that boundary layers on a solid with irrotational motion outside are like those on a gas bubble, because the shear stress vanishes at the edge of the boundary layer but the irrotational shear stress does not. This discrepancy induces a pressure correction and an additional drag which can be attributed to the viscous dissipation of the irrotational flow. Typically, this extra correction to the drag is relatively small. A much more interesting implication of the extra-pressure theory arises from considering the effects of viscosity on the normal stress at a solid boundary, which are entirely neglected in Prandtl's theory. It is well known and easily demonstrated that, as a consequence of the continuity equation, the viscous normal stress must vanish on a rigid solid. It follows that all the greatly important effects of viscosity on the normal stress are buried in the pressure, and that the leading-order effects of viscosity on the normal stress can be obtained from the viscous correction of viscous potential flow.
Criterion for correct recalls in associative-memory neural networks
NASA Astrophysics Data System (ADS)
Ji, Han-Bing
1992-12-01
A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple-training models are instances of the WOPL model with certain sets of learning weights. A necessary condition on the learning weights for the convergence of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. An important parameter called the signal-to-noise-ratio gain (SNRG) is devised, and it is found empirically that each SNRG has its own threshold value, meaning that a fundamental memory can be correctly recalled when its corresponding SNRG is greater than or equal to its threshold value. Furthermore, a theorem is given, and theoretical results on the conditions on SNRGs and learning weights for good associative recall performance of the WOPL model are accordingly obtained. In principle, when all SNRGs or learning weights satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate, in a certain known stochastic sense, for AMNNs, and the WOPL model can thus achieve correct recall of all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
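The WOPL rule itself is compact: each stored pattern's outer product enters the weight matrix with its own learning weight, and equal weights recover the plain Hopfield instance mentioned above. A minimal numpy sketch with bipolar patterns and synchronous recall; the pattern sizes, weights, and probe noise below are illustrative assumptions.

```python
import numpy as np

def wopl_weights(patterns, weights):
    """Weighted outer-product learning: W = sum_k w_k * x_k x_k^T, zero diagonal.
    With all w_k equal this reduces to the standard Hopfield rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for w, x in zip(weights, patterns):
        W += w * np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, max_iter=50):
    """Synchronous recall: iterate s <- sign(W s) until a fixed point."""
    s = probe.copy()
    for _ in range(max_iter):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s
```

With two orthogonal 64-bit patterns and equal weights, each stored pattern is a fixed point and a probe with a few flipped bits converges back to the nearest memory.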
Pressure-Sensitive Paint Measurements on Surfaces with Non-Uniform Temperature
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.
1999-01-01
Pressure-sensitive paint (PSP) has become a useful tool to augment conventional pressure taps in measuring the surface pressure distribution of aerodynamic components in wind tunnel testing. While the PSP offers the advantage of a non-intrusive global mapping of the surface pressure, one prominent drawback to the accuracy of this technique is the inherent temperature sensitivity of the coating's luminescent intensity. A typical aerodynamic surface PSP test has relied on the coated surface to be both spatially and temporally isothermal, along with conventional instrumentation for an in situ calibration to generate the highest accuracy pressure mappings. In some tests however, spatial and temporal thermal gradients are generated by the nature of the test as in a blowing jet impinging on a surface. In these cases, the temperature variations on the painted surface must be accounted for in order to yield high accuracy and reliable data. A new temperature correction technique was developed at NASA Lewis to collapse a "family" of PSP calibration curves to a single intensity ratio versus pressure curve. This correction allows a streamlined procedure to be followed whether or not temperature information is used in the data reduction of the PSP. This paper explores the use of conventional instrumentation such as thermocouples and pressure taps along with temperature-sensitive paint (TSP) to correct for the thermal gradients that exist in aeropropulsion PSP tests. Temperature corrected PSP measurements for both a supersonic mixer ejector and jet cavity interaction tests are presented.
Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP
NASA Technical Reports Server (NTRS)
Panda, Jayanta
2016-01-01
Typically a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of the fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel, placed on the floor of a wind tunnel, that was subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast-response Pressure Sensitive Paint (PSP), which allowed time-resolved measurement of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, to correct for spatial non-uniformities and lamp brightness variation, and finally to convert fluctuating light intensity to fluctuating pressure. The latter involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence, and phase calculated via PSP and via the dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, for which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step, only a selected number of pixels were chosen as "virtual sensors" and a correlation-length-based buffet calculation procedure was applied to determine "modeled" force fluctuations.
By progressively decreasing the number of virtual sensors it was observed that the present calculation procedure was able to make a close estimate of the "true" unsteady forces from only four sensors. It is believed that the present work provides the first validation of the buffet calculation procedure, which has been used in the development of many space vehicles.
Continuous-Reading Cryogen Level Sensor
NASA Technical Reports Server (NTRS)
Barone, F. E.; Fox, E.; Macumber, S.
1984-01-01
Two pressure transducers used in system for measuring amount of cryogenic liquid in tank. System provides continuous measurements accurate within 0.03 percent. Sensors determine pressure in liquid and vapor in tank. Microprocessor uses pressure difference to compute mass of cryogenic liquid in tank. New system allows continuous sensing; unaffected by localized variations in composition and density as are capacitance-sensing schemes.
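The pressure-difference computation hinted at above can be made concrete for the simplest geometry. Assuming a tank of uniform cross-section A (my assumption, not stated in the brief), hydrostatics gives delta_p = rho*g*h, so the liquid mass m = rho*A*h = delta_p*A/g: the density cancels entirely, consistent with the brief's claim of insensitivity to localized composition and density variations.

```python
G0 = 9.80665  # standard gravity, m/s^2

def cryogen_mass(p_liquid, p_vapor, tank_area):
    """Mass of liquid in a uniform-cross-section tank from the hydrostatic
    pressure difference: delta_p = rho*g*h and m = rho*A*h, hence
    m = delta_p * A / g, independent of the liquid density."""
    return (p_liquid - p_vapor) * tank_area / G0
```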
Correction of Altitude-Induced Changes in Performance of the Volumetric Diffusive Respirator
2017-04-05
AFRL-SA-WP-SR-2017-0007. Thomas Blakeman, MSc RRT. April 2017. Air Force Research Laboratory, 711th Human Performance Wing, U.S. Air Force School of Aerospace [truncated]. Abstract fragment: "... to a plateau pressure. The positive pressure delivery of each percussive pulse is followed by a passive fall in pressure as the spring moves the ..."
Performance Analysis of a Wind Turbine Driven Swash Plate Pump for Large Scale Offshore Applications
NASA Astrophysics Data System (ADS)
Buhagiar, D.; Sant, T.
2014-12-01
This paper deals with the performance modelling and analysis of offshore wind turbine-driven hydraulic pumps. The concept consists of an open-loop hydraulic system with the rotor main shaft directly coupled to a swash plate pump supplying pressurised sea water. A mathematical model is derived to capture the steady-state behaviour of the entire system. A simplified model of the pump is implemented together with different control scheme options for regulating the rotor shaft power. A new control scheme is investigated, based on the combined use of hydraulic pressure and pitch control. Using a steady-state analysis, the study shows how the adoption of alternative control schemes in the wind turbine-hydraulic pump system may result in higher energy yields than those from a conventional system with an electrical generator and standard pitch control for power regulation. This is particularly the case for the new control scheme investigated in this study, which is based on the combined use of pressure and rotor blade pitch control.
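As a rough sketch of the steady-state pump relations underlying such a model (the efficiency figures and simple proportionalities are illustrative assumptions, not the paper's model): flow scales with displacement and shaft speed, and shaft power with flow times delivery pressure, which is why regulating line pressure can regulate rotor shaft loading.

```python
def pump_flow(displacement_per_rev, shaft_speed_rpm, vol_eff=0.95):
    """Ideal pump flow (m^3/s) scaled by volumetric efficiency;
    displacement is in m^3 per revolution."""
    return displacement_per_rev * shaft_speed_rpm / 60.0 * vol_eff

def shaft_power(flow, delta_p, mech_eff=0.9):
    """Shaft power (W): hydraulic power Q * dp divided by mechanical efficiency."""
    return flow * delta_p / mech_eff
```

Since shaft torque is roughly D*delta_p/(2*pi*eta), pressure control acts directly on rotor torque, complementing blade pitch control as in the combined scheme above.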
TripSense: A Trust-Based Vehicular Platoon Crowdsensing Scheme with Privacy Preservation in VANETs
Hu, Hao; Lu, Rongxing; Huang, Cheng; Zhang, Zonghua
2016-01-01
In this paper, we propose a trust-based vehicular platoon crowdsensing scheme, named TripSense, in VANETs. The proposed TripSense scheme introduces a trust-based system to evaluate vehicles' sensing abilities and then selects the more capable vehicles in order to improve the accuracy of the sensing results. In addition, the sensing tasks are accomplished by platoon member (PM) vehicles and preprocessed by platoon head vehicles before the data are uploaded to the server. This is less time-consuming and more efficient than schemes in which the data are submitted by individual platoon member vehicles, and hence more suitable for ephemeral networks like VANETs. Moreover, our proposed TripSense scheme integrates unlinkable pseudo-ID techniques to achieve PM vehicle identity privacy, and employs a privacy-preserving sensing vehicle selection scheme that does not involve the PM vehicle's trust score, to preserve its location privacy. Detailed security analysis shows that our proposed TripSense scheme not only achieves the desired privacy requirements but also resists attacks launched by adversaries. In addition, extensive simulations are conducted to show the correctness and effectiveness of our proposed scheme. PMID:27258287
NASA Astrophysics Data System (ADS)
Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming
2006-10-01
The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix
Towards information-optimal simulation of partial differential equations.
Leike, Reimar H; Enßlin, Torsten A
2018-03-01
Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach: the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information-theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed about the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.
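For context, the finite-difference baseline that such schemes are compared against can be as simple as a first-order conservative (Godunov-type) update for the inviscid Burgers equation. This sketch is a generic baseline of my own, not the IFD scheme of the paper; grid size and time step below are illustrative.

```python
import numpy as np

def burgers_step(u, dx, dt):
    """One conservative Godunov-type upwind step for u_t + (u^2/2)_x = 0
    on a periodic grid. flux[i] is the flux at the interface between
    cells i and i+1, chosen by the Godunov rule for the convex flux u^2/2."""
    ul, ur = u, np.roll(u, -1)
    fl, fr = 0.5 * ul * ul, 0.5 * ur * ur
    flux = np.where(ul > ur,
                    np.maximum(fl, fr),                  # shock
                    np.where((ul < 0.0) & (ur > 0.0),
                             0.0,                        # transonic rarefaction
                             np.minimum(fl, fr)))        # rarefaction
    return u - dt / dx * (flux - np.roll(flux, 1))
```

Under a CFL-safe time step the scheme is monotone: the total "mass" is conserved exactly by the telescoping fluxes and no new extrema appear.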
The Ames 12-Foot Pressure Tunnel: Tunnel Empty Flow Calibration Results and Discussion
NASA Technical Reports Server (NTRS)
Zell, Peter T.; Banducci, David E. (Technical Monitor)
1996-01-01
An empty test section flow calibration of the refurbished NASA Ames 12-Foot Pressure Tunnel was recently completed. Distributions of total pressure, dynamic pressure, Mach number, flow angularity, temperature, and turbulence are presented along with results obtained prior to facility demolition. Axial static pressure distributions along the tunnel centerline are also compared. Test section model support geometric configurations are presented along with a discussion of the issues involved with different model mounting schemes.
Chang, Hochan; Kim, Sungwoong; Jin, Sumin; Lee, Seung-Woo; Yang, Gil-Tae; Lee, Ki-Young; Yi, Hyunjung
2018-01-10
Flexible piezoresistive sensors have huge potential for health monitoring, human-machine interfaces, prosthetic limbs, and intelligent robotics. A variety of nanomaterials and structural schemes have been proposed for realizing ultrasensitive flexible piezoresistive sensors. However, despite the success of recent efforts, high sensitivity confined to narrow pressure ranges and/or challenging adhesion and stability issues still potentially limit their broad application. Herein, we introduce a biomaterial-based scheme for the development of flexible pressure sensors that are ultrasensitive (resistance change of 5 orders of magnitude) over a broad pressure range of 0.1-100 kPa, promptly responsive (20 ms), and yet highly stable. We show that employing biomaterial-incorporated conductive networks of single-walled carbon nanotubes as interfacial layers of contact-based resistive pressure sensors significantly enhances the piezoresistive response via effective modulation of the interlayer resistance and provides stable interfaces for the pressure sensors. The developed flexible sensor is capable of real-time monitoring of wrist pulse waves under external medium pressure levels and of providing pressure profiles applied by a thumb and a forefinger during object manipulation at a low voltage (1 V) and power consumption (<12 μW). This work provides a new insight into the material candidates and approaches for the development of wearable health-monitoring and human-machine interfaces.
A rotationally biased upwind difference scheme for the Euler equations
NASA Technical Reports Server (NTRS)
Davis, S. F.
1983-01-01
The upwind difference schemes of Godunov, Osher, Roe and van Leer are able to resolve one dimensional steady shocks for the Euler equations within one or two mesh intervals. Unfortunately, this resolution is lost in two dimensions when the shock crosses the computing grid at an oblique angle. To correct this problem, a numerical scheme was developed which automatically locates the angle at which a shock might be expected to cross the computing grid and then constructs separate finite difference formulas for the flux components normal and tangential to this direction. Numerical results which illustrate the ability of this method to resolve steady oblique shocks are presented.
QR code based noise-free optical encryption and decryption of a gray scale image
NASA Astrophysics Data System (ADS)
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
Paschalidou, A K; Kassomenos, P A
2016-01-01
Wildfire management is closely linked to robust forecasts of changes in wildfire risk related to meteorological conditions. This link can be bridged either through fire weather indices or through statistical techniques that directly relate atmospheric patterns to wildfire activity. In the present work the COST-733 classification schemes are applied in order to link wildfires in Greece with synoptic circulation patterns. The analysis reveals that the majority of wildfire events can be explained by a small number of specific synoptic circulations, hence reflecting the synoptic climatology of wildfires. All 8 classification schemes used prove that the most fire-dangerous conditions in Greece are characterized by a combination of high atmospheric pressure systems located N to NW of Greece, coupled with lower pressures located over the very eastern part of the Mediterranean, an atmospheric pressure pattern closely linked to the local Etesian winds over the Aegean Sea. During these events, the atmospheric pressure has been reported to be anomalously high, while anomalously low 500 hPa geopotential heights and negative total water column anomalies were also observed. Among the various classification schemes used, the 2 Principal Component Analysis-based classifications, namely the PCT and the PXE, as well as the Leader Algorithm classification LND, proved to be the best options, in terms of their capability to isolate the vast majority of fire events in a small number of classes with increased frequency of occurrence. It is estimated that these 3 schemes, in combination with medium-range to seasonal climate forecasts, could be used by wildfire risk managers to provide increased wildfire prediction accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang; Hsu, Yi-Kai
2017-03-01
A three-arm dual-balanced detection scheme is studied in an optical code division multiple access (OCDMA) system. As multiple-access interference (MAI) and beat noise are the main sources of system performance degradation, we utilize optical hard-limiters to alleviate such channel impairments. In addition, once the channel condition is improved effectively, the proposed two-dimensional error correction code can remarkably enhance the system performance. In our proposed scheme, the optimal thresholds of the optical hard-limiters and decision circuitry are fixed and do not change with other system parameters. Our proposed scheme can accommodate a large number of simultaneous users and is suitable for burst traffic with asynchronous transmission. Therefore, it is highly recommended as a platform for broadband optical access networks.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1993-01-01
The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme with a Reed-Solomon outer code and a convolutional inner code, versus a Reed-Solomon-only code scheme, has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of the development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
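A periodic convolutional interleaver of the kind named above (Ramsey/Forney type) is easy to sketch: a commutator cycles symbols through FIFO branches of linearly increasing delay, and the de-interleaver uses the complementary delays. The branch count and unit delay below are illustrative assumptions; this is not CLEAN itself.

```python
from collections import deque

def make_branches(n_branches, unit_delay, reverse=False):
    """FIFO delay lines: branch i delays i*unit_delay symbols
    (reversed order of delays for the de-interleaver)."""
    order = range(n_branches - 1, -1, -1) if reverse else range(n_branches)
    return [deque([0] * (i * unit_delay)) for i in order]

def run(stream, branches):
    """Commutate symbols through the delay branches, one branch per time step."""
    out = []
    n = len(branches)
    for t, sym in enumerate(stream):
        q = branches[t % n]
        q.append(sym)          # push into the branch's shift register
        out.append(q.popleft())  # pop the oldest symbol (delay 0 passes through)
    return out
```

A burst of adjacent channel errors emerges separated by at least `n_branches` symbols after de-interleaving, which is how such interleavers disperse errors of certain types; the round-trip latency is `n*(n-1)*unit_delay` symbols.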
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pae, Ki Hong; Kim, Chul Min, E-mail: chulmin@gist.ac.kr; Advanced Photonics Research Institute, Gwangju Institute of Science and Technology, Gwangju 61005
In laser-driven proton acceleration, generation of quasi-monoenergetic proton beams has been considered a crucial feature of the radiation pressure acceleration (RPA) scheme, but the required difficult physical conditions have hampered its experimental realization. As a method to generate quasi-monoenergetic protons under experimentally viable conditions, we investigated using double-species targets of controlled composition ratio in order to make protons bunched in the phase space in the RPA scheme. From a modified optimum condition and three-dimensional particle-in-cell simulations, we showed by varying the ion composition ratio of proton and carbon that quasi-monoenergetic protons could be generated from ultrathin plane targets irradiated with a circularly polarized Gaussian laser pulse. The proposed scheme should facilitate the experimental realization of ultrashort quasi-monoenergetic proton beams for unique applications in high field science.
SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, J; Gao, H
2015-06-15
Purpose: This work is to investigate a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), which has potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first, to reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination, and then to quantitatively reconstruct optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme here, we have to consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable due to the incomplete acoustic data in the proposed limited-view scheme. In this work, based on a coupled photo-acoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotations of both optical sources and ultrasonic detectors for the next optical illumination.
Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the acoustic data. Jing Feng and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
Upwind schemes and bifurcating solutions in real gas computations
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
The area of high speed flow is seeing a renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), Space Shuttle, and future civil transport concepts. Upwind schemes to solve such flows have become increasingly popular in the last decade due to their excellent shock capturing properties. In the first part of this paper the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on the above scheme are presented to demonstrate the feasibility, accuracy and efficiency of the proposed scheme. One of the test problems is a Chapman-Jouguet detonation problem for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends both on the upwinding scheme used and the limiter employed to obtain second order accuracy. For example, the Osher scheme gives the correct CJ solution when the super-bee limiter is used, but gives the spurious solution when the Van Leer limiter is used. With the Roe scheme the spurious solution is obtained for all limiters.
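The limiter dependence reported above is plausible given the standard limiter functions: superbee is the most compressive TVD limiter (it rides the upper boundary of the Sweby diagram), while van Leer is smoother. Below are the usual formulas as a sketch; the definitions are standard, though how they enter the authors' Osher implementation is not detailed in the abstract.

```python
import numpy as np

def minmod(r):
    """Minmod limiter: most diffusive of the TVD family."""
    r = np.asarray(r, dtype=float)
    return np.maximum(0.0, np.minimum(1.0, r))

def van_leer(r):
    """Van Leer limiter: smooth, phi(r) = (r + |r|) / (1 + |r|)."""
    r = np.asarray(r, dtype=float)
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def superbee(r):
    """Superbee limiter: most compressive TVD limiter,
    phi(r) = max(0, min(2r, 1), min(r, 2))."""
    r = np.asarray(r, dtype=float)
    return np.maximum.reduce([np.zeros_like(r),
                              np.minimum(2.0 * r, 1.0),
                              np.minimum(r, 2.0)])
```

All three satisfy phi(1) = 1, preserving second-order accuracy in smooth regions; their different behavior near discontinuities (r far from 1) is exactly where the detonation computations above diverge.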
Jung, Jaewook; Kang, Dongwoo; Lee, Donghoon; Won, Dongho
2017-01-01
Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks or server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, so an incorrect password is not identified at the beginning of the login phase. Third, the password change mechanism is inefficient, in that it requires communication with the server to change a user password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency.
Self-correcting electronically scanned pressure sensor
NASA Technical Reports Server (NTRS)
Gross, C. (Inventor)
1983-01-01
A multiple channel, high data rate pressure sensing device is disclosed for pressure measurements in wind tunnels, spacecraft, airborne systems, process control, automotive applications, etc. Data rates in excess of 100,000 measurements per second are offered, with inaccuracies from temperature shifts of less than 0.25% (nominal) of full scale over a temperature span of 55 C. The device consists of thirty-two solid state sensors, signal multiplexing electronics to electronically address each sensor, and digital electronic circuitry to automatically correct the inherent thermal shift errors of the pressure sensors and their associated electronics.
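As a rough illustration of the kind of digital thermal-shift correction such circuitry can apply, one can fit a per-channel correction surface from calibration data. The model form and coefficients below are assumptions for illustration, not taken from the patent:

```python
# Sketch: least-squares fit of pressure ~ a + b*raw + c*T + d*raw*T,
# then digital correction of raw readings at any temperature.
import numpy as np

def calibrate(temps, raw, true_pressure):
    """Fit per-channel correction coefficients from a calibration run."""
    A = np.column_stack([np.ones_like(raw), raw, temps, raw * temps])
    coeffs, *_ = np.linalg.lstsq(A, true_pressure, rcond=None)
    return coeffs

def correct(coeffs, temp, raw):
    a, b, c, d = coeffs
    return a + b * raw + c * temp + d * raw * temp

# Synthetic calibration data over a 55 C span (invented sensor behavior)
rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 100.0, 200)    # raw sensor counts
temp = rng.uniform(20.0, 75.0, 200)   # deg C
pressure = 2.0 + 1.1 * raw - 0.03 * temp + 0.002 * raw * temp
coeffs = calibrate(temp, raw, pressure)
assert abs(correct(coeffs, 40.0, 50.0) - 59.8) < 1e-6
```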
A pilot evaluation of two G-seat cueing schemes
NASA Technical Reports Server (NTRS)
Showalter, T. W.
1978-01-01
A comparison was made of two contrasting G-seat cueing schemes. The G-seat, an aircraft simulation subsystem, creates aircraft acceleration cues via seat contour changes. Of the two cueing schemes tested, one was designed to create skin pressure cues and the other was designed to create body position cues. Each cueing scheme was tested and evaluated subjectively by five pilots regarding its ability to cue the appropriate accelerations in each of four simple maneuvers: a pullout, a pushover, an S-turn maneuver, and a thrusting maneuver. A divergence of pilot opinion occurred, revealing that the perception and acceptance of G-seat stimuli is a highly individualistic phenomenon. The creation of one acceptable G-seat cueing scheme was, therefore, deemed to be quite difficult.
A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur
2009-07-01
For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is processed by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme to represent three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).
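The dry-updraft lateral exchange rule described above (rates tied to the ratio of buoyancy to vertical velocity) can be sketched as follows; the constants are placeholders, not the scheme's tuned values:

```python
# Toy sketch: entrainment grows when the parcel is buoyant,
# detrainment when it is negatively buoyant. C_EPS and C_DELTA
# are invented constants, not those of the EDMF scheme.
C_EPS, C_DELTA = 0.55, 1.0

def entrain_detrain(buoyancy, w_up):
    """Return (entrainment, detrainment) rates [1/m] from parcel
    buoyancy B [m/s^2] and updraft vertical velocity w_up [m/s]."""
    ratio = buoyancy / max(w_up * w_up, 1e-6)  # guard against w_up -> 0
    eps = C_EPS * max(ratio, 0.0)       # buoyant parcel: dilute by entrainment
    delta = C_DELTA * max(-ratio, 0.0)  # negatively buoyant: detrain mass
    return eps, delta

eps, delta = entrain_detrain(0.01, 1.0)
assert eps > 0.0 and delta == 0.0
```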
Viscous wing theory development. Volume 1: Analysis, method and results
NASA Technical Reports Server (NTRS)
Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.
1986-01-01
Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.
Turbomachinery for Low-to-High Mach Number Flight
NASA Technical Reports Server (NTRS)
Tan, Choon S.; Shah, Parthiv N.
2004-01-01
The thrust capability of turbojet cycles is reduced at high flight Mach number (3+) by the increase in inlet stagnation temperature. The 'hot section' temperature limit imposed by materials technology sets the maximum heat addition and, hence, sets the maximum flight Mach number of the operating envelope. Compressor pre-cooling, either via a heat exchanger or mass-injection, has been suggested as a means to reduce compressor inlet temperature and increase mass flow capability, thereby increasing thrust. To date, however, no research has looked at compressor cooling (i.e., using a compressor both to perform work on the gas path air and extract heat from it simultaneously). We wish to assess the feasibility of this novel concept for use in low-to-high Mach number flight. The results to-date show that an axial compressor with cooling: (1) relieves choking in rear stages (hence opening up operability), (2) yields higher-pressure ratio and (3) yields higher efficiency for a given corrected speed and mass flow. The performance benefit is driven: (i) at the blade passage level, by a decrease in the total pressure reduction coefficient and an increase in the flow turning; and (ii) by the reduction in temperature that results in less work required for a given pressure ratio. The latter is a thermodynamic effect. As an example, calculations were performed for an eight-stage compressor with an adiabatic design pressure ratio of 5. By defining non-dimensional cooling as the percentage of compressor inlet stagnation enthalpy removed by a heat sink, the model shows that a non-dimensional cooling of percent in each blade row of the first two stages can increase the compressor pressure ratio by as much as 10-20 percent. Maximum corrected mass flow at a given corrected speed may increase by as much as 5 percent. In addition, efficiency may increase by as much as 5 points. A framework for characterizing and generating the performance map for a cooled compressor has been developed. 
The approach is based upon CFD computations and mean line analysis. Figures of merit that characterize the bulk performance of blade passage flows with and without cooling are extracted from CFD solutions. Such performance characterization is then applied to a preliminary compressor design framework (mean line). The generic nature of this approach makes it suitable for assessing the effect of different types of compressor cooling schemes, such as heat exchange or evaporative cooling (mass injection). Future work will focus on answering system level questions regarding the feasibility of compressor cooling. Specifically, we wish to determine the operational parametric space in which compressor cooling would be advantageous over other high flight Mach number propulsion concepts. In addition, we will explore the design requirements of cooled compressor turbomachinery, as well as the flow phenomena that limit and control its operation, and the technology barriers that must be crossed for its implementation.
Variationally consistent approximation scheme for charge transfer
NASA Technical Reports Server (NTRS)
Halpern, A. M.
1978-01-01
The author has developed a technique for testing charge-transfer approximation schemes for consistency with the requirements of the Kohn variational principle, which guarantees that the amplitude is correct to second order in the scattering wave functions. Applied to Born-type approximations for charge transfer, it allows the selection of particular groups of first-, second-, and higher-Born-type terms that obey the consistency requirement and hence yield a more reliable approximation to the amplitude.
Asymptotic-induced numerical methods for conservation laws
NASA Technical Reports Server (NTRS)
Garbey, Marc; Scroggs, Jeffrey S.
1990-01-01
Asymptotic-induced methods are presented for the numerical solution of hyperbolic conservation laws with or without viscosity. The methods consist of multiple stages. The first stage is to obtain a first approximation by using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems identified by using techniques derived via asymptotics. Finally, a residual correction increases the accuracy of the scheme. The method is derived and justified with singular perturbation techniques.
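The first stage above can be illustrated with a first-order Godunov step for the inviscid Burgers equation; the grid, time step, and initial data are illustrative only, and the internal-layer and residual-correction stages are not reproduced here:

```python
# First-order Godunov scheme for u_t + (u^2/2)_x = 0, periodic domain.
import numpy as np

def godunov_flux(ul, ur):
    """Exact Godunov flux for the convex flux f(u) = u^2 / 2."""
    return max(max(ul, 0.0) ** 2, min(ur, 0.0) ** 2) / 2.0

def godunov_step(u, dx, dt):
    """One conservative first-order step with periodic boundaries."""
    # f[i] is the numerical flux through the left face of cell i
    f = np.array([godunov_flux(u[i - 1], u[i]) for i in range(len(u))])
    return u - dt / dx * (np.roll(f, -1) - f)

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)   # smooth data that steepens into a shock
dx, dt = 1.0 / n, 0.004       # CFL number ~ 0.4
for _ in range(50):
    u = godunov_step(u, dx, dt)
assert u.max() < 1.0          # monotone scheme: no new extrema
assert abs(u.sum()) < 1e-8    # discrete conservation
```

The first-order dissipation visible here is exactly what the subsequent internal-layer and residual-correction stages are meant to repair.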
Extracting Baseline Electricity Usage Using Gradient Tree Boosting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Taehoon; Lee, Dongeun; Choi, Jaesik
To understand how specific interventions affect a process observed over time, we need to control for the other factors that influence outcomes. Such a model that captures all factors other than the one of interest is generally known as a baseline. In our study of how different pricing schemes affect residential electricity consumption, the baseline would need to capture the impact of outdoor temperature along with many other factors. In this work, we examine a number of different data mining techniques and demonstrate Gradient Tree Boosting (GTB) to be an effective method to build the baseline. We train GTB on data prior to the introduction of new pricing schemes, and apply the known temperature following the introduction of new pricing schemes to predict electricity usage with the expected temperature correction. Our experiments and analyses show that the baseline models generated by GTB capture the core characteristics over the two years with the new pricing schemes. In contrast to the majority of regression based techniques, which fail to capture the lag between the peak of daily temperature and the peak of electricity usage, the GTB generated baselines are able to correctly capture the delay between the temperature peak and the electricity peak. Furthermore, subtracting this temperature-adjusted baseline from the observed electricity usage, we find that the resulting values are more amenable to interpretation, which demonstrates that the temperature-adjusted baseline is indeed effective.
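As a toy illustration of gradient tree boosting (not the study's implementation), one can boost one-split regression stumps on the residuals of the running prediction; the synthetic data and hyper-parameters below are invented:

```python
# Minimal gradient boosting for squared loss using depth-1 trees (stumps).
import numpy as np

def fit_stump(x, r):
    """Best single-feature threshold split minimizing squared error on r."""
    best = (np.inf, 0, 0.0, 0.0, 0.0)
    for j in range(x.shape[1]):
        for t in np.unique(x[:, j])[:-1]:
            left = x[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def gtb_fit(x, y, n_trees=40, lr=0.3):
    """Each stump fits the residual of the current prediction."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_trees):
        j, t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return y.mean(), stumps

def gtb_predict(model, x, lr=0.3):
    base, stumps = model
    pred = np.full(len(x), base)
    for j, t, lv, rv in stumps:
        pred += lr * np.where(x[:, j] <= t, lv, rv)
    return pred

# Synthetic "baseline" problem: usage responds nonlinearly to temperature
rng = np.random.default_rng(1)
temp = rng.uniform(0.0, 35.0, 300)
hour = rng.integers(0, 24, 300).astype(float)
usage = (1.0 + 0.08 * np.maximum(temp - 18.0, 0.0) + 0.02 * hour
         + rng.normal(0.0, 0.05, 300))
X = np.column_stack([temp, hour])
model = gtb_fit(X, usage)
resid = np.abs(usage - gtb_predict(model, X)).mean()
assert resid < np.abs(usage - usage.mean()).mean()  # beats a flat baseline
```

Unlike a linear fit, the stumps can place splits at any temperature or hour, which is the property that lets tree boosting capture nonlinear and lagged responses.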
Research on the Application of Fast-steering Mirror in Stellar Interferometer
NASA Astrophysics Data System (ADS)
Mei, R.; Hu, Z. W.; Xu, T.; Sun, C. S.
2017-07-01
For a stellar interferometer, the fast-steering mirror (FSM) is widely utilized to correct wavefront tilt caused by atmospheric turbulence and internal instrumental vibration due to its high resolution and fast response frequency. In this study, the non-coplanar error between the FSM and actuator deflection axis introduced by manufacture, assembly, and adjustment is analyzed. Via a numerical method, the additional optical path difference (OPD) caused by above factors is studied, and its effects on tracking accuracy of stellar interferometer are also discussed. On the other hand, the starlight parallelism between the beams of two arms is one of the main factors of the loss of fringe visibility. By analyzing the influence of wavefront tilt caused by the atmospheric turbulence on fringe visibility, a simple and efficient real-time correction scheme of starlight parallelism is proposed based on a single array detector. The feasibility of this scheme is demonstrated by laboratory experiment. The results show that starlight parallelism meets the requirement of stellar interferometer in wavefront tilt preliminarily after the correction of fast-steering mirror.
Developing a lower-cost atmospheric CO2 monitoring system using commercial NDIR sensor
NASA Astrophysics Data System (ADS)
Arzoumanian, E.; Bastos, A.; Gaynullin, B.; Laurent, O.; Vogel, F. R.
2017-12-01
Cities release to the atmosphere about 44% of global energy-related CO2. It is clear that accurate estimates of the magnitude of anthropogenic and natural urban emissions are needed to assess their influence on the carbon balance. A dense ground-based CO2 monitoring network in cities would potentially allow retrieving sector-specific CO2 emission estimates when combined with an atmospheric inversion framework using reasonably accurate observations (ca. 1 ppm for hourly means). One major barrier to denser observation networks can be the high cost of high-precision instruments or the high calibration cost of cheaper, unstable instruments. We have developed and tested a novel inexpensive NDIR sensor for CO2 measurements which fulfils the cost and typical parameter requirements (i.e., signal stability, efficient handling, and connectivity) necessary for this task. Such sensors are essential for emissions estimates in cities from continuous monitoring networks as well as for leak detection in MRV (monitoring, reporting, and verification) services for industrial sites. We conducted extensive laboratory tests (short- and long-term repeatability, cross-sensitivities, etc.) on a series of prototypes, and the final versions were also tested in a climatic chamber. On four final HPP prototypes, the sensitivity to pressure and temperature was precisely quantified and correction and calibration strategies developed. Furthermore, we fully integrated these HPP sensors in a Raspberry PI platform containing the CO2 sensor and additional sensors (pressure, temperature and humidity sensors), a gas supply pump, and a fully automated data acquisition unit. This platform was deployed in parallel to Picarro G2401 instruments in the peri-urban site Saclay - next to Paris, and in the urban site Jussieu - Paris, France.
These measurements were conducted over several months in order to characterize the long-term drift of our HPP instruments and the ability of the correction and calibration scheme to provide bias free observations. From the lessons learned in the laboratory tests and field measurements, we developed a specific correction and calibration strategy for our NDIR sensors. Latest results and calibration strategies will be shown.
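A generic two-point gain/offset calibration against reference gases sketches one element of the kind of correction-and-calibration strategy mentioned above; the concentrations and readings are invented:

```python
# Two-point (span) calibration: map raw sensor ppm to reference ppm.
def two_point_cal(raw_low, raw_high, ref_low=400.0, ref_high=500.0):
    """Return (gain, offset) from readings of two reference gases."""
    gain = (ref_high - ref_low) / (raw_high - raw_low)
    offset = ref_low - gain * raw_low
    return gain, offset

gain, offset = two_point_cal(raw_low=392.0, raw_high=487.0)
# Calibration reproduces both reference points exactly
assert abs(gain * 392.0 + offset - 400.0) < 1e-9
assert abs(gain * 487.0 + offset - 500.0) < 1e-9
corrected = gain * 440.0 + offset  # correct an arbitrary raw reading
```

In practice the gain and offset would themselves be re-estimated periodically to track the long-term drift characterized in the field deployment.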
Density measurement in air with a saturable absorbing seed gas
NASA Technical Reports Server (NTRS)
Baganoff, D.
1981-01-01
Resonantly enhanced scattering from the iodine molecule is studied experimentally for the purpose of developing a scheme for the measurement of density in a gas dynamic flow. A study of the spectrum of iodine, the collection of saturation data in iodine, and the development of a mathematical model for correlating saturation effects were pursued for a mixture of 0.3 torr iodine in nitrogen and for mixture pressures up to one atmosphere. For the desired pressure range, saturation effects in iodine were found to be too small to be useful in allowing density measurements to be made. The effects of quenching can be reduced by detuning the exciting laser wavelength from the absorption line center of the iodine line used (resonant Raman scattering). The signal was found to be nearly independent of pressure, for pressures up to one atmosphere, when the excitation beam was detuned 6 GHz from line center for an isolated line in iodine. The signal amplitude was found to be nearly equal to the amplitude for fluorescence at atmospheric pressure, which indicates a density measurement scheme is possible.
One-loop corrections to light cone wave functions: The dipole picture DIS cross section
NASA Astrophysics Data System (ADS)
Hänninen, H.; Lappi, T.; Paatelainen, R.
2018-06-01
We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with d-dimensional tensors arising from loop integrals, in a way that can be fully automatized. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading order accuracy. Our results, obtained using the four dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.
Automatic Calculation of Hydrostatic Pressure Gradient in Patients with Head Injury: A Pilot Study.
Moss, Laura; Shaw, Martin; Piper, Ian; Arvind, D K; Hawthorne, Christopher
2016-01-01
The non-surgical management of patients with traumatic brain injury is the treatment and prevention of secondary insults, such as low cerebral perfusion pressure (CPP). Most clinical pressure monitoring systems measure pressure relative to atmospheric pressure. If a patient is managed with their head tilted up, relative to their arterial pressure transducer, then a hydrostatic pressure gradient (HPG) can act against arterial pressure and cause significant errors in calculated CPP. To correct for HPG, the arterial pressure transducer should be placed level with the intracranial pressure transducer. However, this is not always achieved. In this chapter, we describe a pilot study investigating the application of speckled computing (or "specks") for the automatic monitoring of the patient's head tilt and subsequent automatic calculation of HPG. In future applications this will allow us to automatically correct CPP to take into account any HPG.
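The hydrostatic correction itself is simple to state: the pressure offset over a vertical height h of fluid is ρgh. A minimal sketch, assuming a blood density of 1060 kg/m³ and an illustrative head-to-transducer distance:

```python
# Sketch of the HPG correction: illustrative constants, not clinical advice.
import math

RHO_BLOOD = 1060.0       # kg/m^3, assumed blood density
G = 9.81                 # m/s^2
MMHG_PER_PA = 1.0 / 133.322

def hydrostatic_gradient_mmhg(tilt_deg, distance_m):
    """Pressure offset between transducer and head for a head-up tilt."""
    height = distance_m * math.sin(math.radians(tilt_deg))
    return RHO_BLOOD * G * height * MMHG_PER_PA

def corrected_cpp(map_mmhg, icp_mmhg, tilt_deg, distance_m):
    """CPP = MAP - ICP, with MAP reduced by the hydrostatic gradient."""
    return map_mmhg - hydrostatic_gradient_mmhg(tilt_deg, distance_m) - icp_mmhg

# A 30 degree head-up tilt with the transducer 35 cm below head level
# introduces an error of roughly 13-14 mmHg, clinically significant.
hpg = hydrostatic_gradient_mmhg(30.0, 0.35)
assert 13.0 < hpg < 14.0
```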
Unitary reconstruction of secret for stabilizer-based quantum secret sharing
NASA Astrophysics Data System (ADS)
Matsumoto, Ryutaroh
2017-08-01
We propose a unitary procedure to reconstruct quantum secret for a quantum secret sharing scheme constructed from stabilizer quantum error-correcting codes. Erasure correcting procedures for stabilizer codes need to add missing shares for reconstruction of quantum secret, while unitary reconstruction procedures for certain class of quantum secret sharing are known to work without adding missing shares. The proposed procedure also works without adding missing shares.
Organic electronics based pressure sensor towards intracranial pressure monitoring
NASA Astrophysics Data System (ADS)
Rai, Pratyush; Varadan, Vijay K.
2010-04-01
The intra-cranial space, which houses the brain, contains cerebrospinal fluid (CSF) that acts as a fluid suspension medium for the brain. The CSF is always in circulation, is secreted in the cranium and is drained out through ducts called epidural veins. The venous drainage system has inherent resistance to the flow. Pressure is developed inside the cranium, which is similar to a rigid compartment. Normally a pressure of 5-15 mm Hg, in excess of atmospheric pressure, is observed at different locations inside the cranium. Increase in Intra-Cranial Pressure (ICP) can be caused by change in CSF volume caused by cerebral tumors, meningitis, by edema of a head injury or diseases related to cerebral atrophy. Hence, efficient ways of monitoring ICP need to be developed. A sensor system and monitoring scheme has been discussed here. The system architecture consists of a membrane less piezoelectric pressure sensitive element, organic thin film transistor (OTFT) based signal transduction, and signal telemetry. The components were fabricated on flexible substrate and have been assembled using flip-chip packaging technology. Material science and fabrication processes, subjective to the device performance, have been discussed. Capability of the device in detecting pressure variation, within the ICP pressure range, is investigated and applicability of measurement scheme to medical conditions has been argued for. Also, applications of such a sensor-OTFT assembly for logic sensor switching and patient specific-secure monitoring system have been discussed.
How to securely replicate services
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth
1992-01-01
A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires fewer than k servers to be corrupt and that is live if at least k+b servers are correct, where b is the assumed maximum total number of corrupt servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service. The practicality of these schemes is illustrated through a discussion of several issues pertinent to their implementation and use, and their intended role in a secure version of the Isis system is also described.
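The acceptance rule can be caricatured as voting over replies: accept a value once at least k servers vouch for it. (The paper's actual schemes use shared service keys so that the client need not authenticate individual servers; this sketch omits that machinery, and all names are invented.)

```python
# Toy sketch of k-of-n response acceptance for a replicated service.
from collections import Counter

def accept_response(replies, k):
    """Return a reply value vouched for by at least k servers, else None."""
    value, n = Counter(replies).most_common(1)[0]
    return value if n >= k else None

# 5 servers, k = 3: two corrupt servers cannot force acceptance of a forgery
assert accept_response([b"ok", b"ok", b"ok", b"bad", b"bad"], 3) == b"ok"
assert accept_response([b"a", b"b", b"bad", b"bad"], 3) is None
```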
Song, Jong-Won; Hirao, Kimihiko
2015-10-14
Since the advent of hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of long-range corrected hybrid scheme for density functional theory a decade later, the applicability of the hybrid functional has been further amplified due to the resulting increased performance on orbital energy, excitation energy, non-linear optical property, barrier height, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active applications of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error function operator, reduces computational time dramatically (e.g., about 14 times acceleration in C diamond calculation using periodic boundary condition) and enables lower scaling with system size, while maintaining the improved features of the long-range corrected density functional theory.
Post-processing through linear regression
NASA Astrophysics Data System (ADS)
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
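The baseline OLS scheme amounts to regressing observations on forecasts over a training period and applying the fitted map to new forecasts. A minimal sketch with synthetic data (the damping and bias of the toy model are invented):

```python
# OLS post-processing: fit obs ~ a + b * forecast, then correct new forecasts.
import numpy as np

rng = np.random.default_rng(2)
truth = rng.normal(0.0, 1.0, 500)
forecast = 0.7 * truth + 0.5 + rng.normal(0.0, 0.3, 500)  # biased, damped

# Fit on a training split, correct on a verification split
A = np.column_stack([np.ones(400), forecast[:400]])
(a, b), *_ = np.linalg.lstsq(A, truth[:400], rcond=None)

corrected = a + b * forecast[400:]
raw_err = np.mean((forecast[400:] - truth[400:]) ** 2)
cor_err = np.mean((corrected - truth[400:]) ** 2)
assert cor_err < raw_err  # regression removes bias and rescales amplitude
```

The paper's point is precisely where this simple recipe fails: at long lead times plain OLS damps the corrected forecast's variability, which is what the EVMOS and TDTR variants address.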
An Integrated Circuit for a Biomedical Capacitive Pressure Transducer
NASA Astrophysics Data System (ADS)
Smith, Michael John Sebastian
Medical research has an urgent need for a small, accurate, stable, low-power, biocompatible and inexpensive pressure sensor with a zero to full-scale range of 0-300 mmHg. An integrated circuit (IC) for use with a capacitive pressure transducer was designed, built and tested. The random pressure measurement error due to resolution and non-linearity is ±0.4 mmHg (at mid-range with a full-scale of 300 mmHg). The long-term systematic error due to falling battery voltage is ±0.6 mmHg. These figures were calculated from measurements of temperature, supply dependence and non-linearity on completed integrated circuits. The sensor IC allows measurement of temperature to ±0.1°C to allow for temperature compensation of the transducer. Novel micropower circuit design of the system components enabled these levels of accuracy to be reached. Capacitance is measured by a new ratiometric scheme employing an on-chip reference capacitor. This method greatly reduces the effects of voltage supply, temperature and manufacturing variations on the sensor circuit performance. The limits on performance of the bandgap reference circuit fabricated with a standard bipolar process using ion-implanted resistors were determined. Measurements confirm the limits of temperature stability as approximately ±300 ppm/°C. An exact analytical expression for the period of the Schmitt trigger oscillator, accounting for non-constant capacitor charging current, was formulated. Experiments to test agreement with theory showed that prediction of the oscillator period was very accurate. The interaction of fundamental and practical limits on the scaling of the transducer size was investigated, including a correction to previous theoretical analysis of jitter in an RC oscillator. An areal reduction of 4 times should be achievable.
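The benefit of a ratiometric scheme can be sketched numerically: if the sensed and reference capacitors are read through the same oscillator, a shared gain or drift factor cancels in the period ratio. Component values and the simple T ∝ RC period model below are illustrative assumptions:

```python
# Ratiometric capacitance measurement: shared drift cancels in the ratio.
def schmitt_period(C, R, k=1.386):
    """Idealized RC Schmitt-trigger oscillator period, T ~ k * R * C."""
    return k * R * C

def measure_ratiometric(C_sense, C_ref, R, drift=1.0):
    """Both periods see the same drift factor, so it cancels exactly."""
    T_sense = schmitt_period(C_sense, R) * drift
    T_ref = schmitt_period(C_ref, R) * drift
    return C_ref * T_sense / T_ref  # recovered sensed capacitance

# A 17% supply/temperature drift leaves the recovered value unchanged
C = measure_ratiometric(33e-12, 10e-12, 1e6, drift=1.17)
assert abs(C - 33e-12) < 1e-18
```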
NASA Astrophysics Data System (ADS)
Singhal, G.; Subbarao, P. M. V.; Mainuddin; Tyagi, R. K.; Dawar, A. L.
2017-05-01
A class of flowing-medium gas lasers with low generator pressures employs supersonic flows with low cavity pressure; these are primarily categorized as high-throughput systems capable of being scaled up to MW class. They include the Chemical Oxygen Iodine Laser (COIL) and the Hydrogen (Deuterium) Fluoride laser (HF/DF). The practicability of such laser systems for various applications is enhanced by exhausting the effluents directly to the ambient atmosphere. Consequently, ejector-based pressure recovery forms a potent configuration for open-cycle operation. Conventionally, these gas laser systems require at least two ejector stages, with the low-pressure stage being more critical, since it directly entrains the laser medium, and the ensuing perturbation of cavity flow, if any, may affect laser operation. Hence, the choice between plausible motive gas injection schemes, viz. peripheral or central, is a fluid dynamic issue of interest, and a parametric experimental performance comparison would be beneficial. Thus, the focus is to experimentally characterize the effect of variation in motive gas supply pressure, entrainment ratio, back pressure conditions, and nozzle injection position, operated together with a COIL device, and to discern the reasons for the observed behavior.
High-speed cylindrical collapse of two perfect fluids
NASA Astrophysics Data System (ADS)
Sharif, M.; Ahmad, Zahid
2007-09-01
In this paper, the gravitational collapse of a cylindrically distributed two-perfect-fluid system has been studied. It is assumed that the collapsing speeds of the two fluids are very large. We explore this condition by using the high-speed approximation scheme. Two cases arise, i.e., boundedness and vanishing of the ratios of the pressures to the densities of the two fluids, given by c_s, d_s. It is shown that the high-speed approximation scheme breaks down by non-zero pressures p_1, p_2 when c_s, d_s are bounded below by some positive constants. The failure of the high-speed approximation scheme at some particular time of the gravitational collapse suggests uncertainty in the evolution at and after this time. In the bounded case, naked singularity formation seems to be impossible for the cylindrical two perfect fluids. For the vanishing case, if a linear equation of state is used, the high-speed collapse does not break down by the effects of the pressures and consequently a naked singularity forms. This work provides the generalisation of the results already given by Nakao and Morisawa (Prog Theor Phys 113:73, 2005) for the perfect fluid.
High Order Well-balanced WENO Scheme for the Gas Dynamics Equations under Gravitational Fields
2011-11-12
there exists the hydrostatic balance where the flux produced by the pressure is canceled by the gravitational source term. Many astrophysical...approximation to W(x) to obtain an approximation to W′(x_i) = f_x(U(x_i, y_j)). See again [7, 15] for more details of finite difference WENO schemes in
An improved method to detect correct protein folds using partial clustering.
Zhou, Jianjun; Wishart, David S
2013-01-16
Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either C(α) RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
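The core idea of partial clustering, picking a representative with many close neighbours without assigning every decoy to a cluster, can be sketched on toy data. Plain Euclidean distance stands in for Cα RMSD here, and this is not the HS-Forest algorithm itself:

```python
# Toy sketch: choose the decoy with the most neighbours within a distance
# threshold, without ever building complete clusters.
import numpy as np

def pick_representative(coords, threshold):
    """Index of the decoy with the most neighbours within threshold."""
    best_i, best_count = 0, -1
    for i in range(len(coords)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        count = int((d < threshold).sum()) - 1  # exclude self
        if count > best_count:
            best_i, best_count = i, count
    return best_i

rng = np.random.default_rng(3)
near_native = rng.normal(0.0, 0.1, size=(30, 5))  # tight "correct fold" set
misfolded = rng.normal(3.0, 1.0, size=(20, 5))    # diffuse decoys
decoys = np.vstack([near_native, misfolded])
rep = pick_representative(decoys, threshold=0.5)
assert rep < 30  # representative comes from the tight, near-native set
```

The intuition matches the paper's premise: correct folds tend to recur as a dense neighbourhood among decoys, so density alone can surface a representative without an expensive full clustering pass.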
An improved method to detect correct protein folds using partial clustering
2013-01-01
Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient “partial“ clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835
Comparative Study on High-Order Positivity-preserving WENO Schemes
NASA Technical Reports Server (NTRS)
Kotov, Dmitry V.; Yee, Helen M.; Sjogreen, Bjorn Axel
2013-01-01
The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular the more difficult 3D Noh and Sedov problems are considered. These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube that is related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature and high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case (LeVeque & Yee 1990) as well as for the case consisting of two species and one reaction (Wang et al. 2012). For further information concerning this issue see (LeVeque & Yee 1990; Griffiths et al. 1992; Lafon & Yee 1996; Yee et al. 2012). This EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.
SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watt, E; Spencer, DP; Meyer, T
Purpose: Permanent seed implant brachytherapy procedures require the measurement of the air kerma strength of seeds prior to implant. This is typically accomplished using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P{sub TP}, may overcompensate for air density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors for two clinically-used seeds (IsoAid ADVANTAGE™ {sup 103}Pd and Nucletron selectSeed {sup 125}I) for which empirical altitude correction factors do not yet exist in the literature when measured with the HDR 1000 Plus chamber. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pumped or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P{sub TP}, were acquired for each seed at these pressures and normalized to the reading at 'standard' pressure (1013.25 mbar). Results: Measurements in this study have shown that utilization of P{sub TP} can overcompensate in the corrected current reading by up to 20% and 17% for the IsoAid Pd-103 and the Nucletron I-125 seeds, respectively. Compared to literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% and 3.0% for iodine (with silver) and palladium, respectively, indicating the need for seed-specific factors. Conclusion: The use of seed-specific altitude correction factors can reduce uncertainty in the determination of air kerma strength.
The empirical correction factors determined in this work can be applied in clinical quality assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE™ {sup 103}Pd and Nucletron selectSeed {sup 125}I) with the HDR 1000 Plus well chamber.
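For reference, the standard temperature-pressure correction factor discussed above has the form P_TP = ((273.2 + T) / (273.2 + T_ref)) × (P_ref / P). A minimal sketch follows; the reference conditions of 22 °C and 101.325 kPa are assumptions of this illustration, since they vary between calibration protocols:

```python
def p_tp(temp_c, pressure_kpa, ref_temp_c=22.0, ref_pressure_kpa=101.325):
    """Standard temperature-pressure correction factor for an
    air-communicating ionization chamber:
    P_TP = ((273.2 + T) / (273.2 + T_ref)) * (P_ref / P).
    Reference conditions here are illustrative; protocols vary."""
    return ((273.2 + temp_c) / (273.2 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)
```

At the lowest pressure used in this study (725 mbar = 72.5 kPa) the factor alone scales the chamber reading by roughly 1.40; the measurements above show that for these low-energy seeds the ideal-gas scaling overcompensates by up to 20%, hence the need for empirical seed-specific factors.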
The manufacture of moulded supportive seating for the handicapped.
Nelham, R L
1975-10-01
The wheelchair-bound population often have difficulty in obtaining a correct or comfortable posture in their chairs and sometimes develop pressure sores from long-duration sitting. This problem is being solved by manufacturing personalised, contoured seats which support the patient over the maximum area possible, thereby reducing the pressure on the body and the incidence of pressure sores. A cast is obtained of the patient in a comfortable, medically correct posture and from this cast the seat is vacuum formed in thermoplastic materials or hand laid up in glass-fibre-reinforced resin. Some correction of deformity may be achieved. It is also possible to use the moulded seat in a vehicle.
Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity
NASA Astrophysics Data System (ADS)
Hamon, F. P.; Mallison, B.; Tchelepi, H.
2016-12-01
In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. 
Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
Comparative Study of Advanced Turbulence Models for Turbomachinery
NASA Technical Reports Server (NTRS)
Hadid, Ali H.; Sindir, Munir M.
1996-01-01
A computational study has been undertaken to study the performance of advanced phenomenological turbulence models coded in a modular form to describe incompressible turbulent flow behavior in two-dimensional/axisymmetric and three-dimensional complex geometries. The models include a variety of two-equation models (single and multi-scale k-epsilon models with different near-wall treatments) and second-moment algebraic and full Reynolds stress closure models. These models were systematically assessed to evaluate their performance in complex flows with rotation, curvature and separation. The models are coded as self-contained modules that can be interfaced with a number of flow solvers. These modules are stand-alone satellite programs that come with their own formulation, finite-volume discretization scheme, solver and boundary condition implementation. They will take as input (from any generic Navier-Stokes solver) the velocity field, grid (structured H-type grid) and computational domain specification (boundary conditions), and will deliver, depending on the model used, turbulent viscosity, or the components of the Reynolds stress tensor. There are separate 2D/axisymmetric and/or 3D decks for each module considered. The modules are tested using Rocketdyne's proprietary code REACT. The code utilizes an efficient solution procedure to solve Navier-Stokes equations in a non-orthogonal body-fitted coordinate system. The differential equations are discretized over a finite-volume grid using a non-staggered variable arrangement and an efficient solution procedure based on the SIMPLE algorithm for the velocity-pressure coupling is used. The modules developed have been interfaced and tested using finite-volume, pressure-correction CFD solvers which are widely used in the CFD community. Other solvers can also be used to test these modules since they are independently structured with their own discretization scheme and solver methodology.
Many of these modules have been independently tested by Professor C.P. Chen and his group at the University of Alabama in Huntsville (UAH) by interfacing them with their own flow solver (MAST).
Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale
NASA Astrophysics Data System (ADS)
Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru
2013-04-01
Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) under the related emission scenarios. Realistic and reliable GCM data are crucial for national- or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features due to imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the focused basin, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection is based on the regional climate features of seasonal evolution as a benchmark and mainly depends on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: low-intensity drizzle with no dry days, underestimation of heavy rainfall, and the inter-annual variability of the local climate. Biases in heavy rainfall are corrected by fitting a generalized Pareto distribution (GPD) to a peak-over-threshold series. The rain-day frequency error is corrected by rank order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month to in-situ stations versus the corresponding GCM grids. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished.
The applicability of the proposed method has been examined in basins in various climate regions all over the world, and the biases are controlled well by this scheme in all of them. The bias-corrected and downscaled GCM precipitation is then ready for use in the Water and Energy Budget based Distributed Hydrological Model (WEB-DHM) to analyse the stream flow change or water availability of a target basin under climate change in the near future. Furthermore, interdisciplinary topics such as drought, flood, food and health can be investigated. In summary, an effective and comprehensive statistical bias-correction method was established to bridge the gap from GCM scale to basin scale without difficulty. This gap filling also supports sound river-management decisions in the basin, providing more reliable information for building a resilient society.
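The three-part statistical bias correction described above (drizzle-day removal, gamma-based quantile mapping for the bulk, and a generalized Pareto mapping for the peak-over-threshold tail) might be sketched as follows. The thresholds, function names and the omission of the paper's per-month stratification are all simplifications of this illustration:

```python
import numpy as np
from scipy import stats

def quantile_map(gcm, obs, wet_threshold=0.1, tail_q=0.95):
    """Illustrative sketch: (1) drizzle below `wet_threshold` is zeroed
    to fix the rain-day frequency, (2) the bulk of wet-day amounts is
    quantile-mapped through gamma fits, and (3) the tail above the
    `tail_q` quantile is mapped through generalized Pareto
    (peak-over-threshold) fits. Names and thresholds are illustrative."""
    gcm = np.asarray(gcm, dtype=float)
    obs = np.asarray(obs, dtype=float)
    wet_obs = obs[obs > wet_threshold]
    wet_gcm = gcm[gcm > wet_threshold]
    g_obs = stats.gamma.fit(wet_obs, floc=0.0)      # bulk fits
    g_gcm = stats.gamma.fit(wet_gcm, floc=0.0)
    u_obs = np.quantile(wet_obs, tail_q)            # POT thresholds
    u_gcm = np.quantile(wet_gcm, tail_q)
    p_obs = stats.genpareto.fit(wet_obs[wet_obs > u_obs] - u_obs, floc=0.0)
    p_gcm = stats.genpareto.fit(wet_gcm[wet_gcm > u_gcm] - u_gcm, floc=0.0)
    out = np.zeros_like(gcm)                        # drizzle days become dry
    bulk = (gcm > wet_threshold) & (gcm <= u_gcm)
    tail = gcm > u_gcm
    out[bulk] = stats.gamma.ppf(stats.gamma.cdf(gcm[bulk], *g_gcm), *g_obs)
    out[tail] = u_obs + stats.genpareto.ppf(
        stats.genpareto.cdf(gcm[tail] - u_gcm, *p_gcm), *p_obs)
    return out
```

Mapping the tail through a separate GPD fit, rather than through the gamma bulk fit, is what keeps heavy-rainfall extremes from being systematically underestimated.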
Zhang, Lanqiang; Guo, Youming; Rao, Changhui
2017-02-20
Multi-conjugate adaptive optics (MCAO) is the most promising technique currently developed to enlarge the corrected field of view of adaptive optics for astronomy. In this paper, we propose a new configuration of solar MCAO based on high-order ground-layer adaptive optics and low-order high-altitude correction, which results in a homogeneous correction effect over the whole field of view. An individual high-order multiple-direction Shack-Hartmann wavefront sensor is employed in the configuration to detect the ground-layer turbulence for low-altitude correction. The other, low-order multiple-direction Shack-Hartmann wavefront sensor supplies the wavefront information caused by the high layers' turbulence through atmospheric tomography for high-altitude correction. Simulation results based on the system design for the 1-meter New Vacuum Solar Telescope show that the correction uniformity of the new scheme is clearly improved compared to the conventional solar MCAO configuration.
Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian
2018-05-01
We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57-1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
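The shaping in PAS draws the ASK amplitude levels from a Maxwell-Boltzmann distribution, trading a little amplitude entropy (rate) for lower mean symbol energy. A small sketch of that distribution follows; the shaping parameter value is illustrative, not one from the paper:

```python
import numpy as np

def maxwell_boltzmann(amplitudes, nu):
    """Maxwell-Boltzmann distribution P(a) proportional to exp(-nu*a^2)
    over the ASK amplitude levels, the family used for probabilistic
    amplitude shaping; nu = 0 recovers uniform signaling."""
    w = np.exp(-nu * np.asarray(amplitudes, dtype=float) ** 2)
    return w / w.sum()

amps = np.array([1, 3, 5, 7])                  # 8-ASK amplitude magnitudes
p = maxwell_boltzmann(amps, nu=0.02)           # illustrative shaping strength
mean_energy = float(np.sum(p * amps**2))       # below the uniform value of 21
amp_entropy = float(-np.sum(p * np.log2(p)))   # below 2 bits: rate traded away
```

Sweeping `nu` traces the rate/energy trade-off that, combined with a fixed systematic code such as the staircase code here, lets one error correction code serve a wide range of spectral efficiencies.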
Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.
Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu
2017-03-15
The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively promote the efficiency of the system and reduce the potential security risk when switching between the transmission and calibration modes. Here, we proposed a single-photon-level continuously working PBTS using only sifted key bits revealed during an error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system in a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% and a standard deviation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.
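The QBER estimate that drives such a tracking scheme can be computed directly from the sifted bits revealed during error correction. A minimal sketch (not the published controller) with a per-basis breakdown, since a rise in one basis's error rate is what signals polarization drift:

```python
def basis_qber(alice, bob, bases):
    """Per-basis quantum bit error rate from revealed sifted key bits:
    the mismatch fraction within each measurement basis. A rise in one
    basis's QBER indicates polarization drift and can drive the
    basis-alignment feedback (a sketch of the idea only)."""
    out = {}
    for basis in set(bases):
        idx = [i for i, b in enumerate(bases) if b == basis]
        errors = sum(alice[i] != bob[i] for i in idx)
        out[basis] = errors / len(idx)
    return out
```

Because the bits used here are disclosed during error correction anyway, the estimate costs no extra key material and no interruption of the quantum channel, which is the point of the scheme.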
Pre-correction of distorted Bessel-Gauss beams without wavefront detection
NASA Astrophysics Data System (ADS)
Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing
2017-12-01
By exploiting the rapid phase retrieval of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme that corrects, with good performance, Bessel-Gauss beams distorted by inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is needed; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is computed by feeding the probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
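A bare-bones Gerchberg-Saxton loop of the kind used here, alternating between the measured source-plane and detector-plane amplitudes, can be sketched as follows. A single 2-D Fourier transform is assumed as the propagation model purely for illustration:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100):
    """Gerchberg-Saxton phase retrieval between two measured amplitude
    planes related, in this sketch, by a single 2-D Fourier transform.
    Returns the retrieved source-plane phase; its conjugate can serve
    as a pre-correction mask for the distorted beam."""
    phase = np.zeros_like(source_amp)
    for _ in range(n_iter):
        far = np.fft.fft2(source_amp * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))  # impose far-field amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                         # impose source amplitude
    return phase
```

Each iteration only costs two FFTs, which is the "rapid solution" property the abstract relies on for computing the correction mask without a wavefront sensor.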
CoFFEE: Corrections For Formation Energy and Eigenvalues for charged defect simulations
NASA Astrophysics Data System (ADS)
Naik, Mit H.; Jain, Manish
2018-05-01
Charged point defects in materials are widely studied using Density Functional Theory (DFT) packages with periodic boundary conditions. The formation energy and defect level computed from these simulations need to be corrected to remove the contributions from the spurious long-range interaction between the defect and its periodic images. To this effect, the CoFFEE code implements the Freysoldt-Neugebauer-Van de Walle (FNV) correction scheme. The corrections can be applied to charged defects in a complete range of material shapes and size: bulk, slab (or two-dimensional), wires and nanoribbons. The code is written in Python and features MPI parallelization and optimizations using the Cython package for slow steps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wnek, W.J.; Ramshaw, J.D.; Trapp, J.A.
1975-11-01
A mathematical model and a numerical solution scheme for thermal-hydraulic analysis of fuel rod arrays are given. The model alleviates the two major deficiencies of existing rod array analysis models, namely the lack of a correct transverse momentum equation and the inability to handle reversing and circulatory flows. Possible applications of the model include steady state and transient subchannel calculations as well as analysis of flows in heat exchangers, other engineering equipment, and porous media. (auth)
Analysis of drift correction in different simulated weighing schemes
NASA Astrophysics Data System (ADS)
Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.
2015-10-01
In the calibration of high accuracy mass standards, some weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show a better efficacy of the ABABAB method for drift with smooth variation and small randomness.
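The drift-cancelling property of these weighing series can be illustrated numerically. The combinations below are standard textbook forms (the paper's exact estimators may differ); both cancel a linear zero drift exactly, and their behaviour under nonlinear or random drift is what simulations of this kind compare:

```python
import numpy as np

def simulate(schedule, drift, sigma, delta, rng):
    """Simulated comparator readings for a weighing schedule such as
    'ABBA': object A exceeds reference B by `delta`, drift(t) is the
    zero drift at reading time t, and sigma is the reading noise s.d."""
    return np.array([(delta if c == 'A' else 0.0) + drift(t) + rng.normal(0.0, sigma)
                     for t, c in enumerate(schedule)])

def abba_estimate(v):
    """ABBA double substitution; cancels any linear drift exactly."""
    return (v[0] - v[1] - v[2] + v[3]) / 2.0

def ababab_estimate(v):
    """ABABAB series evaluated as ABA triples (A + A')/2 - B, each of
    which also cancels linear drift (the final B reading is unused in
    this simple combination)."""
    return np.mean([(v[0] + v[2]) / 2.0 - v[1],
                    (v[2] + v[4]) / 2.0 - v[3]])
```

Replacing the linear `drift` with a curved or randomized function and repeating the simulation many times reproduces the kind of comparison reported above.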
Error-correcting pairs for a public-key cryptosystem
NASA Astrophysics Data System (ADS)
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-06-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based, multivariate and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.
Combining states without scale hierarchies with ordered parton showers
Fischer, Nadine; Prestel, Stefan
2017-09-12
Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.
Nagy-Soper Subtraction: a Review
NASA Astrophysics Data System (ADS)
Robens, Tania
2013-07-01
We review an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters, and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e- → 3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.
Atmospheric parameterization schemes for satellite cloud property retrieval during FIRE IFO 2
NASA Technical Reports Server (NTRS)
Titlow, James; Baum, Bryan A.
1993-01-01
Satellite cloud retrieval algorithms generally require atmospheric temperature and humidity profiles to determine such cloud properties as pressure and height. For instance, the CO2 slicing technique called the ratio method requires the calculation of theoretical upwelling radiances both at the surface and a prescribed number (40) of atmospheric levels. This technique has been applied to data from, for example, the High Resolution Infrared Radiometer Sounder (HIRS/2, henceforth HIRS) flown aboard the NOAA series of polar orbiting satellites and the High Resolution Interferometer Sounder (HIS). In this particular study, four NOAA-11 HIRS channels in the 15-micron region are used. The ratio method may be applied to various combinations of these channels to estimate cloud top heights. Presently, the multispectral, multiresolution (MSMR) scheme uses four HIRS channel-combination estimates for mid- to high-level cloud pressure retrieval and Advanced Very High Resolution Radiometer (AVHRR) data for low-level (> 700 mb) cloud level retrieval. In order to determine theoretical upwelling radiances, atmospheric temperature and water vapor profiles must be provided as well as profiles of other radiatively important gas absorber constituents such as CO2, O3, and CH4. The assumed temperature and humidity profiles have a large effect on transmittance and radiance profiles, which in turn are used with HIRS data to calculate cloud pressure, and thus cloud height and temperature. For large spatial scale satellite data analysis, atmospheric parameterization schemes for cloud retrieval algorithms are usually based on a gridded product such as that provided by the European Center for Medium Range Weather Forecasting (ECMWF) or the National Meteorological Center (NMC). These global, gridded products prescribe temperature and humidity profiles for a limited number of pressure levels (up to 14) in a vertical atmospheric column.
The FIRE IFO 2 experiment provides an opportunity to investigate current atmospheric profile parameterization schemes, compare satellite cloud height results using both gridded products (ECMWF) and high vertical resolution sonde data from the National Weather Service (NWS) and Cross Chain Loran Atmospheric Sounding System (CLASS), and suggest modifications in atmospheric parameterization schemes based on these results.
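Schematically, the ratio method reduces to matching an observed two-channel cloud-signal ratio against theoretical ratios computed at candidate pressure levels. A sketch follows; the variable names are illustrative, and the theoretical signals would come from the radiative transfer calculation described above:

```python
import numpy as np

def co2_slicing_pressure(meas1, clear1, meas2, clear2, theo1, theo2, levels):
    """CO2-slicing ratio method, schematically: the observed ratio of
    cloud signals (measured minus clear-sky radiance) in two 15-micron
    channels is matched against theoretical ratios precomputed at each
    candidate cloud pressure level. theo1[k]/theo2[k] are theoretical
    cloud signals at pressure levels[k]; all names are illustrative."""
    obs_ratio = (meas1 - clear1) / (meas2 - clear2)
    theo_ratio = np.asarray(theo1, float) / np.asarray(theo2, float)
    best = int(np.argmin(np.abs(theo_ratio - obs_ratio)))
    return levels[best]
```

Because the clear-sky and theoretical radiances depend on the assumed temperature and humidity profiles, the retrieved cloud pressure inherits the sensitivity to the atmospheric parameterization that this study examines.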
A High-Resolution Godunov Method for Compressible Multi-Material Flow on Overlapping Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banks, J W; Schwendeman, D W; Kapila, A K
2006-02-13
A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.
NASA Technical Reports Server (NTRS)
Chan, David T.; Milholen, William E., II; Jones, Gregory S.; Goodliff, Scott L.
2014-01-01
A second wind tunnel test of the FAST-MAC circulation control semi-span model was recently completed in the National Transonic Facility at the NASA Langley Research Center. The model allowed independent control of four circulation control plenums producing a high momentum jet from a blowing slot near the wing trailing edge that was directed over a 15% chord simple-hinged flap. The model was configured for transonic testing of the cruise configuration with 0deg flap deflection to determine the potential for drag reduction with the circulation control blowing. Encouraging results from analysis of wing surface pressures suggested that the circulation control blowing was effective in reducing the transonic drag on the configuration; however, this could not be quantified until the thrust generated by the blowing slot was correctly removed from the force and moment balance data. This paper will present the thrust removal methodology used for the FAST-MAC circulation control model and describe the experimental measurements and techniques used to develop the methodology. A discussion on the impact to the force and moment data as a result of removing the thrust from the blowing slot will also be presented for the cruise configuration, where at some Mach and Reynolds number conditions, the thrust-removed corrected data showed that a drag reduction was realized as a consequence of the blowing.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast.
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
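The low-dimensional online correction described above amounts to two small steps: estimate a constant tendency error from the time-mean analysis increments, then add it as a forcing term. A schematic sketch, where the array shapes and the fixed 6-hr window are assumptions of this illustration:

```python
import numpy as np

def estimate_bias(increments, window_hours=6.0):
    """Time-mean analysis increment divided by the assimilation window,
    interpreted as a constant model tendency error. Assumes linear
    short-range error growth and unbiased observations, as in the text."""
    return np.mean(increments, axis=0) / window_hours

def corrected_tendency(model_tendency, bias):
    """Online correction: the estimated bias is added as a forcing term
    to the model tendency equation."""
    return model_tendency + bias
```

Diurnal and semidiurnal components would be handled the same way by stratifying the increments by time of day before averaging, keeping the correction low-dimensional.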
Comparison of the AUSM(+) and H-CUSP Schemes for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Liou, Meng-Sing
2003-01-01
Many turbomachinery CFD codes use second-order central-difference (C-D) schemes with artificial viscosity to control point decoupling and to capture shocks. While C-D schemes generally give accurate results, they can also exhibit minor numerical problems including overshoots at shocks and at the edges of viscous layers, and smearing of shocks and other flow features. In an effort to improve predictive capability for turbomachinery problems, two C-D codes developed by Chima, RVCQ3D and Swift, were modified by the addition of two upwind schemes: the AUSM+ scheme developed by Liou, et al., and the H-CUSP scheme developed by Tatsumi, et al. Details of the C-D scheme and the two upwind schemes are described, and results of three test cases are shown. Results for a 2-D transonic turbine vane showed that the upwind schemes eliminated viscous layer overshoots. Results for a 3-D turbine vane showed that the upwind schemes gave improved predictions of exit flow angles and losses, although the H-CUSP scheme predicted slightly higher losses than the other schemes. Results for a 3-D supersonic compressor (NASA rotor 37) showed that the AUSM+ scheme predicted exit distributions of total pressure and temperature that are not generally captured by C-D codes. All schemes showed similar convergence rates, but the upwind schemes required considerably more CPU time per iteration.
A soft damping function for dispersion corrections with less overfitting
NASA Astrophysics Data System (ADS)
Ucak, Umit V.; Ji, Hyunjun; Singh, Yashpal; Jung, Yousung
2016-11-01
The use of damping functions in empirical dispersion correction schemes is common and widespread. These damping functions contain scaling and damping parameters, and they are usually optimized for the best performance on practical systems. In this study, it is shown that the overfitting problem can be present in current damping functions, which can sometimes yield erroneous results for real applications beyond the scope of the training sets. To this end, we present a damping function called linear soft damping (lsd) that suffers less from this overfitting. This linear damping function damps the asymptotic curve more softly than existing damping functions, attempting to minimize the usual overcorrection. The performance of the proposed damping function was tested on benchmark sets for thermochemistry, reaction energies, and intramolecular interactions, as well as intermolecular interactions including nonequilibrium geometries. For noncovalent interactions, all three damping schemes considered in this study (lsd, lg, and BJ) perform roughly comparably (approximately within 1 kcal/mol), but for atomization energies, lsd clearly exhibits better performance (by up to 2-6 kcal/mol) compared to the other schemes, due to overfitting in lg and BJ. The number of unphysical parameters resulting from global optimization also corroborates the overfitting symptoms seen in the numerical tests.
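The lsd functional form is not given in this abstract, so as a point of comparison, here is a sketch of the BJ (rational) damping that the study benchmarks against, applied to a single pairwise -C6/R^6 term. The a1 and a2 values below are illustrative placeholders, not fitted parameters:

```python
def bj_damped_c6_term(R, C6, R0, a1=0.4, a2=4.8):
    """Pairwise dispersion term with Becke-Johnson rational damping:
    E = -C6 / (R^6 + (a1*R0 + a2)^6). The term tends to the bare
    -C6/R^6 at large R and stays finite as R -> 0.
    a1, a2 here are illustrative, not fitted values."""
    f = a1 * R0 + a2
    return -C6 / (R ** 6 + f ** 6)
```

The overfitting concern raised in the abstract enters through how such damping parameters are fitted: parameters tuned tightly to a training set can misbehave on systems outside it.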
Radiation reaction effect on laser driven auto-resonant particle acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagar, Vikram; Sengupta, Sudip; Kaw, P. K.
2015-12-15
The effects of the radiation reaction force on the laser-driven auto-resonant particle acceleration scheme are studied using the Landau-Lifshitz equation of motion. These studies are carried out for both linearly and circularly polarized laser fields in the presence of a static axial magnetic field. From the parametric study, a radiation-reaction-dominated region has been identified in which the particle dynamics is greatly affected by this force. In the radiation-reaction-dominated region, two significant effects on the particle dynamics are seen: (1) saturation in the energy gain of an initially resonant particle and (2) net energy gain by an initially non-resonant particle, caused by resonance broadening. It has further been shown that, with the relaxation of the resonance condition and an optimum choice of parameters, this scheme may become competitive with other present-day laser-driven particle acceleration schemes. Quantum corrections to the Landau-Lifshitz equation of motion have also been taken into account. The difference in the energy-gain estimates of the particle between the quantum-corrected and classical Landau-Lifshitz equations is found to be insignificant for present-day as well as upcoming laser facilities.
Wang, Jinke; Guo, Haoyan
2016-01-01
This paper presents a fully automatic framework for lung segmentation in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. First, the chest skin boundary is extracted through image aligning, morphology operations, and connected-region analysis. Second, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum-cost-path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm³, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity show that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.
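The paper's concave-based border correction is not specified in this abstract; a morphological closing on a synthetic mask gives the flavor of re-including a juxta-pleural notch that naive thresholding would exclude. This is a pure-NumPy stand-in for the idea, not the authors' algorithm:

```python
import numpy as np

# Synthetic 50x50 "lung" mask: a disk with a boundary bite that mimics
# a juxta-pleural nodule excluded by naive thresholding.
yy, xx = np.mgrid[0:50, 0:50]
lung = (yy - 25) ** 2 + (xx - 25) ** 2 <= 15 ** 2
notch = (yy - 25) ** 2 + (xx - 40) ** 2 <= 4 ** 2
mask = lung & ~notch

# Disk structuring element (radius 6, larger than the notch).
offsets = [(dr, dc) for dr in range(-6, 7) for dc in range(-6, 7)
           if dr * dr + dc * dc <= 36]

def dilate(m):
    out = np.zeros_like(m)
    for dr, dc in offsets:
        out |= np.roll(np.roll(m, dr, axis=0), dc, axis=1)
    return out

def erode(m):
    out = np.ones_like(m)
    for dr, dc in offsets:
        out &= np.roll(np.roll(m, dr, axis=0), dc, axis=1)
    return out

# Morphological closing (dilation then erosion) fills the boundary
# concavity, re-including the nodule region. The mask is kept well away
# from the image border so np.roll's periodic wrap has no effect.
closed = erode(dilate(mask))
```

A structuring element larger than the notch is the key design choice: concavities narrower than the element are filled, while the overall lung border is preserved.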
An upwind multigrid method for solving viscous flows on unstructured triangular meshes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl Lawrence
1993-01-01
A multigrid algorithm is combined with an upwind scheme for solving the two-dimensional Reynolds-averaged Navier-Stokes equations on triangular meshes, resulting in an efficient, accurate code for solving complex flows around multiple bodies. The relaxation scheme uses a backward-Euler time difference and relaxes the resulting linear system using a red-black procedure. Roe's flux-splitting scheme is used to discretize the convective and pressure terms, while a central difference is used for the diffusive terms. The multigrid scheme is demonstrated for several flows around single- and multi-element airfoils, including inviscid, laminar, and turbulent flows. The results show an appreciable speed-up of the scheme for inviscid and laminar flows, and dramatic increases in efficiency for turbulent cases, especially on increasingly refined grids.
Bohm, Tim D; Griffin, Sheridan L; DeLuca, Paul M; DeWerd, Larry A
2005-04-01
The determination of the air kerma strength of a brachytherapy seed is necessary for effective treatment planning. Well ionization chambers are used on site at therapy clinics to determine the air kerma strength of seeds. In this work, the response of the Standard Imaging HDR 1000 Plus well chamber to ambient pressure is examined using Monte Carlo calculations. The experimental work examining the response of this chamber as well as other chambers is presented in a companion paper. The Monte Carlo results show that for low-energy photon sources, applying the standard temperature-pressure (PTP) correction factor produces an over-response at the reduced air densities/pressures corresponding to high elevations. With photon sources of 20 to 40 keV, the normalized PTP-corrected chamber response is as much as 10% to 20% over unity for air densities/pressures corresponding to an elevation of 3048 m (10000 ft) above sea level. At air densities corresponding to an elevation of 1524 m (5000 ft), the normalized PTP-corrected chamber response is 5% to 10% over unity for these photon sources. With higher-energy photon sources (>100 keV), the normalized PTP-corrected chamber response is near unity. For low-energy beta sources of 0.25 to 0.50 MeV, the normalized PTP-corrected chamber response is as much as 4% to 12% over unity for air densities/pressures corresponding to an elevation of 3048 m (10000 ft) above sea level. Higher-energy beta sources (>0.75 MeV) have a normalized PTP-corrected chamber response near unity. Comparing calculated and measured chamber responses for common 103Pd- and 125I-based brachytherapy seeds shows agreement to within 2.7% and 1.9%, respectively. Comparing MCNP-calculated chamber responses with EGSnrc-calculated chamber responses shows agreement to within 3.1% at photon energies of 20 to 40 keV. We conclude that Monte Carlo transport calculations accurately model the response of this well chamber.
Further, applying the standard PTP correction factor for this well chamber does not adequately account for the change in chamber response with air pressure for low-energy (<100 keV) photon and low-energy (<0.75 MeV) beta sources.
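The standard PTP factor itself is simple to state. A sketch, assuming the common reference conditions of 22 °C and 101.33 kPa:

```python
def ptp_factor(T_celsius, P_kPa, T_ref=22.0, P_ref=101.33):
    """Standard temperature-pressure correction factor for a vented
    ionization chamber: scales the reading to reference air density,
    assuming the cavity response is proportional to air density."""
    return ((273.2 + T_celsius) / (273.2 + T_ref)) * (P_ref / P_kPa)
```

At roughly 3048 m elevation (about 69.7 kPa), this factor is about 1.45; the paper's point is that for low-energy photon and beta sources the chamber response does not scale linearly with air density, so applying this factor alone over-corrects by the 10% to 20% quoted above.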
A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.
2018-06-01
A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Compared with high-order finite-volume gas-kinetic methods, the current scheme is more compact and efficient because it avoids wide stencils on unstructured meshes. Unlike the traditional CPR method, where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.
Lee, Tian-Fu
2014-12-01
Telecare medicine information systems provide a communication platform for accessing remote medical resources through public networks, and help health care workers and medical personnel to rapidly make correct clinical decisions and treatments. An authentication scheme for data exchange in telecare medicine information systems enables legal users in hospitals and medical institutes to establish a secure channel and exchange electronic medical records or electronic health records securely and efficiently. This investigation develops an efficient and secure verifier-based three-party authentication scheme using extended chaotic maps for data exchange in telecare medicine information systems. The proposed scheme does not require the server's public keys and avoids the time-consuming modular exponentiations and elliptic-curve scalar multiplications used in previous related approaches. Additionally, the proposed scheme is proven secure in the random oracle model, and realizes the lower bounds on messages and rounds in communications. Compared to related verifier-based approaches, the proposed scheme not only possesses higher security, but also has lower computational cost and fewer transmissions.
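Extended chaotic-map schemes of this kind typically rest on the semigroup property of Chebyshev polynomials. A minimal numerical illustration of that property (not the paper's protocol) follows:

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) on [-1, 1], computed via the
    trigonometric identity T_n(cos t) = cos(n t)."""
    return math.cos(n * math.acos(x))

# Semigroup property underpinning chaotic-map key agreement:
#   T_r(T_s(x)) = T_{r*s}(x) = T_s(T_r(x)),
# so two parties who exchange T_r(x) and T_s(x) can each derive the
# shared value T_{r*s}(x) without revealing r or s.
shared_a = chebyshev(7, chebyshev(11, 0.3))
shared_b = chebyshev(11, chebyshev(7, 0.3))
```

This commutativity is what replaces the modular exponentiations and elliptic-curve scalar multiplications mentioned in the abstract; the security of real schemes rests on hardness assumptions for the extended maps, which this toy does not capture.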
Islam, S K Hafizul; Khan, Muhammad Khurram; Li, Xiong
2015-01-01
Over the past few years, secure and privacy-preserving user authentication schemes have become an integral part of healthcare-system applications. Recently, Wen designed an improved user authentication system over the Lee et al. scheme for the integrated electronic patient record (EPR) information system, which is analyzed in this study. We have found that Wen's scheme still has the following weaknesses: (1) the correctness of the identity and password is not verified during the login and password-change phases; (2) it is vulnerable to impersonation attack and privileged-insider attack; (3) it is designed without revocation of lost/stolen smart cards; (4) the explicit key confirmation and no-key-control properties are absent; and (5) the user cannot update his/her password without the help of the server and a secure channel. We therefore propose an enhanced two-factor user authentication system based on the intractability of the quadratic residue problem (QRP) in the multiplicative group. Our scheme offers more security features and functionality than other schemes found in the literature.
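The QRP assumption can be illustrated with a toy Rabin-style example: squaring modulo n = p*q is easy, while extracting square roots without knowing p and q is believed to be as hard as factoring n. The primes below are tiny and purely illustrative, and this is not the paper's scheme:

```python
# Both primes are congruent to 3 (mod 4), so square roots have a
# closed form modulo each prime.
p, q = 499, 547
n = p * q

def sqrt_mod_prime(y, pr):
    """Square root of a quadratic residue y modulo a prime
    pr = 3 (mod 4): y^((pr+1)/4) mod pr."""
    return pow(y, (pr + 1) // 4, pr)

x = 12345
y = pow(x, 2, n)              # easy direction: compute the residue

# With the factorization, roots modulo p and q are easy; without it,
# recovering x from y is the hard problem the scheme relies on.
rp = sqrt_mod_prime(y % p, p)
rq = sqrt_mod_prime(y % q, q)
```

A real deployment would use primes of hundreds of digits and combine the per-prime roots via the Chinese remainder theorem.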
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturm, C.; Soni, A.; Aoki, Y.
2009-07-01
We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS scheme and can be used to convert results obtained in lattice calculations into the MS scheme. Such a symmetric subtraction point involves nonexceptional momenta, implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared-improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operators, suggesting a much better-behaved perturbative series. It should therefore allow us to reduce the error in the determination of the quark mass appreciably.
A computational framework for automation of point defect calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei
2017-01-13
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. The framework provides an effective and efficient method for defect structure generation and the creation of simple yet customizable workflows to analyze defect calculations. The package can compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band-filling correction for shallow defects. Using Si, ZnO, and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
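As a flavor of the image-charge correction such packages compute, here is the leading-order (monopole) Makov-Payne term; the exact scheme implemented by the framework may differ, and the function name and unit conventions below are our own:

```python
def makov_payne_leading(q, L_angstrom, eps, madelung=2.8373):
    """Leading-order (monopole) Makov-Payne image-charge correction,
    E = q^2 * alpha_M / (2 * eps * L), converted to eV with L in
    angstroms via e^2/(4*pi*eps0) = 14.3997 eV*angstrom.
    The Madelung constant 2.8373 applies to a simple-cubic supercell;
    q is the defect charge in units of e, eps the static dielectric
    constant, L the supercell lattice parameter."""
    COULOMB_EV_ANGSTROM = 14.3997
    return q ** 2 * madelung * COULOMB_EV_ANGSTROM / (2.0 * eps * L_angstrom)
```

The leading term scales as q^2/L, so doubling the supercell size halves it while doubling the defect charge quadruples it, which is why charged-defect energies converge slowly with cell size and need such corrections.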
High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems
NASA Astrophysics Data System (ADS)
Bose, S. K.
1980-12-01
Demand-assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low-traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple-point communication system carrying fixed-length messages between geographically distributed (ground) user terminals linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand-assigned mode. Schemes B and C operate all their forward message channels in a demand-assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.
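The behavior of a collision-type request channel can be illustrated with the classical slotted-ALOHA throughput curve; this is a textbook illustration of the random access component, not the paper's delay analysis:

```python
import math

def slotted_aloha_throughput(G):
    """Expected successful requests per slot on a collision-type random
    access channel with Poisson offered load G (attempts per slot):
    S = G * exp(-G), maximized at G = 1 with S = 1/e (about 0.368)."""
    return G * math.exp(-G)
```

The low peak throughput of pure random access is the motivation for the hybrid schemes above: random access keeps delay low when traffic is light, while demand assignment keeps capacity high when traffic is heavy.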
NASA Astrophysics Data System (ADS)
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2017-12-01
We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which introduces adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A measurement correction method is then proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.
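A toy version of the GA-optimized neural-network correction might look like the following. It fits a tiny one-hidden-layer network to synthetic data with a minimal elitist genetic algorithm; the data, network size, and GA settings are all illustrative stand-ins, not the authors' tuned model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "force correction" task: recover a smooth correction curve
# from noisy readings (a stand-in for the calibration-test data).
x = np.linspace(-1, 1, 64)[:, None]
y = 0.8 * x + 0.3 * np.sin(3 * x) + 0.02 * rng.standard_normal(x.shape)

H = 8  # hidden units; network is x -> tanh -> linear

def unpack(w):
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1)
    b2 = w[3 * H]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(x @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Minimal elitist GA over the flattened weight vector: keep the fittest,
# breed children by arithmetic crossover, perturb with Gaussian mutation.
n_params = 3 * H + 1
pop = rng.standard_normal((40, n_params))
initial_best = min(mse(w) for w in pop)
for gen in range(60):
    scores = np.array([mse(w) for w in pop])
    elite = pop[np.argsort(scores)[:10]]                    # keep 10 fittest
    parents = elite[rng.integers(0, 10, size=(30, 2))]      # random pairs
    children = parents.mean(axis=1)                         # crossover
    children += 0.1 * rng.standard_normal(children.shape)   # mutation
    pop = np.vstack([elite, children])
final_best = min(mse(w) for w in pop)
```

Because the elite individuals are carried over unchanged each generation, the best fitness can only improve, which is the property asserted below.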