NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.
1976-01-01
An uncoupled time asymptotic alternating direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Korte, John J.
1991-01-01
An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives are obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms which require adjustment by the user. In addition, a cubic equation is derived for determining Vegneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms. Computational times required for the upwind PNS code are approximately equal to an explicit PNS MacCormack's code and existing implicit PNS solvers.
Performance of differenced range data types in Voyager navigation
NASA Technical Reports Server (NTRS)
Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.
1982-01-01
Voyager radio navigation made use of a differenced rage data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter-to-Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.
Performance of differenced range data types in Voyager navigation
NASA Technical Reports Server (NTRS)
Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.
1982-01-01
Voyager radio navigation made use of differenced range data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter to Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution-explicit-multi-block numerical algorithm, suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm has employed a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interface. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting, and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration has been investigated for solving the compressible inviscid/viscous flow problems--two-state predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and multigrid procedure were added to the explicit time stepping scheme to accelerate convergence to steady-state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C degree continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of code, and assessment of performance, as well as demonstration of flexibility.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
Solving the Sea-Level Equation in an Explicit Time Differencing Scheme
NASA Astrophysics Data System (ADS)
Klemann, V.; Hagedoorn, J. M.; Thomas, M.
2016-12-01
In preparation of coupling the solid-earth to an ice-sheet compartment in an earth-system model, the dependency of initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoleastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question if the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit:Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative reesults on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.365-246.X.2005.02553.x
NASA Astrophysics Data System (ADS)
Wang, Xiaoqiang; Ju, Lili; Du, Qiang
2016-07-01
The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
Upwind differencing and LU factorization for chemical non-equilibrium Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun
1992-01-01
By means of either the Roe or the Van Leer flux-splittings for inviscid terms, in conjunction with central differencing for viscous terms in the explicit operator and the Steger-Warming splitting and lower-upper approximate factorization for the implicit operator, the present, robust upwind method for solving the chemical nonequilibrium Navier-Stokes equations yields formulas for finite-volume discretization in general coordinates. Numerical tests in the illustrative cases of a hypersonic blunt body, a ramped duct, divergent nozzle flows, and shock wave/boundary layer interactions, establish the method's efficiency.
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2013-01-01
A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code s ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central-differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailley s 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition the following studies were performed: a comparison of 4th-, 8th-, 12th- and DRP spatial differencing schemes, the effect of the solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES
Börgers, Christoph; Nectow, Alexander R.
2013-01-01
Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHugh, P.R.; Ramshaw, J.D.
MAGMA is a FORTRAN computer code designed to viscous flow in in situ vitrification melt pools. It models three-dimensional, incompressible, viscous flow and heat transfer. The momentum equation is coupled to the temperature field through the buoyancy force terms arising from the Boussinesq approximation. All fluid properties, except density, are assumed variable. Density is assumed constant except in the buoyancy force terms in the momentum equation. A simple melting model based on the enthalpy method allows the study of the melt front progression and latent heat effects. An indirect addressing scheme used in the numerical solution of the momentum equationmore » voids unnecessary calculations in cells devoid of liquid. Two-dimensional calculations can be performed using either rectangular or cylindrical coordinates, while three-dimensional calculations use rectangular coordinates. All derivatives are approximated by finite differences. The incompressible Navier-Stokes equations are solved using a new fully implicit iterative technique, while the energy equation is differenced explicitly in time. Spatial derivatives are written in conservative form using a uniform, rectangular, staggered mesh based on the marker and cell placement of variables. Convective terms are differenced using a weighted average of centered and donor cell differencing to ensure numerical stability. Complete descriptions of MAGMA governing equations, numerics, code structure, and code verification are provided. 14 refs.« less
The CFL condition for spectral approximations to hyperbolic initial-boundary value problems
NASA Technical Reports Server (NTRS)
Gottlieb, David; Tadmor, Eitan
1991-01-01
The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients are studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step, delta t, is restricted by the CFL-like condition, delta t less than Const. N(exp-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L(exp 2)-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting for its own sake. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.
The CFL condition for spectral approximations to hyperbolic initial-boundary value problems
NASA Technical Reports Server (NTRS)
Gottlieb, David; Tadmor, Eitan
1990-01-01
The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients are studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step, delta t, is restricted by the CFL-like condition, delta t less than Const. N(exp-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L(exp 2)-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting for its own sake. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.
Three-dimensional time dependent computation of turbulent flow
NASA Technical Reports Server (NTRS)
Kwak, D.; Reynolds, W. C.; Ferziger, J. H.
1975-01-01
The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to governing equations to define the large scale field. This gives rise to additional second order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth order differencing scheme in space and a second order Adams-Bashforth predictor for explicit time stepping. The results are compared to the experiments and statistical information extracted from the computer generated data.
Multigrid for hypersonic viscous two- and three-dimensional flows
NASA Technical Reports Server (NTRS)
Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.
1991-01-01
The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.
A hybridized method for computing high-Reynolds-number hypersonic flow about blunt bodies
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Hamilton, H. H., II
1979-01-01
A hybridized method for computing the flow about blunt bodies is presented. In this method the flow field is split into its viscid and inviscid parts. The forebody flow field about a parabolic body is computed. For the viscous solution, the Navier-Stokes equations are solved on orthogonal parabolic coordinates using explicit finite differencing. The inviscid flow is determined by using a Moretti type scheme in which the Euler equations are solved, using explicit finite differences, on a nonorthogonal coordinate system which uses the bow shock as an outer boundary. The two solutions are coupled along a common data line and are marched together in time until a converged solution is obtained. Computed results, when compared with experimental and analytical results, indicate the method works well over a wide range of Reynolds numbers and Mach numbers.
Three-dimensional control of crystal growth using magnetic fields
NASA Astrophysics Data System (ADS)
Dulikravich, George S.; Ahuja, Vineet; Lee, Seungsoo
1993-07-01
Two coupled systems of partial differential equations governing three-dimensional laminar viscous flow undergoing solidification or melting under the influence of arbitrarily oriented externally applied magnetic fields have been formulated. The model accounts for arbitrary temperature dependence of physical properties including latent heat release, effects of Joule heating, magnetic field forces, and mushy region existence. On the basis of this model a numerical algorithm has been developed and implemented using central differencing on a curvilinear boundary-conforming grid and Runge-Kutta explicit time-stepping. The numerical results clearly demonstrate possibilities for active and practically instantaneous control of melt/solid interface shape, the solidification/melting front propagation speed, and the amount and location of solid accrued.
NASA Astrophysics Data System (ADS)
Lu, Tiao; Cai, Wei
2008-10-01
In this paper, we propose a high order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and supperlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank Nicholson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.
A multiblock multigrid three-dimensional Euler equation solver
NASA Technical Reports Server (NTRS)
Cannizzaro, Frank E.; Elmiligui, Alaa; Melson, N. Duane; Vonlavante, E.
1990-01-01
Current aerodynamic designs are often quite complex (geometrically). Flexible computational tools are needed for the analysis of a wide range of configurations with both internal and external flows. In the past, geometrically dissimilar configurations required different analysis codes with different grid topologies in each. The duplicity of codes can be avoided with the use of a general multiblock formulation which can handle any grid topology. Rather than hard wiring the grid topology into the program, it is instead dictated by input to the program. In this work, the compressible Euler equations, written in a body-fitted finite-volume formulation, are solved using a pseudo-time-marching approach. Two upwind methods (van Leer's flux-vector-splitting and Roe's flux-differencing) were investigated. Two types of explicit solvers (a two-step predictor-corrector and a modified multistage Runge-Kutta) were used with multigrid acceleration to enhance convergence. A multiblock strategy is used to allow greater geometric flexibility. A report on simple explicit upwind schemes for solving compressible flows is included.
Orbit determination performances using single- and double-differenced methods: SAC-C and KOMPSAT-2
NASA Astrophysics Data System (ADS)
Hwang, Yoola; Lee, Byoung-Sun; Kim, Haedong; Kim, Jaehoon
2011-01-01
In this paper, Global Positioning System-based (GPS) Orbit Determination (OD) for the KOrea-Multi-Purpose-SATellite (KOMPSAT)-2 using single- and double-differenced methods is studied. The requirement of KOMPSAT-2 orbit accuracy is to allow 1 m positioning error to generate 1-m panchromatic images. KOMPSAT-2 OD is computed using real on-board GPS data. However, the local time of the KOMPSAT-2 GPS receiver is not synchronized with the zero fractional seconds of the GPS time internally, and it continuously drifts according to the pseudorange epochs. In order to resolve this problem, an OD based on single-differenced GPS data from the KOMPSAT-2 uses the tagged time of the GPS receiver, and the accuracy of the OD result is assessed using the overlapping orbit solution between two adjacent days. The clock error of the GPS satellites in the KOMPSAT-2 single-differenced method is corrected using International GNSS Service (IGS) clock information at 5-min intervals. KOMPSAT-2 OD using both double- and single-differenced methods satisfies the requirement of 1-m accuracy in overlapping three dimensional orbit solutions. The results of the SAC-C OD compared with JPL’s POE (Precise Orbit Ephemeris) are also illustrated to demonstrate the implementation of the single- and double-differenced methods using a satellite that has independent orbit information available for validation.
Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery
Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.
2016-01-01
Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ± 8−9cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to quantitatively monitor surface change over time relative to management activities.
Multistage Schemes with Multigrid for Euler and Navier-Strokes Equations: Components and Analysis
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1997-01-01
A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrids are considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
NASA Astrophysics Data System (ADS)
Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan
2017-04-01
Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study. The methods are seasonal differencing, seasonal standardization and spectral analysis to eliminate the periodic effect on time series stationarity. First, six time series including 4 streamflow series and 2 water temperature series are stationarized. The stochastic term for these series obtained with ARIMA is subsequently modeled. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature although it is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than monthly streamflow. The criteria of the average stochastic term divided by the amplitude of the periodic term obtained for monthly streamflow and monthly water temperature were 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04 respectively. As a result, the periodic term is more dominant than the stochastic term for water temperature in the monthly water temperature series compared to streamflow series.
Choice of implicit and explicit operators for the upwind differencing method
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Vanleer, Bram
1988-01-01
The flux-vector and flux-difference splittings of Steger-Warming, van Leer and Roe are tested in all possible combinations on the implicit and explicit operators that can be distinguished in implicit relaxation methods for the steady Euler and Navier-Stokes equations. The tests include one-dimensional inviscid nozzle flow, and two-dimensional inviscid and viscous shock reflection. Roe's splitting, as anticipated, is found to uniformly yield the most accurate results. On the other hand, an approximate Roe splitting of the implicit operator (the complete Roe splitting is too complicated for practical use) proves to be the least robust with regard to convergence to the steady state. In this respect, the Steger-Warming splitting is the most robust; it leads to convergence when combined with any of the splittings in the explicit operator, although not necessarily in the most efficient way.
Performance Analysis of Several GPS/Galileo Precise Point Positioning Models
Afifi, Akram; El-Rabbany, Ahmed
2015-01-01
This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada’s GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference. PMID:26102495
Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.
Afifi, Akram; El-Rabbany, Ahmed
2015-06-19
This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.
NASA Technical Reports Server (NTRS)
Stewart, R. B.
1972-01-01
Numberical solutions are obtained for the quasi-compressible Navier-Stokes equations governing the time dependent natural convection flow within a horizontal cylinder. The early time flow development and wall heat transfer is obtained after imposing a uniformly cold wall boundary condition on the cylinder. Solutions are also obtained for the case of a time varying cold wall boundary condition. Windware explicit differ-encing is used for the numerical solutions. The viscous truncation error associated with this scheme is controlled so that first order accuracy is maintained in time and space. The results encompass a range of Grashof numbers from 8.34 times 10,000 to 7 times 10 to the 7th power which is within the laminar flow regime for gravitationally driven fluid flows. Experiments within a small scale instrumented horizontal cylinder revealed the time development of the temperature distribution across the boundary layer and also the decay of wall heat transfer with time.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1990-01-01
A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
Digital Data Registration and Differencing Compression System
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1996-01-01
A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
1987-09-01
Eulerian or Lagrangian flow problems, use of real equations of state and transport properties from the Los Alamos National Laboratory SESAME package...permissible problem geometries; time differencing; and spatial discretization, centering, and differ- encing of MACH2. /. I." - Magnetohydrodynamics...R-A & Y7 24 9 5.2 THE IDEAL COORDINATE SYSTEM DTIC TAB 13 24 5.3 THE MATERIAL DERIVATIVE Uannounoed 0 26 Justifloatlo- 6. TIME DIFFERENCING 31 6.1
Second- and third-order upwind difference schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Yang, J. Y.
1984-01-01
Second- and third-order two time-level five-point explicit upwind-difference schemes are described for the numerical solution of hyperbolic systems of conservation laws and applied to the Euler equations of inviscid gas dynamics. Nonliner smoothing techniques are used to make the schemes total variation diminishing. In the method both hyperbolicity and conservation properties of the hyperbolic conservation laws are combined in a very natural way by introducing a normalized Jacobian matrix of the hyperbolic system. Entropy satisfying shock transition operators which are consistent with the upwind differencing are locally introduced when transonic shock transition is detected. Schemes thus constructed are suitable for shockcapturing calculations. The stability and the global order of accuracy of the proposed schemes are examined. Numerical experiments for the inviscid Burgers equation and the compressible Euler equations in one and two space dimensions involving various situations of aerodynamic interest are included and compared.
Trend time-series modeling and forecasting with neural networks.
Qi, Min; Zhang, G Peter
2008-05-01
Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1990-01-01
The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in psuedo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicates that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
NASA Astrophysics Data System (ADS)
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
Lots of ambiguities in un-differenced (UD) model lead to lower calculation efficiency, which isn't appropriate for the high-frequency real-time GNSS clock estimation, like 1 Hz. Mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model for realizing multi-GNSS real-time clock high-frequency updating and a rigorous comparison and analysis on same conditions are performed to achieve the best real-time clock estimation performance taking the efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by multi-GNSS Experiment (MGEX) and Wuhan University, GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) in radial direction is about 1-5 cm for GPS, Beidou MEO and Galileo satellites and about 10 cm for Beidou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize high-efficient real-time satellite absolute clock estimation with no constant clock-bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive accuracy of orbit and clock and effecting the users' actual positioning performance, is introduced to evaluate and analyze the performance of GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis of real-time augmentation message SISRE is about 4-7 cm for GPS, whlile 10 cm for Beidou IGSO/MEO, Galileo and about 30 cm for BeiDou GEO satellites. The real-time positioning results prove that the GPS + BeiDou + Galileo RT-PPP comparing to GPS-only can effectively accelerate convergence time by about 60%, improve the positioning accuracy by about 30% and obtain averaged RMS 4 cm in horizontal and 6 cm in vertical; additionally RT-SPP accuracy in the prototype system can realize positioning accuracy with about averaged RMS 1 m in horizontal and 1.5-2 m in vertical, which are improved by 60% and 70% to SPP based on broadcast ephemeris, respectively.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1992-01-01
A process for x ray registration and differencing results in more efficient compression is discussed. Differencing of registered modeled subject image with a modeled reference image forms a differential image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three dimensional model, which three dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
Non-oscillatory central differencing for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1988-01-01
Many of the recently developed high resolution schemes for hyperbolic conservation laws are based on upwind differencing. The building block for these schemes is the averaging of an appropriate Godunov solver; its time consuming part involves the field-by-field decomposition which is required in order to identify the direction of the wind. Instead, the use of the more robust Lax-Friedrichs (LxF) solver is proposed. The main advantage is simplicity: no Riemann problems are solved and hence field-by-field decompositions are avoided. The main disadvantage is the excessive numerical viscosity typical to the LxF solver. This is compensated for by using high-resolution MUSCL-type interpolants. Numerical experiments show that the quality of results obtained by such convenient central differencing is comparable with those of the upwind schemes.
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Liou, M. S.; Povinelli, L. A.
1993-01-01
A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping, and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results shown are a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.
Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Liou, M.-S.; Povinelli, L. A.
1993-01-01
A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results are shown a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.
High-precision coseismic displacement estimation with a single-frequency GPS receiver
NASA Astrophysics Data System (ADS)
Guo, Bofeng; Zhang, Xiaohong; Ren, Xiaodong; Li, Xingxing
2015-07-01
To improve the performance of Global Positioning System (GPS) in the earthquake/tsunami early warning and rapid response applications, minimizing the blind zone and increasing the stability and accuracy of both the rapid source and rupture inversion, the density of existing GPS networks must be increased in the areas at risk. For economic reasons, low-cost single-frequency receivers would be preferable to make the sparse dual-frequency GPS networks denser. When using single-frequency GPS receivers, the main problem that must be solved is the ionospheric delay, which is a critical factor when determining accurate coseismic displacements. In this study, we introduce a modified Satellite-specific Epoch-differenced Ionospheric Delay (MSEID) model to compensate for the effect of ionospheric error on single-frequency GPS receivers. In the MSEID model, the time-differenced ionospheric delays observed from a regional dual-frequency GPS network to a common satellite are fitted to a plane rather than part of a sphere, and the parameters of this plane are determined by using the coordinates of the stations. When the parameters are known, time-differenced ionospheric delays for a single-frequency GPS receiver could be derived from the observations of those dual-frequency receivers. Using these ionospheric delay corrections, coseismic displacements of a single-frequency GPS receiver can be accurately calculated based on time-differenced carrier-phase measurements in real time. The performance of the proposed approach is validated using 5 Hz GPS data collected during the 2012 Nicoya Peninsula Earthquake (Mw 7.6, 2012 September 5) in Costa Rica. This shows that the proposed approach improves the accuracy of the displacement of a single-frequency GPS station, and coseismic displacements with an accuracy of a few centimetres are achieved over a 10-min interval.
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
CAS2D: FORTRAN program for nonrotating blade-to-blade, steady, potential transonic cascade flows
NASA Technical Reports Server (NTRS)
Dulikravich, D. S.
1980-01-01
An exact, full-potential-equation (FPE) model for the steady, irrotational, homentropic and homoenergetic flow of a compressible, homocompositional, inviscid fluid through two dimensional planar cascades of airfoils was derived, together with its appropriate boundary conditions. A computer program, CAS2D, was developed that numerically solves an artificially time-dependent form of the actual FPE. The governing equation was discretized by using type-dependent, rotated finite differencing and the finite area technique. The flow field was discretized by providing a boundary-fitted, nonuniform computational mesh. The mesh was generated by using a sequence of conforming mapping, nonorthogonal coordinate stretching, and local, isoparametric, bilinear mapping functions. The discretized form of the FPE was solved iteratively by using successive line overrelaxation. The possible isentropic shocks were correctly captured by adding explicitly an artificial viscosity in a conservative form. In addition, a three-level consecutive, mesh refinement feature makes CAS2D a reliable and fast algorithm for the analysis of transonic, two dimensional cascade flows.
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
Reducing numerical diffusion for incompressible flow calculations
NASA Technical Reports Server (NTRS)
Claus, R. W.; Neely, G. M.; Syed, S. A.
1984-01-01
A number of approaches for improving the accuracy of incompressible, steady-state flow calculations are examined. Two improved differencing schemes, Quadratic Upstream Interpolation for Convective Kinematics (QUICK) and Skew-Upwind Differencing (SUD), are applied to the convective terms in the Navier-Stokes equations and compared with results obtained using hybrid differencing. In a number of test calculations, it is illustrated that no single scheme exhibits superior performance for all flow situations. However, both SUD and QUICK are shown to be generally more accurate than hybrid differencing.
NASA Technical Reports Server (NTRS)
Rodden, John James (Inventor); Price, Xenophon (Inventor); Carrou, Stephane (Inventor); Stevens, Homer Darling (Inventor)
2002-01-01
A control system for providing attitude control in spacecraft. The control system comprising a primary attitude reference system, a secondary attitude reference system, and a hyper-complex number differencing system. The hyper-complex number differencing system is connectable to the primary attitude reference system and the secondary attitude reference system.
Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. The governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models are described in detail.
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D has been developed to solve the three dimensional, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort has been to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation have been emphasized. The governing equations are solved in generalized non-orthogonal body-fitted coordinates by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. It describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
NASA Technical Reports Server (NTRS)
Betts, W. S., Jr.
1972-01-01
A computer program called HOPI was developed to predict reorientation flow dynamics, wherein liquids move from one end of a closed, partially filled, rigid container to the other end under the influence of container acceleration. The program uses the simplified marker and cell numerical technique and, using explicit finite-differencing, solves the Navier-Stokes equations for an incompressible viscous fluid. The effects of turbulence are also simulated in the program. HOPI can consider curved as well as straight walled boundaries. Both free-surface and confined flows can be calculated. The program was used to simulate five liquid reorientation cases. Three of these cases simulated actual NASA LeRC drop tower test conditions while two cases simulated full-scale Centaur tank conditions. It was concluded that while HOPI can be used to analytically determine the fluid motion in a typical settling problem, there is a current need to optimize HOPI. This includes both reducing the computer usage time and also reducing the core storage required for a given size problem.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25(10)(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wide band and narrow band (delta)VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 /microrad, and angular rate precisions of 0.5 to 1.0(10)(exp -12) rad/sec.
Cheng, R.T.; Casulli, V.; Gartner, J.W.
1993-01-01
A numerical model using a semi-implicit finite-difference method for solving the two-dimensional shallow-water equations is presented. The gradient of the water surface elevation in the momentum equations and the velocity divergence in the continuity equation are finite-differenced implicitly, the remaining terms are finite-differenced explicitly. The convective terms are treated using an Eulerian-Lagrangian method. The combination of the semi-implicit finite-difference solution for the gravity wave propagation, and the Eulerian-Lagrangian treatment of the convective terms renders the numerical model unconditionally stable. When the baroclinic forcing is included, a salt transport equation is coupled to the momentum equations, and the numerical method is subject to a weak stability condition. The method of solution and the properties of the numerical model are given. This numerical model is particularly suitable for applications to coastal plain estuaries and tidal embayments in which tidal currents are dominant, and tidally generated residual currents are important. The model is applied to San Francisco Bay, California where extensive historical tides and current-meter data are available. The model calibration is considered by comparing time-series of the field data and of the model results. Alternatively, and perhaps more meaningfully, the model is calibrated by comparing the harmonic constants of tides and tidal currents derived from field data with those derived from the model. The model is further verified by comparing the model results with an independent data set representing the wet season. The strengths and the weaknesses of the model are assessed based on the results of model calibration and verification. Using the model results, the properties of tides and tidal currents in San Francisco Bay are characterized and discussed. Furthermore, using the numerical model, estimates of San Francisco Bay's volume, surface area, mean water depth, tidal prisms, and tidal excursions at spring and neap tides are computed. Additional applications of the model reveal, qualitatively the spatial distribution of residual variables. ?? 1993 Academic Press. All rights reserved.
NASA Technical Reports Server (NTRS)
Rudy, David H.; Kumar, Ajay; Thomas, James L.; Gnoffo, Peter A.; Chakravarthy, Sukumar R.
1988-01-01
A comparative study was made using 4 different computer codes for solving the compressible Navier-Stokes equations. Three different test problems were used, each of which has features typical of high speed internal flow problems of practical importance in the design and analysis of propulsion systems for advanced hypersonic vehicles. These problems are the supersonic flow between two walls, one of which contains a 10 deg compression ramp, the flow through a hypersonic inlet, and the flow in a 3-D corner formed by the intersection of two symmetric wedges. Three of the computer codes use similar recently developed implicit upwind differencing technology, while the fourth uses a well established explicit method. The computed results were compared with experimental data where available.
The performance of differential VLBI delay during interplanetary cruise
NASA Technical Reports Server (NTRS)
Moultrie, B.; Wolff, P. J.; Taylor, T. H.
1984-01-01
Project Voyager radio metric data are used to evaluate the orbit determination abilities of several data strategies during spacecraft interplanetary cruise. Benchmark performance is established with an operational data strategy of conventional coherent doppler, coherent range, and explicitly differenced range data from two intercontinental baselines to ameliorate the low declination singularity of the doppler data. Employing a Voyager operations trajectory as a reference, the performance of the operational data strategy is compared to the performances of data strategies using differential VLBI delay data (spacecraft delay minus quasar delay) in combinations with the aforementioned conventional data types. The comparison of strategy performances indicates that high accuracy cruise orbit determination can be achieved with a data strategy employing differential VLBI delay data, where the quantity of coherent radio metric data has been greatly reduced.
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightening-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like Zwicky Transient Facility and Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
Fully Implicit, Nonlinear 3D Extended Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Chacon, Luis; Knoll, Dana
2003-10-01
Extended magnetohydrodynamics (XMHD) includes nonideal effects such as nonlinear, anisotropic transport and two-fluid (Hall) effects. XMHD supports multiple, separate time scales that make explicit time differencing approaches extremely inefficient. While a fully implicit implementation promises efficiency without sacrificing numerical accuracy,(D. A. Knoll et al., phJ. Comput. Phys.) 185 (2), 583-611 (2003) the nonlinear nature of the XMHD system and the numerical stiffness associated with the fast waves make this endeavor difficult. Newton-Krylov methods are, however, ideally suited for such a task. These synergistically combine Newton's method for nonlinear convergence, and Krylov techniques to solve the associated Jacobian (linear) systems. Krylov methods can be implemented Jacobian-free and can be preconditioned for efficiency. Successful preconditioning strategies have been developed for 2D incompressible resistive(L. Chacón et al., phJ. Comput. Phys). 178 (1), 15- 36 (2002) and Hall(L. Chacón and D. A. Knoll, phJ. Comput. Phys.), 188 (2), 573-592 (2003) MHD models. These are based on ``physics-based'' ideas, in which knowledge of the physics is exploited to derive well-conditioned (diagonally-dominant) approximations to the original system that are amenable to optimal solver technologies (multigrid). In this work, we will describe the status of the extension of the 2D preconditioning ideas for a 3D compressible, single-fluid XMHD model.
Flux splitting algorithms for two-dimensional viscous flows with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing
1989-01-01
The Roe flux difference splitting method was extended to treat 2-D viscous flows with nonequilibrium chemistry. The derivations have avoided unnecessary assumptions or approximations. For spatial discretization, the second-order Roe upwind differencing is used for the convective terms and central differencing for the viscous terms. An upwind-based TVD scheme is applied to eliminate oscillations and obtain a sharp representation of discontinuities. A two-state Runge-Kutta method is used to time integrate the discretized Navier-Stokes and species transport equations for the asymptotic steady solutions. The present method is then applied to two types of flows: the shock wave/boundary layer interaction problems and the jet in cross flows.
Flux splitting algorithms for two-dimensional viscous flows with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing
1989-01-01
The Roe flux-difference splitting method has been extended to treat two-dimensional viscous flows with nonequilibrium chemistry. The derivations have avoided unnecessary assumptions or approximations. For spatial discretization, the second-order Roe upwind differencing is used for the convective terms and central differencing for the viscous terms. An upwind-based TVD scheme is applied to eliminate oscillations and obtain a sharp representation of discontinuities. A two-stage Runge-Kutta method is used to time integrate the discretized Navier-Stokes and species transport equations for the asymptotic steady solutions. The present method is then applied to two types of flows: the shock wave/boundary layer interaction problems and the jet in cross flows.
The terminal area simulation system. Volume 1: Theoretical formulation
NASA Technical Reports Server (NTRS)
Proctor, F. H.
1987-01-01
A three-dimensional numerical cloud model was developed for the general purpose of studying convective phenomena. The model utilizes a time splitting integration procedure in the numerical solution of the compressible nonhydrostatic primitive equations. Turbulence closure is achieved by a conventional first-order diagnostic approximation. Open lateral boundaries are incorporated which minimize wave reflection and which do not induce domain-wide mass trends. Microphysical processes are governed by prognostic equations for potential temperature water vapor, cloud droplets, ice crystals, rain, snow, and hail. Microphysical interactions are computed by numerous Orville-type parameterizations. A diagnostic surface boundary layer is parameterized assuming Monin-Obukhov similarity theory. The governing equation set is approximated on a staggered three-dimensional grid with quadratic-conservative central space differencing. Time differencing is approximated by the second-order Adams-Bashforth method. The vertical grid spacing may be either linear or stretched. The model domain may translate along with a convective cell, even at variable speeds.
Gigahertz-gated InGaAs/InP single-photon detector with detection efficiency exceeding 55% at 1550 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comandar, L. C.; Engineering Department, Cambridge University, 9 J J Thomson Ave, Cambridge CB3 0FA; Fröhlich, B.
We report on a gated single-photon detector based on InGaAs/InP avalanche photodiodes (APDs) with a single-photon detection efficiency exceeding 55% at 1550 nm. Our detector is gated at 1 GHz and employs the self-differencing technique for gate transient suppression. It can operate nearly dead time free, except for the one clock cycle dead time intrinsic to self-differencing, and we demonstrate a count rate of 500 Mcps. We present a careful analysis of the optimal driving conditions of the APD measured with a dead time free detector characterization setup. It is found that a shortened gate width of 360 ps together with anmore » increased driving signal amplitude and operation at higher temperatures leads to improved performance of the detector. We achieve an afterpulse probability of 7% at 50% detection efficiency with dead time free measurement and a record efficiency for InGaAs/InP APDs of 55% at an afterpulse probability of only 10.2% with a moderate dead time of 10 ns.« less
New multigrid approach for three-dimensional unstructured, adaptive grids
NASA Technical Reports Server (NTRS)
Parthasarathy, Vijayan; Kallinderis, Y.
1994-01-01
A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coaser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with a special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in reduction in the number of iterations by a factor of 5.
Interferometric observations of an artificial satellite.
Preston, R A; Ergas, R; Hinteregger, H F; Knight, C A; Robertson, D S; Shapiro, I I; Whitney, A R; Rogers, A E; Clark, T A
1972-10-27
Very-long-baseline interferometric observations of radio signals from the TACSAT synchronous satellite, even though extending over only 7 hours, have enabled an excellent orbit to be deduced. Precision in differenced delay and delay-rate measurements reached 0.15 nanosecond ( approximately 5 centimeters in equivalent differenced distance) and 0.05 picosecond per second ( approximately 0.002 centimeter per second in equivalent differenced velocity), respectively. The results from this initial three-station experiment demonstrate the feasibility of using the method for accurate satellite tracking and for geodesy. Comparisons are made with other techniques.
AN IMMERSED BOUNDARY METHOD FOR COMPLEX INCOMPRESSIBLE FLOWS
An immersed boundary method for time-dependant, three- dimensional, incompressible flows is presented in this paper. The incompressible Navier-Stokes equations are discretized using a low-diffusion flux splitting method for the inviscid fluxes and a second order central differenc...
The study and realization of BDS un-differenced network-RTK based on raw observations
NASA Astrophysics Data System (ADS)
Tu, Rui; Zhang, Pengfei; Zhang, Rui; Lu, Cuixian; Liu, Jinhai; Lu, Xiaochun
2017-06-01
A BeiDou Navigation Satellite System (BDS) Un-Differenced (UD) Network Real Time Kinematic (URTK) positioning algorithm, which is based on raw observations, is developed in this study. Given an integer ambiguity datum, the UD integer ambiguity can be recovered from Double-Differenced (DD) integer ambiguities, thus the UD observation corrections can be calculated and interpolated for the rover station to achieve the fast positioning. As this URTK model uses raw observations instead of the ionospheric-free combinations, it is applicable for both dual- and single-frequency users to realize the URTK service. The algorithm was validated with the experimental BDS data collected at four regional stations from day of year 080 to 083 in 2016. The achieved results confirmed the high efficiency of the proposed URTK for providing the rover users a rapid and precise positioning service compared to the standard NRTK. In our test, the BDS URTK can provide a positioning service with cm level accuracy, i.e., 1 cm in the horizontal components, and 2-3 cm in the vertical component. Within the regional network, the mean convergence time for the users to fix the UD ambiguities is 2.7 s for the dual-frequency observations and of 6.3 s for the single-frequency observations after the DD ambiguity resolution. Furthermore, due to the feature of realizing URTK technology under the UD processing mode, it is possible to integrate the global Precise Point Positioning (PPP) and the local NRTK into a seamless positioning service.
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2016-06-01
Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
de Vine, Glenn; McClelland, David E; Gray, Malcolm B; Close, John D
2005-05-15
We present an experimental technique that permits mechanical-noise-free, cavity-enhanced frequency measurements of an atomic transition and its hyperfine structure. We employ the 532-nm frequency-doubled output from a Nd:YAG laser and an iodine vapor cell. The cell is placed in a folded ring cavity (FRC) with counterpropagating pump and probe beams. The FRC is locked with the Pound-Drever-Hall technique. Mechanical noise is rejected by differencing the pump and probe signals. In addition, this differenced error signal provides a sensitive measure of differential nonlinearity within the FRC.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differential range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
A Navier-Strokes Chimera Code on the Connection Machine CM-5: Design and Performance
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1994-01-01
We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
NASA Technical Reports Server (NTRS)
Berman, A. L.
1977-01-01
Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of observed data sample interval (Tau) where the time-scale of the observations is 15 Tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, it is seen necessary for L to be proportional to closest approach distance a, and a strong function of the observed data sample interval, and hence the time-scale of the observations.
Continuous non-invasive blood glucose monitoring by spectral image differencing method
NASA Astrophysics Data System (ADS)
Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing
2018-01-01
Currently, the use of implantable enzyme electrode sensor is the main method for continuous blood glucose monitoring. But the effect of electrochemical reactions and the significant drift caused by bioelectricity in body will reduce the accuracy of the glucose measurements. So the enzyme-based glucose sensors need to be calibrated several times each day by the finger-prick blood corrections. This increases the patient's pain. In this paper, we proposed a method for continuous Non-invasive blood glucose monitoring by spectral image differencing method in the near infrared band. The method uses a high-precision CCD detector to switch the filter in a very short period of time, obtains the spectral images. And then by using the morphological method to obtain the spectral image differences, the dynamic change of blood sugar is reflected in the image difference data. Through the experiment proved that this method can be used to monitor blood glucose dynamically to a certain extent.
Basic research for the Earth dynamics program
NASA Technical Reports Server (NTRS)
1981-01-01
The technique of range differencing with Lageos ranges to obtain more accurate estimates of baseline lengths and polar motion variation was studied. Differencing quasi simultaneous range observations eliminate a great deal of orbital biases. Progress is reported on the definition and maintenance of a conventional terrestrial reference system.
Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.
Change analysis in the United Arab Emirates: An investigation of techniques
Sohl, Terry L.
1999-01-01
Much of the landscape of the United Arab Emirates has been transformed over the past 15 years by massive afforestation, beautification, and agricultural programs. The "greening" of the United Arab Emirates has had environmental consequences, however, including degraded groundwater quality and possible damage to natural regional ecosystems. Personnel from the Ground- Water Research project, a joint effort between the National Drilling Company of the Abu Dhabi Emirate and the U.S. Geological Survey, were interested in studying landscape change in the Abu Dhabi Emirate using Landsat thematic mapper (TM) data. The EROs Data Center in Sioux Falls, South Dakota was asked to investigate land-cover change techniques that (1) provided locational, quantitative, and qualitative information on landcover change within the Abu Dhabi Emirate; and (2) could be easily implemented by project personnel who were relatively inexperienced in remote sensing. A number of products were created with 1987 and 1996 Landsat TM data using change-detection techniques, including univariate image differencing, an "enhanced" image differencing, vegetation index differencing, post-classification differencing, and changevector analysis. The different techniques provided products that varied in levels of adequacy according to the specific application and the ease of implementation and interpretation. Specific quantitative values of change were most accurately and easily provided by the enhanced image-differencing technique, while the change-vector analysis excelled at providing rich qualitative detail about the nature of a change.
USDA-ARS?s Scientific Manuscript database
A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested, In the classification strategy, a p...
CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers
NASA Technical Reports Server (NTRS)
Gaski, J. D.; Lewis, D. R.; Thompson, L. R.
1970-01-01
The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
NASA Technical Reports Server (NTRS)
Waldron, Wayne L.; Klein, Larry; Altner, Bruce
1994-01-01
We model the evolution of a density shell propagating through the stellar wind of an early-type star, in order to investigate the effects of such shells on UV P Cygni line profiles. Unlike previous treatments, we solve the mass, momentum, and energy conservation equations, using an explicit time-differencing scheme, and present a parametric study of the density, velocity, and temperature response. Under the assumed conditions, relatively large spatial scale, large-amplitude density shells propagate as stable waves through the supersonic portion of the wind. Their dynamical behavior appears to mimic propagating 'solitary waves,' and they are found to accelerate at the same rate as the underlying steady state stellar wind (i.e., the shell rides the wind). These hydrodynamically stable structures quantitatively reproduce the anomalous 'discrete absorption component' (DAC) behavior observed in the winds of luminous early-type stars, as illustrated by comparisons of model predictions to an extensive International Ultraviolet Explorer (IUE) time series of spectra of zeta Puppis (O4f). From these comparisons, we find no conclusive evidence indicative of DACs accelerating at a significantly slower rate than the underlying stellar wind, contrary to earlier reports. In addition, these density shells are found to be consistent within the constraints set by the IR observations. We conclude that the concept of propagating density shells should be seriously reconsidered as a possible explanation of the DAC phenomenon in early-type stars.
Application of non-coherent Doppler data types for deep space navigation
NASA Technical Reports Server (NTRS)
Bhaskaran, Shyam
1995-01-01
Recent improvements in computational capability and Deep Space Network technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis is performed which analyzes the accuracy obtainable by combinations of one-way Doppler data and compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data is capable of determining the angular position of the spacecraft to fairly high accuracy, but has relatively poor sensitivity to the range. When combined with single station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard data two-way types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.
The application of noncoherent Doppler data types for Deep Space Navigation
NASA Technical Reports Server (NTRS)
Bhaskaran, S.
1995-01-01
Recent improvements in computational capability and DSN technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis, which analyzes the accuracy obtainable by combinations of one-way Doppler data, is performed and compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data are capable of determining the angular position of the spacecraft to fairly high accuracy, but have relatively poor sensitivity to the range. When combined with single-station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard two-way data types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.
1985-01-01
A methodological evaluation for two-finite differencing schemes for computer-aided gas turbine design is presented. The two computational schemes include; a Bounded Skewed Finite Differencing Scheme (BSUDS); and a Quadratic Upwind Differencing Scheme (QSDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to the criteria than the QUDS. Some of the reasons for the more successful performance BSUDS are discussed.
Path length differencing and energy conservation of the S[sub N] Boltzmann/Spencer-Lewis equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippone, W.L.; Monahan, S.P.
It is shown that the S[sub N] Boltzmann/Spencer-Lewis equations conserve energy locally if and only if they satisfy particle balance and diamond differencing is used in path length. In contrast, the spatial differencing schemes have no bearing on the energy balance. Energy is conserved globally if it is conserved locally and the multigroup cross sections are energy conserving. Although the coupled electron-photon cross sections generated by CEPXS conserve particles and charge, they do not precisely conserve energy. It is demonstrated that these cross sections can be adjusted such that particles, charge, and energy are conserved. Finally, since a conventional negativemore » flux fixup destroys energy balance when applied to path legend, a modified fixup scheme that does not is presented.« less
Split Space-Marching Finite-Volume Method for Chemically Reacting Supersonic Flow
NASA Technical Reports Server (NTRS)
Rizzi, Arthur W.; Bailey, Harry E.
1976-01-01
A space-marching finite-volume method employing a nonorthogonal coordinate system and using a split differencing scheme for calculating steady supersonic flow over aerodynamic shapes is presented. It is a second-order-accurate mixed explicit-implicit procedure that solves the inviscid adiabatic and nondiffusive equations for chemically reacting flow in integral conservation-law form. The relationship between the finite-volume and differential forms of the equations is examined and the relative merits of each discussed. The method admits initial Cauchy data situated on any arbitrary surface and integrates them forward along a general curvilinear coordinate, distorting and deforming the surface as it advances. The chemical kinetics term is split from the convective terms which are themselves dimensionally split, thereby freeing the fluid operators from the restricted step size imposed by the chemical reactions and increasing the computational efficiency. The accuracy of this splitting technique is analyzed, a sufficient stability criterion is established, a representative flow computation is discussed, and some comparisons are made with another method.
USDA-ARS?s Scientific Manuscript database
Brown rot is a severe disease affecting stone and pome fruits. This disease was recently confirmed to be caused by the following six closely related species: Monilinia fructicola, Monilinia laxa, Monilinia fructigena, Monilia polystroma, Monilia mumecola and Monilia yunnanensis. Because of differenc...
Joint production and substitution in timber supply: a panel data analysis
Torjus F Bolkesjo; Joseph Buongiorno; Birger Solberg
2010-01-01
Supply equations for sawlog and pulpwood were developed with a panel of data from 102 Norwegian municipalities, observed from 1980 to 2000. Static and dynamic models were estimated by cross-section, time-series andpanel data methods. A static model estimated by first differencing gavethe best overall results in terms of theoretical expectations, pattern ofresiduals,...
Near Real-Time Event Detection & Prediction Using Intelligent Software Agents
2006-03-01
value was 0.06743. Multiple autoregressive integrated moving average ( ARIMA ) models were then build to see if the raw data, differenced data, or...slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA -based modelling ...appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite
NASA Astrophysics Data System (ADS)
Tzanos, Constantine P.
1992-10-01
A higher-order differencing scheme (Tzanos, 1990) is used in conjunction with a multigrid approach to obtain accurate solutions of the Navier-Stokes convection-diffusion equations at high Re numbers. Flow in a square cavity with a moving lid is used as a test problem. a multigrid approach based on the additive correction method (Settari and Aziz) and an iterative incomplete lower and upper solver demonstrated good performance for the whole range of Re number under consideration (from 1000 to 10,000) and for both uniform and nonuniform grids. It is concluded that the combination of the higher-order differencing scheme with a multigrid approach proved to be an effective technique for giving accurate solutions of the Navier-Stokes equations at high Re numbers.
Njemanze, Philip C
2010-11-30
The present study was designed to examine the effects of color stimulation on cerebral blood mean flow velocity (MFV) in men and women. The study included 16 (8 men and 8 women) right-handed healthy subjects. The MFV was recorded simultaneously in both right and left middle cerebral arteries in Dark and white Light conditions, and during color (Blue, Yellow and Red) stimulations, and was analyzed using functional transcranial Doppler spectroscopy (fTCDS) technique. Color processing occurred within cortico-subcortical circuits. In men, wavelength-differencing of Yellow/Blue pairs occurred within the right hemisphere by processes of cortical long-term depression (CLTD) and subcortical long-term potentiation (SLTP). Conversely, in women, frequency-differencing of Blue/Yellow pairs occurred within the left hemisphere by processes of cortical long-term potentiation (CLTP) and subcortical long-term depression (SLTD). In both genders, there was luminance effect in the left hemisphere, while in men it was along an axis opposite (orthogonal) to that of chromatic effect, in women, it was parallel. Gender-related differences in color processing demonstrated a right hemisphere cognitive style for wavelength-differencing in men, and a left hemisphere cognitive style for frequency-differencing in women. There are potential applications of fTCDS technique, for stroke rehabilitation and monitoring of drug effects.
NASA Astrophysics Data System (ADS)
Manalo, Lawrence B.
A comprehensive, non-equilibrium, two-domain (liquid and vapor), physics based, mathematical model is developed to investigate the onset and growth of the natural circulation and thermal stratification inside cryogenic propellant storage tanks due to heat transfer from the surroundings. A two-dimensional (planar) model is incorporated for the liquid domain while a lumped, thermodynamic model is utilized for the vapor domain. The mathematical model in the liquid domain consists of the conservation of mass, momentum, and energy equations and incorporates the Boussinesq approximation (constant fluid density except in the buoyancy term of the momentum equation). In addition, the vapor is assumed to behave like an ideal gas with uniform thermodynamic properties. Furthermore, the time-dependent nature of the heat leaks from the surroundings to the propellant (due to imperfect tank insulation) is considered. Also, heterogeneous nucleation, although not significant in the temperature range of study, has been included. The transport of mass and energy between the liquid and vapor domains leads to transient ullage vapor temperatures and pressures. (The latter of which affects the saturation temperature of the liquid at the liquid-vapor interface.) This coupling between the two domains is accomplished through an energy balance (based on a micro-layer concept) at the interface. The resulting governing, non-linear, partial differential equations (which include a Poisson's equation for determining the pressure distribution) in the liquid domain are solved by an implicit, finite-differencing technique utilizing a non-uniform (stretched) mesh (in both directions) for predicting the velocity and temperature fields. (The accuracy of the numerical scheme is validated by comparing the model's results to a benchmark numerical case as well as to available experimental data.) The mass, temperature, and pressure of the vapor is determined by using a simple explicit finite-differencing technique. With the model at hand, the effects of variable fluid transport/thermo-physical properties, levels of initial sub-cooling, operating pressure, and initial liquid aspect ratio on the natural circulation patterns and thermal stratification are numerically investigated. Liquid oxygen (LOx) is the primary working fluid in the study. However, a simulation with liquid nitrogen (LN2) as the propellant is also carried out for comparison purposes.
Challenges and Opportunities in Modeling of the Global Atmosphere
NASA Astrophysics Data System (ADS)
Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko
2016-04-01
Modeling paradigms on global scales may need to be reconsidered in order to better utilize the power of massively parallel processing. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain, and exchange only few rows of halo data with the neighbouring cores. Note that the described scenario strongly favors horizontally local discretizations. This is relatively easy to achieve in regional models. However, the spherical geometry complicates the problem. The latitude-longitude grid with local in space and explicit in time differencing has been an early choice and remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of a reasonable size. However, the polar filtering requires transpositions involving extra communications as well as more computations. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for application of spectral representation. With some variations, such techniques are currently dominating in global models. Unfortunately, the horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with polar filtering is a step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago. However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Due to their large scales, that are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Relaxing the hydrostatic approximation requieres careful reformulation of the model dynamics and more computations and communications. The unified Non-hydrostatic Multi-scale Model (NMMB) will be briefly discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable without modifying their amplitudes. The model has been successfully tested on various scales. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models, and its computational efficiency on parallel computers is good.
NASA Technical Reports Server (NTRS)
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning system (GPS) orbits with several meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models, only a stochastic estimation algorithm is suitable, such as a sequential filter/smoother. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first order Gauss Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
NASA Astrophysics Data System (ADS)
Rojali, Siahaan, Ida Sri Rejeki; Soewito, Benfano
2017-08-01
Steganography is the art and science of hiding the secret messages so the existence of the message cannot be detected by human senses. The data concealment is using the Multi Pixel Value Differencing (MPVD) algorithm, utilizing the difference from each pixel. The development was done by using six interval tables. The objective of this algorithm is to enhance the message capacity and to maintain the data security.
TLE uncertainty estimation using robust weighted differencing
NASA Astrophysics Data System (ADS)
Geul, Jacco; Mooij, Erwin; Noomen, Ron
2017-05-01
Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
NASA Astrophysics Data System (ADS)
Brogan, D. J.; Nelson, P. A.; MacDonald, L. H.
2016-12-01
Considerable advances have been made in understanding post-wildfire runoff, erosion, and mass wasting at the hillslope and small watershed scale, but the larger-scale effects on flooding, water quality, and sedimentation are often the most significant impacts. The problem is that we have virtually no watershed-specific tools to quantify the proportion of eroded sediment that is stored or delivered from watersheds larger than about 2-5 km2. In this study we are quantifying how channel and valley bottom characteristics affect post-wildfire sediment storage and delivery. Our research is based on intensive monitoring of sediment storage over time in two 15 km2 watersheds (Skin Gulch and Hill Gulch) burned in the 2012 High Park Fire using repeated cross section and longitudinal surveys from fall 2012 through summer 2016, five airborne laser scanning (ALS) datasets from fall 2012 through summer 2015, and both radar and ground-based precipitation measurements. We have computed changes in sediment storage by differencing successive cross sections, and computed spatially explicit changes in successive ALS point clouds using the multiscale model to model cloud comparison (M3C2) algorithm. These channel changes are being related to potential morphometric controls, including valley width, valley slope, confinement, contributing area, valley expansion or contraction, topographic curvature (planform and profile), and estimated sediment inputs. We hypothesize that maximum rainfall intensity and lateral confinement will be the primary independent variables that describe observed patterns of erosion and deposition, and that the results can help predict post-wildfire sediment delivery and identify high priority areas for restoration.
On the effects of signal processing on sample entropy for postural control.
Lubetzky, Anat V; Harel, Daphna; Lubetzky, Eyal
2018-01-01
Sample entropy, a measure of time series regularity, has become increasingly popular in postural control research. We are developing a virtual reality assessment of sensory integration for postural control in people with vestibular dysfunction and wished to apply sample entropy as an outcome measure. However, despite the common use of sample entropy to quantify postural sway, we found lack of consistency in the literature regarding center-of-pressure signal manipulations prior to the computation of sample entropy. We therefore wished to investigate the effect of parameters choice and signal processing on participants' sample entropy outcome. For that purpose, we compared center-of-pressure sample entropy data between patients with vestibular dysfunction and age-matched controls. Within our assessment, participants observed virtual reality scenes, while standing on floor or a compliant surface. We then analyzed the effect of: modification of the radius of similarity (r) and the embedding dimension (m); down-sampling or filtering and differencing or detrending. When analyzing the raw center-of-pressure data, we found a significant main effect of surface in medio-lateral and anterior-posterior directions across r's and m's. We also found a significant interaction group × surface in the medio-lateral direction when r was 0.05 or 0.1 with a monotonic increase in p value with increasing r in both m's. These effects were maintained with down-sampling by 2, 3, and 4 and with detrending but not with filtering and differencing. Based on these findings, we suggest that for sample entropy to be compared across postural control studies, there needs to be increased consistency, particularly of signal handling prior to the calculation of sample entropy. Procedures such as filtering, differencing or detrending affect sample entropy values and could artificially alter the time series pattern. Therefore, if such procedures are performed they should be well justified.
Extended image differencing for change detection in UAV video mosaics
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang; Schumann, Arne
2014-03-01
Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken in time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking such as geometric distortions and artifacts at moving objects have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to that of single video frames and are useful for interactive image exploitation due to a larger scene coverage.
Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.
Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald
2014-11-25
In order to catch up the short-term clock variation of GNSS satellites, clock corrections must be estimated and updated at a high-rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation only as a great number of ambiguities need to be simultaneously estimated. However, on the one hand better estimates are expected by including more stations, and on the other hand satellites from different GNSS systems must be processed integratively for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are always employed where ambiguities are eliminated. As the epoch-differenced method can only derive temporal clock changes which have to be aligned to the absolute clocks but always in a rather complicated way, in this paper, an efficient method for high-rate clock estimation is proposed using the concept of "carrier-range" realized by means of PPP with integer ambiguity resolution. Processing procedures for both post- and real-time processing are developed, respectively. The experimental validation shows that the computation time could be reduced to about one sixth of that of the existing methods for post-processing and less than 1 s for processing a single epoch of a network with about 200 stations in real-time mode after all ambiguities are fixed. This confirms that the proposed processing strategy will enable the high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.
A numerical study of the steady scalar convective diffusion equation for small viscosity
NASA Technical Reports Server (NTRS)
Giles, M. B.; Rose, M. E.
1983-01-01
A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.
Short-term change detection for UAV video
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2012-11-01
In the last years, there has been an increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken between several days, weeks, or even years. Examples for relevant changes we are looking for are recently parked or moved vehicles. As a pre-requisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic imagebased fine-registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation based approach and the multivariate alternation detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB, see Heinze et. al. 2010.1 In a further step we plan to incorporate more information from the video sequences to the change detection input images, e.g., by image enhancement or by along-track stereo which are available in the ABUL system.
NASA Astrophysics Data System (ADS)
Jacquet, J.; McCoy, S. W.; McGrath, D.; Nimick, D.; Friesen, B.; Fahey, M. J.; Leidich, J.; Okuinghttons, J.
2015-12-01
The Colonia river system, draining the eastern edge of the Northern Patagonia Icefield, Chile, has experienced a dramatic shift in flow regime from one characterized by seasonal discharge variability to one dominated by episodic glacial lake outburst floods (GLOFs). We use multi-temporal visible satellite images, high-resolution digital elevation models (DEMs) derived from stereo image pairs, and in situ observations to quantify sediment and water fluxes out of the dammed glacial lake, Lago Cachet Dos (LC2), as well as the concomitant downstream environmental change. GLOFs initiated in April 2008 and have since occurred, on average, two to three times a year. Differencing concurrent gage measurements made on the Baker River upstream and downstream of the confluence with the Colonia river finds peak GLOF discharges of ~ 3,000 m3s-1, which is ~ 4 times the median discharge of the Baker River and over 20 times the median discharge of the Colonia river. During each GLOF, ~ 200,000,000 m3 of water evacuates from the LC2, resulting in erosion of valley-fill sediments and the delta on the upstream end of LC2. Differencing DEMs between April 2008 and February 2014 revealed that ~ 2.5 x 107 m3 of sediment was eroded. Multi-temporal DEM differencing shows that erosion rates were highest initially, with > 20 vertical m of sediment removed between 2008 and 2012, and generally less than 5 m between 2012 and 2014. The downstream Colonia River Sandur also experienced geomorphic changes due to GLOFs. Using Landsat imagery to calculate the normalized difference water index (NDWI), we demonstrate that the Colonia River was in a stable configuration between 1984 and 2008. At the onset of GLOFs in April 2008, a change in channel location began and continued with each subsequent GLOF. Quantification of sediment and water fluxes due to GLOFs in the Colonia river valley provides insight on the geomorphic and environmental changes in river systems experiencing dramatic shifts in flow regime.
A study of pressure-based methodology for resonant flows in non-linear combustion instabilities
NASA Technical Reports Server (NTRS)
Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.
1992-01-01
This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids by the pressure-based methods for the problems of fast transient flows. The observation from the present study is that for steady state flow problems, pressure-based methods can be very competitive with the density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.
Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment
NASA Astrophysics Data System (ADS)
Barnett, D. A., Jr.
1991-02-01
An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S(sub 16) quadrature and P(sub 1) cross sections in the MUFT multigroup structure, the calculated solution agreed to within 18 percent with the spectral measurements and to within 24 percent of the integral measurements. Variations on the base case using a fewgroup energy structure and P(sub 1) and P(sub 3) cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and fewgroup cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required on 8 percent of the CPU time.
Development of a Global Multilayered Cloud Retrieval System
NASA Technical Reports Server (NTRS)
Huang, J.; Minnis, P.; Lin, B.; Yi, Y.; Ayers, J. K.; Khaiyer, M. M.; Arduini, R.; Fan, T.-F
2004-01-01
A more rigorous multilayered cloud retrieval system has been developed to improve the determination of high cloud properties in multilayered clouds. The MCRS attempts a more realistic interpretation of the radiance field than earlier methods because it explicitly resolves the radiative transfer that would produce the observed radiances. A two-layer cloud model was used to simulate multilayered cloud radiative characteristics. Despite the use of a simplified two-layer cloud reflectance parameterization, the MCRS clearly produced a more accurate retrieval of ice water path than simple differencing techniques used in the past. More satellite data and ground observation have to be used to test the MCRS. The MCRS methods are quite appropriate for interpreting the radiances when the high cloud has a relatively large optical depth (tau(sub I) greater than 2). For thinner ice clouds, a more accurate retrieval might be possible using infrared methods. Selection of an ice cloud retrieval and a variety of other issues must be explored before a complete global application of this technique can be implemented. Nevertheless, the initial results look promising.
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1994-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. Roe's approximate Riemann solution scheme or the computationally less expensive advection upstream splitting method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passages and the distribution of flow variables in the stationary inlet port region.
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1993-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. The Roe approximate Riemann solution scheme or the computationally less expensive Advection Upstream Splitting Method (AUSM) flux-splitting scheme are used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passage and the distribution of flow variables in the stationary inlet port region.
Zhao, Qile; Wang, Guangxing; Liu, Zhizhao; Hu, Zhigang; Dai, Zhiqiang; Liu, Jingnan
2016-01-01
Using GNSS observable from some stations in the Asia-Pacific area, the carrier-to-noise ratio (CNR) and multipath combinations of BeiDou Navigation Satellite System (BDS), as well as their variations with time and/or elevation were investigated and compared with those of GPS and Galileo. Provided the same elevation, the CNR of B1 observables is the lowest among the three BDS frequencies, while B3 is the highest. The code multipath combinations of BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are remarkably correlated with elevation, and the systematic “V” shape trends could be eliminated through between-station-differencing or modeling correction. Daily periodicity was found in the geometry-free ionosphere-free (GFIF) combinations of both BDS geostationary Earth orbit (GEO) and IGSO satellites. The variation range of carrier phase GFIF combinations of GEO satellites is −2.0 to 2.0 cm. The periodicity of carrier phase GFIF combination could be significantly mitigated through between-station differencing. Carrier phase GFIF combinations of BDS GEO and IGSO satellites might also contain delays related to satellites. Cross-correlation suggests that the GFIF combinations’ time series of some GEO satellites might vary according to their relative geometries with the sun. PMID:26805831
Zhao, Qile; Wang, Guangxing; Liu, Zhizhao; Hu, Zhigang; Dai, Zhiqiang; Liu, Jingnan
2016-01-20
Using GNSS observable from some stations in the Asia-Pacific area, the carrier-to-noise ratio (CNR) and multipath combinations of BeiDou Navigation Satellite System (BDS), as well as their variations with time and/or elevation were investigated and compared with those of GPS and Galileo. Provided the same elevation, the CNR of B1 observables is the lowest among the three BDS frequencies, while B3 is the highest. The code multipath combinations of BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are remarkably correlated with elevation, and the systematic "V" shape trends could be eliminated through between-station-differencing or modeling correction. Daily periodicity was found in the geometry-free ionosphere-free (GFIF) combinations of both BDS geostationary Earth orbit (GEO) and IGSO satellites. The variation range of carrier phase GFIF combinations of GEO satellites is -2.0 to 2.0 cm. The periodicity of carrier phase GFIF combination could be significantly mitigated through between-station differencing. Carrier phase GFIF combinations of BDS GEO and IGSO satellites might also contain delays related to satellites. Cross-correlation suggests that the GFIF combinations' time series of some GEO satellites might vary according to their relative geometries with the sun.
NASA Astrophysics Data System (ADS)
Kwon, J.; Yang, H.
2006-12-01
Although GPS provides continuous and accurate position information, there are still some rooms for improvement of its positional accuracy, especially in the medium and long range baseline determination. In general, in case of more than 50 km baseline length, the effect of ionospheric delay is the one causing the largest degradation in positional accuracy. For example, the ionospheric delay in terms of double differenced mode easily reaches 10 cm with baseline length of 101 km. Therefore, many researchers have been tried to mitigate/reduce the effect using various modeling methods. In this paper, the optimal stochastic modeling of the ionospheric delay in terms of baseline length is presented. The data processing has been performed by constructing a Kalman filter with states of positions, ambiguities, and the ionospheric delays in the double differenced mode. Considering the long baseline length, both double differenced GPS phase and code observations are used as observables and LAMBDA has been applied to fix the ambiguities. Here, the ionospheric delay is stochastically modeled by well-known Gaussian, 1st and 3rd order Gauss-Markov process. The parameters required in those models such as correlation distance and time is determined by the least-square adjustment using ionosphere-only observables. Mainly the results and analysis from this study show the effect of stochastic models of the ionospheric delay in terms of the baseline length, models, and parameters used. In the above example with 101 km baseline length, it was found that the positional accuracy with appropriate ionospheric modeling (Gaussian) was about ±2 cm whereas it reaches about ±15 cm with no stochastic modeling. It is expected that the approach in this study contributes to improve positional accuracy, especially in medium and long range baseline determination.
Single-Receiver GPS Phase Bias Resolution
NASA Technical Reports Server (NTRS)
Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.
2010-01-01
Existing software has been modified to yield the benefits of integer fixed double-differenced GPS-phased ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth. The method has parameters to account for this. In particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp
Exponential integrators in time-dependent density-functional calculations
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Varga, Kálmán
2017-12-01
The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods are less enhanced but still match or outperform the best of the conventional methods tested.
NASA Technical Reports Server (NTRS)
Thomas, S. D.; Holst, T. L.
1985-01-01
A full-potential steady transonic wing flow solver has been modified so that freestream density and residual are captured in regions of constant velocity. This numerically precise freestream consistency is obtained by slightly altering the differencing scheme without affecting the implicit solution algorithm. The changes chiefly affect the fifteen metrics per grid point, which are computed once and stored. With this new method, the outer boundary condition is captured accurately, and the smoothness of the solution is especially improved near regions of grid discontinuity.
Assessment of trend and seasonality in road accident data: an Iranian case study.
Razzaghi, Alireza; Bahrampour, Abbas; Baneshi, Mohammad Reza; Zolala, Farzaneh
2013-06-01
Road traffic accidents and their related deaths have become a major concern, particularly in developing countries. Iran has adopted a series of policies and interventions to control the high number of accidents occurring over the past few years. In this study we used a time series model to understand the trend of accidents, and ascertain the viability of applying ARIMA models on data from Taybad city. This study is a cross-sectional study. We used data from accidents occurring in Taybad between 2007 and 2011. We obtained the data from the Ministry of Health (MOH) and used the time series method with a time lag of one month. After plotting the trend, non-stationary data in mean and variance were removed using Box-Cox transformation and a differencing method respectively. The ACF and PACF plots were used to control the stationary situation. The traffic accidents in our study had an increasing trend over the five years of study. Based on ACF and PACF plots gained after applying Box-Cox transformation and differencing, data did not fit to a time series model. Therefore, neither ARIMA model nor seasonality were observed. Traffic accidents in Taybad have an upward trend. In addition, we expected either the AR model, MA model or ARIMA model to have a seasonal trend, yet this was not observed in this analysis. Several reasons may have contributed to this situation, such as uncertainty of the quality of data, weather changes, and behavioural factors that are not taken into account by time series analysis.
1980-07-01
FUNCTION ( t) CENTERED AT C WITH PERIOD n -nr 0 soTIME t FIGURE 3.4S RECTAPOOLAR PORN )=C FUNCTION g t) CENTERED AT 0 WITH PERIOD n n n 52n tI y I (h...of a typical family in Kabiria (a city in Northern Algeria) over the time period Jan.-Feb. 1975 through Nov.-Dec. 1977. We would like to obtain a...values of y .. .. ... -75- Table 4.2 The Average Bi-Monthly Expenses of a Family in Kabiria and Their Fourier Representation Fourier Coefficients x k
Solidification of a binary mixture
NASA Technical Reports Server (NTRS)
Antar, B. N.
1982-01-01
The time dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time dependent Stefan problem is solved numerically using an implicit finite differencing algorithm with the method of lines. Specifically, the temporal operator is approximated via an implicit finite difference operator resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration for each time. Since the resulting differential equations set form a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.
1988-10-01
meteorologists’ rule-of-thumb that climatic drift manifests itself in periods greater than 30 years. For a fractionally-differenced model with our...estimates in a univariate ARIMA (p, d, q) with I d I< 0.5 has been derived by Li and McLrjd (1986). The model used by I-Iaslett an Raftery can be viewed as...Reply to the Discussion of "Space-time Modelling with Long-mnmory cDependence: Assessing Ireland’s Wind Resource" cJohn Haslett Department of
A Time Domain Analysis of Gust-Cascade Interaction Noise
NASA Technical Reports Server (NTRS)
Nallasamy, M.; Hixon, R.; Sawyer, S. D.; Dyson, R. W.
2003-01-01
The gust response of a 2 D cascade is studied by solving the full nonlinear Euler equations employing higher order accurate spatial differencing and time stepping techniques. The solutions exhibit the exponential decay of the two circumferential mode orders of the cutoff blade passing frequency (BPF) tone and propagation of one circumferential mode order at 2BPF, as would be expected for the flow configuration considered. Two frequency excitations indicate that the interaction between the frequencies and the self interaction contribute to the amplitude of the propagating mode.
Discrete models for the numerical analysis of time-dependent multidimensional gas dynamics
NASA Technical Reports Server (NTRS)
Roe, P. L.
1984-01-01
A possible technique is explored for extending to multidimensional flows some of the upwind-differencing methods that are highly successful in the one-dimensional case. Emphasis is on the two-dimensional case, and the flow domain is assumed to be divided into polygonal computational elements. Inside each element, the flow is represented by a local superposition of elementary solutions consisting of plane waves not necessarily aligned with the element boundaries.
NASA Technical Reports Server (NTRS)
Warming, R. F.; Beam, R. M.
1978-01-01
Efficient, noniterative, implicit finite difference algorithms are systematically developed for nonlinear conservation laws including purely hyperbolic systems and mixed hyperbolic parabolic systems. Utilization of a rational fraction or Pade time differencing formulas, yields a direct and natural derivation of an implicit scheme in a delta form. Attention is given to advantages of the delta formation and to various properties of one- and two-dimensional algorithms.
Automatic differentiation evaluated as a tool for rotorcraft design and optimization
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.
1995-01-01
This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. The original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation, this method produces derivatives to machine accuracy at a cost that is comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and do not depend on the selection of step-size, as are the derivatives obtained with finite-differencing techniques.
Prediction of the Thrust Performance and the Flowfield of Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Wang, T.-S.
1990-01-01
In an effort to improve the current solutions in the design and analysis of liquid propulsive engines, a computational fluid dynamics (CFD) model capable of calculating the reacting flows from the combustion chamber, through the nozzle to the external plume, was developed. The Space Shuttle Main Engine (SSME) fired at sea level, was investigated as a sample case. The CFD model, FDNS, is a pressure based, non-staggered grid, viscous/inviscid, ideal gas/real gas, reactive code. An adaptive upwinding differencing scheme is employed for the spatial discretization. The upwind scheme is based on fourth order central differencing with fourth order damping for smooth regions, and second order central differencing with second order damping for shock capturing. It is equipped with a CHMQGM equilibrium chemistry algorithm and a PARASOL finite rate chemistry algorithm using the point implicit method. The computed flow results and performance compared well with those of other standard codes and engine hot fire test data. In addition, the transient nozzle flowfield calculation was also performed to demonstrate the ability of FDNS in capturing the flow separation during the startup process.
Ice Sheet Change Detection by Satellite Image Differencing
NASA Technical Reports Server (NTRS)
Bindschadler, Robert A.; Scambos, Ted A.; Choi, Hyeungu; Haran, Terry M.
2010-01-01
Differencing of digital satellite image pairs highlights subtle changes in near-identical scenes of Earth surfaces. Using the mathematical relationships relevant to photoclinometry, we examine the effectiveness of this method for the study of localized ice sheet surface topography changes using numerical experiments. We then test these results by differencing images of several regions in West Antarctica, including some where changes have previously been identified in altimeter profiles. The technique works well with coregistered images having low noise, high radiometric sensitivity, and near-identical solar illumination geometry. Clouds and frosts detract from resolving surface features. The ETM(plus) sensor on Landsat-7, ALI sensor on EO-1, and MODIS sensor on the Aqua and Terra satellite platforms all have potential for detecting localized topographic changes such as shifting dunes, surface inflation and deflation features associated with sub-glacial lake fill-drain events, or grounding line changes. Availability and frequency of MODIS images favor this sensor for wide application, and using it, we demonstrate both qualitative identification of changes in topography and quantitative mapping of slope and elevation changes.
Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.
NASA Astrophysics Data System (ADS)
Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof
2014-05-01
A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSM) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length, and viewing angles, i.e. nadir view and ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracy at resp. the sub-meter and sub-decimeter level. Airphoto datasets were processed with SFM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
NASA Astrophysics Data System (ADS)
Mertes, J. R.; Zant, C. N.; Gulley, J. D.; Thomsen, T. L.
2017-08-01
Monitoring, managing and preserving submerged cultural resources (SCR) such as shipwrecks can involve time consuming detailed physical surveys, expensive side-scan sonar surveys, the study of photomosaics and even photogrammetric analysis. In some cases, surveys of SCR have produced 3D models, though these models have not typically been used to document patterns of site degradation over time. In this study, we report a novel approach for quantifying degradation and changes to SCR that relies on diver-acquired video surveys, generation of 3D models from data acquired at different points in time using structure from motion, and differencing of these models. We focus our study on the shipwreck S.S. Wisconsin, which is located roughly 10.2 km southeast of Kenosha, Wisconsin, in Lake Michigan. We created two digital elevation models of the shipwreck using surveys performed during the summers of 2006 and 2015 and differenced these models to map spatial changes within the wreck. Using orthomosaics and difference map data, we identified a change in degradation patterns. Degradation was anecdotally believed to be caused by inward collapse, but maps indicated a pattern of outward collapse of the hull structure, which has resulted in large scale shifting of material in the central upper deck. In addition, comparison of the orthomosaics with the difference map clearly shows movement of objects, degradation of smaller pieces and in some locations, an increase in colonization of mussels.
NASA Technical Reports Server (NTRS)
Ross, Kenton; Graham, William; Prados, Don; Spruce, Joseph
2007-01-01
MVDI, which effectively involves the differencing of NDMI and NDVI, appears to display increased noise that is consistent with a differencing technique. This effect masks finer variations in vegetation moisture, preventing MVDI from fulfilling the requirement of giving decision makers insight into spatial variation of fire risk. MVDI shows dependencies on land cover and phenology which also argue against its use as a fire risk proxy in an area of diverse and fragmented land covers. The conclusion of the rapid prototyping effort is that MVDI should not be implemented for SSC decision support.
Relative motion using analytical differential gravity
NASA Technical Reports Server (NTRS)
Gottlieb, Robert G.
1988-01-01
This paper presents a new approach to the computation of the motion of one satellite relative to another. The trajectory of the reference satellite is computed accurately subject to geopotential perturbations. This precise trajectory is used as a reference in computing the position of a nearby body, or bodies. The problem that arises in this approach is differencing nearly equal terms in the geopotential model, especially as the separation of the reference and nearby bodies approaches zero. By developing closed form expressions for differences in higher order and degree geopotential terms, the numerical problem inherent in the differencing approach is eliminated.
SCISEAL: A CFD code for analysis of fluid dynamic forces in seals
NASA Technical Reports Server (NTRS)
Athavale, Mahesh; Przekwas, Andrzej
1994-01-01
A viewgraph presentation is made of the objectives, capabilities, and test results of the computer code SCISEAL. Currently, the seal code has: a finite volume, pressure-based integration scheme; colocated variables with strong conservation approach; high-order spatial differencing, up to third-order; up to second-order temporal differencing; a comprehensive set of boundary conditions; a variety of turbulence models and surface roughness treatment; moving grid formulation for arbitrary rotor whirl; rotor dynamic coefficients calculated by the circular whirl and numerical shaker methods; and small perturbation capabilities to handle centered and eccentric seals.
Three-dimensional simulation of vortex breakdown
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Salas, M. D.
1990-01-01
The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in the conservation form, cast in generalized coordinate system, are solved, numerically, to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.
Computer program documentation: Raw-to-processed SINDA program (RTOPHS) user's guide
NASA Technical Reports Server (NTRS)
Damico, S. J.
1980-01-01
Use of the Raw to Processed SINDA(System Improved Numerical Differencing Analyzer) Program, RTOPHS, which provides a means of making the temperature prediction data on binary HSTFLO and HISTRY units generated by SINDA available to engineers in an easy to use format, is discussed. The program accomplishes this by reading the HISTRY unit and according to user input instructions, the desired times and temperature prediction data are extracted and written to a word addressable drum file.
Impacts of Ocean Waves on the Atmospheric Surface Layer: Simulations and Observations
2008-06-06
energy and pressure described in § 4 are solved using a mixed finite - difference pseudospectral scheme with a third-order Runge-Kutta time stepping with a...to that in our DNS code (Sullivan and McWilliams 2002; Sullivan et al. 2000). For our mixed finite - difference pseudospec- tral differencing scheme a...Poisson equation. The spatial discretization is pseu- dospectral along lines of constant or and second- order finite difference in the vertical
NASA Astrophysics Data System (ADS)
Cheong, Chin Wen
2008-02-01
This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.
Category 3: Sound Generation by Interacting with a Gust
NASA Technical Reports Server (NTRS)
Scott, James R.
2004-01-01
The cascade-gust interaction problem is solved employing a time-domain approach. The purpose of this problem is to test the ability of a CFD/CAA code to accurately predict the unsteady aerodynamic and aeroacoustic response of a single airfoil to a two-dimensional, periodic vortical gust.Nonlinear time dependent Euler equations are solved using higher order spatial differencing and time marching techniques. The solutions indicate the generation and propagation of expected mode orders for the given configuration and flow conditions. The blade passing frequency (BPF) is cut off for this cascade while higher harmonic, 2BPF and 3BPF, modes are cut on.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperikova, E.; Smith, J.T.; Kappler, K.N.
2010-04-01
With prior funding (UX-1225, MM-0437, and MM-0838), we have successfully designed and built a cart-mounted Berkeley UXO Discriminator (BUD) and demonstrated its performance at various test sites (e.g., Gasperikova et al., 2007, 2009). It is a multi-transmitter multi-receiver active electromagnetic system that is able to discriminate UXO from scrap at a single measurement position, hence eliminates equirement of a very accurate sensor location. The cart-mounted system comprises of three orthogonal transmitters and eight pairs of differenced receivers (Smith et al., 2007). Receiver coils are located on ymmetry lines through the center of the system and see identical fields during themore » on-time of the pulse in all of the transmitter coils. They can then be wired in opposition to produce zero output during the n-ime of the pulses in three orthogonal transmitters. Moreover, this configuration dramatically reduces noise in the measurements by canceling the background electromagnetic fields (these fields are uniform ver the scale of the receiver array and are consequently nulled by the differencing operation), and by canceling the noise contributed by the tilt of the receivers in the Earth's magnetic field, and therefore reatly enhances receivers sensitivity to the gradients of the target.« less
Moderating Effects of Mathematics Anxiety on the Effectiveness of Explicit Timing
ERIC Educational Resources Information Center
Grays, Sharnita D.; Rhymer, Katrina N.; Swartzmiller, Melissa D.
2017-01-01
Explicit timing is an empirically validated intervention to increase problem completion rates by exposing individuals to a stopwatch and explicitly telling them of the time limit for the assignment. Though explicit timing has proven to be effective for groups of students, some students may not respond well to explicit timing based on factors such…
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps can be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
Efficient entanglement distribution over 200 kilometers.
Dynes, J F; Takesue, H; Yuan, Z L; Sharpe, A W; Harada, K; Honjo, T; Kamada, H; Tadanaga, O; Nishida, Y; Asobe, M; Shields, A J
2009-07-06
Here we report the first demonstration of entanglement distribution over a record distance of 200 km which is of sufficient fidelity to realize secure communication. In contrast to previous entanglement distribution schemes, we use detection elements based on practical avalanche photodiodes (APDs) operating in a self-differencing mode. These APDs are low-cost, compact and easy to operate requiring only electrical cooling to achieve high single photon detection efficiency. The self-differencing APDs in combination with a reliable parametric down-conversion source demonstrate that entanglement distribution over ultra-long distances has become both possible and practical. Consequently the outlook is extremely promising for real world entanglement-based communication between distantly separated parties.
On the geodetic applications of simultaneous range-differencing to LAGEOS
NASA Technical Reports Server (NTRS)
Pablis, E. C.
1982-01-01
The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network satellite pass configurations. The methods of least squares approximation are compared with monomials and Chebyshev polynomials and the cubic spline interpolation. Analysis of three types of orbital biases (radial, along- and across track) shows that radial biases are the ones most efficiently minimized in the SRC mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observation show that for baseline length estimations the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but it further reduces the effects of model biases on the results as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.
1987-06-01
number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique. Figure 1 illustrates a...th time from xe’s. The integrbted autoregressive - moving average model , denoted by ARIMA (p,d,q) is a result of combining d-th differencing process...Experiments, (4) Data Analysis and Modeling , (5) Theory and Probablistic Inference, (6) Fuzzy Statistics, (7) Forecasting and Prediction, (8) Small Sample
CFD in the 1980's from one point of view
NASA Technical Reports Server (NTRS)
Lomax, Harvard
1991-01-01
The present interpretive treatment of the development history of CFD in the 1980s gives attention to advancements in such algorithmic techniques as flux Jacobian-based upwind differencing, total variation-diminishing and essentially nonoscillatory schemes, multigrid methods, unstructured grids, and nonrectangular structured grids. At the same time, computational turbulence research gave attention to turbulence modeling on the bases of increasingly powerful supercomputers and meticulously constructed databases. The major future developments in CFD will encompass such capabilities as structured and unstructured three-dimensional grids.
Computation of confined coflow jets with three turbulence models
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T. H.
1993-01-01
A numerical study of confined jets in a cylindrical duct is carried out to examine the performance of two recently proposed turbulence models: an RNG-based K-epsilon model and a realizable Reynolds stress algebraic equation model. The former is of the same form as the standard K-epsilon model but has different model coefficients. The latter uses an explicit quadratic stress-strain relationship to model the turbulent stresses and is capable of ensuring the positivity of each turbulent normal stress. The flow considered involves recirculation with unfixed separation and reattachment points and severe adverse pressure gradients, thereby providing a valuable test of the predictive capability of the models for complex flows. Calculations are performed with a finite-volume procedure. Numerical credibility of the solutions is ensured by using second-order accurate differencing schemes and sufficiently fine grids. Calculations with the standard K-epsilon model are also made for comparison. Detailed comparisons with experiments show that the realizable Reynolds stress algebraic equation model consistently works better than does the standard K-epsilon model in capturing the essential flow features, while the RNG-based K-epsilon model does not seem to give improvements over the standard K-epsilon model under the flow conditions considered.
NASA Technical Reports Server (NTRS)
Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.
1993-01-01
The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranginging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle finding capabilities of the data. The accuracy achieved using the precision range filtering strategy proved markedly better when compared to post-flyby reconstructions than did solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.
Large Eddy Simulation (LES) of Particle-Laden Temporal Mixing Layers
NASA Technical Reports Server (NTRS)
Bellan, Josette; Radhakrishnan, Senthilkumaran
2012-01-01
High-fidelity models of plume-regolith interaction are difficult to develop because of the widely disparate flow conditions that exist in this process. The gas in the core of a rocket plume can often be modeled as a time-dependent, high-temperature, turbulent, reacting continuum flow. However, due to the vacuum conditions on the lunar surface, the mean molecular path in the outer parts of the plume is too long for the continuum assumption to remain valid. Molecular methods are better suited to model this region of the flow. Finally, granular and multiphase flow models must be employed to describe the dust and debris that are displaced from the surface, as well as how a crater is formed in the regolith. At present, standard commercial CFD (computational fluid dynamics) software is not capable of coupling each of these flow regimes to provide an accurate representation of this flow process, necessitating the development of custom software. This software solves the fluid-flow-governing equations in an Eulerian framework, coupled with the particle transport equations that are solved in a Lagrangian framework. It uses a fourth-order explicit Runge-Kutta scheme for temporal integration, an eighth-order central finite differencing scheme for spatial discretization. The non-linear terms in the governing equations are recast in cubic skew symmetric form to reduce aliasing error. The second derivative viscous terms are computed using eighth-order narrow stencils that provide better diffusion for the highest resolved wave numbers. A fourth-order Lagrange interpolation procedure is used to obtain gas-phase variable values at the particle locations.
NASA Technical Reports Server (NTRS)
Oaks, J.; Frank, A.; Falvey, S.; Lister, M.; Buisson, J.; Wardrip, C.; Warren, H.
1982-01-01
Time transfer equipment and techniques used with the Navigation Technology Satellites were modified and extended for use with the Global Positioning System (GPS) satellites. A prototype receiver was built and field tested. The receiver uses the GPS L1 link at 1575 MHz with C/A code only to resolve a measured range to the satellite. A theoretical range is computed from the satellite ephemeris transmitted in the data message and the user's coordinates. Results of user offset from GPS time are obtained by differencing the measured and theoretical ranges and applying calibration corrections. Results of the first field test evaluation of the receiver are presented.
Challenges in Modeling of the Global Atmosphere
NASA Astrophysics Data System (ADS)
Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom
2015-04-01
The massively parallel computer architectures require that some widely adopted modeling paradigms be reconsidered in order to utilize more productively the power of parallel processing. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain, and exchange only few rows of halo data with the neighbouring cores. However, the described scenario implies that the discretization used in the model is horizontally local. The spherical geometry further complicates the problem. Various grid topologies will be discussed and examples will be shown. The latitude-longitude grid with local in space and explicit in time differencing has been an early choice and remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of decent size. However, the polar filtering requires transpositions involving extra communications. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for a wide application of the spectral representation. With some variations, these techniques are used in most major centers. However, the horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with a fast Fourier transform represents a significant step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids were proposed almost fifty years ago. However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Due to their large scales, that are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Having in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB) that is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS) will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models. The computational efficiency of the global NMMB on parallel computers is good.
NASA Astrophysics Data System (ADS)
Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen
2007-03-01
Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receivers pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument sitting, eliminating effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
Response functions of free mass gravitational wave antennas
NASA Technical Reports Server (NTRS)
Estabrook, F. B.
1985-01-01
The work of Gursel, Linsay, Spero, Saulson, Whitcomb and Weiss (1984) on the response of a free-mass interferometric antenna is extended. Starting from first principles, the earlier work derived the response of a 2-arm gravitational wave antenna to plane polarized gravitational waves. Equivalent formulas (generalized slightly to allow for arbitrary elliptical polarization) are obtained by a simple differencing of the '3-pulse' Doppler response functions of two 1-arm antennas. A '4-pulse' response function is found, with quite complicated angular dependences for arbitrary incident polarization. The differencing method can as readily be used to write exact response functions ('3n+1 pulse') for antennas having multiple passes or more arms.
NASA Technical Reports Server (NTRS)
Jackson, James A.; Marr, Greg C.; Maher, Michael J.
1995-01-01
NASA GSFC VNS TSG personnel have proposed the use of TDRSS to obtain telemetry and/or S-band one-way return Doppler tracking data for spacecraft which do not have TDRSS-compatible transponders and therefore were never considered candidates for TDRSS support. For spacecraft with less stable local oscillators (LO), one-way return Doppler tracking data is typically of poor quality. It has been demonstrated using UARS, WIND, and NOAA-J tracking data that the simultaneous use of two TDRSS spacecraft can yield differenced one-way return Doppler data of high quality which is usable for orbit determination by differencing away the effects of oscillator instability.
The computation of dynamic fractional difference parameter for S&P500 index
NASA Astrophysics Data System (ADS)
Pei, Tan Pei; Cheong, Chin Wen; Galagedera, Don U. A.
2015-10-01
This study evaluates the time-varying long memory behaviors of the S&P500 volatility index using dynamic fractional difference parameters. Time-varying fractional difference parameter shows the dynamic of long memory in volatility series for the pre and post subprime mortgage crisis triggered by U.S. The results find an increasing trend in the S&P500 long memory volatility for the pre-crisis period. However, the onset of Lehman Brothers event reduces the predictability of volatility series following by a slight fluctuation of the factional differencing parameters. After that, the U.S. financial market becomes more informationally efficient and follows a non-stationary random process.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Yusuf, Abdullahi; Isa Aliyu, Aliyu; Baleanu, Dumitru
2018-03-01
This research analyzes the symmetry analysis, explicit solutions and convergence analysis to the time fractional Cahn-Allen (CA) and time-fractional Klein-Gordon (KG) equations with Riemann-Liouville (RL) derivative. The time fractional CA and time fractional KG are reduced to respective nonlinear ordinary differential equation of fractional order. We solve the reduced fractional ODEs using an explicit power series method. The convergence analysis for the obtained explicit solutions are investigated. Some figures for the obtained explicit solutions are also presented.
The GFZ real-time GNSS precise positioning service system and its adaption for COMPASS
NASA Astrophysics Data System (ADS)
Li, Xingxing; Ge, Maorong; Zhang, Hongping; Nischan, Thomas; Wickert, Jens
2013-03-01
Motivated by the IGS real-time Pilot Project, GFZ has been developing its own real-time precise positioning service for various applications. An operational system at GFZ is now broadcasting real-time orbits, clocks, global ionospheric model, uncalibrated phase delays and regional atmospheric corrections for standard PPP, PPP with ambiguity fixing, single-frequency PPP and regional augmented PPP. To avoid developing various algorithms for different applications, we proposed a uniform algorithm and implemented it into our real-time software. In the new processing scheme, we employed un-differenced raw observations with atmospheric delays as parameters, which are properly constrained by real-time derived global ionospheric model or regional atmospheric corrections and by the empirical characteristics of the atmospheric delay variation in time and space. The positioning performance in terms of convergence time and ambiguity fixing depends mainly on the quality of the received atmospheric information and the spatial and temporal constraints. The un-differenced raw observation model can not only integrate PPP and NRTK into a seamless positioning service, but also syncretize these two techniques into a unique model and algorithm. Furthermore, it is suitable for both dual-frequency and sing-frequency receivers. Based on the real-time data streams from IGS, EUREF and SAPOS reference networks, we can provide services of global precise point positioning (PPP) with 5-10 cm accuracy, PPP with ambiguity-fixing of 2-5 cm accuracy, PPP using single-frequency receiver with accuracy of better than 50 cm and PPP with regional augmentation for instantaneous ambiguity resolution of 1-3 cm accuracy. We adapted the system for current COMPASS to provide PPP service. COMPASS observations from a regional network of nine stations are used for precise orbit determination and clock estimation in simulated real-time mode, the orbit and clock products are applied for real-time precise point positioning. The simulated real-time PPP service confirms that real-time positioning services of accuracy at dm-level and even cm-level is achievable with COMPASS only.
Progress in multi-dimensional upwind differencing
NASA Technical Reports Server (NTRS)
Vanleer, Bram
1992-01-01
Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
NASA Technical Reports Server (NTRS)
Radomski, M. S.; Doll, C. E.
1995-01-01
The Differenced Range (DR) Versus Integrated Doppler (ID) (DRVID) method exploits the opposition of high-frequency signal versus phase retardation by plasma media to obtain information about the plasma's corruption of simultaneous range and Doppler spacecraft tracking measurements. Thus, DR Plus ID (DRPID) is an observable independent of plasma refraction, while actual DRVID (DR minus ID) measures the time variation of the path electron content independently of spacecraft motion. The DRVID principle has been known since 1961. It has been used to observe interplanetary plasmas, is implemented in Deep Space Network tracking hardware, and has recently been applied to single-frequency Global Positioning System user navigation This paper discusses exploration at the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) of DRVID synthesized from simultaneous two-way range and Doppler tracking for low Earth-orbiting missions supported by the Tracking and Data Relay Satellite System (TDRSS) The paper presents comparisons of actual DR and ID residuals and relates those comparisons to predictions of the Bent model. The complications due to the pilot tone influence on relayed Doppler measurements are considered. Further use of DRVID to evaluate ionospheric models is discussed, as is use of DRPID in reducing dependence on ionospheric modeling in orbit determination.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions
NASA Technical Reports Server (NTRS)
Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.;
2008-01-01
Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly. errors in the measured disk temperature are approximately 0.5%.
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Yao, Yifei; Wang, Qianxin
2018-01-01
In order to incorporate the time smoothness of ionospheric delay to aid the cycle slip detection, an adaptive Kalman filter is developed based on variance component estimation. The correlations between measurements at neighboring epochs are fully considered in developing a filtering algorithm for colored measurement noise. Within this filtering framework, epoch-differenced ionospheric delays are predicted. Using this prediction, the potential cycle slips are repaired for triple-frequency signals of global navigation satellite systems. Cycle slips are repaired in a stepwise manner; i.e., for two extra wide lane combinations firstly and then for the third frequency. In the estimation for the third frequency, a stochastic model is followed in which the correlations between the ionospheric delay prediction errors and the errors in the epoch-differenced phase measurements are considered. The implementing details of the proposed method are tabulated. A real BeiDou Navigation Satellite System data set is used to check the performance of the proposed method. Most cycle slips, no matter trivial or nontrivial, can be estimated in float values with satisfactorily high accuracy and their integer values can hence be correctly obtained by simple rounding. To be more specific, all manually introduced nontrivial cycle slips are correctly repaired.
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predictions in the EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent from improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics
NASA Astrophysics Data System (ADS)
Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno
2017-07-01
In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time-step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as in the case of a bridge crane under seismic loading, multiple time-scales coexist in the same problem. In that case, the use of multi-time scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper consists in using an explicit time integrator with a fine time scale in the contact area for reproducing high frequency phenomena, while an implicit time integrator is adopted in the other parts in order to reproduce much low frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate and the scheme has generally a higher order of convergence than Moreau-Jean's schemes and provides also an excellent energy behavior. Then, the two time scales explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated in comparison to a fine scale full explicit computation. The energy dissipated in the implicit-explicit interface is well controlled and the computational time is lower than a full-explicit simulation.
Thermal modeling of a cryogenic turbopump for space shuttle applications.
NASA Technical Reports Server (NTRS)
Knowles, P. J.
1971-01-01
Thermal modeling of a cryogenic pump and a hot-gas turbine in a turbopump assembly proposed for the Space Shuttle is described in this paper. A model, developed by identifying the heat-transfer regimes and incorporating their dependencies into a turbopump system model, included heat transfer for two-phase cryogen, hot-gas (200 R) impingement on turbine blades, gas impingement on rotating disks and parallel plate fluid flow. The ?thermal analyzer' program employed to develop this model was the TRW Systems Improved Numerical Differencing Analyzer (SINDA). This program uses finite differencing with lumped parameter representation for each node. Also discussed are model development, simulations of turbopump startup/shutdown operations, and the effects of varying turbopump parameters on the thermal performance.
Analysis of airfoil transitional separation bubbles
NASA Technical Reports Server (NTRS)
Davis, R. L.; Carter, J. E.
1984-01-01
A previously developed local inviscid-viscous interaction technique for the analysis of airfoil transitional separation bubbles, ALESEP (Airfoil Leading Edge Separation) has been modified to utilize a more accurate windward finite difference procedure in the reversed flow region, and a natural transition/turbulence model has been incorporated for the prediction of transition within the separation bubble. Numerous calculations and experimental comparisons are presented to demonstrate the effects of the windward differencing scheme and the natural transition/turbulence model. Grid sensitivity and convergence capabilities of this inviscid-viscous interaction technique are briefly addressed. A major contribution of this report is that with the use of windward differencing, a second, counter-rotating eddy has been found to exist in the wall layer of the primary separation bubble.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Don, W-S; Gotllieb, D; Shu, C-W
2001-11-26
For flows that contain significant structure, high order schemes offer large advantages over low order schemes. Fundamentally, the reason comes from the truncation error of the differencing operators. If one examines carefully the expression for the truncation error, one will see that for a fixed computational cost that the error can be made much smaller by increasing the numerical order than by increasing the number of grid points. One can readily derive the following expression which holds for systems dominated by hyperbolic effects and advanced explicitly in time: flops = const * p{sup 2} * k{sup (d+1)(p+1)/p}/E{sup (d+1)/p} where flopsmore » denotes floating point operations, p denotes numerical order, d denotes spatial dimension, where E denotes the truncation error of the difference operator, and where k denotes the Fourier wavenumber. For flows that contain structure, such as turbulent flows or any calculation where, say, vortices are present, there will be significant energy in the high values of k. Thus, one can see that the rate of growth of the flops is very different for different values of p. Further, the constant in front of the expression is also very different. With a low order scheme, one quickly reaches the limit of the computer. With the high order scheme, one can obtain far more modes before the limit of the computer is reached. Here we examine the application of spectral methods and the Weighted Essentially Non-Oscillatory (WENO) scheme to the Richtmyer-Meshkov Instability. We show the intricate structure that these high order schemes can calculate and we show that the two methods, though very different, converge to the same numerical solution indicating that the numerical solution is very likely physically correct.« less
A high-order spatial filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-04-01
A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is the high-order Helmholtz equation which corresponds to the implicit time-differencing of a diffusion equation employing the high-order Laplacian. The Laplacian operator is discretized within a cell which is a building block of the cubed sphere grid and consists of the Gauss-Lobatto grid. When discretizing a high-order Laplacian, due to the requirement of C0 continuity along the cell boundaries the grid-points in neighboring cells should be used for the target cell: The number of neighboring cells is nearly quadratically proportional to the filter order. Discrete Helmholtz equation yields a huge-sized and highly sparse matrix equation whose size is N*N with N the number of total grid points on the globe. The number of nonzero entries is also almost in quadratic proportion to the filter order. Filtering is accomplished by solving the huge-matrix equation. While requiring a significant computing time, the solution of global matrix provides the filtered field free of discontinuity along the cell boundaries. To achieve the computational efficiency and the accuracy at the same time, the solution of the matrix equation was obtained by only accounting for the finite number of adjacent cells. This is called as a local-domain filter. It was shown that to remove the numerical noise near the grid-scale, inclusion of 5*5 cells for the local-domain filter was found sufficient, giving the same accuracy as that obtained by global domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using the standard test cases including the baroclinic instability of the zonal flow. Results indicated that the filter performs better on the removal of grid-scale numerical noises than the explicit high-order viscosity. It was also presented that the filter can be easily implemented on the distributed-memory parallel computers with a desirable scalability.
Low Dissipative High Order Shock-Capturing Methods Using Characteristic-Based Filters
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sandham, N. D.; Djomehri, M. J.
1998-01-01
An approach which closely maintains the non-dissipative nature of classical fourth or higher- order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Oisson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.
Low Dissipative High Order Shock-Capturing Methods using Characteristic-Based Filters
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sandham, N. D.; Djomehri, M. J.
1998-01-01
An approach which closely maintains the non-dissipative nature of classical fourth or higher- order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.
Thermal instability in post-flare plasmas
NASA Technical Reports Server (NTRS)
Antiochos, S. K.
1976-01-01
The cooling of post-flare plasmas is discussed and the formation of loop prominences is explained as due to a thermal instability. A one-dimensional model was developed for active loop prominences. Only the motion and heat fluxes parallel to the existing magnetic fields are considered. The relevant size scales and time scales are such that single-fluid MHD equations are valid. The effects of gravity, the geometry of the field and conduction losses to the chromosphere are included. A computer code was constructed to solve the model equations. Basically, the system is treated as an initial value problem (with certain boundary conditions at the chromosphere-corona transition region), and a two-step time differencing scheme is used.
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
On the effect of using the Shapiro filter to smooth winds on a sphere
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Balgovind, R. C.
1984-01-01
Spatial differencing schemes which are not enstrophy conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that using a 16th-order Shapiro (1979) filter on the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.
Black hole evolution by spectral methods
NASA Astrophysics Data System (ADS)
Kidder, Lawrence E.; Scheel, Mark A.; Teukolsky, Saul A.; Carlson, Eric D.; Cook, Gregory B.
2000-10-01
Current methods of evolving a spacetime containing one or more black holes are plagued by instabilities that prohibit long-term evolution. Some of these instabilities may be due to the numerical method used, traditionally finite differencing. In this paper, we explore the use of a pseudospectral collocation (PSC) method for the evolution of a spherically symmetric black hole spacetime in one dimension using a hyperbolic formulation of Einstein's equations. We demonstrate that our PSC method is able to evolve a spherically symmetric black hole spacetime forever without enforcing constraints, even if we add dynamics via a Klein-Gordon scalar field. We find that, in contrast with finite-differencing methods, black hole excision is a trivial operation using PSC applied to a hyperbolic formulation of Einstein's equations. We discuss the extension of this method to three spatial dimensions.
NASA Astrophysics Data System (ADS)
Koehler-Sidki, A.; Dynes, J. F.; Lucamarini, M.; Roberts, G. L.; Sharpe, A. W.; Yuan, Z. L.; Shields, A. J.
2018-04-01
Fast-gated avalanche photodiodes (APDs) are the most commonly used single photon detectors for high-bit-rate quantum key distribution (QKD). Their robustness against external attacks is crucial to the overall security of a QKD system, or even an entire QKD network. We investigate the behavior of a gigahertz-gated, self-differencing (In,Ga)As APD under strong illumination, a tactic Eve often uses to bring detectors under her control. Our experiment and modeling reveal that the negative feedback by the photocurrent safeguards the detector from being blinded through reducing its avalanche probability and/or strengthening the capacitive response. Based on this finding, we propose a set of best-practice criteria for designing and operating fast-gated APD detectors to ensure their practical security in QKD.
NASA Astrophysics Data System (ADS)
Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.
2016-12-01
In the last decade, high resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological and anthropomorphic processes. However, the trade off between spatial coverage and spatial-temporal resolution has limited detailed, process-level change detection over large (i.e. continental) scales. The ArcticDEM project utilized over 300,000 Worldview image pairs to produce a nearly 100% coverage elevation model (above 60°N) offering the first polar, high spatial - high resolution (2-8m by region) dataset, often with multiple repeats in areas of particular interest to geo-scientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized. automated workflow that requires minimal input from the end user about a study site, and utilizes cloud computing resources to provide a temporally sorted and differenced dataset, ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science, without having to manage thousands of files, or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration enabling cross-comparisons between research projects.
Study of structural change in volcanic and geothermal areas using seismic tomography
NASA Astrophysics Data System (ADS)
Mhana, Najwa; Foulger, Gillian; Julian, Bruce; peirce, Christine
2014-05-01
Long Valley caldera is a large silicic volcano. It has been in a state of volcanic and seismic unrest since 1978. Farther escalation of this unrest could pose a threat to the 5,000 residents and the tens of thousands of tourists who visit the area. We have studied the crustal structure beneath 28 km X 16 km area using seismic tomography. We performed tomographic inversions for the years 2009 and 2010 with a view to differencing it with the 1997 result to look for structural changes with time and whether repeat tomography is a capable of determining the changes in structure in volcanic and geothermal reservoirs. Thus, it might provide a useful tool to monitoring physical changes in volcanoes and exploited geothermal reservoirs. Up to 600 earthquakes, selected from the best-quality events, were used for the inversion. The inversions were performed using program simulps12 [Thurber, 1983]. Our initial results show that changes in both V p and V s were consistent with the migration of CO2 into the upper 2 km or so. Our ongoing work will also invert pairs of years simultaneously using a new program, tomo4d [Julian and Foulger, 2010]. This program inverts for the differences in structure between two epochs so it can provide a more reliable measure of structural change than simply differencing the results of individual years.
Prediction and control of chaotic processes using nonlinear adaptive networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.D.; Barnes, C.W.; Flake, G.W.
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
A general algorithm using finite element method for aerodynamic configurations at low speeds
NASA Technical Reports Server (NTRS)
Balasubramanian, R.
1975-01-01
A finite element algorithm for numerical simulation of two-dimensional, incompressible, viscous flows was developed. The Navier-Stokes equations are suitably modelled to facilitate direct solution for the essential flow parameters. A leap-frog time differencing and Galerkin minimization of these model equations yields the finite element algorithm. The finite elements are triangular with bicubic shape functions approximating the solution space. The finite element matrices are unsymmetrically banded to facilitate savings in storage. An unsymmetric L-U decomposition is performed on the finite element matrices to obtain the solution for the boundary value problem.
Computational design of the basic dynamical processes of the UCLA general circulation model
NASA Technical Reports Server (NTRS)
Arakawa, A.; Lamb, V. R.
1977-01-01
The 12-layer UCLA general circulation model encompassing troposphere and stratosphere (and superjacent 'sponge layer') is described. Prognostic variables are: surface pressure, horizontal velocity, temperature, water vapor and ozone in each layer, planetary boundary layer (PBL) depth, temperature, moisture and momentum discontinuities at PBL top, ground temperature and water storage, and mass of snow on ground. Selection of space finite-difference schemes for homogeneous incompressible flow, with/without a free surface, nonlinear two-dimensional nondivergent flow, enstrophy conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time differencing schemes are discussed.
NASA Technical Reports Server (NTRS)
Imlay, S. T.
1986-01-01
An implicit finite volume method is investigated for the solution of the compressible Navier-Stokes equations for flows within thrust reversing and thrust vectoring nozzles. Thrust reversing nozzles typically have sharp corners, and the rapid expansion and large turning angles near these corners are shown to cause unacceptable time step restrictions when conventional approximate factorization methods are used. In this investigation these limitations are overcome by using second-order upwind differencing and line Gauss-Siedel relaxation. This method is implemented with a zonal mesh so that flows through complex nozzle geometries may be efficiently calculated. Results are presented for five nozzle configurations including two with time varying geometries. Three cases are compared with available experimental data and the results are generally acceptable.
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low-turbulence Reynolds numbers, in our case R sub lambda = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first-time derivative at each time step. Fourth-order accurate space-differencing is used.
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for detection of distant moving obstacles like cars and bicycles by a monocular camera to cooperate with ultrasonic sensors in low-cost condition. We are aiming at detecting distant obstacles that move toward our autonomous navigation car in order to give alarm and keep away from them. Method of frame differencing is applied to find obstacles after compensation of camera's ego-motion. Meanwhile, each obstacle is separated from others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and our own autonomous navigation car have proved that the method is effective for detection of distant moving obstacles in real-time.
Time-marching transonic flutter solutions including angle-of-attack effects
NASA Technical Reports Server (NTRS)
Edwards, J. W.; Bennett, R. M.; Whitlow, W., Jr.; Seidel, D. A.
1982-01-01
Transonic aeroelastic solutions based upon the transonic small perturbation potential equation were studied. Time-marching transient solutions of plunging and pitching airfoils were analyzed using a complex exponential modal identification technique, and seven alternative integration techniques for the structural equations were evaluated. The HYTRAN2 code was used to determine transonic flutter boundaries versus Mach number and angle-of-attack for NACA 64A010 and MBB A-3 airfoils. In the code, a monotone differencing method, which eliminates leading edge expansion shocks, is used to solve the potential equation. When the effect of static pitching moment upon the angle-of-attack is included, the MBB A-3 airfoil can have multiple flutter speeds at a given Mach number.
Generalized three-dimensional experimental lightning code (G3DXL) user's manual
NASA Technical Reports Server (NTRS)
Kunz, Karl S.
1986-01-01
Information concerning the programming, maintenance and operation of the G3DXL computer program is presented and the theoretical basis for the code is described. The program computes time domain scattering fields and surface currents and charges induced by a driving function on and within a complex scattering object which may be perfectly conducting or a lossy dielectric. This is accomplished by modeling the object with cells within a three-dimensional, rectangular problem space, enforcing the appropriate boundary conditions and differencing Maxwell's equations in time. In the present version of the program, the driving function can be either the field radiated by a lightning strike or a direct lightning strike. The F-106 B aircraft is used as an example scattering object.
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite differences schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluated, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two dimensional computer code and their accuracy and stability determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes is dependent on the angle that the streamline make with the mesh with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
Alphan, Hakan
2013-03-01
The aim of this study is (1) to quantify landscape changes in the easternmost Mediterranean deltas using bi-temporal binary change detection approach and (2) to analyze relationships between conservation/management designations and various categories of change that indicate type, degree and severity of human impact. For this purpose, image differencing and ratioing were applied to Landsat TM images of 1984 and 2006. A total of 136 candidate change images including normalized difference vegetation index (NDVI) and principal component analysis (PCA) difference images were tested to understand performance of bi-temporal pre-classification analysis procedures in the Mediterranean delta ecosystems. Results showed that visible image algebra provided high accuracies than did NDVI and PCA differencing. On the other hand, Band 5 differencing had one of the lowest change detection performances. Seven superclasses of change were identified using from/to change categories between the earlier and later dates. These classes were used to understand spatial character of anthropogenic impacts in the study area and derive qualitative and quantitative change information within and outside of the conservation/management areas. Change analysis indicated that natural site and wildlife reserve designations fell short of protecting sand dunes from agricultural expansion in the west. East of the study area, however, was exposed to least human impact owing to the fact that nature conservation status kept human interference at a minimum. Implications of these changes were discussed and solutions were proposed to deal with management problems leading to environmental change.
NASA Technical Reports Server (NTRS)
Folkner, W. M.; Border, J. S.; Nandi, S.; Zukor, K. S.
1993-01-01
A new radio metric positioning technique has demonstrated improved orbit determination accuracy for the Magellan and Pioneer Venus Orbiter orbiters. The new technique, known as Same-Beam Interferometry (SBI), is applicable to the positioning of multiple planetary rovers, landers, and orbiters which may simultaneously be observed in the same beamwidth of Earth-based radio antennas. Measurements of carrier phase are differenced between spacecraft and between receiving stations to determine the plane-of-sky components of the separation vector(s) between the spacecraft. The SBI measurements complement the information contained in line-of-sight Doppler measurements, leading to improved orbit determination accuracy. Orbit determination solutions have been obtained for a number of 48-hour data arcs using combinations of Doppler, differenced-Doppler, and SBI data acquired in the spring of 1991. Orbit determination accuracy is assessed by comparing orbit solutions from adjacent data arcs. The orbit solution differences are shown to agree with expected orbit determination uncertainties. The results from this demonstration show that the orbit determination accuracy for Magellan obtained by using Doppler plus SBI data is better than the accuracy achieved using Doppler plus differenced-Doppler by a factor of four and better than the accuracy achieved using only Doppler by a factor of eighteen. The orbit determination accuracy for Pioneer Venus Orbiter using Doppler plus SBI data is better than the accuracy using only Doppler data by 30 percent.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations at Community of Madrid (Spain) based on successive application of two models. The first one is a stochastic model, autoregressive integrated moving average (ARIMA), that forecasts monthly minimum absolute temperature (tmin) and monthly average of minimum temperature (tminav) following Box-Jenkins methodology. The second model relates these monthly temperatures to minimum daily temperature distribution during one month. Three ARIMA models were identified for the time series analyzed with a stational period correspondent to one year. They present the same stational behavior (moving average differenced model) and different non-stational part: autoregressive model (Model 1), moving average differenced model (Model 2) and autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through years. This standard deviation obtained for each station and each month could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures showed the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to asses the risk of frost frequency and intensity, so that they can take steps to mitigate against frost damage and estimated the damage that frost would cost. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentation (MAPA) is gratefully acknowledged.
Tidewater dynamics at Store Glacier, West Greenland from daily repeat UAV surveys
NASA Astrophysics Data System (ADS)
Ryan, Jonathan; Hubbard, Alun; Toberg, Nick; Box, Jason; Todd, Joe; Christoffersen, Poul; Neal, Snooke
2017-04-01
A significant component of the Greenland ice sheet's mass wasteage to sea level rise is attributed to the acceleration and dynamic thinning at its tidewater margins. To improve understanding of the rapid mass loss processes occurring at large tidewater glaciers, we conducted a suite of daily repeat aerial surveys across the terminus of Store Glacier, a large outlet draining the western Greenland Ice Sheet, from May to July 2014 (https://www.youtube.com/watch?v=-y8kauAVAfE). The unmanned aerial vehicles (UAVs) were equipped with digital cameras, which, in combination with onboard GPS, enabled production of high spatial resolution orthophotos and digital elevation models (DEMs) using standard structure-from-motion techniques. These data provide insight into the short-term dynamics of Store Glacier surrounding the break-up of the sea-ice mélange that occurred between 4 and 7 June. Feature tracking of the orthophotos reveals that mean speed of the terminus is 16 - 18 m per day, which was independently verified against a high temporal resolution time-series derived from an expendable/telemetric GPS deployed at the terminus. Differencing the surface area of successive orthophotos enable quantification of daily calving rates, which significantly increase just after melange break-up. Likewise, by differencing bulk freeboard volume of icebergs through time we could also constrain the magnitude and variation of submarine melt. We calculate a mean submarine melt rate of 0.18 m per day throughout the spring period with relatively little supraglacial runoff and no active meltwater plumes to stimulate fjord circulation and upwelling of deeper, warmer water masses. Finally, we relate calving rates to the zonation and depth of water-filled crevasses, which were prominent across parts of the terminus from June onwards.
NASA Technical Reports Server (NTRS)
Cullimore, B.
1994-01-01
SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers the following numerical solution techniques. The Finite difference formulation of the explicit method is the Forward-difference explicit approximation. The formulation of the implicit method is the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators. SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Flight Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001). The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971, and first had fluid capability added in 1975. SINDA'85/FLUINT version 2.3 was released in 1990.
Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X
2014-03-01
Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations for time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamic of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validities and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L-sq-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L-sq method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
NASA Astrophysics Data System (ADS)
Mangano, Joseph F.
A debris flow associated with the 2003 breach of Grand Ditch in Rocky Mountain National Park, Colorado provided an opportunity to determine controls on channel geomorphic responses following a large sedimentation event. Due to the remote site location and high spatial and temporal variability of processes controlling channel response, repeat airborne lidar surveys in 2004 and 2012 were used to capture conditions along the upper Colorado River and tributary Lulu Creek i) one year following the initial debris flow, and ii) following two bankfull flows (2009 and 2010) and a record-breaking long duration, high intensity snowmelt runoff season (2011). Locations and volumes of aggradation and degradation were determined using lidar differencing. Channel and valley metrics measured from the lidar surveys included water surface slope, valley slope, changes in bankfull width, sinuosity, braiding index, channel migration, valley confinement, height above the water surface along the floodplain, and longitudinal profiles. Reaches of aggradation and degradation along the upper Colorado River are influenced by valley confinement and local controls. Aggradational reaches occurred predominantly in locations where the valley was unconfined and valley slope remained constant through the length of the reach. Channel avulsions, migration, and changes in sinuosity were common in all unconfined reaches, whether aggradational or degradational. Bankfull width in both aggradational and degradational reaches showed greater changes closer to the sediment source, with the magnitude of change decreasing downstream. Local variations in channel morphology, site specific channel conditions, and the distance from the sediment source influence the balance of transport supply and capacity and, therefore, locations of aggradation, degradation, and associated morphologic changes. Additionally, a complex response initially seen in repeat cross-sections is broadly supported by lidar differencing, although the differencing captures only the net change over eight years and not annual changes. Lidar differencing shows great promise because it reveals vertical and horizontal trends in morphologic changes at a high resolution over a large area. Repeat lidar surveys were also used to create a sediment budget along the upper Colorado River by means of the morphologic inverse method. In addition to the geomorphic changes detected by lidar, several levels of attrition of the weak clasts within debris flow sediment were applied to the sediment budget to reduce gaps in expected inputs and outputs. Bed-material estimates using the morphologic inverse method were greater than field-measured transport estimates, but the two were within an order of magnitude. Field measurements and observations are critical for robust interpretation of the lidar-based analyses because applying lidar differencing without field control may not identify local controls on valley and channel geometry and sediment characteristics. The final sediment budget helps define variability in bed-material transport and constrain transport rates through the site, which will be beneficial for restoration planning. The morphologic inverse method approach using repeat lidar surveys appears promising, especially if lidar resolution is similar between sequential surveys.
Navier-Stokes Aerodynamic Simulation of the V-22 Osprey on the Intel Paragon MPP
NASA Technical Reports Server (NTRS)
Vadyak, Joseph; Shrewsbury, George E.; Narramore, Jim C.; Montry, Gary; Holst, Terry; Kwak, Dochan (Technical Monitor)
1995-01-01
The paper will describe the Development of a general three-dimensional multiple grid zone Navier-Stokes flowfield simulation program (ENS3D-MPP) designed for efficient execution on the Intel Paragon Massively Parallel Processor (MPP) supercomputer, and the subsequent application of this method to the prediction of the viscous flowfield about the V-22 Osprey tiltrotor vehicle. The flowfield simulation code solves the thin Layer or full Navier-Stoke's equation - for viscous flow modeling, or the Euler equations for inviscid flow modeling on a structured multi-zone mesh. In the present paper only viscous simulations will be shown. The governing difference equations are solved using a time marching implicit approximate factorization method with either TVD upwind or central differencing used for the convective terms and central differencing used for the viscous diffusion terms. Steady state or Lime accurate solutions can be calculated. The present paper will focus on steady state applications, although time accurate solution analysis is the ultimate goal of this effort. Laminar viscosity is calculated using Sutherland's law and the Baldwin-Lomax two layer algebraic turbulence model is used to compute the eddy viscosity. The Simulation method uses an arbitrary block, curvilinear grid topology. An automatic grid adaption scheme is incorporated which concentrates grid points in high density gradient regions. A variety of user-specified boundary conditions are available. This paper will present the application of the scalable and superscalable versions to the steady state viscous flow analysis of the V-22 Osprey using a multiple zone global mesh. The mesh consists of a series of sheared cartesian grid blocks with polar grids embedded within to better simulate the wing tip mounted nacelle. MPP solutions will be shown in comparison to equivalent Cray C-90 results and also in comparison to experimental data. Discussions on meshing considerations, wall clock execution time, load balancing, and scalability will be provided.
Earth orientation from lunar laser range-differencing. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Leick, A.
1978-01-01
For the optimal use of high precision lunar laser ranging (LLR), an investigation regarding a clear definition of the underlying coordinate systems, identification of estimable quantities, favorable station geometry and optimal observation schedule is given.
Combination of GPS and GLONASS IN PPP algorithms and its effect on site coordinates determination
NASA Astrophysics Data System (ADS)
Hefty, J.; Gerhatova, L.; Burgan, J.
2011-10-01
Precise Point Positioning (PPP) approach using the un-differenced code and phase GPS observations, precise orbits and satellite clocks is an important alternative to the analyses based on double differences. We examine the extension of the PPP method by introducing the GLONASS satellites into the processing algorithms. The procedures are demonstrated on the software package ABSOLUTE developed at the Slovak University of Technology. Partial results, like ambiguities and receiver clocks obtained from separate solutions of the two GNSS are mutually compared. Finally, the coordinate time series from combination of GPS and GLONASS observations are compared with GPS-only solutions.
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so stresses and deformations calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGhee, J.M.; Roberts, R.M.; Morel, J.E.
1997-06-01
A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner formore » scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.« less
NASA Astrophysics Data System (ADS)
Jiang, Mu-Sheng; Sun, Shi-Hai; Tang, Guang-Zhao; Ma, Xiang-Chun; Li, Chun-Yan; Liang, Lin-Mei
2013-12-01
Thanks to the high-speed self-differencing single-photon detector (SD-SPD), the secret key rate of quantum key distribution (QKD), which can, in principle, offer unconditionally secure private communications between two users (Alice and Bob), can exceed 1 Mbit/s. However, the SD-SPD may contain loopholes, which can be exploited by an eavesdropper (Eve) to hack into the unconditional security of the high-speed QKD systems. In this paper, we analyze the fact that the SD-SPD can be remotely controlled by Eve in order to spy on full information without being discovered, then proof-of-principle experiments are demonstrated. Here, we point out that this loophole is introduced directly by the operating principle of the SD-SPD, thus, it cannot be removed, except for the fact that some active countermeasures are applied by the legitimate parties.
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Roelke, R. J.; Steinthorsson, E.
1991-01-01
A numerical code is developed for computing three-dimensional, turbulent, compressible flow within coolant passages of turbine blades. The code is based on a formulation of the compressible Navier-Stokes equations in a rotating frame of reference in which the velocity dependent variable is specified with respect to the rotating frame instead of the inertial frame. The algorithm employed to obtain solutions to the governing equation is a finite-volume LU algorithm that allows convection, source, as well as diffusion terms to be treated implicitly. In this study, all convection terms are upwind differenced by using flux-vector splitting, and all diffusion terms are centrally differenced. This paper describes the formulation and algorithm employed in the code. Some computed solutions for the flow within a coolant passage of a radial turbine are also presented.
Shi, Junpeng; Hu, Guoping; Sun, Fenggang; Zong, Binfeng; Wang, Xin
2017-08-24
This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we only perform difference-operation on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward only ISD (FO-ISD) and forward backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with the existing spatial smoothing techniques, the proposed scheme can use more data information of the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD can improve the estimation performance largely as compared to the others, in white or colored noise conditions.
Hu, Guoping; Zong, Binfeng; Wang, Xin
2017-01-01
This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we only perform difference-operation on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward only ISD (FO-ISD) and forward backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with the existing spatial smoothing techniques, the proposed scheme can use more data information of the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD can improve the estimation performance largely as compared to the others, in white or colored noise conditions. PMID:28837115
NASA Astrophysics Data System (ADS)
Barry, Richard K.; Bennett, D. P.; Klaasen, K.; Becker, A. C.; Christiansen, J.; Albrow, M.
2014-01-01
We have worked to characterize two exoplanets newly detected from the ground: OGLE-2012-BLG-0406 and OGLE-2012-BLG-0838, using microlensing observations of the Galactic Bulge recently obtained by NASA’s Deep Impact (DI) spacecraft, in combination with ground data. These observations of the crowded Bulge fields from Earth and from an observatory at a distance of ~1 AU have permitted the extraction of a microlensing parallax signature - critical for breaking exoplanet model degeneracies. For this effort, we used DI’s High Resolution Instrument, launched with a permanent defocus aberration due to an error in cryogenic testing. We show how the effects of a very large, chromatic PSF can be reduced in differencing photometry. We also compare two approaches to differencing photometry - one of which employs the Bramich algorithm and another using the Fruchter & Hook drizzle algorithm.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy on numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh- refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah
2016-01-01
This paper presents a novel secure and robust steganographic technique in the compressed video domain namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of luminance components of the frames during MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo random key. Basic performance of this steganographic technique verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093
Five-Year Wilkinson Microwave Anisotropy Probe Observations: Beam Maps and Window Functions
NASA Astrophysics Data System (ADS)
Hill, R. S.; Weiland, J. L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C. L.; Halpern, M.; Page, L.; Dunkley, J.; Gold, B.; Jarosik, N.; Kogut, A.; Limon, M.; Nolta, M. R.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.
2009-02-01
Cosmology and other scientific results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of ~2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of ~1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of ~2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are ~0.5%. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah
2016-01-01
This paper presents a novel secure and robust steganographic technique in the compressed video domain namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of luminance components of the frames during MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo random key. Basic performance of this steganographic technique verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.
Analyzing millet price regimes and market performance in Niger with remote sensing data
NASA Astrophysics Data System (ADS)
Essam, Timothy Michael
This dissertation concerns the analysis of staple food prices and market performance in Niger using remotely sensed vegetation indices in the form of normalized differenced vegetation index (NDVI). By exploiting the link between weather-related vegetation production conditions, which serve as a proxy for spatially explicit millet yields and thus millet availability, this study analyzes the potential causal links between NDVI outcomes and millet market performance and presents an empirical approach for predicting changes in market performance based on NDVI outcomes. Overall, the thesis finds that inter-market price spreads and levels of market integration can be reasonably explained by deviations in vegetation index outcomes from the growing season. Negative (positive) NDVI shocks are associated with better (worse) than expected market performance as measured by converging inter-market price spreads. As the number of markets affected by negatively abnormal vegetation production conditions in the same month of the growing season increases, inter-market price dispersion declines. Positive NDVI shocks, however, do not mirror this pattern in terms of the magnitude of inter-market price divergence. Market integration is also found to be linked to vegetation index outcomes as below (above) average NDVI outcomes result in more integrated (segmented) markets. Climate change and food security policies and interventions should be guided by these findings and account for dynamic relationships among market structures and vegetation production outcomes.
An efficient method for solving the steady Euler equations
NASA Technical Reports Server (NTRS)
Liou, M. S.
1986-01-01
An efficient numerical procedure for solving a set of nonlinear partial differential equations is given, specifically for the steady Euler equations. Solutions of the equations were obtained by Newton's linearization procedure, commonly used to solve the roots of nonlinear algebraic equations. In application of the same procedure for solving a set of differential equations we give a theorem showing that a quadratic convergence rate can be achieved. While the domain of quadratic convergence depends on the problems studied and is unknown a priori, we show that firstand second-order derivatives of flux vectors determine whether the condition for quadratic convergence is satisfied. The first derivatives enter as an implicit operator for yielding new iterates and the second derivatives indicates smoothness of the flows considered. Consequently flows involving shocks are expected to require larger number of iterations. First-order upwind discretization in conjunction with the Steger-Warming flux-vector splitting is employed on the implicit operator and a diagonal dominant matrix results. However the explicit operator is represented by first- and seond-order upwind differencings, using both Steger-Warming's and van Leer's splittings. We discuss treatment of boundary conditions and solution procedures for solving the resulting block matrix system. With a set of test problems for one- and two-dimensional flows, we show detailed study as to the efficiency, accuracy, and convergence of the present method.
Statistical analysis of low level atmospheric turbulence
NASA Technical Reports Server (NTRS)
Tieleman, H. W.; Chen, W. W. L.
1974-01-01
The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from the above fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and also on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low frequency components in the time series. The calculated results for each of the anemometers used are represented in graphical or tabulated form.
Influence of flaps and engines on aircraft wake vortices
DOT National Transportation Integrated Search
1974-09-01
Although pervious investigations have shown that the nature of aircraft wake vortices depends on the aircraft type and flap configuration, the causes for these differences have not been clearly identified. In this Note we show that observed differenc...
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered resulting in corresponding formulas that are accurate to order O(h), O(h), and O(h), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summaryProgram title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
Monitoring of the permeable pavement demonstration site at Edison Environmental Center
The EPA’s Urban Watershed Management Branch has installed an instrumented, working full-scale 110-space pervious pavement parking lot and has been monitoring several environmental stressors and runoff. This parking lot demonstration site has allowed the investigation of differenc...
Combined orbits and clocks from IGS second reprocessing
NASA Astrophysics Data System (ADS)
Griffiths, Jake
2018-05-01
The Analysis Centers (ACs) of the International GNSS Service (IGS) have reprocessed a large global network of GPS tracking data from 1994.0 until 2014.0 or later. Each AC product time series was extended uniformly till early 2015 using their weekly operational IGS contributions so that the complete combined product set covers GPS weeks 730 through 1831. Three ACs also included GLONASS data from as early as 2002 but that was insufficient to permit combined GLONASS products. The reprocessed terrestrial frame combination procedures and results have been reported already, and those were incorporated into the ITRF2014 multi-technique global frame released in 2016. This paper describes the orbit and clock submissions and their multi-AC combinations and assessments. These were released to users in early 2017 in time for the adoption of IGS14 for generating the operational IGS products. While the reprocessing goal was to enable homogeneous modeling, consistent with the current operational procedures, to be applied retrospectively to the full history of observation data in order to achieve a more suitable reference for geophysical studies, that objective has only been partially achieved. Ongoing AC analysis changes and a lack of full participation limit the consistency and precision of the finished IG2 products. Quantitative internal measures indicate that the reprocessed orbits are somewhat less precise than current operational orbits or even the later orbits from the first IGS reprocessing campaign. That is even more apparent for the clocks where a lack of robust AC participation means that it was only possible to form combined 5-min clocks but not the 30-s satellite clocks published operationally. Therefore, retrospective precise point positioning solutions by users are not recommended using the orbits and clocks. Nevertheless, the orbits do support long-term stable user solutions when used with network processing with either double differencing or explicit clock estimation. Among the main benefits of the reprocessing effort is a more consistent long product set to analyze for sources of systematic error and accuracy. Work to do that is underway but the reprocessing experience already points to a number of ways future IGS performance and reprocessing campaigns can be improved.
Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media
NASA Astrophysics Data System (ADS)
Aldridge, D. F.; Bartel, L. C.; Knox, H. A.
2013-12-01
Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially-large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory-parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common in industrial environments (borehole casing, pipes, railroad tracks). Present efforts are oriented toward calculating the EM responses of these objects via a First Born Approximation approach. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Technical Reports Server (NTRS)
Lie-Svendsen, O.; Leer, E.
1995-01-01
We have studied the evolution of the velocity distribution function of a test population of electrons in the solar corona and inner solar wind region, using a recently developed kinetic model. The model solves the time dependent, linear transport equation, with a Fokker-Planck collision operator to describe Coulomb collisions between the 'test population' and a thermal background of charged particles, using a finite differencing scheme. The model provides information on how non-Maxwellian features develop in the distribution function in the transition region from collision dominated to collisionless flow. By taking moments of the distribution the evolution of higher order moments, such as the heat flow, can be studied.
Artificial dissipation and central difference schemes for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1987-01-01
An artificial dissipation model, including boundary treatment, that is employed in many central difference schemes for solving the Euler and Navier-Stokes equations is discussed. Modifications of this model such as the eigenvalue scaling suggested by upwind differencing are examined. Multistage time stepping schemes with and without a multigrid method are used to investigate the effects of changes in the dissipation model on accuracy and convergence. Improved accuracy for inviscid and viscous airfoil flow is obtained with the modified eigenvalue scaling. Slower convergence rates are experienced with the multigrid method using such scaling. The rate of convergence is improved by applying a dissipation scaling function that depends on mesh cell aspect ratio.
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
Implicit and explicit motor sequence learning in children born very preterm.
Jongbloed-Pereboom, Marjolein; Janssen, Anjo J W M; Steiner, K; Steenbergen, Bert; Nijhuis-van der Sanden, Maria W G
2017-01-01
Motor skills can be learned explicitly (dependent on working memory (WM)) or implicitly (relatively independent of WM). Children born very preterm (VPT) often have working memory deficits. Explicit learning may be compromised in these children. This study investigated implicit and explicit motor learning and the role of working memory in VPT children and controls. Three groups (6-9 years) participated: 20 VPT children with motor problems, 20 VPT children without motor problems, and 20 controls. A nine button sequence was learned implicitly (pressing the lighted button as quickly as possible) and explicitly (discovering the sequence via trial-and-error). Children learned implicitly and explicitly, evidenced by decreased movement duration of the sequence over time. In the explicit condition, children also reduced the number of errors over time. Controls made more errors than VPT children without motor problems. Visual WM had positive effects on both explicit and implicit performance. VPT birth and low motor proficiency did not negatively affect implicit or explicit learning. Visual WM was positively related to both implicit and explicit performance, but did not influence learning curves. These findings question the theoretical difference between implicit and explicit learning and the proposed role of visual WM therein. Copyright © 2016 Elsevier Ltd. All rights reserved.
FBST for Cointegration Problems
NASA Astrophysics Data System (ADS)
Diniz, M.; Pereira, C. A. B.; Stern, J. M.
2008-11-01
In order to estimate causal relations, the time series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To solve the problem, one can work with differenced series or use multivariate models like VAR or VEC models. In this case, the analysed series are going to present a long run relation i.e. a cointegration relation. Even though the Bayesian literature about inference on VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection tests in multivariate (VAR/VEC) time series models and shows how to implement it using available in the literature and simulated data sets. A standard non-informative prior is assumed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, D.P.
This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. Omega employs a 3D mesh of wedge shaped elements (triangles when viewed from above) that adapt with time. Because wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this moded, near geographic features such as coastlines, where material properties change discontinuously, is illustrated. Temporal adaptivity was used additionally to track movingmore » internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including manipulation of huge data bases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.« less
Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly can not be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
Kusaka, A; Essinger-Hileman, T; Appel, J W; Gallardo, P; Irwin, K D; Jarosik, N; Nolta, M R; Page, L A; Parker, L P; Raghunathan, S; Sievers, J L; Simon, S M; Staggs, S T; Visnjic, K
2014-02-01
We evaluate the modulation of cosmic microwave background polarization using a rapidly rotating, half-wave plate (HWP) on the Atacama B-Mode Search. After demodulating the time-ordered-data (TOD), we find a significant reduction of atmospheric fluctuations. The demodulated TOD is stable on time scales of 500-1000 s, corresponding to frequencies of 1-2 mHz. This facilitates recovery of cosmological information at large angular scales, which are typically available only from balloon-borne or satellite experiments. This technique also achieves a sensitive measurement of celestial polarization without differencing the TOD of paired detectors sensitive to two orthogonal linear polarizations. This is the first demonstration of the ability to remove atmospheric contamination at these levels from a ground-based platform using a rapidly rotating HWP.
SINDA, Systems Improved Numerical Differencing Analyzer
NASA Technical Reports Server (NTRS)
Fink, L. C.; Pan, H. M. Y.; Ishimoto, T.
1972-01-01
Computer program has been written to analyze group of 100-node areas and then provide for summation of any number of 100-node areas to obtain temperature profile. SINDA program options offer user variety of methods for solution of thermal analog modes presented in network format.
NASA Astrophysics Data System (ADS)
He, Haizhen; Luo, Rongming; Hu, Zhenhua; Wen, Lei
2017-07-01
A current-mode field programmable analog array(FPAA) is presented in this paper. The proposed FPAA consists of 9 configurable analog blocks(CABs) which are based on current differencing transconductance amplifiers (CDTA) and trans-impedance amplifier (TIA). The proposed CABs interconnect through global lines. These global lines contain some bridge switches, which used to reduce the parasitic capacitance effectively. High-order current-mode low-pass and band-pass filter with transmission zeros based on the simulation of general passive RLC ladder prototypes is proposed and mapped into the FPAA structure in order to demonstrate the versatility of the FPAA. These filters exhibit good performance on bandwidth. Filter's cutoff frequency can be tuned from 1.2MHz to 40MHz.The proposed FPAA is simulated in a standard Charted 0.18μm CMOS process with +/-1.2V power supply to confirm the presented theory, and the results have good agreement with the theoretical analysis.
Lu, Dengsheng; Batistella, Mateus; Moran, Emilio
2009-01-01
Traditional change detection approaches have been proven to be difficult in detecting vegetation changes in the moist tropical regions with multitemporal images. This paper explores the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data for vegetation change detection in the Brazilian Amazon. A principal component analysis was used to integrate TM and HRG panchromatic data. Vegetation change/non-change was detected with the image differencing approach based on the TM and HRG fused image and the corresponding TM image. A rule-based approach was used to classify the TM and HRG multispectral images into thematic maps with three coarse land-cover classes: forest, non-forest vegetation, and non-vegetation lands. A hybrid approach combining image differencing and post-classification comparison was used to detect vegetation change trajectories. This research indicates promising vegetation change techniques, especially for vegetation gain and loss, even if very limited reference data are available. PMID:19789721
Method of resolving radio phase ambiguity in satellite orbit determination
NASA Technical Reports Server (NTRS)
Councelman, Charles C., III; Abbot, Richard I.
1989-01-01
For satellite orbit determination, the most accurate observable available today is microwave radio phase, which can be differenced between observing stations and between satellites to cancel both transmitter- and receiver-related errors. For maximum accuracy, the integer cycle ambiguities of the doubly differenced observations must be resolved. To perform this ambiguity resolution, a bootstrapping strategy is proposed. This strategy requires the tracking stations to have a wide ranging progression of spacings. By conventional 'integrated Doppler' processing of the observations from the most widely spaced stations, the orbits are determined well enough to permit resolution of the ambiguities for the most closely spaced stations. The resolution of these ambiguities reduces the uncertainty of the orbit determination enough to enable ambiguity resolution for more widely spaced stations, which further reduces the orbital uncertainty. In a test of this strategy with six tracking stations, both the formal and the true errors of determining Global Positioning System satellite orbits were reduced by a factor of 2.
NASA Technical Reports Server (NTRS)
Marr, Greg C.; Maher, Michael; Blizzard, Michael; Showell, Avanaugh; Asher, Mark; Devereux, Will
2004-01-01
Over an approximately 48-hour period from September 26 to 28,2002, the Thermosphere, Ionosphere, Mesosphere, Energetics and Dynamics (TIMED) mission was intensively supported by the Tracking and Data Relay Satellite System (TDRSS). The TIMED satellite is in a nearly circular low-Earth orbit with a semimajor axis of approximately 7000 km and an inclination of approximately 74 degrees. The objective was to provide TDRSS tracking support for orbit determination (OD) to generate a definitive ephemeris of 24-hour duration or more with a 3-sigma position error no greater than 100 meters, and this tracking campaign was successful. An ephemeris was generated by Goddard Space Flight Center (GSFC) personnel using the TDRSS tracking data and was compared with an ephemeris generated by the Johns Hopkins University's Applied Physics Lab (APL) using TIMED Global Positioning System (GPS) data. Prior to the tracking campaign OD error analysis was performed to justify scheduling the TDRSS support.
Analyzing a stochastic time series obeying a second-order differential equation.
Lehle, B; Peinke, J
2015-06-01
The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Also processes that obey a stochastically forced second-order differential equation can be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we will analyze these errors and we will propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed to a given time series.
NASA Astrophysics Data System (ADS)
Kubanek, J.; Raible, B.; Westerhaus, M.; Heck, B.
2017-12-01
High-resolution and up-to-date topographic data are of high value in volcanology and can be used in a variety of applications such as volcanic flow modeling or hazard assessment. Furthermore, time-series of topographic data can provide valuable insights into the dynamics of an ongoing eruption. Differencing topographic data acquired at different times enables to derive areal coverage of lava, flow volumes, and lava extrusion rates, the most important parameters during ongoing eruptions for estimating hazard potential, yet most difficult to determine. Anyhow, topographic data acquisition and provision is a challenge. Very often, high-resolution data only exists within a small spatial extension, or the available data is already outdated when the final product is provided. This is especially true for very dynamic landscapes, such as volcanoes. The bistatic TanDEM-X radar satellite mission enables for the first time to generate up-to-date and high-resolution digital elevation models (DEMs) repeatedly using the interferometric phase. The repeated acquisition of TanDEM-X data facilitates the generation of a time-series of DEMs. Differencing DEMs generated from bistatic TanDEM-X data over time can contribute to monitor topographic changes at active volcanoes, and can help to estimate magmatic ascent rates. Here, we use the bistatic TanDEM-X data to investigate the activity of Etna volcano in Sicily, Italy. Etna's activity is characterized by lava fountains and lava flows with ash plumes from four major summit crater areas. Especially the newest crater, the New South East Crater (NSEC) that was formed in 2011 has been highly active in recent years. Over one hundred bistatic TanDEM-X data pairs were acquired between January 2011 and March 2017 in StripMap mode, covering episodes of lava fountaining and lava flow emplacement at Etna's NSEC and its surrounding area. Generating DEMs of every bistatic data pair enables us to assess areal extension of the lava flows, to calculate lava flow volume, and lava extrusion rates. TanDEM-X data have been acquired at Etna during almost every overflight of the TanDEM-X satellite mission, resulting in a high-temporal resolution of DEMs giving highly valuable insights into Etna's volcanic activity of the last six years.
Space Monitoring of urban sprawl
NASA Astrophysics Data System (ADS)
Nole, G.; Lanorte, A.; Murgante, B.; Lasaponara, R.
2012-04-01
Space Monitoring of urban sprawl Gabriele Nolè (1,2), Antonio Lanorte (1), , Beniamino Murgante (2) and Rosa Lasaponara (1) , (1,2) Institute of Methodologies for Environmental Analysis, National Research Council, Italy (2) Laboratory of Urban and Territorial Systems, University of Basilicata, During the last few decades, in many regions throughout the world abandonment of agricultural land has induced a high concentration of people in densely populated urban areas. The deep social, economic and environmental changes have caused strong and extensive land cover changes. This is regarded as a pressing issue that calls for a clear understanding of the ongoing trends and future urban expansion. The main issue of great importance in modelling urban growth includes spatial and temporal dynamics, scale dynamics, man-induced land use changes. Although urban growth is perceived as necessary for a sustainable economy, uncontrolled or sprawling urban growth can cause various problems, such as, the loss of open space, landscape alteration, environmental pollution, traffic congestion, infrastructure pressure, and other social and economical issues. To face these drawbacks, a continuous monitoring of the urban growth evolution in terms of type and extent of changes over time are essential for supporting planners and decision makers in future urban planning. A critical point for the understanding and monitoring urban expansion processes is the availability of both (i) time-series data set and (ii) updated information relating to the current urban spatial structure a to define and locate the evolution trends. In such a context, an effective contribution can be offered by satellite remote sensing technologies, which are able to provide both historical data archive and up-to-date imagery. Satellite technologies represent a cost-effective mean for obtaining useful data that can be easily and systematically updated for the whole globe. Nowadays medium resolution satellite images, such as Landsat TM or ASTER can be downloaded free of charge from the NASA web site. The use of satellite imagery along with robust data analysis techniques can be used for the monitoring and planning purposes as these enable the reporting of ongoing trends of urban growth at a detailed level. Nevertheless, the exploitation of satellite Earth Observation in the field of the urban growth monitoring is a relatively new tool, although during the last three decades great efforts have been addressed to the application of remote sensing in detecting land use and land cover changes using a number of data analyses, such as: (i) Spectral enhancement based on vegetation index differencing, principal component analysis, Image differencing and visual interpretation and/or classification, (ii) post-classification change differencing and a combination of image enhancement and post-classification comparison, (iii) mixture analysis, (iv) artificial neural networks, (v) landscape metrics (patchiness and map density) and (vi) the integration of geographical information system and remote sensing data. In this paper a comparison of the methods listed before is carried out using satellite time series made up of Landsat MSS, TM, ETM+ASTER for some test areas selected in South of Italy and Cairo in order to extract and quantify urban sprawl and its spatial and temporal feature patterns.
Lichen ecology and diversity of a sagebrush steppe in Oregon: 1977 to the present
USDA-ARS?s Scientific Manuscript database
A lichen checklist is presented of 141 species from the Lawrence Memorial Grassland Preserve and nearby lands in Wasco County, Oregon, based on collections made in the 1970s and 1990s. Collections include epiphytic, lignicolous, saxicolous, muscicolous and terricolous species. To evaluate differenc...
Interdependence of PRECIS Role Operators: A Quantitative Analysis of Their Associations.
ERIC Educational Resources Information Center
Mahapatra, Manoranjan; Biswas, Subal Chandra
1986-01-01
Analyzes associations among different role operators quantitatively by taking input strings from 200 abstracts, each related to subject fields of taxation, genetic psychology, and Shakespearean drama, and subjecting them to the Chi-square test. Significant associations by other differencing operators and connectives are discussed. A schema of role…
Comparing fire severity models from post-fire and pre/post-fire differenced imagery
USDA-ARS?s Scientific Manuscript database
Wildland fires are common in rangelands worldwide. The potential for high severity fires to affect long-term changes in rangelands is considerable, and for this reason assessing fire severity shortly after the fire is critical. Such assessments are typically carried out following Burned Area Emergen...
Electrocardiography (ECG) is one of the standard technologies used to monitor and assess cardiac function, and provide insight into the mechanisms driving myocardial pathology. Increased understanding of the effects of cardiovascular disease on rat ECG may help make ECG assessmen...
due to the dangers of utilizing convoy operations. However, enemy actions, austere conditions, and inclement weather pose a significant risk to a...squares temporal differencing for policy evaluation. We construct a representative problem instance based on an austere combat environment in order to
A fourth order accurate finite difference scheme for the computation of elastic waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.
1986-01-01
A finite difference for elastic waves is introduced. The model is based on the first order system of equations for the velocities and stresses. The differencing is fourth order accurate on the spatial derivatives and second order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interf aces and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability is insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth order scheme requires for two-thirds to one-half the resolution of a typical second order scheme to give comparable accuracy.
Forrest, Lauren N; Smith, April R; Fussner, Lauren M; Dodd, Dorian R; Clerkin, Elise M
2016-01-01
"Fast" (i.e., implicit) processing is relatively automatic; "slow" (i.e., explicit) processing is relatively controlled and can override automatic processing. These different processing types often produce different responses that uniquely predict behaviors. In the present study, we tested if explicit, self-reported symptoms of exercise dependence and an implicit association of exercise as important predicted exercise behaviors and change in problematic exercise attitudes. We assessed implicit attitudes of exercise importance and self-reported symptoms of exercise dependence at Time 1. Participants reported daily exercise behaviors for approximately one month, and then completed a Time 2 assessment of self-reported exercise dependence symptoms. Undergraduate males and females (Time 1, N = 93; Time 2, N = 74) tracked daily exercise behaviors for one month and completed an Implicit Association Test assessing implicit exercise importance and subscales of the Exercise Dependence Questionnaire (EDQ) assessing exercise dependence symptoms. Implicit attitudes of exercise importance and Time 1 EDQ scores predicted Time 2 EDQ scores. Further, implicit exercise importance and Time 1 EDQ scores predicted daily exercise intensity while Time 1 EDQ scores predicted the amount of days exercised. Implicit and explicit processing appear to uniquely predict exercise behaviors and attitudes. Given that different implicit and explicit processes may drive certain exercise factors (e.g., intensity and frequency, respectively), these behaviors may contribute to different aspects of exercise dependence.
Forrest, Lauren N.; Smith, April R.; Fussner, Lauren M.; Dodd, Dorian R.; Clerkin, Elise M.
2015-01-01
Objectives ”Fast” (i.e., implicit) processing is relatively automatic; “slow” (i.e., explicit) processing is relatively controlled and can override automatic processing. These different processing types often produce different responses that uniquely predict behaviors. In the present study, we tested if explicit, self-reported symptoms of exercise dependence and an implicit association of exercise as important predicted exercise behaviors and change in problematic exercise attitudes. Design We assessed implicit attitudes of exercise importance and self-reported symptoms of exercise dependence at Time 1. Participants reported daily exercise behaviors for approximately one month, and then completed a Time 2 assessment of self-reported exercise dependence symptoms. Method Undergraduate males and females (Time 1, N = 93; Time 2, N = 74) tracked daily exercise behaviors for one month and completed an Implicit Association Test assessing implicit exercise importance and subscales of the Exercise Dependence Questionnaire (EDQ) assessing exercise dependence symptoms. Results Implicit attitudes of exercise importance and Time 1 EDQ scores predicted Time 2 EDQ scores. Further, implicit exercise importance and Time 1 EDQ scores predicted daily exercise intensity while Time 1 EDQ scores predicted the amount of days exercised. Conclusion Implicit and explicit processing appear to uniquely predict exercise behaviors and attitudes. Given that different implicit and explicit processes may drive certain exercise factors (e.g., intensity and frequency, respectively), these behaviors may contribute to different aspects of exercise dependence. PMID:26195916
Assessment of an Unstructured-Grid Method for Predicting 3-D Turbulent Viscous Flows
NASA Technical Reports Server (NTRS)
Frink, Neal T.
1996-01-01
A method Is presented for solving turbulent flow problems on three-dimensional unstructured grids. Spatial discretization Is accomplished by a cell-centered finite-volume formulation using an accurate lin- ear reconstruction scheme and upwind flux differencing. Time is advanced by an implicit backward- Euler time-stepping scheme. Flow turbulence effects are modeled by the Spalart-Allmaras one-equation model, which is coupled with a wall function to reduce the number of cells in the sublayer region of the boundary layer. A systematic assessment of the method is presented to devise guidelines for more strategic application of the technology to complex problems. The assessment includes the accuracy In predictions of skin-friction coefficient, law-of-the-wall behavior, and surface pressure for a flat-plate turbulent boundary layer, and for the ONERA M6 wing under a high Reynolds number, transonic, separated flow condition.
Assessment of an Unstructured-Grid Method for Predicting 3-D Turbulent Viscous Flows
NASA Technical Reports Server (NTRS)
Frink, Neal T.
1996-01-01
A method is presented for solving turbulent flow problems on three-dimensional unstructured grids. Spatial discretization is accomplished by a cell-centered finite-volume formulation using an accurate linear reconstruction scheme and upwind flux differencing. Time is advanced by an implicit backward-Euler time-stepping scheme. Flow turbulence effects are modeled by the Spalart-Allmaras one-equation model, which is coupled with a wall function to reduce the number of cells in the sublayer region of the boundary layer. A systematic assessment of the method is presented to devise guidelines for more strategic application of the technology to complex problems. The assessment includes the accuracy in predictions of skin-friction coefficient, law-of-the-wall behavior, and surface pressure for a flat-plate turbulent boundary layer, and for the ONERA M6 wing under a high Reynolds number, transonic, separated flow condition.
Weak associations between the daily number of suicide cases and amount of daily sunlight.
Seregi, Bernadett; Kapitány, Balázs; Maróti-Agóts, Ákos; Rihmer, Zoltán; Gonda, Xénia; Döme, Péter
2017-02-06
Several environmental factors with periodic changes in intensity during the calendar year have been put forward to explain the increase in suicide frequency during spring and summer. In the current study we investigated the effect of averaged daily sunshine duration of periods with different lengths and 'lags' (i.e. the number of days between the last day of the period for which the averaged sunshine duration was calculated and the day of suicide) on suicide risk. We obtained data on daily numbers of suicide cases and daily sunshine duration in Hungary from 1979 to 2013. In order to remove the seasonal components from the two time series (i.e. numbers of suicide and sunshine hours) we used the differencing method. Pearson correlations (n=22,950) were calculated to reveal associations between sunshine duration and suicide risk. The final sample consisted of 122,116 suicide cases. Regarding the entire investigated period, after differencing, sunshine duration and number of suicides on the same days showed a distinctly weak, but highly significant positive correlation in the total sample (r=0.067; p=1.17*10 -13 ). Positive significant correlations (p˂0.0001) between suicide risk on the index day and averaged sunshine duration in the previous days (up to 11days) were also found in the total sample. Our results from a large sample strongly support the hypothesis that sunshine has a prompt, but very weak increasing effect on the risk of suicide (especially violent cases among males). The main limitation is that possible confounding factors were not controlled for. Copyright © 2016 Elsevier Inc. All rights reserved.
Array-based satellite phase bias sensing: theory and GPS/BeiDou/QZSS results
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2014-09-01
Single-receiver integer ambiguity resolution (IAR) is a measurement concept that makes use of network-derived non-integer satellite phase biases (SPBs), among other corrections, to recover and resolve the integer ambiguities of the carrier-phase data of a single GNSS receiver. If it is realized, the very precise integer ambiguity-resolved carrier-phase data would then contribute to the estimation of the receiver’s position, thus making (near) real-time precise point positioning feasible. Proper definition and determination of the SPBs take a leading part in developing the idea of single-receiver IAR. In this contribution, the concept of array-based between-satellite single-differenced (SD) SPB determination is introduced, which is aimed to reduce the code-dominated precision of the SD-SPB corrections. The underlying model is realized by giving the role of the local reference network to an array of antennas, mounted on rigid platforms, that are separated by short distances so that the same ionospheric delay is assumed to be experienced by all the antennas. To that end, a closed-form expression of the array-aided SD-SPB corrections is presented, thereby proposing a simple strategy to compute the SD-SPBs. After resolving double-differenced ambiguities of the array’s data, the variance of the SD-SPB corrections is shown to be reduced by a factor equal to the number of antennas. This improvement in precision is also affirmed by numerical results of the three GNSSs GPS, BeiDou and QZSS. Experimental results demonstrate that the integer-recovered ambiguities converge to integers faster, upon increasing the number of antennas aiding the SD-SPB corrections.
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2018-06-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2017-11-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
CFD analyses of combustor and nozzle flowfields
NASA Astrophysics Data System (ADS)
Tsuei, Hsin-Hua; Merkle, Charles L.
1993-11-01
The objectives of the research are to improve design capabilities for low thrust rocket engines through understanding of the detailed mixing and combustion processes. A Computational Fluid Dynamic (CFD) technique is employed to model the flowfields within the combustor, nozzle, and near plume field. The computational modeling of the rocket engine flowfields requires the application of the complete Navier-Stokes equations, coupled with species diffusion equations. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as a coordinated part of an ongoing experimental program at NASA LeRC. The numerical procedure is performed on both time-marching and time-accurate algorithms, using an LU approximate factorization in time, flux split upwinding differencing in space. The integrity of fuel film cooling along the wall, its effectiveness in the mixing with the core flow including unsteady large scale effects, the resultant impact on performance and the assessment of the near plume flow expansion to finite pressure altitude chamber are addressed.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
Discrete Variational Approach for Modeling Laser-Plasma Interactions
NASA Astrophysics Data System (ADS)
Reyes, J. Paxon; Shadwick, B. A.
2014-10-01
The traditional approach for fluid models of laser-plasma interactions begins by approximating fields and derivatives on a grid in space and time, leading to difference equations that are manipulated to create a time-advance algorithm. In contrast, by introducing the spatial discretization at the level of the action, the resulting Euler-Lagrange equations have particular differencing approximations that will exactly satisfy discrete versions of the relevant conservation laws. For example, applying a spatial discretization in the Lagrangian density leads to continuous-time, discrete-space equations and exact energy conservation regardless of the spatial grid resolution. We compare the results of two discrete variational methods using the variational principles from Chen and Sudan and Brizard. Since the fluid system conserves energy and momentum, the relative errors in these conserved quantities are well-motivated physically as figures of merit for a particular method. This work was supported by the U. S. Department of Energy under Contract No. DE-SC0008382 and by the National Science Foundation under Contract No. PHY-1104683.
Fatty acid composition of intramuscular fat from pastoral yak and Tibetan sheep
USDA-ARS?s Scientific Manuscript database
Fatty acid (FA) composition of intramuscular fat from mature male yak (n=6) and mature Tibetan sheep (n=6) grazed on the same pasture in the Qinghai-Tibetan Plateau was analyzed by gas chromatograph/mass spectrometer to characterize fat composition of these species and to evaluate possible differenc...
NASA Astrophysics Data System (ADS)
Rojali, Salman, Afan Galih; George
2017-08-01
Along with the development of information technology in meeting the needs, various adverse actions and difficult to avoid are emerging. One of such action is data theft. Therefore, this study will discuss about cryptography and steganography that aims to overcome these problems. This study will use the Modification Vigenere Cipher, Least Significant Bit and Dictionary Based Compression methods. To determine the performance of study, Peak Signal to Noise Ratio (PSNR) method is used to measure objectively and Mean Opinion Score (MOS) method is used to measure subjectively, also, the performance of this study will be compared to other method such as Spread Spectrum and Pixel Value differencing. After comparing, it can be concluded that this study can provide better performance when compared to other methods (Spread Spectrum and Pixel Value Differencing) and has a range of MSE values (0.0191622-0.05275) and PSNR (60.909 to 65.306) with a hidden file size of 18 kb and has a MOS value range (4.214 to 4.722) or image quality that is approaching very good.
Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images
Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying
2011-01-01
Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of classified impervious surface images with the matched filtering method, were used to examine urbanization detection. An impervious surface image classified with the hybrid method was used to modify the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images with matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean spectral images. The PCA is not a good method for urban change detection in this study. Shadows and high spectral variation within the impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706
Mass Loss of Larsen B Tributary Glaciers (Antarctic Peninsula) Unabated Since 2002
NASA Technical Reports Server (NTRS)
Berthier, Etienne; Scambos, Ted; Shuman, Christopher A.
2012-01-01
Ice mass loss continues at a high rate among the large glacier tributaries of the Larsen B Ice Shelf following its disintegration in 2002. We evaluate recent mass loss by mapping elevation changes between 2006 and 201011 using differencing of digital elevation models (DEMs). The measurement accuracy of these elevation changes is confirmed by a null test, subtracting DEMs acquired within a few weeks. The overall 2006201011 mass loss rate (9.0 2.1 Gt a-1) is similar to the 2001022006 rate (8.8 1.6 Gt a-1), derived using DEM differencing and laser altimetry. This unchanged overall loss masks a varying pattern of thinning and ice loss for individual glacier basins. On Crane Glacier, the thinning pulse, initially greatest near the calving front, is now broadening and migrating upstream. The largest losses are now observed for the HektoriaGreen glacier basin, having increased by 33 since 2006. Our method has enabled us to resolve large residual uncertainties in the Larsen B sector and confirm its state of ongoing rapid mass loss.
NASA Technical Reports Server (NTRS)
Rawson, R. F.; Hamilton, R. E.; Liskow, C. L.; Dias, A. R.; Jackson, P. L.
1981-01-01
An analysis of synthetic aperture radar data of SP Mountain was undertaken to demonstrate the use of digital image processing techniques to aid in geologic interpretation of SAR data. These data were collected with the ERIM X- and L-band airborne SAR using like- and cross-polarizations. The resulting signal films were used to produce computer compatible tapes, from which four-channel imagery was generated. Slant range-to-ground range and range-azimuth-scale corrections were made in order to facilitate image registration; intensity corrections were also made. Manual interpretation of the imagery showed that L-band represented the geology of the area better than X-band. Several differences between the various images were also noted. Further digital analysis of the corrected data was done for enhancement purposes. This analysis included application of an MSS differencing routine and development of a routine for removal of relief displacement. It was found that accurate registration of the SAR channels is critical to the effectiveness of the differencing routine. Use of the relief displacement algorithm on the SP Mountain data demonstrated the feasibility of the technique.
NASA Technical Reports Server (NTRS)
Yung, Chain Nan
1988-01-01
A method for predicting turbulent flow in combustors and diffusers is developed. The Navier-Stokes equations, incorporating a turbulence kappa-epsilon model equation, were solved in a nonorthogonal curvilinear coordinate system. The solution applied the finite volume method to discretize the differential equations and utilized the SIMPLE algorithm iteratively to solve the differenced equations. A zonal grid method, wherein the flow field was divided into several subsections, was developed. This approach permitted different computational schemes to be used in the various zones. In addition, grid generation was made a more simple task. However, treatment of the zonal boundaries required special handling. Boundary overlap and interpolating techniques were used and an adjustment of the flow variables was required to assure conservation of mass, momentum and energy fluxes. The numerical accuracy was assessed using different finite differencing methods, i.e., hybrid, quadratic upwind and skew upwind, to represent the convection terms. Flows in different geometries of combustors and diffusers were simulated and results compared with experimental data and good agreement was obtained.
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with the two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh Theor Appl Climatol 118:465-479, (2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with individual and hybrid nonlinear models in predicting DST time series indicates the lowest Akaike Information Criterion (AIC) index value, which considers model simplicity and accuracy simultaneously at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
Performance Assessment of Two GPS Receivers on Space Shuttle
NASA Technical Reports Server (NTRS)
Schroeder, Christine A.; Schutz, Bob E.
1996-01-01
Space Shuttle STS-69 was launched on September 7, 1995, carrying the Wake Shield Facility (WSF-02) among its payloads. The mission included two GPS receivers: a Collins 3M receiver onboard the Endeavour and an Osborne flight TurboRogue, known as the TurboStar, onboard the WSF-02. Two of the WSF-02 GPS Experiment objectives were to: (1) assess the ability to use GPS in a relative satellite positioning mode using the receivers on Endeavour and WSF-02; and (2) assess the performance of the receivers to support high precision orbit determination at the 400 km altitude. Three ground tests of the receivers were conducted in order to characterize the respective receivers. The analysis of the tests utilized the Double Differencing technique. A similar test in orbit was conducted during STS-69 while the WSF-02 was held by the Endeavour robot arm for a one hour period. In these tests, biases were observed in the double difference pseudorange measurements, implying that biases up to 140 m exist which do not cancel in double differencing. These biases appear to exist in the Collins receiver, but their effect can be mitigated by including measurement bias parameters to accommodate them in an estimation process. An additional test was conducted in which the orbit of the combined Endeavour/WSF-02 was determined independently with each receiver. These one hour arcs were based on forming double differences with 13 TurboRogue receivers in the global IGS network and estimating pseudorange biases for the Collins. Various analyses suggest the TurboStar overall orbit accuracy is about one to two meters for this period, based on double differenced phase residuals of 34 cm. These residuals indicate the level of unmodeled forces on Endeavour produced by gravitational and nongravitational effects. The rms differences between the two independently determined orbits are better than 10 meters, thereby demonstrating the accuracy of the Collins-determined orbit at this level as well as the accuracy of the relative positioning using these two receivers.
NASA Astrophysics Data System (ADS)
Sisson, James B.; van Genuchten, Martinus Th.
1991-04-01
The unsaturated hydraulic properties are important parameters in any quantitative description of water and solute transport in partially saturated soils. Currently, most in situ methods for estimating the unsaturated hydraulic conductivity (K) are based on analyses that require estimates of the soil water flux and the pressure head gradient. These analyses typically involve differencing of field-measured pressure head (h) and volumetric water content (θ) data, a process that can significantly amplify instrumental and measurement errors. More reliable methods result when differencing of field data can be avoided. One such method is based on estimates of the gravity drainage curve K'(θ) = dK/dθ which may be computed from observations of θ and/or h during the drainage phase of infiltration drainage experiments assuming unit gradient hydraulic conditions. The purpose of this study was to compare estimates of the unsaturated soil hydraulic functions on the basis of different combinations of field data θ, h, K, and K'. Five different data sets were used for the analysis: (1) θ-h, (2) K-θ, (3) K'-θ (4) K-θ-h, and (5) K'-θ-h. The analysis was applied to previously published data for the Norfolk, Troup, and Bethany soils. The K-θ-h and K'-θ-h data sets consistently produced nearly identical estimates of the hydraulic functions. The K-θ and K'-θ data also resulted in similar curves, although results in this case were less consistent than those produced by the K-θ-h and K'-θ-h data sets. We conclude from this study that differencing of field data can be avoided and hence that there is no need to calculate soil water fluxes and pressure head gradients from inherently noisy field-measured θ and h data. The gravity drainage analysis also provides results over a much broader range of hydraulic conductivity values than is possible with the more standard instantaneous profile analysis, especially when augmented with independently measured soil water retention data.
Precise Point Positioning Using Triple GNSS Constellations in Various Modes
Afifi, Akram; El-Rabbany, Ahmed
2016-01-01
This paper introduces a new dual-frequency precise point positioning (PPP) model, which combines the observations from three different global navigation satellite system (GNSS) constellations, namely GPS, Galileo, and BeiDou. Combining measurements from different GNSS systems introduces additional biases, including inter-system bias and hardware delays, which require rigorous modelling. Our model is based on the un-differenced and between-satellite single-difference (BSSD) linear combinations. BSSD linear combination cancels out some receiver-related biases, including receiver clock error and non-zero initial phase bias of the receiver oscillator. Forming the BSSD linear combination requires a reference satellite, which can be selected from any of the GPS, Galileo, and BeiDou systems. In this paper three BSSD scenarios are tested; each considers a reference satellite from a different GNSS constellation. Natural Resources Canada’s GPSPace PPP software is modified to enable a combined GPS, Galileo, and BeiDou PPP solution and to handle the newly introduced biases. A total of four data sets collected at four different IGS stations are processed to verify the developed PPP model. Precise satellite orbit and clock products from the International GNSS Service Multi-GNSS Experiment (IGS-MGEX) network are used to correct the GPS, Galileo, and BeiDou measurements in the post-processing PPP mode. A real-time PPP solution is also obtained, which is referred to as RT-PPP in the sequel, through the use of the IGS real-time service (RTS) for satellite orbit and clock corrections. However, only GPS and Galileo observations are used for the RT-PPP solution, as the RTS-IGS satellite products are not presently available for BeiDou system. All post-processed and real-time PPP solutions are compared with the traditional un-differenced GPS-only counterparts. It is shown that combining the GPS, Galileo, and BeiDou observations in the post-processing mode improves the PPP convergence time by 25% compared with the GPS-only counterpart, regardless of the linear combination used. The use of BSSD linear combination improves the precision of the estimated positioning parameters by about 25% in comparison with the GPS-only PPP solution. Additionally, the solution convergence time is reduced to 10 minutes for the BSSD model, which represents about 50% reduction, in comparison with the GPS-only PPP solution. The GNSS RT-PPP solution, on the other hand, shows a similar convergence time and precision to the GPS-only counterpart. PMID:27240376
Precise Point Positioning Using Triple GNSS Constellations in Various Modes.
Afifi, Akram; El-Rabbany, Ahmed
2016-05-28
This paper introduces a new dual-frequency precise point positioning (PPP) model, which combines the observations from three different global navigation satellite system (GNSS) constellations, namely GPS, Galileo, and BeiDou. Combining measurements from different GNSS systems introduces additional biases, including inter-system bias and hardware delays, which require rigorous modelling. Our model is based on the un-differenced and between-satellite single-difference (BSSD) linear combinations. BSSD linear combination cancels out some receiver-related biases, including receiver clock error and non-zero initial phase bias of the receiver oscillator. Forming the BSSD linear combination requires a reference satellite, which can be selected from any of the GPS, Galileo, and BeiDou systems. In this paper three BSSD scenarios are tested; each considers a reference satellite from a different GNSS constellation. Natural Resources Canada's GPSPace PPP software is modified to enable a combined GPS, Galileo, and BeiDou PPP solution and to handle the newly introduced biases. A total of four data sets collected at four different IGS stations are processed to verify the developed PPP model. Precise satellite orbit and clock products from the International GNSS Service Multi-GNSS Experiment (IGS-MGEX) network are used to correct the GPS, Galileo, and BeiDou measurements in the post-processing PPP mode. A real-time PPP solution is also obtained, which is referred to as RT-PPP in the sequel, through the use of the IGS real-time service (RTS) for satellite orbit and clock corrections. However, only GPS and Galileo observations are used for the RT-PPP solution, as the RTS-IGS satellite products are not presently available for BeiDou system. All post-processed and real-time PPP solutions are compared with the traditional un-differenced GPS-only counterparts. It is shown that combining the GPS, Galileo, and BeiDou observations in the post-processing mode improves the PPP convergence time by 25% compared with the GPS-only counterpart, regardless of the linear combination used. The use of BSSD linear combination improves the precision of the estimated positioning parameters by about 25% in comparison with the GPS-only PPP solution. Additionally, the solution convergence time is reduced to 10 minutes for the BSSD model, which represents about 50% reduction, in comparison with the GPS-only PPP solution. The GNSS RT-PPP solution, on the other hand, shows a similar convergence time and precision to the GPS-only counterpart.
NASA Astrophysics Data System (ADS)
Caplan, R. M.
2013-04-01
We present a simple to use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schröodinger Equation, GPU, high-order finite difference, Bose-Einstien condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
Treatment of late time instabilities in finite difference EMP scattering codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.T.; Arman, S.; Holland, R.
1982-12-01
Time-domain solutions to the finite-differenced Maxwell's equations give rise to several well-known nonphysical propagation anomalies. In particular, when a radiative electric-field look back scheme is employed to terminate the calculation, a high-frequency, growing, numerical instability is introduced. This paper describes the constraints made on the mesh to minimize this instability, and a technique of applying an absorbing sheet to damp out this instability without altering the early time solution. Also described are techniques to extend the data record in the presence of high-frequency noise through application of a low-pass digital filter and the fitting of a damped sinusoid to themore » late-time tail of the data record. An application of these techniques is illustrated with numerical models of the FB-111 aircraft and the B-52 aircraft in the in-flight refueling configuration using the THREDE finite difference computer code. Comparisons are made with experimental scale model measurements with agreement typically on the order of 3 to 6 dB near the fundamental resonances.« less
NASA Astrophysics Data System (ADS)
Zhu, Zhe
2017-08-01
The free and open access to all archived Landsat images in 2008 has completely changed the way of using Landsat data. Many novel change detection algorithms based on Landsat time series have been developed We present a comprehensive review of four important aspects of change detection studies based on Landsat time series, including frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories, including thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial were analyzed. Moreover, some of the widely-used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing these applications into two categories, change target and change agent detection.
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Baumeister, Kenneth J.
1996-01-01
An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, applications of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
Satellite mapping of Nile Delta coastal changes
NASA Technical Reports Server (NTRS)
Blodget, H. W.; Taylor, P. T.; Roark, J. H.
1989-01-01
Multitemporal, multispectral scanner (MSS) landsat data have been used to monitor erosion and sedimentation along the Rosetta Promontory of the Nile Delta. These processes have accelerated significantly since the completion of the Aswan High Dam in 1964. Digital differencing of four MSS data sets, using standard algorithms, show that changes observed over a single year period generally occur as strings of single mixed pixels along the coast. Therefore, these can only be used qualitatively to indicate areas where changes occur. Areas of change recorded over a multi-year period are generally larger and thus identified by clusters of pixels; this reduces errors introduced by mixed pixels. Satellites provide a synoptic perspective utilizing data acquired at frequent time intervals. This permits multiple year monitoring of delta evolution on a regional scale.
NASA Technical Reports Server (NTRS)
Kaushik, Dinesh K.; Baysal, Oktay
1997-01-01
Accurate computation of acoustic wave propagation may be more efficiently performed when their dispersion relations are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.
ERIC Educational Resources Information Center
Liberto, Giuliana
2016-01-01
Research into the impact of non-consultative home education regulatory change in New South Wales (NSW), Australia, identified clear benefits of a child-led, interest-inspired approach to learning and a negative impact on student learning and well-being outcomes, particularly for learning-differenced children, of restricted practice freedom.…
Grain size is a physical measurement commonly made in the analysis of many benthic systems. Grain size influences benthic community composition, can influence contaminant loading and can indicate the energy regime of a system. We have recently investigated the relationship betw...
DOT National Transportation Integrated Search
2009-01-01
As part of the Innovative Bridge Research and Construction Program (IBRCP), this study was conducted to use the full-scale construction project of the Route 123 Bridge over the Occoquan River in Northern Virginia to identify and compare any differenc...
USDA-ARS?s Scientific Manuscript database
Differences in wing size in geographical races of Heliconius erato distributed over the western and eastern sides of the Andes are reported on here. Individuals from the eastern side of the Andes are statistically larger in size than the ones on the western side of the Andes. A statistical differenc...
Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G
NASA Technical Reports Server (NTRS)
Gaski, J. D.; Lewis, D. R.; Thompson, L. R.
1972-01-01
New and versatile method has been developed to supplement or replace use of original CINDA thermal analyzer program in order to take advantage of improved systems software and machine speeds of third generation computers. CINDA-3G program options offer variety of methods for solution of thermal analog models presented in network format.
Implicit and Explicit Memory for Affective Passages in Temporal Lobectomy Patients
ERIC Educational Resources Information Center
Burton, Leslie A.; Rabin, Laura; Vardy, Susan Bernstein; Frohlich, Jonathan; Porter, Gwinne Wyatt; Dimitri, Diana; Cofer, Lucas; Labar, Douglas
2008-01-01
Eighteen temporal lobectomy patients (9 left, LTL; 9 right, RTL) were administered four verbal tasks, an Affective Implicit Task, a Neutral Implicit Task, an Affective Explicit Task, and a Neutral Explicit Task. For the Affective and Neutral Implicit Tasks, participants were timed while reading aloud passages with affective or neutral content,…
Implicit timing activates the left inferior parietal cortex.
Wiener, Martin; Turkeltaub, Peter E; Coslett, H Branch
2010-11-01
Coull and Nobre (2008) suggested that tasks that employ temporal cues might be divided on the basis of whether these cues are explicitly or implicitly processed. Furthermore, they suggested that implicit timing preferentially engages the left cerebral hemisphere. We tested this hypothesis by conducting a quantitative meta-analysis of eleven neuroimaging studies of implicit timing using the activation-likelihood estimation (ALE) algorithm (Turkeltaub, Eden, Jones, & Zeffiro, 2002). Our analysis revealed a single but robust cluster of activation-likelihood in the left inferior parietal cortex (supramarginal gyrus). This result is in accord with the hypothesis that the left hemisphere subserves implicit timing mechanisms. Furthermore, in conjunction with a previously reported meta-analysis of explicit timing tasks, our data support the claim that implicit and explicit timing are supported by at least partially distinct neural structures. Copyright © 2010 Elsevier Ltd. All rights reserved.
The time course of explicit and implicit categorization.
Smith, J David; Zakrzewski, Alexandria C; Herberger, Eric R; Boomer, Joseph; Roeder, Jessica L; Ashby, F Gregory; Church, Barbara A
2015-10-01
Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Yusuf, Abdullahi; Aliyu, Aliyu Isa; Baleanu, Dumitru
2018-04-01
This paper studies the symmetry analysis, explicit solutions, convergence analysis, and conservation laws (Cls) for two different space-time fractional nonlinear evolution equations with Riemann-Liouville (RL) derivative. The governing equations are reduced to nonlinear ordinary differential equation (ODE) of fractional order using their Lie point symmetries. In the reduced equations, the derivative is in Erdelyi-Kober (EK) sense, power series technique is applied to derive an explicit solutions for the reduced fractional ODEs. The convergence of the obtained power series solutions is also presented. Moreover, the new conservation theorem and the generalization of the Noether operators are developed to construct the nonlocal Cls for the equations . Some interesting figures for the obtained explicit solutions are presented.
Numerical study of chemically reacting viscous flow relevant to pulsed detonation engines
NASA Astrophysics Data System (ADS)
Yi, Tae-Hyeong
2005-11-01
A computational fluid dynamics code for two-dimensional, multi-species, laminar Navier-Stokes equations is developed to simulate a recently proposed engine concept for a pulsed detonation based propulsion system and to investigate the feasibility of the engine of the concept. The governing equations that include transport phenomena such as viscosity, thermal conduction and diffusion are coupled with chemical reactions. The gas is assumed to be thermally perfect and in chemically non-equilibrium. The stiffness due to coupling the fluid dynamics and the chemical kinetics is properly taken care of by using a time-operator splitting method and a variable coefficient ordinary differential equation solver. A second-order Roe scheme with a minmod limiter is explicitly used for space descretization, while a second-order, two-step Runge-Kutta method is used for time descretization. In space integration, a finite volume method and a cell-centered scheme are employed. The first-order derivatives in the equations of transport properties are discretized by a central differencing with Green's theorem. Detailed chemistry is involved in this study. Two chemical reaction mechanisms are extracted from GRI-Mech, which are forty elementary reactions with thirteen species for a hydrogen-air mixture and twenty-seven reactions with eight species for a hydrogen-oxygen mixture. The code is ported to a high-performance parallel machine with Message-Passing Interface. Code validation is performed with chemical kinetic modeling for a stoichiometric hydrogen-air mixture, an one-dimensional detonation tube, a two-dimensional, inviscid flow over a wedge and a viscous flow over a flat plate. Detonation is initiated using a numerically simulated arc-ignition or shock-induced ignition system. Various freestream conditions are utilized to study the propagation of the detonation in the proposed concept of the engine. Investigation of the detonation propagation is performed for a pulsed detonation rocket and a supersonic combustion chamber. For a pulsed detonation rocket case, the detonation tube is embedded in a mixing chamber where an initiator is added to the main detonation chamber. Propagating detonation waves in a supersonic combustion chamber is investigated for one- and two-dimensional cases. The detonation initiated by an arc and a shock wave is studied in the inviscid and viscous flow, respectively. Various features including a detonation-shock interaction, a detonation diffraction, a base flow and a vortex are observed.
Ewolds, Harald E; Bröker, Laura; de Oliveira, Rita F; Raab, Markus; Künzell, Stefan
2017-01-01
The goal of this study was to investigate the effect of predictability on dual-task performance in a continuous tracking task. Participants practiced either informed (explicit group) or uninformed (implicit group) about a repeated segment in the curves they had to track. In Experiment 1 participants practices the tracking task only, dual-task performance was assessed after by combining the tracking task with an auditory reaction time task. Results showed both groups learned equally well and tracking performance on a predictable segment in the dual-task condition was better than on random segments. However, reaction times did not benefit from a predictable tracking segment. To investigate the effect of learning under dual-task situation participants in Experiment 2 practiced the tracking task while simultaneously performing the auditory reaction time task. No learning of the repeated segment could be demonstrated for either group during the training blocks, in contrast to the test-block and retention test, where participants performed better on the repeated segment in both dual-task and single-task conditions. Only the explicit group improved from test-block to retention test. As in Experiment 1, reaction times while tracking a predictable segment were no better than reaction times while tracking a random segment. We concluded that predictability has a positive effect only on the predictable task itself possibly because of a task-shielding mechanism. For dual-task training there seems to be an initial negative effect of explicit instructions, possibly because of fatigue, but the advantage of explicit instructions was demonstrated in a retention test. This might be due to the explicit memory system informing or aiding the implicit memory system.
Ewolds, Harald E.; Bröker, Laura; de Oliveira, Rita F.; Raab, Markus; Künzell, Stefan
2017-01-01
The goal of this study was to investigate the effect of predictability on dual-task performance in a continuous tracking task. Participants practiced either informed (explicit group) or uninformed (implicit group) about a repeated segment in the curves they had to track. In Experiment 1 participants practices the tracking task only, dual-task performance was assessed after by combining the tracking task with an auditory reaction time task. Results showed both groups learned equally well and tracking performance on a predictable segment in the dual-task condition was better than on random segments. However, reaction times did not benefit from a predictable tracking segment. To investigate the effect of learning under dual-task situation participants in Experiment 2 practiced the tracking task while simultaneously performing the auditory reaction time task. No learning of the repeated segment could be demonstrated for either group during the training blocks, in contrast to the test-block and retention test, where participants performed better on the repeated segment in both dual-task and single-task conditions. Only the explicit group improved from test-block to retention test. As in Experiment 1, reaction times while tracking a predictable segment were no better than reaction times while tracking a random segment. We concluded that predictability has a positive effect only on the predictable task itself possibly because of a task-shielding mechanism. For dual-task training there seems to be an initial negative effect of explicit instructions, possibly because of fatigue, but the advantage of explicit instructions was demonstrated in a retention test. This might be due to the explicit memory system informing or aiding the implicit memory system. PMID:29312083
High-Order Space-Time Methods for Conservation Laws
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2013-01-01
Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one-spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p(exp 2)). Fourier analyses for the one and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown
Total Motion Across the East African Rift Viewed From the Southwest Indian Ridge
NASA Astrophysics Data System (ADS)
Royer, J.; Gordon, R. G.
2005-05-01
The Nubian plate is known to have been separating from the Somalian plate along the East African Rift since Oligocene time. Recent works have shown that the spreading rates and spreading directions since 11 Ma along the Southwest Indian Ridge (SWIR) record Nubia-Antarctica motion west of the Andrew Bain Fracture Zone complex (ABFZ; between 25E and 35E) and Somalia-Antarctica motion east of it. Nubia-Somalia motion can be determined by differencing Nubia-Antarctica and Somalia-Antarctica motion. To estimate the total motion across the East African Rift, we estimated and differenced Nubia-Antarctica motion and Somalia-Antarctica motion for times that preceded the initiation of Nubia-Somalia motion. We analyze anomalies 24n.3o (53 Ma), 21o (48 Ma), 18o (40 Ma) and 13o (34 Ma). Preliminary results show that the poles of the finite rotations that describe the Nubia-Somalia motions cluster near 30E, 42S. Angles of rotation range from 2.7 to 4.0 degrees. The uncertainty regions are large. The lower estimate predicts a total extension of 245 km at the latitude of the Ethiopian rift (41E, 9N) in a direction N104, perpendicular to the mean trend of the rift. Assuming an age of 34 Ma for the initiation of rifting, the average rate of motion would be 7 mm/a, near the 9 mm/a deduced from present-day geodetic measurements [e.g. synthesis of Fernandes et al., 2004]. Although these results require further analysis, particularly on the causes of the large uncertainties, they represent the first independent estimate of the total extension across the rift. Among other remaining questions are the following: How significant are the differences between these estimates and those for younger chrons (5 or 6 ; respectively 11 and 20 Ma), i.e. is the start of extension datable? Is the region east of the ABFZ part of the Somalian plate or does it form a distinct component plate of Somalia, as postulated by Hartnady (2004)? How has motion between two or more component plates within the African composite plate affected estimates of India-Eurasia motion and of Pacific-North America motion?
Verburgh, L; Scherder, E J A; van Lange, P A M; Oosterlaan, J
2016-09-01
In sports, fast and accurate execution of movements is required. It has been shown that implicitly learned movements might be less vulnerable than explicitly learned movements to stressful and fast changing circumstances that exist at the elite sports level. The present study provides insight in explicit and implicit motor learning in youth soccer players with different expertise levels. Twenty-seven youth elite soccer players and 25 non-elite soccer players (aged 10-12) performed a serial reaction time task (SRTT). In the SRTT, one of the sequences must be learned explicitly, the other was implicitly learned. No main effect of group was found for implicit and explicit learning on mean reaction time (MRT) and accuracy. However, for MRT, an interaction was found between learning condition, learning phase and group. Analyses showed no group effects for the explicit learning condition, but youth elite soccer players showed better learning in the implicit learning condition. In particular, during implicit motor learning youth elite soccer showed faster MRTs in the early learning phase and earlier reached asymptote performance in terms of MRT. Present findings may be important for sports because children with superior implicit learning abilities in early learning phases may be able to learn more (durable) motor skills in a shorter time period as compared to other children.
Elizabeth E. Hoy; Nancy H.F. French; Merritt R. Turetsky; Simon N. Trigg; Eric S. Kasischke
2008-01-01
Satellite remotely sensed data of fire disturbance offers important information; however, current methods to study fire severity may need modifications for boreal regions. We assessed the potential of the differenced Normalized Burn Ratio (dNBR) and other spectroscopic indices and image transforms derived from Landsat TM/ETM+ data for mapping fire severity in Alaskan...
Viking S-band Doppler RMS phase fluctuations used to calibrate the mean 1976 equatorial corona
NASA Technical Reports Server (NTRS)
Berman, A. L.; Wackley, J. A.
1977-01-01
Viking S-band Doppler RMS phase fluctuations (noise) and comparisons of Viking Doppler noise to Viking differenced S-X range measurements are used to construct a mean equatorial electron density model for 1976. Using Pioneer Doppler noise results (at high heliographic latitudes, also from 1976), an equivalent nonequatorial electron density model is approximated.
Finite difference methods for the solution of unsteady potential flows
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1982-01-01
Various problems which are confronted in the development of an unsteady finite difference potential code are reviewed mainly in the context of what is done for a typical small disturbance and full potential method. The issues discussed include choice of equations, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three dimensional rotor calculations, are demonstrated.
Modeling of multi-strata forest fire severity using Landsat TM data
Q. Meng; R.K. Meentemeyer
2011-01-01
Most of fire severity studies use field measures of composite burn index (CBI) to represent forest fire severity and fit the relationships between CBI and Landsat imagery derived differenced normalized burn ratio (dNBR) to predict and map fire severity at unsampled locations. However, less attention has been paid on the multi-strata forest fire severity, which...
Donovan S. Birch; Penelope Morgan; Crystal A. Kolden; John T. Abatzoglou; Gregory K. Dillon; Andrew T. Hudak; Alistair M. S. Smith
2015-01-01
Burn severity as inferred from satellite-derived differenced Normalized Burn Ratio (dNBR) is useful for evaluating fire impacts on ecosystems but the environmental controls on burn severity across large forest fires are both poorly understood and likely to be different than those influencing fire extent. We related dNBR to environmental variables including vegetation,...
ERIC Educational Resources Information Center
Belfield, Clive; Bailey, Thomas
2017-01-01
Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…
NASA Technical Reports Server (NTRS)
Yang, Cheng I.; Guo, Yan-Hu; Liu, C.- H.
1996-01-01
The analysis and design of a submarine propulsor requires the ability to predict the characteristics of both laminar and turbulent flows to a higher degree of accuracy. This report presents results of certain benchmark computations based on an upwind, high-resolution, finite-differencing Navier-Stokes solver. The purpose of the computations is to evaluate the ability, the accuracy and the performance of the solver in the simulation of detailed features of viscous flows. Features of interest include flow separation and reattachment, surface pressure and skin friction distributions. Those features are particularly relevant to the propulsor analysis. Test cases with a wide range of Reynolds numbers are selected; therefore, the effects of the convective and the diffusive terms of the solver can be evaluated separately. Test cases include flows over bluff bodies, such as circular cylinders and spheres, at various low Reynolds numbers, flows over a flat plate with and without turbulence effects, and turbulent flows over axisymmetric bodies with and without propulsor effects. Finally, to enhance the iterative solution procedure, a full approximation scheme V-cycle multigrid method is implemented. Preliminary results indicate that the method significantly reduces the computational effort.
An RGB colour image steganography scheme using overlapping block-based pixel-value differencing
Pal, Arup Kumar
2017-01-01
This paper presents a steganographic scheme based on the RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique is not applicable to embed the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e. red, green and blue. Hence, in the proposed scheme, initially the three colour components are represented into two overlapping blocks like the combination of red and green colour components, while another one is the combination of green and blue colour components, respectively. Later, the PVD technique is employed on each block independently to embed the secret data. The two overlapping blocks are readjusted to attain the modified three colour components. The notion of overlapping blocks has improved the embedding capacity of the cover image. The scheme has been tested on a set of colour images and satisfactory results have been achieved in terms of embedding capacity and upholding the acceptable visual quality of the stego-image. PMID:28484623
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in literature for standard test problems. In order to further improve accuracy especially near thin filaments, we suggest an artificial sharpening method, which is in a similar form with the conventional re-initialization method but utilizes sign of curvature instead of sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S.; Gezari, S.; Heinis, S.
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g {sub P1}, r {sub P1}, i {sub P1}, and z {sub P1}. We use three deterministic light-curve models to fit BL transients; a Gaussian, a Gamma distribution, and anmore » analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-out-one cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL, and 2262 SV, with a purity of 95.00% for AGNs, and 90.97% for SN based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.« less
The Time Course of Explicit and Implicit Categorization
Zakrzewski, Alexandria C.; Herberger, Eric; Boomer, Joseph; Roeder, Jessica; Ashby, F. Gregory; Church, Barbara A.
2015-01-01
Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization. PMID:26025556
Observing Bridge Dynamic Deflection in Green Time by Information Technology
NASA Astrophysics Data System (ADS)
Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi
2018-01-01
As traditional surveying methods are limited to observe bridge dynamic deflection; information technology is adopted to observe bridge dynamic deflection in Green time. Information technology used in this study means that we use digital cameras to photograph the bridge in red time as a zero image. Then, a series of successive images are photographed in green time. Deformation point targets are identified and located by Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing the successive images with a zero image, respectively. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in X, Z and comprehensive direction. The average measurement accuracies of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels in X, Z and comprehensive direction in these tests. The maximal bridge deflection is 44.16mm, which is less than 75mm (Bridge deflection tolerance value). Information technology in this paper can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time. It can provide data support for the site decisions to the bridge structure safety.
Real-time kinematic PPP GPS for structure monitoring applied on the Severn Suspension Bridge, UK
NASA Astrophysics Data System (ADS)
Tang, Xu; Roberts, Gethin Wyn; Li, Xingxing; Hancock, Craig Matthew
2017-09-01
GPS is widely used for monitoring large civil engineering structures in real time or near real time. In this paper the use of PPP GPS for monitoring large structures is investigated. The bridge deformation results estimated using double differenced measurements is used as the truth against which the performance of kinematic PPP in a real-time scenario for bridge monitoring is assessed. The towers' datasets with millimetre level movement and suspension cable dataset with centimetre/decimetre level movement were processed by both PPP and DD data processing methods. The consistency of tower PPP time series indicated that the wet tropospheric delay is the major obstacle for small deflection extraction. The results of suspension cable survey points indicate that an ionospheric-free linear measurement is competent for bridge deformation by PPP kinematic model, the frequency domain analysis yields very similar results using either PPP or DD. This gives evidence that PPP can be used as an alternative method to DD for large structure monitoring when DD is difficult or impossible because of large baseline lengths, power outages or natural disasters. The PPP residual tropospheric wet delays can be applied to improve the capacity of small movement extraction.
The Deep Lens Survey : Real--time Optical Transient and Moving Object Detection
NASA Astrophysics Data System (ADS)
Becker, Andy; Wittman, David; Stubbs, Chris; Dell'Antonio, Ian; Loomba, Dinesh; Schommer, Robert; Tyson, J. Anthony; Margoniner, Vera; DLS Collaboration
2001-12-01
We report on the real-time optical transient program of the Deep Lens Survey (DLS). Meeting the DLS core science weak-lensing objective requires repeated visits to the same part of the sky, 20 visits for 63 sub-fields in 4 filters, on a 4-m telescope. These data are reduced in real-time, and differenced against each other on all available timescales. Our observing strategy is optimized to allow sensitivity to transients on several minute, one day, one month, and one year timescales. The depth of the survey allows us to detect and classify both moving and stationary transients down to ~ 25th magnitude, a relatively unconstrained region of astronomical variability space. All transients and moving objects, including asteroids, Kuiper belt (or trans-Neptunian) objects, variable stars, supernovae, 'unknown' bursts with no apparent host, orphan gamma-ray burst afterglows, as well as airplanes, are posted on the web in real-time for use by the community. We emphasize our sensitivity to detect and respond in real-time to orphan afterglows of gamma-ray bursts, and present one candidate orphan in the field of Abell 1836. See http://dls.bell-labs.com/transients.html.
NASA Technical Reports Server (NTRS)
Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate applicability of the proposed formulations to nonlinear dynamic structures. Different from the existing numerical methods such as direct time integrations or mode superposition techniques, the proposed methodology offers new perspectives and methodology of development, and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) through a nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.
The art of spacecraft design: A multidisciplinary challenge
NASA Technical Reports Server (NTRS)
Abdi, F.; Ide, H.; Levine, M.; Austel, L.
1989-01-01
Actual design turn-around time has become shorter due to the use of optimization techniques which have been introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor's series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, the current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied for a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.
A single-stage flux-corrected transport algorithm for high-order finite-volume methods
Chaplin, Christopher; Colella, Phillip
2017-05-08
We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered- differencemore » and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.« less
Human detection in sensitive security areas through recognition of omega shapes using MACH filters
NASA Astrophysics Data System (ADS)
Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert
2015-03-01
Human detection has gained considerable importance in aggravated security scenarios over recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. A larger accumulation of humans than the number of personal authorized to visit a security controlled area must be effectively detected, amicably alarmed and immediately monitored. A framework involving a novel combination of some existing techniques allows an immediate detection of an undesirable crowd in a region under observation. Frame differencing provides a clear visibility of moving objects while highlighting those objects in each frame acquired by a real time camera. Training of a correlation pattern recognition based filter on desired shapes such as elliptical representations of human faces (variants of an Omega Shape) yields correct detections. The inherent ability of correlation pattern recognition filters caters for angular rotations in the target object and renders decision regarding the existence of the number of persons exceeding an allowed figure in the monitored area.
Computation of incompressible viscous flows through artificial heart devices with moving boundaries
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Rogers, Stuart; Kwak, Dochan; Chang, I.-DEE
1991-01-01
The extension of computational fluid dynamics techniques to artificial heart flow simulations is illustrated. Unsteady incompressible Navier-Stokes equations written in 3-D generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudo compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The efficiency and robustness of the time accurate formulation of the algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated with experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapping grid embedding scheme are used, respectively. Steady state solutions for the flow through a tilting disk heart valve was compared against experimental measurements. Good agreement was obtained. The flow computation during the valve opening and closing is carried out to illustrate the moving boundary capability.
Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.
2016-12-01
The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
Berry, Tanya R; Rodgers, Wendy M; Divine, Alison; Hall, Craig
2018-06-19
Discrepancies between automatically activated associations (i.e., implicit evaluations) and explicit evaluations of motives (measured with a questionnaire) could lead to greater information processing to resolve discrepancies or self-regulatory failures that may affect behavior. This research examined the relationship of health and appearance exercise-related explicit-implicit evaluative discrepancies, the interaction between implicit and explicit evaluations, and the combined value of explicit and implicit evaluations (i.e., the summed scores) to dropout from a yearlong exercise program. Participants (N = 253) completed implicit health and appearance measures and explicit health and appearance motives at baseline, prior to starting the exercise program. The sum of implicit and explicit appearance measures was positively related to weeks in the program, and discrepancy between the implicit and explicit health measures was negatively related to length of time in the program. Implicit exercise evaluations and their relationships to oft-cited motives such as appearance and health may inform exercise dropout.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
Fitzgerald, Michael G.; Karlinger, Michael R.
1983-01-01
Time-series models were constructed for analysis of daily runoff and sediment discharge data from selected rivers of the Eastern United States. Logarithmic transformation and first-order differencing of the data sets were necessary to produce second-order, stationary time series and remove seasonal trends. Cyclic models accounted for less than 42 percent of the variance in the water series and 31 percent in the sediment series. Analysis of the apparent oscillations of given frequencies occurring in the data indicates that frequently occurring storms can account for as much as 50 percent of the variation in sediment discharge. Components of the frequency analysis indicate that a linear representation is reasonable for the water-sediment system. Models that incorporate lagged water discharge as input prove superior to univariate techniques in modeling and prediction of sediment discharges. The random component of the models includes errors in measurement and model hypothesis and indicates no serial correlation. An index of sediment production within or between drain-gage basins can be calculated from model parameters.
An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.
En Route to Depression: Self-Esteem Discrepancies and Habitual Rumination.
Phillips, Wendy J; Hine, Donald W
2016-02-01
Dual-process models of cognitive vulnerability to depression suggest that some individuals possess discrepant implicit and explicit self-views, such as high explicit and low implicit self-esteem (fragile self-esteem) or low explicit and high implicit self-esteem (damaged self-esteem). This study investigated whether individuals with discrepant self-esteem may employ depressive rumination in an effort to reduce discrepancy-related dissonance, and whether the relationship between self-esteem discrepancy and future depressive symptoms varies as a function of rumination tendencies. Hierarchical regressions examined whether self-esteem discrepancy was associated with rumination in an Australian undergraduate sample at Time 1 (N = 306; M(age) = 29.9), and whether rumination tendencies moderated the relationship between self-esteem discrepancy and depressive symptoms assessed 3 months later (n = 160). Damaged self-esteem was associated with rumination at Time 1. As hypothesized, rumination moderated the relationship between self-esteem discrepancy and depressive symptoms at Time 2, where fragile self-esteem and high rumination tendencies at Time 1 predicted the highest levels of subsequent dysphoria. Results are consistent with dual-process propositions that (a) explicit self-regulation strategies may be triggered when explicit and implicit self-beliefs are incongruent, and (b) rumination may increase the likelihood of depression by expending cognitive resources and/or amplifying negative implicit biases. © 2014 Wiley Periodicals, Inc.
A point implicit time integration technique for slow transient flow problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation ofmore » explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust.« less
GPS receiver CODE bias estimation: A comparison of two methods
NASA Astrophysics Data System (ADS)
McCaffrey, Anthony M.; Jayachandran, P. T.; Themens, D. R.; Langley, R. B.
2017-04-01
The Global Positioning System (GPS) is a valuable tool in the measurement and monitoring of ionospheric total electron content (TEC). To obtain accurate GPS-derived TEC, satellite and receiver hardware biases, known as differential code biases (DCBs), must be estimated and removed. The Center for Orbit Determination in Europe (CODE) provides monthly averages of receiver DCBs for a significant number of stations in the International Global Navigation Satellite Systems Service (IGS) network. A comparison of the monthly receiver DCBs provided by CODE with DCBs estimated using the minimization of standard deviations (MSD) method on both daily and monthly time intervals, is presented. Calibrated TEC obtained using CODE-derived DCBs, is accurate to within 0.74 TEC units (TECU) in differenced slant TEC (sTEC), while calibrated sTEC using MSD-derived DCBs results in an accuracy of 1.48 TECU.
Salient features of dependence in daily US stock market indices
NASA Astrophysics Data System (ADS)
Gil-Alana, Luis A.; Cunado, Juncal; de Gracia, Fernando Perez
2013-08-01
This paper deals with the analysis of long range dependence in the US stock market. We focus first on the log-values of the Dow Jones Industrial Average, Standard and Poors 500 and Nasdaq indices, daily from February, 1971 to February, 2007. The volatility processes are examined based on the squared and the absolute values of the returns series, and the stability of the parameters across time is also investigated in both the level and the volatility processes. A method that permits us to estimate fractional differencing parameters in the context of structural breaks is conducted in this paper. Finally, the “day of the week” effect is examined by looking at the order of integration for each day of the week, providing also a new modeling approach to describe the dependence in this context.
Combustion chamber analysis code
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-01-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
CFD applications in hypersonic flight
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1992-01-01
Design studies are underway for a variety of hypersonic flight vehicles. The National Aero-Space Plane will provide a reusable, single-stage-to-orbit capability for routine access to low earth orbit. Flight-capable satellites will dip into the atmosphere to maneuver to new orbits, while planetary probes will decelerate at their destination by atmospheric aerobraking. To supplement limited experimental capabilities in the hypersonic regime, CFD is being used to analyze the flow about these configurations. The governing equations include fluid dynamic as well as chemical species equations, which are solved with robust upwind differencing schemes. Examples of CFD applications to hypersonic vehicles suggest an important role this technology will play in the development of future aerospace systems. The computational resources needed to obtain solutions are large, but various strategies are being exploited to reduce the time required for complete vehicle simulations.
NASA Technical Reports Server (NTRS)
Montes, Carlo; Jacob, Frederic
2017-01-01
We compared the capabilities of Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imageries for mapping daily evapotranspiration (ET) within a Mediterranean vineyard watershed. We used Landsat and ASTER data simultaneously collected on four dates in 2007 and 2008, along with the simplified surface energy balance index (S-SEBI) model. We used previously ground-validated good quality ASTER estimates as reference, and we analyzed the differences with Landsat retrievals in light of the instrumental factors and methodology. Although Landsat and ASTER retrievals of S-SEBI inputs were different, estimates of daily ET from the two imageries were similar. This is ascribed to the S-SEBI spatial differencing in temperature, and opens the path for using historical Landsat time series over vineyards.
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-times systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a different yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Implicit and explicit social mentalizing: dual processes driven by a shared neural network
Van Overwalle, Frank; Vandekerckhove, Marie
2013-01-01
Recent social neuroscientific evidence indicates that implicit and explicit inferences on the mind of another person (i.e., intentions, attributions or traits), are subserved by a shared mentalizing network. Under both implicit and explicit instructions, ERP studies reveal that early inferences occur at about the same time, and fMRI studies demonstrate an overlap in core mentalizing areas, including the temporo-parietal junction (TPJ) and the medial prefrontal cortex (mPFC). These results suggest a rapid shared implicit intuition followed by a slower explicit verification processes (as revealed by additional brain activation during explicit vs. implicit inferences). These data provide support for a default-adjustment dual-process framework of social mentalizing. PMID:24062663
Atmospheric cloud physics thermal systems analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Engineering analyses performed on the Atmospheric Cloud Physics (ACPL) Science Simulator expansion chamber and associated thermal control/conditioning system are reported. Analyses were made to develop a verified thermal model and to perform parametric thermal investigations to evaluate systems performance characteristics. Thermal network representations of solid components and the complete fluid conditioning system were solved simultaneously using the Systems Improved Numerical Differencing Analyzer (SINDA) computer program.
Interior Fluid Dynamics of Liquid-Filled Projectiles
1989-12-01
the Sandia code. The previous codes are primarily based on finite-difference approximations with relatively coarse grid and were designed without...exploits Chorin’s method of artificial compressibility. The steady solution at 11 X 24 X 21 grid points in r, 0, z-direction is obtained by integrating...differences in radial and axial direction and pseudoepectral differencing in the azimuthal direction. Nonuniform grids are introduced for increased
Domain Derivatives in Dielectric Rough Surface Scattering
2015-01-01
and require the gradient of the objective function in the unknown model parameter vector at each stage of iteration. For large N, finite...differencing becomes numerically intensive, and an efficient alternative is domain differentiation in which the full gradient is obtained by solving a single...derivative calculation of the gradient for a locally perturbed dielectric interface. The method is non-variational, and algebraic in nature in that it
Wave Current Interactions and Wave-blocking Predictions Using NHWAVE Model
2013-03-01
Navier-Stokes equation. In this approach, as with previous modeling techniques, there is difficulty in simulating the free surface that inhibits accurate...hydrostatic, free - surface , rotational flows in multiple dimensions. It is useful in predicting transformations of surface waves and rapidly varied...Stelling, G., and M. Zijlema, 2003: An accurate and efficient finite-differencing algorithm for non-hydrostatic free surface flow with application to
A. M. S. Smith; L. B. Lenilte; A. T. Hudak; P. Morgan
2007-01-01
The Differenced Normalized Burn Ratio (deltaNBR) is widely used to map post-fire effects in North America from multispectral satellite imagery, but has not been rigorously validated across the great diversity in vegetation types. The importance of these maps to fire rehabilitation crews highlights the need for continued assessment of alternative remote sensing...
Relating fire-caused change in forest structure to remotely sensed estimates of fire severity
Jamie M. Lydersen; Brandon M. Collins; Jay D. Miller; Danny L. Fry; Scott L. Stephens
2016-01-01
Fire severity maps are an important tool for understanding fire effects on a landscape. The relative differenced normalized burn ratio (RdNBR) is a commonly used severity index in California forests, and is typically divided into four categories: unchanged, low, moderate, and high. RdNBR is often calculated twice--from images collected the year of the fire (initial...
Progress in Multi-Dimensional Upwind Differencing
1992-09-01
Fligure 4a a shiock less t raii~so ilc soliit ion is reached from Ii itial val ies conitaining 1shiocks anid S sonic poinits: agarin. thle residiial...8217 j~ vi are chu i.ewil thtii( lealst alliilied \\wit the conivectiloll direct nwi. in1 .3 tlimeIros alj’uled. (hiul *IS a treg h11f,1r p( ir~to1to .3
1994-02-01
numerical treatment. An explicit numerical procedure based on Runqe-Kutta time stepping for cell-centered, hexahedral finite volumes is...An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate...Discretization 16 3.1 Cell-Centered Finite -Volume Discretization in Space 16 3.2 Artificial Dissipation 17 3.3 Time Integration 21 3.4 Convergence
Real-time optimizations for integrated smart network camera
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois
2005-02-01
We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC based electronic card for image processing and some network capabilities. The application detects events of interest in visual scenes, highlights alarms and computes statistics. The system also produces meta-data information that could be shared between other cameras in a network. We describe the requirements of such a system and then show how the design of the system is optimized to process and compress video in real-time. Indeed, typical video-surveillance algorithms as background differencing, tracking and event detection should be highly optimized and simplified to be used in this hardware. To have a good adequation between hardware and software in this light embedded system, the software management is written on top of the java based middle-ware specification established by the OSGi alliance. We can integrate easily software and hardware in complex environments thanks to the Java Real-Time specification for the virtual machine and some network and service oriented java specifications (like RMI and Jini). Finally, we will report some outcomes and typical case studies of such a camera like counter-flow detection.
Geomorphic Response to Significant Sediment Loading Along Tahoma Creek on Mount Rainier, WA
NASA Astrophysics Data System (ADS)
Anderson, S.; Kennard, P.; Pitlick, J.
2012-12-01
Increased sediment loading in streams draining the flanks of Mt. Rainier has caused significant damage to National Park Service infrastructure and has prompted concern in downstream communities. The processes driving sedimentation and the controls on downstream response are explored in the 37 km2 Tahoma Creek basin, using repeat LiDAR surveys supplemented with additional topographic datasets. DEM differencing between 2003 and 2008 LiDAR datasets shows that over 2.2 million cubic meters of material was evacuated from the upper reaches of the basin, predominately in the form of debris flows. These debris flows were sourced in recently exposed lateral moraines, bulking through the broad collapse of unstable hillslopes. 40% of this material was deposited in the historic debris fan 4-6 km downstream of the terminus, while 55% completely exited the system at the downstream point of the surveys. Distinct zones of aggradation and incision of up to one meter are present along the lower channel and appear to be controlled by valley constrictions and inflections. However, the lower channel has shown remarkable long-term stability in the face of significant sediment loads. Alder ages suggest fluvial high stands in the late 70's and early 90's, immediately following periods of significant debris flow activity, yet the river quickly returned to pre-disturbance elevations. On longer time scales, the presence of old-growth forest on adjacent floodplain/terrace surfaces indicates broad stability on both vertical and horizontal planes. More than a passive indicator, these forested surfaces play a significant role in maintaining channel stability through increased overbank roughness and the formation of bank-armoring log jams. Sediment transport mechanics along this lower reach are explored using the TomSED sediment transport model, driven by data from an extensive sediment sampling and stream gaging effort. In its current state, the model is able to replicate the stability of the channel but significantly under predicts total loads when compared to the LiDAR differencing.
ERIC Educational Resources Information Center
Callens, Andy M.; Atchison, Timothy B.; Engler, Rachel R.
2009-01-01
Instructions for the Matrix Reasoning Test (MRT) of the Wechsler Adult Intelligence Scale-Third Edition were modified by explicitly stating that the subtest was untimed or that a per-item time limit would be imposed. The MRT was administered within one of four conditions: with (a) standard administration instructions, (b) explicit instructions…
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas
2015-04-01
Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...
2018-04-17
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored.The accuracymore » and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.« less
Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models
NASA Astrophysics Data System (ADS)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.
2018-04-01
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored.The accuracymore » and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.« less
On Feeling Torn About One’s Sexuality
Windsor-Shellard, Ben
2014-01-01
Three studies offer novel evidence addressing the consequences of explicit–implicit sexual orientation (SO) ambivalence. In Study 1, self-identified straight females completed explicit and implicit measures of SO. The results revealed that participants with greater SO ambivalence took longer responding to explicit questions about their sexual preferences, an effect moderated by the direction of ambivalence. Study 2 replicated this effect using a different paradigm. Study 3 included self-identified straight and gay female and male participants; participants completed explicit and implicit measures of SO, plus measures of self-esteem and affect regarding their SO. Among straight participants, the response time results replicated the findings of Studies 1 and 2. Among gay participants, trends suggested that SO ambivalence influenced time spent deliberating on explicit questions relevant to sexuality, but in a different way. Furthermore, the amount and direction of SO ambivalence was related to self-esteem. PMID:24972940
Fire frequency, area burned, and severity: A quantitative approach to defining a normal fire year
Lutz, J.A.; Key, C.H.; Kolden, C.A.; Kane, J.T.; van Wagtendonk, J.W.
2011-01-01
Fire frequency, area burned, and fire severity are important attributes of a fire regime, but few studies have quantified the interrelationships among them in evaluating a fire year. Although area burned is often used to summarize a fire season, burned area may not be well correlated with either the number or ecological effect of fires. Using the Landsat data archive, we examined all 148 wildland fires (prescribed fires and wildfires) >40 ha from 1984 through 2009 for the portion of the Sierra Nevada centered on Yosemite National Park, California, USA. We calculated mean fire frequency and mean annual area burned from a combination of field- and satellite-derived data. We used the continuous probability distribution of the differenced Normalized Burn Ratio (dNBR) values to describe fire severity. For fires >40 ha, fire frequency, annual area burned, and cumulative severity were consistent in only 13 of 26 years (50 %), but all pair-wise comparisons among these fire regime attributes were significant. Borrowing from long-established practice in climate science, we defined "fire normals" to be the 26 year means of fire frequency, annual area burned, and the area under the cumulative probability distribution of dNBR. Fire severity normals were significantly lower when they were aggregated by year compared to aggregation by area. Cumulative severity distributions for each year were best modeled with Weibull functions (all 26 years, r2 ??? 0.99; P < 0.001). Explicit modeling of the cumulative severity distributions may allow more comprehensive modeling of climate-severity and area-severity relationships. Together, the three metrics of number of fires, size of fires, and severity of fires provide land managers with a more comprehensive summary of a given fire year than any single metric.
Algorithms and software for nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.
1989-01-01
The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
NASA Astrophysics Data System (ADS)
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give the explicit symplectic algorithms based on the generating functions of orders 2 and 3 for relativistic dynamics of a charged particle. The methodology is not new, which has been applied to non-relativistic dynamics of charged particles, but the algorithm for relativistic dynamics has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
Finite difference methods for the solution of unsteady potential flows
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1985-01-01
A brief review is presented of various problems which are confronted in the development of an unsteady finite difference potential code. This review is conducted mainly in the context of what is done for a typical small disturbance and full potential methods. The issues discussed include choice of equation, linearization and conservation, differencing schemes, and algorithm development. A number of applications including unsteady three-dimensional rotor calculation, are demonstrated.
Finite Difference Methods for the Solution of Unsteady Potential Flows.
1982-06-01
prediction of loads on helicopter rotors in forward flight. Although aeroelastic effects are important, in this case the main source of unsteadiness is in the...and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three-dimensional rotor calculations...concerning tunnel turbulence, wall and scaling effects , and sepa- ration. We now know that many of these problems are magnified by the inherent susceptibility
SCISEAL: A CFD Code for Analysis of Fluid Dynamic Forces in Seals
NASA Technical Reports Server (NTRS)
Althavale, Mahesh M.; Ho, Yin-Hsing; Przekwas, Andre J.
1996-01-01
A 3D CFD code, SCISEAL, has been developed and validated. Its capabilities include cylindrical seals, and it is employed on labyrinth seals, rim seals, and disc cavities. State-of-the-art numerical methods include colocated grids, high-order differencing, and turbulence models which account for wall roughness. SCISEAL computes efficient solutions for complicated flow geometries and seal-specific capabilities (rotor loads, torques, etc.).
2010-04-01
structure design showed that we could achieve both of these goals with a 14-in (0.35 m) sensor cube. To avoid the reliance on accurate multiple...differenced pair receiver. 4. Conclusions We have designed and built a sensor package of a 14-in (0.35 m) cube based on the...funding (UX-1225, MM-0437, and MM-0838), we have successfully designed and built a cart-mounted Berkeley UXO Discriminator (BUD) and demonstrated its
Fast Optical Hazard Detection for Planetary Rovers Using Multiple Spot Laser Triangulation
NASA Technical Reports Server (NTRS)
Matthies, L.; Balch, T.; Wilcox, B.
1997-01-01
A new laser-based optical sensor system that provides hazard detection for planetary rovers is presented. It is anticipated that the sensor can support safe travel at speeds up to 6cm/second for large (1m) rovers in full sunlight on Earth or Mars. The system overcomes limitations in an older design that require image differencing ot detect a laser stripe in full sun.
Alphan, Hakan
2011-11-01
The aim of this study is to compare various image algebra procedures for their efficiency in locating and identifying different types of landscape changes on the margin of a Mediterranean coastal plain, Cukurova, Turkey. Image differencing and ratioing were applied to the reflective bands of Landsat TM datasets acquired in 1984 and 2006. Normalized Difference Vegetation index (NDVI) and Principal Component Analysis (PCA) differencing were also applied. The resulting images were tested for their capacity to detect nine change phenomena, which were a priori defined in a three-level classification scheme. These change phenomena included agricultural encroachment, sand dune afforestation, coastline changes and removal/expansion of reed beds. The percentage overall accuracies of different algebra products for each phenomenon were calculated and compared. The results showed that some of the changes such as sand dune afforestation and reed bed expansion were detected with accuracies varying between 85 and 97% by the majority of the algebra operations, while some other changes such as logging could only be detected by mid-infrared (MIR) ratioing. For optimizing change detection in similar coastal landscapes, underlying causes of these changes were discussed and the guidelines for selecting band and algebra operations were provided. Copyright © 2011 Elsevier Ltd. All rights reserved.
CFD Sensitivity Analysis of a Modern Civil Transport Near Buffet-Onset Conditions
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Allison, Dennis O.; Biedron, Robert T.; Buning, Pieter G.; Gainer, Thomas G.; Morrison, Joseph H.; Rivers, S. Melissa; Mysko, Stephen J.; Witkowski, David P.
2001-01-01
A computational fluid dynamics (CFD) sensitivity analysis is conducted for a modern civil transport at several conditions ranging from mostly attached flow to flow with substantial separation. Two different Navier-Stokes computer codes and four different turbulence models are utilized, and results are compared both to wind tunnel data at flight Reynolds number and flight data. In-depth CFD sensitivities to grid, code, spatial differencing method, aeroelastic shape, and turbulence model are described for conditions near buffet onset (a condition at which significant separation exists). In summary, given a grid of sufficient density for a given aeroelastic wing shape, the combined approximate error band in CFD at conditions near buffet onset due to code, spatial differencing method, and turbulence model is: 6% in lift, 7% in drag, and 16% in moment. The biggest two contributers to this uncertainty are turbulence model and code. Computed results agree well with wind tunnel surface pressure measurements both for an overspeed 'cruise' case as well as a case with small trailing edge separation. At and beyond buffet onset, computed results agree well over the inner half of the wing, but shock location is predicted too far aft at some of the outboard stations. Lift, drag, and moment curves are predicted in good agreement with experimental results from the wind tunnel.
Personal computer (PC) based image processing applied to fluid mechanics research
NASA Technical Reports Server (NTRS)
Cho, Y.-C.; Mclachlan, B. G.
1987-01-01
A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processsed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes commputation.
Personal Computer (PC) based image processing applied to fluid mechanics
NASA Technical Reports Server (NTRS)
Cho, Y.-C.; Mclachlan, B. G.
1987-01-01
A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
A Least-Squares Finite Element Method for Electromagnetic Scattering Problems
NASA Technical Reports Server (NTRS)
Wu, Jie; Jiang, Bo-nan
1996-01-01
The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFF-M directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement for the present scheme is competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.
NASA Technical Reports Server (NTRS)
Moore, James; Marty, Dave; Cody, Joe
2000-01-01
SRS and NASA/MSFC have developed software with unique capabilities to couple bearing kinematic modeling with high fidelity thermal modeling. The core thermomechanical modeling software was developed by SRS and others in the late 1980's and early 1990's under various different contractual efforts. SRS originally developed software that enabled SHABERTH (Shaft Bearing Thermal Model) and SINDA (Systems Improved Numerical Differencing Analyzer) to exchange data and autonomously allowing bearing component temperature effects to propagate into the steady state bearing mechanical model. A separate contract was issued in 1990 to create a personal computer version of the software. At that time SRS performed major improvements to the code. Both SHABERTH and SINDA were independently ported to the PC and compiled. SRS them integrated the two programs into a single program that was named SINSHA. This was a major code improvement.
NASA Technical Reports Server (NTRS)
Moore, James; Marty, Dave; Cody, Joe
2000-01-01
SRS and NASA/MSFC have developed software with unique capabilities to couple bearing kinematic modeling with high fidelity thermal modeling. The core thermomechanical modeling software was developed by SRS and others in the late 1980's and early 1990's under various different contractual efforts. SRS originally developed software that enabled SHABERTH (Shaft Bearing Thermal Model) and SINDA (Systems Improved Numerical Differencing Analyzer) to exchange data and autonomously allowing bearing component temperature effects to propagate into the steady state bearing mechanical model. A separate contract was issued in 1990 to create a personal computer version of the software. At that time SRS performed major improvements to the code. Both SHABERTH and SINDA were independently ported to the PC and compiled. SRS them integrated the two programs into a single program that was named SINSHA. This was a major code improvement.
Progress toward Consensus Estimates of Regional Glacier Mass Balances for IPCC AR5
NASA Astrophysics Data System (ADS)
Arendt, A. A.; Gardner, A. S.; Cogley, J. G.
2011-12-01
Glaciers are potentially large contributors to rising sea level. Since the last IPCC report in 2007 (AR4), there has been a widespread increase in the use of geodetic observations from satellite and airborne platforms to complement field observations of glacier mass balance, as well as significant improvements in the global glacier inventory. Here we summarize our ongoing efforts to integrate data from multiple sources to arrive at a consensus estimate for each region, and to quantify uncertainties in those estimates. We will use examples from Alaska to illustrate methods for combining Gravity Recovery and Climate Experiment (GRACE), elevation differencing and field observations into a single time series with related uncertainty estimates. We will pay particular attention to reconciling discrepancies between GRACE estimates from multiple processing centers. We will also investigate the extent to which improvements in the glacier inventory affect the accuracy of our regional mass balances.
The Mars Observer differential one-way range demonstration
NASA Technical Reports Server (NTRS)
Kroger, P. M.; Border, J. S.; Nandi, S.
1994-01-01
Current methods of angular spacecraft positioning using station differenced range data require an additional observation of an extragalactic radio source (quasar) to estimate the timing offset between the reference clocks at the two Deep Space Stations. The quasar observation is also used to reduce the effects of instrumental and media delays on the radio metric observable by forming a difference with the spacecraft observation (delta differential one-way range, delta DOR). An experiment has been completed using data from the Global Positioning System satellites to estimate the station clock offset, eliminating the need for the quasar observation. The requirements for direct measurement of the instrumental delays that must be made in the absence of a quasar observation are assessed. Finally, the results of the 'quasar-free' differential one-way range, or DOR, measurements of the Mars Observer spacecraft are compared with those of simultaneous conventional delta DOR measurements.
Efficient field-theoretic simulation of polymer solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106
2014-12-14
We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for fieldmore » theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.« less
Incompressible viscous flow computations for the pump components and the artificial heart
NASA Technical Reports Server (NTRS)
Kiris, Cetin
1992-01-01
A finite-difference, three-dimensional incompressible Navier-Stokes formulation to calculate the flow through turbopump components is utilized. The solution method is based on the pseudocompressibility approach and uses an implicit-upwind differencing scheme together with the Gauss-Seidel line relaxation method. Both steady and unsteady flow calculations can be performed using the current algorithm. In this work, the equations are solved in steadily rotating reference frames by using the steady-state formulation in order to simulate the flow through a turbopump inducer. Eddy viscosity is computed by using an algebraic mixing-length turbulence model. Numerical results are compared with experimental measurements and a good agreement is found between the two. Included in the appendix is a paper on incompressible viscous flow through artificial heart devices with moving boundaries. Time-accurate calculations, such as impeller and diffusor interaction, will be reported in future work.
Design for and efficient dynamic climate model with realistic geography
NASA Technical Reports Server (NTRS)
Suarez, M. J.; Abeles, J.
1984-01-01
The long term climate sensitivity which include realistic atmospheric dynamics are severely restricted by the expense of integrating atmospheric general circulation models are discussed. Taking as an example models used at GSFC for this dynamic model is an alternative which is of much lower horizontal or vertical resolution. The model of Heid and Suarez uses only two levels in the vertical and, although it has conventional grid resolution in the meridional direction, horizontal resolution is reduced by keeping only a few degrees of freedom in the zonal wavenumber spectrum. Without zonally asymmetric forcing this model simulates a day in roughly 1/2 second on a CRAY. The model under discussion is a fully finite differenced, zonally asymmetric version of the Heid-Suarez model. It is anticipated that speeds can be obtained a few seconds a day roughly 50 times faster than moderate resolution, multilayer GCM's.
Explicit reference governor for linear systems
NASA Astrophysics Data System (ADS)
Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo
2018-06-01
The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored for the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve the performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performances that are comparable to optimisation-based reference governors.
Changes of Explicit and Implicit Stigma in Medical Students during Psychiatric Clerkship.
Wang, Peng-Wei; Ko, Chih-Hung; Chen, Cheng-Sheng; Yang, Yi-Hsin Connine; Lin, Huang-Chi; Cheng, Cheng-Chung; Tsang, Hin-Yeung; Wu, Ching-Kuan; Yen, Cheng-Fang
2016-04-01
This study examines the differences in explicit and implicit stigma between medical and non-medical undergraduate students at baseline; the changes of explicit and implicit stigma in medical undergraduate and non-medical undergraduate students after a 1-month psychiatric clerkship and 1-month follow-up period; and the differences in the changes of explicit and implicit stigma between medical and non-medical undergraduate students. Seventy-two medical undergraduate students and 64 non-medical undergraduate students were enrolled. All participants were interviewed at intake and after 1 month. The Taiwanese version of the Stigma Assessment Scale and the Implicit Association Test were used to measure the participants' explicit and implicit stigma. Neither explicit nor implicit stigma differed between two groups at baseline. The medical, but not the non-medical, undergraduate students had a significant decrease in explicit stigma during the 1-month period of follow-up. Neither the medical nor the non-medical undergraduate students exhibited a significant change in implicit stigma during the one-month of follow-up, however. There was an interactive effect between group and time on explicit stigma but not on implicit stigma. Explicit but not implicit stigma toward mental illness decreased in the medical undergraduate students after a psychiatric clerkship. Further study is needed to examine how to improve implicit stigma toward mental illness.
Subliminal mere exposure and explicit and implicit positive affective responses.
Hicks, Joshua A; King, Laura A
2011-06-01
Research suggests that repeated subliminal exposure to environmental stimuli enhances positive affective responses. To date, this research has primarily concentrated on the effects of repeated exposure on explicit measures of positive affect (PA). However, recent research suggests that repeated subliminal presentations may increase implicit PA as well. The present study tested this hypothesis. Participants were either subliminally primed with repeated presentations of the same stimuli or only exposed to each stimulus one time. Results confirmed predictions showing that repeated exposure to the same stimuli increased both explicit and implicit PA. Implications for the role of explicit and implicit PA in attitudinal judgements are discussed.
Robust Real-Time Wide-Area Differential GPS Navigation
NASA Technical Reports Server (NTRS)
Yunck, Thomas P. (Inventor); Bertiger, William I. (Inventor); Lichten, Stephen M. (Inventor); Mannucci, Anthony J. (Inventor); Muellerschoen, Ronald J. (Inventor); Wu, Sien-Chong (Inventor)
1998-01-01
The present invention provides a method and a device for providing superior differential GPS positioning data. The system includes a group of GPS receiving ground stations covering a wide area of the Earth's surface. Unlike other differential GPS systems wherein the known position of each ground station is used to geometrically compute an ephemeris for each GPS satellite. the present system utilizes real-time computation of satellite orbits based on GPS data received from fixed ground stations through a Kalman-type filter/smoother whose output adjusts a real-time orbital model. ne orbital model produces and outputs orbital corrections allowing satellite ephemerides to be known with considerable greater accuracy than from die GPS system broadcasts. The modeled orbits are propagated ahead in time and differenced with actual pseudorange data to compute clock offsets at rapid intervals to compensate for SA clock dither. The orbital and dock calculations are based on dual frequency GPS data which allow computation of estimated signal delay at each ionospheric point. These delay data are used in real-time to construct and update an ionospheric shell map of total electron content which is output as part of the orbital correction data. thereby allowing single frequency users to estimate ionospheric delay with an accuracy approaching that of dual frequency users.
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a wide-spread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = udelta(t)/delta(x), must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c is less than 1, with a complex amplitude ratio, G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(- (Iota)(Nu)(Theta)) G(delta(c)), where N is the integer part of c, and delta(c) = c - N (less than 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large delta(t). The analysis also includes a simple interpretation of (large delta(t)) total-variation-diminishing (TVD) constraints.
NASA Astrophysics Data System (ADS)
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires to also treat parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conversation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using RKL scheme is demonstrated up to more than 104 processors.
Class of self-limiting growth models in the presence of nonlinear diffusion
NASA Astrophysics Data System (ADS)
Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar
2002-06-01
The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial population in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial front for these models.
ERIC Educational Resources Information Center
Shintani, Natsuko
2017-01-01
This study examines the effects of the timing of explicit instruction (EI) on grammatical accuracy. A total of 123 learners were divided into two groups: those with some productive knowledge of past-counterfactual conditionals (+Prior Knowledge) and those without such knowledge (-Prior Knowledge). Each group was divided into four conditions. Two…
Finite Element Modeling of Coupled Flexible Multibody Dynamics and Liquid Sloshing
2006-09-01
tanks is presented. The semi-discrete combined solid and fluid equations of motions are integrated using a time- accurate parallel explicit solver...Incompressible fluid flow in a moving/deforming container including accurate modeling of the free-surface, turbulence, and viscous effects ...paper, a single computational code which uses a time- accurate explicit solution procedure is used to solve both the solid and fluid equations of
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burger's equation and 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to 1D Burger's equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.
2016-01-01
Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444
Generalized Abstract Symbolic Summaries
NASA Technical Reports Server (NTRS)
Person, Suzette; Dwyer, Matthew B.
2009-01-01
Current techniques for validating and verifying program changes often consider the entire program, even for small changes, leading to enormous V&V costs over a program s lifetime. This is due, in large part, to the use of syntactic program techniques which are necessarily imprecise. Building on recent advances in symbolic execution of heap manipulating programs, in this paper, we develop techniques for performing abstract semantic differencing of program behaviors that offer the potential for improved precision.
NASA Astrophysics Data System (ADS)
Cavalli, Marco; Goldin, Beatrice; Comiti, Francesco; Brardinoni, Francesco; Marchi, Lorenzo
2017-08-01
Digital elevation models (DEMs) built from repeated topographic surveys permit producing DEM of Difference (DoD) that enables assessment of elevation variations and estimation of volumetric changes through time. In the framework of sediment transport studies, DEM differencing enables quantitative and spatially-distributed representation of erosion and deposition within the analyzed time window, at both the channel reach and the catchment scale. In this study, two high-resolution Digital Terrain Models (DTMs) derived from airborne LiDAR data (2 m resolution) acquired in 2005 and 2011 were used to characterize the topographic variations caused by sediment erosion, transport and deposition in two adjacent mountain basins (Gadria and Strimm, Vinschgau - Venosta valley, Eastern Alps, Italy). These catchments were chosen for their contrasting morphology and because they feature different types and intensity of sediment transfer processes. A method based on fuzzy logic, which takes into account spatially variable DTMs uncertainty, was used to derive the DoD of the study area. Volumes of erosion and deposition calculated from the DoD were then compared with post-event field surveys to test the consistency of two independent estimates. Results show an overall agreement between the estimates, with differences due to the intrinsic approximations of the two approaches. The consistency of DoD with post-event estimates encourages the integration of these two methods, whose combined application may permit to overcome the intrinsic limitations of the two estimations. The comparison between 2005 and 2011 DTMs allowed to investigate the relationships between topographic changes and geomorphometric parameters expressing the role of topography on sediment erosion and deposition (i.e., slope and contributing area) and describing the morphology influenced by debris flows and fluvial processes (i.e., curvature). Erosion and deposition relations in the slope-area space display substantial differences between the Gadria and the Strimm basins. While in the former erosion and deposition clusters are reasonably well discriminated, in the latter, characterized by a complex stepped structure, we observe substantial overlapping. Erosion mostly occurred in areas that show persistency of concavity or transformation from convex and flat to concave surfaces, whereas deposition prevailingly took place on convex morphologies. Less expected correspondences between curvature and topographic changes can be explained by the variable sediment transport processes, which are often characterized by alternation of erosion and deposition between different events and even during the same event.
NASA Technical Reports Server (NTRS)
Pollack, James B.; Rind, David; Lacis, Andrew; Hansen, James E.; Sato, Makiko; Ruedy, Reto
1993-01-01
The response of the climate system to a temporally and spatially constant amount of volcanic particles is simulated using a general circulation model (GCM). The optical depth of the aerosols is chosen so as to produce approximately the same amount of forcing as results from doubling the present CO2 content of the atmosphere and from the boundary conditions associated with the peak of the last ice age. The climate changes produced by long-term volcanic aerosol forcing are obtained by differencing this simulation and one made for the present climate with no volcanic aerosol forcing. The simulations indicate that a significant cooling of the troposphere and surface can occur at times of closely spaced multiple sulfur-rich volcanic explosions that span time scales of decades to centuries. The steady-state climate response to volcanic forcing includes a large expansion of sea ice, especially in the Southern Hemisphere; a resultant large increase in surface and planetary albedo at high latitudes; and sizable changes in the annually and zonally averaged air temperature.
Summary of extensometric measurements in El Paso, Texas
Heywood, Charles E.
2003-01-01
Two counter-weighted-pipe borehole extensometers were installed on the left bank of the Rio Grande between El Paso, Texas, and Ciudad Juarez, Chihuahua, Mexico, in 1992. A shallow extensometer measures vertical compaction in the 6- to 100-meter aquifer-system depth interval. A deep extensometer measures vertical compaction in the 6- to 305-meter aquifer-system depth interval. Both extensometers are referenced to the same surface datum, which allows time-series differencing to determine vertical compaction in the depth interval between 100 and 305 meters. From April 2, 1993, through June 13, 2002, 1.6 centimeters of compaction occurred in the 6-to 305-m depth interval. Until February 1999, most aquifer-system compaction occurred in the deeper aquifer-system interval between 100 and 305 meters, from which ground water was extracted. After that time, compaction in the shallow interval from 6 to 100 meters was predominant and attained a maximum of 7.6 millimeters by June 13, 2002. Minor residual compaction is expected to continue; continued maintenance of the El Paso extensometers would document this process.
NASA Astrophysics Data System (ADS)
Muniandy, Sithi V.; Uning, Rosemary
2006-11-01
Foreign currency exchange rate policies of ASEAN member countries have undergone tremendous changes following the 1997 Asian financial crisis. In this paper, we study the fractal and long-memory characteristics in the volatility of five ASEAN founding members’ exchange rates with respect to US dollar. The impact of exchange rate policies implemented by the ASEAN-5 countries on the currency fluctuations during pre-, mid- and post-crisis are briefly discussed. The time series considered are daily price returns, absolute returns and aggregated absolute returns, each partitioned into three segments based on the crisis regimes. These time series are then modeled using fractional Gaussian noise, fractionally integrated ARFIMA (0,d,0) and generalized Cauchy process. The first two stationary models provide the description of long-range dependence through Hurst and fractional differencing parameter, respectively. Meanwhile, the generalized Cauchy process offers independent estimation of fractal dimension and long memory exponent. In comparison, among the three models we found that the generalized Cauchy process showed greater sensitivity to transition of exchange rate regimes that were implemented by ASEAN-5 countries.
Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
2012-01-01
Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.
A Numerical Method of Calculating Propeller Noise Including Acoustic Nonlinear Effects
NASA Technical Reports Server (NTRS)
Korkan, K. D.
1985-01-01
Using the transonic flow fields(s) generated by the NASPROP-E computer code for an eight blade SR3-series propeller, a theoretical method is investigated to calculate the total noise values and frequency content in the acoustic near and far field without using the Ffowcs Williams - Hawkings equation. The flow field is numerically generated using an implicit three dimensional Euler equation solver in weak conservation law form. Numerical damping is required by the differencing method for stability in three dimensions, and the influence of the damping on the calculated acoustic values is investigated. The acoustic near field is solved by integrating with respect to time the pressure oscillations induced at a stationary observer location. The acoustic far field is calculated from the near field primitive variables as generated by NASPROP-E computer code using a method involving a perturbation velocity potential as suggested by Hawkings in the calculation of the acoustic pressure time-history at a specified far field observed location. the methodologies described are valid for calculating total noise levels and are applicable to any propeller geometry for which a flow field solution is available.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the 2-D or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating-direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 3 is the Programmer's Reference, and describes the program structure, the FORTRAN variables stored in common blocks, and the details of each subprogram.
Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetc Code TEMPEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, X Q
We present gyrokinetic neoclassical simulations of tokamak plasmas with self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five dimensional computational grid in phase space. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving gyrokinetic Poisson equation with self-consistent poloidal variation. Withmore » our 4D ({psi}, {theta}, {epsilon}, {mu}) version of the TEMPEST code we compute radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of neoclassical electric field, which we compare with neoclassical theory with a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotations in toroidal magnetic fusion devices.« less
The explicit form of the rate function for semi-Markov processes and its contractions
NASA Astrophysics Data System (ADS)
Sughiyama, Yuki; Kobayashi, Testuya J.
2018-03-01
We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
Full versus divided attention and implicit memory performance.
Wolters, G; Prinsen, A
1997-11-01
Effects of full and divided attention during study on explicit and implicit memory performance were investigated in two experiments. Study time was manipulated in a third experiment. Experiment 1 showed that both similar and dissociative effects can be found in the two kinds of memory test, depending on the difficulty of the concurrent tasks used in the divided-attention condition. In this experiment, however, standard implicit memory tests were used and contamination by explicit memory influences cannot be ruled out. Therefore, in Experiments 2 and 3 the process dissociation procedure was applied. Manipulations of attention during study and of study time clearly affected the controlled (explicit) memory component, but had no effect on the automatic (implicit) memory component. Theoretical implications of these findings are discussed.
NASA Astrophysics Data System (ADS)
Kang, S.; Muralikrishnan, S.; Bui-Thanh, T.
2017-12-01
We propose IMEX HDG-DG schemes for Euler systems on cubed sphere. Of interest is subsonic flow, where the speed of the acoustic wave is faster than that of the nonlinear advection. In order to simulate these flows efficiently, we split the governing system into stiff part describing the fast waves and non-stiff part associated with nonlinear advection. The former is discretized implicitly with HDG method while explicit Runge-Kutta DG discretization is employed for the latter. The proposed IMEX HDG-DG framework: 1) facilitates high-order solution both in time and space; 2) avoids overly small time stepsizes; 3) requires only one linear system solve per time step; and 4) relatively to DG generates smaller and sparser linear system while promoting further parallelism owing to HDG discretization. Numerical results for various test cases demonstrate that our methods are comparable to explicit Runge-Kutta DG schemes in terms of accuracy, while allowing for much larger time stepsizes.
Batterink, Laura; Neville, Helen
2011-11-01
The vast majority of word meanings are learned simply by extracting them from context rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M-). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M- words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M- words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time window compared with M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, whereas implicit representations may require more extensive exposure or more time to emerge.
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of FE numerical substructure. In this way, the task execution time (TET) decreases such that the scale of the numerical substructure model increases. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influences of time delay on the displacement response become obvious with the mass ratio increasing, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under the large time-step and large time delay.
Time irreversibility in reversible shell models of turbulence.
De Pietro, Massimo; Biferale, Luca; Boffetta, Guido; Cencini, Massimo
2018-04-06
Turbulent flows governed by the Navier-Stokes equations (NSE) generate an out-of-equilibrium time irreversible energy cascade from large to small scales. In the NSE, the energy transfer is due to the nonlinear terms that are formally symmetric under time reversal. As for the dissipative term: first, it explicitly breaks time reversibility; second, it produces a small-scale sink for the energy transfer that remains effective even in the limit of vanishing viscosity. As a result, it is not clear how to disentangle the time irreversibility originating from the non-equilibrium energy cascade from the explicit time-reversal symmetry breaking due to the viscous term. To this aim, in this paper we investigate the properties of the energy transfer in turbulent shell models by using a reversible viscous mechanism, avoiding any explicit breaking of the [Formula: see text] symmetry. We probe time irreversibility by studying the statistics of Lagrangian power, which is found to be asymmetric under time reversal also in the time-reversible model. This suggests that the turbulent dynamics converges to a strange attractor where time reversibility is spontaneously broken and whose properties are robust for what concerns purely inertial degrees of freedoms, as verified by the anomalous scaling behavior of the velocity structure functions.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Toward real-time performance benchmarks for Ada
NASA Technical Reports Server (NTRS)
Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy
1986-01-01
The issue of real-time performance measurements for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques are developed. Then a set of Ada language features believed to be important for real-time performance are presented and specific measurement methods discussed. In addition, other important time related features which are not explicitly part of the language but are part of the run-time related features which are not explicitly part of the language but are part of the run-time system are also identified and measurement techniques developed. The measurement techniques are applied to the language and run-time system features and the results are presented.
NASA Astrophysics Data System (ADS)
Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries
2017-08-01
This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, alike Hybrid Implicit-Explicit (HIE) methods, as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid, have immediate practical applications.
A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)
Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...
What do we know about implicit false-belief tracking?
Schneider, Dana; Slaughter, Virginia P; Dux, Paul E
2015-02-01
There is now considerable evidence that neurotypical individuals track the internal cognitions of others, even in the absence of instructions to do so. This finding has prompted the suggestion that humans possess an implicit mental state tracking system (implicit Theory of Mind, ToM) that exists alongside a system that allows the deliberate and explicit analysis of the mental states of others (explicit ToM). Here we evaluate the evidence for this hypothesis and assess the extent to which implicit and explicit ToM operations are distinct. We review evidence showing that adults can indeed engage in ToM processing even without being conscious of doing so. However, at the same time, there is evidence that explicit and implicit ToM operations share some functional features, including drawing on executive resources. Based on the available evidence, we propose that implicit and explicit ToM operations overlap and should only be considered partially distinct.
Higher-order hybrid implicit/explicit FDTD time-stepping
NASA Astrophysics Data System (ADS)
Tierens, W.
2016-12-01
Both partially implicit FDTD methods, and symplectic FDTD methods of high temporal accuracy (3rd or 4th order), are well documented in the literature. In this paper we combine them: we construct a conservative FDTD method which is fourth order accurate in time and is partially implicit. We show that the stability condition for this method depends exclusively on the explicit part, which makes it suitable for use in e.g. modelling wave propagation in plasmas.
Three-dimensional compact explicit-finite difference time domain scheme with density variation
NASA Astrophysics Data System (ADS)
Tsuchiya, Takao; Maruta, Naoki
2018-07-01
In this paper, the density variation is implemented in the three-dimensional compact-explicit finite-difference time-domain (CE-FDTD) method. The formulation is first developed based on the continuity equation and the equation of motion, which include the density. Some numerical demonstrations are performed for the three-dimensional sound wave propagation in a two density layered medium. The numerical results are compared with the theoretical results to verify the proposed formulation.
Temporal and long-term trend analysis of class C notifiable diseases in China from 2009 to 2014
Zhang, Xingyu; Hou, Fengsu; Qiao, Zhijiao; Li, Xiaosong; Zhou, Lijun; Liu, Yuanyuan; Zhang, Tao
2016-01-01
Objectives Time series models are effective tools for disease forecasting. This study aims to explore the time series behaviour of 11 notifiable diseases in China and to predict their incidence through effective models. Settings and participants The Chinese Ministry of Health started to publish class C notifiable diseases in 2009. The monthly reported case time series of 11 infectious diseases from the surveillance system between 2009 and 2014 was collected. Methods We performed a descriptive and a time series study using the surveillance data. Decomposition methods were used to explore (1) their seasonality expressed in the form of seasonal indices and (2) their long-term trend in the form of a linear regression model. Autoregressive integrated moving average (ARIMA) models have been established for each disease. Results The number of cases and deaths caused by hand, foot and mouth disease ranks number 1 among the detected diseases. It occurred most often in May and July and increased, on average, by 0.14126/100 000 per month. The remaining incidence models show good fit except the influenza and hydatid disease models. Both the hydatid disease and influenza series become white noise after differencing, so no available ARIMA model can be fitted for these two diseases. Conclusion Time series analysis of effective surveillance time series is useful for better understanding the occurrence of the 11 types of infectious disease. PMID:27797981
Improved method for detecting local discontinuities in CMB data by finite differencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowyer, Jude; Jaffe, Andrew H.
2011-01-15
An unexpected distribution of temperatures in the CMB could be a sign of new physics. In particular, the existence of cosmic defects could be indicated by temperature discontinuities via the Kaiser-Stebbins effect. In this paper, we show how performing finite differences on a CMB map, with the noise regularized in harmonic space, may expose such discontinuities, and we report the results of this process on the 7-year Wilkinson Microwave Anisotropy Probe data.
Application of a Chimera Full Potential Algorithm for Solving Aerodynamic Problems
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1997-01-01
A numerical scheme utilizing a chimera zonal grid approach for solving the three dimensional full potential equation is described. Special emphasis is placed on describing the spatial differencing algorithm around the chimera interface. Results from two spatial discretization variations are presented; one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The presentation is highlighted with a number of transonic wing flow field computations.
Interprofessional Collaboration and Turf Wars How Prevalent Are Hidden Attitudes?*
Chung, Chadwick L. R.; Manga, Jasmin; McGregor, Marion; Michailidis, Christos; Stavros, Demetrios; Woodhouse, Linda J.
2012-01-01
Purpose: Interprofessional collaboration in health care is believed to enhance patient outcomes. However, where professions have overlapping scopes of practice (eg, chiropractors and physical therapists), "turf wars" can hinder effective collaboration. Deep-rooted beliefs, identified as implicit attitudes, provide a potential explanation. Even with positive explicit attitudes toward a social group, negative stereotypes may be influential. Previous studies on interprofessional attitudes have mostly used qualitative research methodologies. This study used quantitative methods to evaluate explicit and implicit attitudes of physical therapy students toward chiropractic. Methods: A paper-and-pencil instrument was developed and administered to 49 individuals (students and faculty) associated with a Canadian University master's entry-level physical therapy program after approval by the Research Ethics Board. The instrument evaluated explicit and implicit attitudes toward the chiropractic profession. Implicit attitudes were determined by comparing response times of chiropractic paired with positive versus negative descriptors. Results: Mean time to complete a word association task was significantly longer (t = 4.75, p =.00) when chiropractic was associated with positive rather than negative words. Explicit and implicit attitudes were not correlated (r = 0.13, p =.38). Conclusions: While little explicit bias existed, individuals associated with a master's entry-level physical therapy program appeared to have a significant negative implicit bias toward chiropractic PMID:22778528
Interprofessional collaboration and turf wars how prevalent are hidden attitudes?
Chung, Chadwick L R; Manga, Jasmin; McGregor, Marion; Michailidis, Christos; Stavros, Demetrios; Woodhouse, Linda J
2012-01-01
Interprofessional collaboration in health care is believed to enhance patient outcomes. However, where professions have overlapping scopes of practice (eg, chiropractors and physical therapists), "turf wars" can hinder effective collaboration. Deep-rooted beliefs, identified as implicit attitudes, provide a potential explanation. Even with positive explicit attitudes toward a social group, negative stereotypes may be influential. Previous studies on interprofessional attitudes have mostly used qualitative research methodologies. This study used quantitative methods to evaluate explicit and implicit attitudes of physical therapy students toward chiropractic. A paper-and-pencil instrument was developed and administered to 49 individuals (students and faculty) associated with a Canadian University master's entry-level physical therapy program after approval by the Research Ethics Board. The instrument evaluated explicit and implicit attitudes toward the chiropractic profession. Implicit attitudes were determined by comparing response times of chiropractic paired with positive versus negative descriptors. Mean time to complete a word association task was significantly longer (t = 4.75, p =.00) when chiropractic was associated with positive rather than negative words. Explicit and implicit attitudes were not correlated (r = 0.13, p =.38). While little explicit bias existed, individuals associated with a master's entry-level physical therapy program appeared to have a significant negative implicit bias toward chiropractic.
NASA Astrophysics Data System (ADS)
Vincent, C.; Ramanathan, A.; Wagnon, P.; Dobhal, D. P.; Linda, A.; Berthier, E.; Sharma, P.; Arnaud, Y.; Azam, M. F.; Jose, P. G.; Gardelle, J.
2012-09-01
The volume change of Chhota Shigri Glacier (India, 32° N) between 1988 and 2010 has been determined using in-situ geodetic measurements. This glacier has experienced only a slight mass loss over the last 22 yr (-3.8 ± 1.8 m w.e.). Using satellite digital elevation models (DEM) differencing and field measurements, we measure a negative mass balance (MB) between 1999 and 2011 (-4.7 ± 1.8 m w.e.). Thus, we deduce a positive MB between 1988 and 1999 (+1.0 ± 2.5 m w.e.). Furthermore, satellite DEM differencing reveals a good correspondence between the MB of Chhota Shigri Glacier and the MB of an over 2000 km2 glaciarized area in the Lahaul and Spiti region during 1999-2011. We conclude that there has been no large ice wastage in this region over the last 22 yr, ice mass loss being limited to the last decade. This contrasts to the most recent compilation of MB data in the Himalayan range that indicates ice wastage since 1975, accelerating after 1990. For the rest of western Himalaya, available observations of glacier MBs are too sparse and discontinuous to provide a clear and relevant regional pattern of glacier volume change over the last two decades.
Finite Element Simulations of Kaikoura, NZ Earthquake using DInSAR and High-Resolution DSMs
NASA Astrophysics Data System (ADS)
Barba, M.; Willis, M. J.; Tiampo, K. F.; Glasscoe, M. T.; Clark, M. K.; Zekkos, D.; Stahl, T. A.; Massey, C. I.
2017-12-01
Three-dimensional displacements from the Kaikoura, NZ, earthquake in November 2016 are imaged here using Differential Interferometric Synthetic Aperture Radar (DInSAR) and high-resolution Digital Surface Model (DSM) differencing and optical pixel tracking. Full-resolution co- and post-seismic interferograms of Sentinel-1A/B images are constructed using the JPL ISCE software. The OSU SETSM software is used to produce repeat 0.5 m posting DSMs from commercial satellite imagery, which are supplemented with UAV derived DSMs over the Kaikoura fault rupture on the eastern South Island, NZ. DInSAR provides long-wavelength motions while DSM differencing and optical pixel tracking provides both horizontal and vertical near fault motions, improving the modeling of shallow rupture dynamics. JPL GeoFEST software is used to perform finite element modeling of the fault segments and slip distributions and, in turn, the associated asperity distribution. The asperity profile is then used to simulate event rupture, the spatial distribution of stress drop, and the associated stress changes. Finite element modeling of slope stability is accomplished using the ultra high-resolution UAV derived DSMs to examine the evolution of post-earthquake topography, landslide dynamics and volumes. Results include new insights into shallow dynamics of fault slip and partitioning, estimates of stress change, and improved understanding of its relationship with the associated seismicity, deformation, and triggered cascading hazards.
Perignon, M. C.; Tucker, G.E.; Griffin, Eleanor R.; Friedman, Jonathan M.
2013-01-01
The spatial distribution of riparian vegetation can strongly influence the geomorphic evolution of dryland rivers during large floods. We present the results of an airborne lidar differencing study that quantifies the topographic change that occurred along a 12 km reach of the Lower Rio Puerco, New Mexico, during an extreme event in 2006. Extensive erosion of the channel banks took place immediately upstream of the study area, where tamarisk and sandbar willow had been removed. Within the densely vegetated study reach, we measure a net volumetric change of 578,050 ± ∼ 490,000 m3, with 88.3% of the total aggradation occurring along the floodplain and channel and 76.7% of the erosion focusing on the vertical valley walls. The sediment derived from the devegetated reach deposited within the first 3.6 km of the study area, with depth decaying exponentially with distance downstream. Elsewhere, floodplain sediments were primarily sourced from the erosion of valley walls. Superimposed on this pattern are the effects of vegetation and valley morphology on sediment transport. Sediment thickness is seen to be uniform among sandbar willows and highly variable within tamarisk groves. These reach-scale patterns of sedimentation observed in the lidar differencing likely reflect complex interactions of vegetation, flow, and sediment at the scale of patches to individual plants.
Lee, Eun Sook; Kim, Sung Hyo; Kim, Sun Mi; Sun, Jeong Ju
2005-12-01
The purpose of this study was to determine the effect of EPMLM (educational program of manual lymph massage) on the arm functioning and QOL (quality of life) in breast cancer patients with lymphedema. Subjects in the experimental group (n=20) participated in EPMLM for 6 weeks from June to July, 2005. The EPMLM consisted of training of lymph massage for 2 weeks and encourage and support of self-care using lymph massage for 4 weeks. The arm functioning assessed at pre-treatment, 2 weeks, and 6 weeks using Arm functioning questionnaire. The QOL assessed at pre-treatment and 6 weeks using SF-36. The outcome data of experimental group was compared with control group (n=20). The collected data was analyzed by using SPSS 10.0 statistical program. The arm functioning of experimental group was increased from 2 weeks after (W=.224, p=.011) and statistically differenced with control group at 2 weeks (Z=-2.241, p=.024) and 6 weeks (Z=-2.453, p=.013). Physical function of QOL domain increased in experimental group (Z=-1.162, p=.050), also statistically differenced with control group (Z=-2.182, p= .030) at 6 weeks. The results suggest that the educational program of manual lymph massage can improve arm functioning and physical function of QOL domain in breast cancer patients with lymphedema.
Detection of Deforestation and Land Conversion in Rondonia, Brazil Using Change Detection Techniques
NASA Technical Reports Server (NTRS)
Guild, Liane S.; Cohen, Warren B,; Kauffman, J. Boone; Peterson, David L. (Technical Monitor)
2001-01-01
Fires associated with tropical deforestation, land conversion, and land use greatly contribute to emissions as well as the depletion of carbon and nutrient pools. The objective of this research was to compare change detection techniques for identifying deforestation and cattle pasture formation during a period of early colonization and agricultural expansion in the vicinity of Jamari, Rond6nia. Multi-date Landsat Thematic Mapper (TM) data between 1984 and 1992 was examined in a 94 370-ha area of active deforestation to map land cover change. The Tasseled Cap (TC) transformation was used to enhance the contrast between forest, cleared areas, and regrowth. TC images were stacked into a composite multi-date TC and used in a principal components (PC) transformation to identify change components. In addition, consecutive TC image pairs were differenced and stacked into a composite multi-date differenced image. A maximum likelihood classification of each image composite was compared for identification of land cover change. The multi-date TC composite classification had the best accuracy of 78.1% (kappa). By 1984, only 5% of the study area had been cleared, but by 1992, 11% of the area had been deforested, primarily for pasture and 7% lost due to hydroelectric dam flooding. Finally, discrimination of pasture versus cultivation was improved due to the ability to detect land under sustained clearing opened to land exhibiting regrowth with infrequent clearing.
NASA Astrophysics Data System (ADS)
Prokešová, Roberta; Kardoš, Miroslav; Tábořík, Petr; Medveďová, Alžbeta; Stacke, Václav; Chudý, František
2014-11-01
Large earthflow-type landslides are destructive mass movement phenomena with highly unpredictable behaviour. Knowledge of earthflow kinematics is essential for understanding the mechanisms that control its movements. The present paper characterises the kinematic behaviour of a large earthflow near the village of Ľubietová in Central Slovakia over a period of 35 years following its most recent reactivation in 1977. For this purpose, multi-temporal spatial data acquired by point-based in-situ monitoring and optical remote sensing methods have been used. Quantitative data analyses including strain modelling and DEM differencing techniques have enabled us to: (i) calculate the annual landslide movement rates; (ii) detect the trend of surface displacements; (iii) characterise spatial variability of movement rates; (iv) measure changes in the surface topography on a decadal scale; and (v) define areas with distinct kinematic behaviour. The results also integrate the qualitative characteristics of surface topography, in particular the distribution of surface structures as defined by a high-resolution DEM, and the landslide subsurface structure, as revealed by 2D resistivity imaging. Then, the ground surface kinematics of the landslide is evaluated with respect to the specific conditions encountered in the study area including slope morphology, landslide subsurface structure, and local geological and hydrometeorological conditions. Finally, the broader implications of the presented research are discussed with particular focus on the role that strain-related structures play in landslide kinematic behaviour.
Narcissistic Traits and Explicit Self-Esteem: The Moderating Role of Implicit Self-View
Di Pierro, Rossella; Mattavelli, Simone; Gallucci, Marcello
2016-01-01
Objective: Whilst the relationship between narcissism and self-esteem has been studied for a long time, findings are still controversial. The majority of studies investigated narcissistic grandiosity (NG), neglecting the existence of vulnerable manifestations of narcissism. Moreover, recent studies have shown that grandiosity traits are not always associated with inflated explicit self-esteem. The aim of the present study is to investigate the relationship between narcissistic traits and explicit self-esteem, distinguishing between grandiosity and vulnerability. Moreover, we consider the role of implicit self-esteem in qualifying these associations. Method: Narcissistic traits, explicit and implicit self-esteem measures were assessed among 120 university students (55.8% women, Mage = 22.55, SD = 3.03). Results: Results showed different patterns of association between narcissistic traits and explicit self-esteem, depending on phenotypic manifestations of narcissism. Narcissistic vulnerability (NV) was linked to low explicit self-evaluations regardless of one’s levels of implicit self-esteem. On the other hand, the link between NG and explicit self-esteem was qualified by levels of implicit self-views, such that grandiosity was significantly associated with inflated explicit self-evaluations only at either high or medium levels of implicit self-views. Discussion: These findings showed that the relationship between narcissistic traits and explicit self-esteem is not univocal, highlighting the importance of distinguishing between NG and NV. Finally, the study suggested that both researchers and clinicians should consider the relevant role of implicit self-views in conditioning self-esteem levels reported explicitly by individuals with grandiose narcissistic traits. PMID:27920739
Narcissistic Traits and Explicit Self-Esteem: The Moderating Role of Implicit Self-View.
Di Pierro, Rossella; Mattavelli, Simone; Gallucci, Marcello
2016-01-01
Objective: Whilst the relationship between narcissism and self-esteem has been studied for a long time, findings are still controversial. The majority of studies investigated narcissistic grandiosity (NG), neglecting the existence of vulnerable manifestations of narcissism. Moreover, recent studies have shown that grandiosity traits are not always associated with inflated explicit self-esteem. The aim of the present study is to investigate the relationship between narcissistic traits and explicit self-esteem, distinguishing between grandiosity and vulnerability. Moreover, we consider the role of implicit self-esteem in qualifying these associations. Method: Narcissistic traits, explicit and implicit self-esteem measures were assessed among 120 university students (55.8% women, M age = 22.55, SD = 3.03). Results: Results showed different patterns of association between narcissistic traits and explicit self-esteem, depending on phenotypic manifestations of narcissism. Narcissistic vulnerability (NV) was linked to low explicit self-evaluations regardless of one's levels of implicit self-esteem. On the other hand, the link between NG and explicit self-esteem was qualified by levels of implicit self-views, such that grandiosity was significantly associated with inflated explicit self-evaluations only at either high or medium levels of implicit self-views. Discussion: These findings showed that the relationship between narcissistic traits and explicit self-esteem is not univocal, highlighting the importance of distinguishing between NG and NV. Finally, the study suggested that both researchers and clinicians should consider the relevant role of implicit self-views in conditioning self-esteem levels reported explicitly by individuals with grandiose narcissistic traits.
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur. Copyright
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay Derivas, E.
1975-01-01
A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Then explicit forward integrations in time are followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based in forward time integrations (false transient schemes). In particular it can attain a steady state solution of an elliptical system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
Functional differences between statistical learning with and without explicit training
Reber, Paul J.; Paller, Ken A.
2015-01-01
Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and prepare for incoming input. In this study, we ask whether the function of statistical learning may be enhanced through supplementary explicit training, in which underlying regularities are explicitly taught rather than simply abstracted through exposure. Learners were randomly assigned either to an explicit group or an implicit group. All learners were exposed to a continuous stream of repeating nonsense words. Prior to this implicit training, learners in the explicit group received supplementary explicit training on the nonsense words. Statistical learning was assessed through a speeded reaction-time (RT) task, which measured the extent to which learners used acquired statistical knowledge to optimize online processing. Both RTs and brain potentials revealed significant differences in online processing as a function of training condition. RTs showed a crossover interaction; responses in the explicit group were faster to predictable targets and marginally slower to less predictable targets relative to responses in the implicit group. P300 potentials to predictable targets were larger in the explicit group than in the implicit group, suggesting greater recruitment of controlled, effortful processes. Taken together, these results suggest that information abstracted through passive exposure during statistical learning may be processed more automatically and with less effort than information that is acquired explicitly. PMID:26472644
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu
2014-01-21
The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents formore » a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.« less
Explicit finite-difference simulation of optical integrated devices on massive parallel computers.
Sterkenburgh, T; Michels, R M; Dress, P; Franke, H
1997-02-20
An explicit method for the numerical simulation of optical integrated circuits by means of the finite-difference time-domain (FDTD) method is presented. This method, based on an explicit solution of Maxwell's equations, is well established in microwave technology. Although the simulation areas are small, we verified the behavior of three interesting problems, especially nonparaxial problems, with typical aspects of integrated optical devices. Because numerical losses are within acceptable limits, we suggest the use of the FDTD method to achieve promising quantitative simulation results.
Age differences in implicit memory: conceptual, perceptual, or methodological?
Mitchell, David B; Bruss, Peter J
2003-12-01
The authors examined age differences in conceptual and perceptual implicit memory via word-fragment completion, word-stem completion, category exemplar generation, picture-fragment identification, and picture naming. Young, middle-aged, and older participants (N = 60) named pictures and words at study. Limited test exposure minimized explicit memory contamination, yielding no reliable age differences and equivalent cross-format effects. In contrast, explicit memory and neuropsychological measures produced significant age differences. In a follow-up experiment, 24 young adults were informed a priori about implicit testing. Their priming was equivalent to the main experiment, showing that test trial time restrictions limit explicit memory strategies. The authors concluded that most implicit memory processes remain stable across adulthood and suggest that explicit contamination be rigorously monitored in aging studies.
EdgeMaps: visualizing explicit and implicit relations
NASA Astrophysics Data System (ADS)
Dörk, Marian; Carpendale, Sheelagh; Williamson, Carey
2011-01-01
In this work, we introduce EdgeMaps as a new method for integrating the visualization of explicit and implicit data relations. Explicit relations are specific connections between entities already present in a given dataset, while implicit relations are derived from multidimensional data based on shared properties and similarity measures. Many datasets include both types of relations, which are often difficult to represent together in information visualizations. Node-link diagrams typically focus on explicit data connections, while not incorporating implicit similarities between entities. Multi-dimensional scaling considers similarities between items, however, explicit links between nodes are not displayed. In contrast, EdgeMaps visualize both implicit and explicit relations by combining and complementing spatialization and graph drawing techniques. As a case study for this approach we chose a dataset of philosophers, their interests, influences, and birthdates. By introducing the limitation of activating only one node at a time, interesting visual patterns emerge that resemble the aesthetics of fireworks and waves. We argue that the interactive exploration of these patterns may allow the viewer to grasp the structure of a graph better than complex node-link visualizations.
Implicit and explicit motor learning: Application to children with Autism Spectrum Disorder (ASD).
Izadi-Najafabadi, Sara; Mirzakhani-Araghi, Navid; Miri-Lavasani, Negar; Nejati, Vahid; Pashazadeh-Azari, Zahra
2015-12-01
This study aims to determine whether children with Autism Spectrum Disorder (ASD) are capable of learning a motor skill both implicitly and explicitly. In the present study, 30 boys with ASD, aged 7-11 with IQ average of 81.2, were compared with 32 typical IQ- and age-matched boys on their performance on a serial reaction time task (SRTT). Children were grouped by ASD and typical children and by implicit and explicit learning groups for the SRTT. Implicit motor learning occurred in both children with ASD (p=.02) and typical children (p=.01). There were no significant differences between groups (p=.39). However, explicit motor learning was only observed in typical children (p=.01) not children with ASD (p=.40). There was a significant difference between groups for explicit learning (p=.01). The results of our study showed that implicit motor learning is not affected in children with ASD. Implications for implicit and explicit learning are applied to the CO-OP approach of motor learning with children with ASD. Copyright © 2015 Elsevier Ltd. All rights reserved.
Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.
2013-01-01
Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537
Henriksen, Niel M; Roe, Daniel R; Cheatham, Thomas E
2013-04-18
Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example, by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 μs of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations.
Time since maximum of Brownian motion and asymmetric Lévy processes
NASA Astrophysics Data System (ADS)
Martin, R. J.; Kearney, M. J.
2018-07-01
Motivated by recent studies of record statistics in relation to strongly correlated time series, we consider explicitly the drawdown time of a Lévy process, which is defined as the time since it last achieved its running maximum when observed over a fixed time period . We show that the density function of this drawdown time, in the case of a completely asymmetric jump process, may be factored as a function of t multiplied by a function of T ‑ t. This extends a known result for the case of pure Brownian motion. We state the factors explicitly for the cases of exponential down-jumps with drift, and for the downward inverse Gaussian Lévy process with drift.
An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro
2013-09-01
In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve Ohmic dissipation part of induction equation with the super-time-stepping method (STS) which allows us to take a longer time step than Courant-Friedrich-Levy stability condition. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that optimal choice of the parameters of STS for Ohmic dissipation of SPMHD is νsts ˜ 0.01 and Nsts ˜ 5.
An explicit mixed numerical method for mesoscale model
NASA Technical Reports Server (NTRS)
Hsu, H.-M.
1981-01-01
A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either shallow-water equations in one dimension or primitive equations in three dimensions. Since the technique is explicit and two time level, it conserves computer and programming resources.
NASA Technical Reports Server (NTRS)
Dey, C.; Dey, S. K.
1983-01-01
An explicit finite difference scheme consisting of a predictor and a corrector has been developed and applied to solve some hyperbolic partial differential equations (PDEs). The corrector is a convex-type function which is applied at each time level and at each mesh point. It consists of a parameter which may be estimated such that for larger time steps the algorithm should remain stable and generate a fast speed of convergence to the steady-state solution. Some examples have been given.
Batterink, Laura; Neville, Helen
2011-01-01
The vast majority of word meanings are learned simply by extracting them from context, rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M−). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M− words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M− words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time-window compared to M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component (LPC) is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, while implicit representations may require more extensive exposure or more time to emerge. PMID:21452941
Reduced Implicit and Explicit Sequence Learning in First-Episode Schizophrenia
ERIC Educational Resources Information Center
Pedersen, Anya; Siegmund, Ansgar; Ohrmann, Patricia; Rist, Fred; Rothermundt, Matthias; Suslow, Thomas; Arolt, Volker
2008-01-01
A high prevalence of deficits in explicit learning has been reported for schizophrenic patients, but it is less clear whether these patients are impaired in implicit learning. Deficits in implicit learning indicative of a fronto-striatal dysfunction have been reported using a serial reaction-time task (SRT), but the impact of typical neuroleptic…
A Conceptual Model for the Design and Delivery of Explicit Thinking Skills Instruction
ERIC Educational Resources Information Center
Kassem, Cherrie L.
2005-01-01
Developing student thinking skills is an important goal for most educators. However, due to time constraints and weighty content standards, thinking skills instruction is often embedded in subject matter, implicit and incidental. For best results, thinking skills instruction requires a systematic design and explicit teaching strategies. The…
Explicit and Implicit Verbal Response Inhibition in Preschool-Age Children Who Stutter.
Anderson, Julie D; Wagovich, Stacy A
2017-04-14
The purpose of this study was to examine (a) explicit and implicit verbal response inhibition in preschool children who do stutter (CWS) and do not stutter (CWNS) and (b) the relationship between response inhibition and language skills. Participants were 41 CWS and 41 CWNS between the ages of 3;1 and 6;1 (years;months). Explicit verbal response inhibition was measured using a computerized version of the grass-snow task (Carlson & Moses, 2001), and implicit verbal response inhibition was measured using the baa-meow task. Main dependent variables were reaction time and accuracy. The CWS were significantly less accurate than the CWNS on the implicit task, but not the explicit task. The CWS also exhibited slower reaction times than the CWNS on both tasks. Between-group differences in performance could not be attributed to working memory demands. Overall, children's performance on the inhibition tasks corresponded with parents' perceptions of their children's inhibition skills in daily life. CWS are less effective and efficient than CWNS in suppressing a dominant response while executing a conflicting response in the verbal domain.
Explicit and Implicit Verbal Response Inhibition in Preschool-Age Children Who Stutter
Wagovich, Stacy A.
2017-01-01
Purpose The purpose of this study was to examine (a) explicit and implicit verbal response inhibition in preschool children who do stutter (CWS) and do not stutter (CWNS) and (b) the relationship between response inhibition and language skills. Method Participants were 41 CWS and 41 CWNS between the ages of 3;1 and 6;1 (years;months). Explicit verbal response inhibition was measured using a computerized version of the grass–snow task (Carlson & Moses, 2001), and implicit verbal response inhibition was measured using the baa–meow task. Main dependent variables were reaction time and accuracy. Results The CWS were significantly less accurate than the CWNS on the implicit task, but not the explicit task. The CWS also exhibited slower reaction times than the CWNS on both tasks. Between-group differences in performance could not be attributed to working memory demands. Overall, children's performance on the inhibition tasks corresponded with parents' perceptions of their children's inhibition skills in daily life. Conclusions CWS are less effective and efficient than CWNS in suppressing a dominant response while executing a conflicting response in the verbal domain. PMID:28384673
Finite-difference numerical simulations of underground explosion cavity decoupling
NASA Astrophysics Data System (ADS)
Aldridge, D. F.; Preston, L. A.; Jensen, R. P.
2012-12-01
Earth models containing a significant portion of ideal fluid (e.g., air and/or water) are of increasing interest in seismic wave propagation simulations. Examples include a marine model with a thick water layer, and a land model with air overlying a rugged topographic surface. The atmospheric infrasound community is currently interested in coupled seismic-acoustic propagation of low-frequency signals over long ranges (~tens to ~hundreds of kilometers). Also, accurate and efficient numerical treatment of models containing underground air-filled voids (caves, caverns, tunnels, subterranean man-made facilities) is essential. In support of the Source Physics Experiment (SPE) conducted at the Nevada National Security Site (NNSS), we are developing a numerical algorithm for simulating coupled seismic and acoustic wave propagation in mixed solid/fluid media. Solution methodology involves explicit, time-domain, finite-differencing of the elastodynamic velocity-stress partial differential system on a three-dimensional staggered spatial grid. Conditional logic is used to avoid shear stress updating within the fluid zones; this approach leads to computational efficiency gains for models containing a significant proportion of ideal fluid. Numerical stability and accuracy are maintained at air/rock interfaces (where the contrast in mass density is on the order of 1 to 2000) via a finite-difference operator "order switching" formalism. The fourth-order spatial FD operator used throughout the bulk of the earth model is reduced to second-order in the immediate vicinity of a high-contrast interface. Current modeling efforts are oriented toward quantifying the amount of atmospheric infrasound energy generated by various underground seismic sources (explosions and earthquakes). Source depth and orientation, and surface topography play obvious roles. The cavity decoupling problem, where an explosion is detonated within an air-filled void, is of special interest. A point explosion source located at the center of a spherical cavity generates only diverging compressional waves. However, we find that shear waves are generated by an off-center source, or by a non-spherical cavity (e.g. a tunnel). Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
High Performance Programming Using Explicit Shared Memory Model on Cray T3D1
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Saini, Subhash; Grassi, Charles
1994-01-01
The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under Cray Research adaptive Fortran (CRAFT) model four programming methods (data parallel, work sharing, message-passing using PVM, and explicit shared memory model) are available to the users. However, at this time data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that the performance of neither standard PVM nor CRI s PVM exploits the hardware capabilities of the T3D. The reasons for the bad performance of PVM as a native message-passing library are presented. This is illustrated by the performance of NAS Parallel Benchmarks (NPB) programmed in explicit shared memory model on Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than obtained by using explicit shared memory model. This degradation in performance is also seen on CM-5 where the performance of applications using native message-passing library CMMD on CM-5 is also about 4 to 5 times less than using data parallel methods. The issues involved (such as barriers, synchronization, invalidating data cache, aligning data cache etc.) while programming in explicit shared memory model are discussed. Comparative performance of NPB using explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov's integral operators are used for describing of material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. Padè approximation is constructed to unite short-time and long-time models. The form of this approximation allows to formulate the explicit model using a fractional exponential Rabotnov's integral operator with parameters depending on the properties of surface wave. The applicability of derived models is studied by comparing with the exact solutions of a model problem. It is revealed that the model based on Padè approximation is highly effective for all the possible time domains.
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
1989-08-09
quantitative and can be ascribed to differences in experimental methodology , recovery methods and canputational procedure. one important differenc.e in...when the oil was pyrolyzed in sealed glass tubes. Aircraft turbo oil lubricants with the designation MIL-L-23699 are in canron usage throughout the...which is not explosive, not an oxidizing agent and is relatively inflamnable and non -corrosive. It has the following structure: CH2 - 0 CH3 CH2 C CH2 - 0
2014-09-15
solver, OpenFOAM version 2.1.‡ In particular, the incompressible laminar flow equations (Eq. 6-8) were solved in conjunction with the pressure im- plicit...central differencing and upwinding schemes, respectively. Since the OpenFOAM code is inherently transient, steady-state conditions were ob- tained...collaborative effort between Kitware and Los Alamos National Laboratory. ‡ OpenFOAM is a free, open-source computational fluid dynamics software developed
An application of fractional integration to a long temperature series
NASA Astrophysics Data System (ADS)
Gil-Alana, L. A.
2003-11-01
Some recently proposed techniques of fractional integration are applied to a long UK temperature series. The tests are valid under general forms of serial correlation and do not require estimation of the fractional differencing parameter. The results show that central England temperatures have increased about 0.23 °C per 100 years in recent history. Attempting to summarize the conclusions for each of the months, we are left with the impression that the highest increase has occurred during the months from October to March.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E.W.
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
Further results on the stagnation point boundary layer with hydrogen injection.
NASA Technical Reports Server (NTRS)
Wu, P.; Libby, P. A.
1972-01-01
The results of an earlier paper on the behavior of the boundary layer at an axisymmetric stagnation with hydrogen injection into a hot external airstream are extended to span the entire range from essentially frozen to essentially equilibrium flow. This extension is made possible by the employment of finite difference methods; the accurate treatment of the boundary conditions at 'infinity,' the differencing technique employed and the formulation resulting in block tri-diagonal matrices are slight variants in the present work.
Jay D. Miller; Eric E. Knapp; Carl H. Key; Carl N. Skinner; Clint J. Isbell; R. Max Creasy; Joseph W. Sherlock
2009-01-01
Multispectral satellite data have become a common tool used in the mapping of wildland fire effects. Fire severity, defined as the degree to which a site has been altered, is often the variable mapped. The Normalized Burn Ratio (NBR) used in an absolute difference change detection protocol (dNBR), has become the remote sensing method of choice for US Federal land...
Research in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Murman, Earll M.
1987-01-01
The numerical integration of quasi-one-dimensional unsteady flow problems which involve finite rate chemistry are discussed, and are expressed in terms of conservative form Euler and species conservation equations. Hypersonic viscous calculations for delta wing geometries is also examined. The conical Navier-Stokes equations model was selected in order to investigate the effects of viscous-inviscid interations. The more complete three-dimensional model is beyond the available computing resources. The flux vector splitting method with van Leer's MUSCL differencing is being used. Preliminary results were computed for several conditions.
CFD in Support of Wind Tunnel Testing for Aircraft/Weapons Integration
2004-06-01
Warming flux vector splitting scheme. Viscous rate t mies s to the oDentati ote t fluxes (computed using spatial central differencing) in erotate try...computations factors to eliminate them from the current computation. performed. The grid system consisted of 18 x 106 points These newly i-blanked grid...273-295. 130 14. van Leer, B., "Towards the Ultimate Conservative 18 . Suhs, N.E., and R.W. Tramel, "PEGSUS 4.0 Users Manual." Difference Scheme V. A
2007-11-01
again, with of the prevailing T, S, and, hence, D gradients through the the advent of high-performance spaceborne altimeters (e.g., high- aspect - ratio ... rectangular domains with linear dimensions largely , if not completely, eliminated by the differencing oper- of about 60 km in a 4-h flight. (See...strongest A simple four- quadrant arctangent of the terms in the density in the 00 and 1800 directions, whereas compensation is most ratio would serve our
Numerical Field Model Simulation of Full Scale Fire Tests in a Closed Spherical/Cylindrical Vessel.
1987-12-01
the behavior of an actual fire on board a ship. The computer model will be verified by the experimental data obtained in Fire-l. It is important to... behavior in simulations where convection is important. The upwind differencing scheme takes into account the unsymmetrical phenomenon of convection by using...TANK CELL ON THE NORTH SIDE) FOR A * * PARTICULAR FIRE CELL * * COSUMS (I,J) = THE ARRAY TO STORE THE SIMILIAR VALUE FOR THE FIRE * * CELL TO THE SOUTH
Computing interface motion in compressible gas dynamics
NASA Technical Reports Server (NTRS)
Mulder, W.; Osher, S.; Sethan, James A.
1992-01-01
An analysis is conducted of the coupling of Osher and Sethian's (1988) 'Hamilton-Jacobi' level set formulation of the equations of motion for propagating interfaces to a system of conservation laws for compressible gas dynamics, giving attention to both the conservative and nonconservative differencing of the level set function. The capabilities of the method are illustrated in view of the results of numerical convergence studies of the compressible Rayleigh-Taylor and Kelvin-Helmholtz instabilities for air-air and air-helium boundaries.
Mid-Infrared Spectroscopy of Carbon Stars in the Small Magellanic Cloud
2006-07-10
nod. Before extracting spectra from fit a variety of spectral feature shapes using MgS considerably the images, we used the imclean software package...mined from neighboring pixels. In addition to the dust features , the IRS wavelength range also To extract spectra from the cleaned and differenced...Example of the extraction of the molecular bands and the SiC dust 24 jIm, and they avoid any potential problems at the joint be- feature from the spectrum
Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.
2015-01-01
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174
Deng, Nanjie; Zhang, Bin W; Levy, Ronald M
2015-06-09
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
NASA Astrophysics Data System (ADS)
Hanagan, C.; La Femina, P.
2017-12-01
Understanding processes that lead to volcanic eruptions is paramount for predicting future volcanic activity. Telica volcano, Nicaragua is a persistently active volcano with hundreds of daily, low magnitude and low frequency seismic events, high-temperature degassing, and sub-decadal VEI 1-3 eruptions. The phreatic vulcanian eruptions of 1999, 2011, and 2013, and phreatic to phreatomagmatic vulcanian eruption of 2015 are thought to have resulted by sealing of the hydrothermal system prior to the eruptions. Two mechanisms have been proposed for sealing of the volcanic system, hydrothermal mineralization and landslides covering the vent. These eruptions affect the crater morphology of Telica volcano, and therefore the exact mechanisms of change to the crater's form are of interest to provide data that may support or refute the proposed sealing mechanisms, improving our understanding of eruption mechanisms. We use a collection of photographs between February 1994 and May 2016 and a combination of qualitative and quantitative photogrammetry to detect the extent and type of changes in crater morphology associated with 2011, 2013, and 2015 eruptive activity. We produced dense point cloud models using Agisoft PhotoScan Professional for times with sufficient photographic coverage, including August 2011, March 2013, December 2015, March 2016, and May 2016. Our May 2016 model is georeferenced, and each other point cloud was differenced using the C2C tool in CloudCompare and the M3C2 method (CloudCompare plugin) Lague et al. (2013). Results of the qualitative observations and quantitative differencing reveal a general trend of material subtraction from the inner crater walls associated with eruptive activity and accumulation of material on the crater floor, often visibly sourced from the walls of the crater. Both daily activity and VEI 1-3 explosive events changed the crater morphology, and correlation between a landslide-covered vent and the 2011 and 2015 eruptive sequences exists. Though further study and integration with other date sets is required, a positive feedback mechanism between accumulation of material blocking the vent, eruption, and subsequent accumulation of material to re-block the vent remains possible.
NASA Astrophysics Data System (ADS)
Otto, M.; Scherer, D.; Richters, J.
2011-05-01
High Altitude Wetlands of the Andes (HAWA) belong to a unique type of wetland within the semi-arid high Andean region. Knowledge about HAWA has been derived mainly from studies at single sites within different parts of the Andes at only small time scales. On the one hand, HAWA depend on water provided by glacier streams, snow melt or precipitation. On the other hand, they are suspected to influence hydrology through water retention and vegetation growth altering stream flow velocity. We derived HAWA land cover from satellite data at regional scale and analysed changes in connection with precipitation over the last decade. Perennial and temporal HAWA subtypes can be distinguished by seasonal changes of photosynthetically active vegetation (PAV) indicating the perennial or temporal availability of water during the year. HAWA have been delineated within a region of 12 800 km2 situated in the Northwest of Lake Titicaca. The multi-temporal classification method used Normalized Differenced Vegetation Index (NDVI) and Normalized Differenced Infrared Index (NDII) data derived from two Landsat ETM+ scenes at the end of austral winter (September 2000) and at the end of austral summer (May 2001). The mapping result indicates an unexpected high abundance of HAWA covering about 800 km2 of the study region (6 %). Annual HAWA mapping was computed using NDVI 16-day composites of Moderate Resolution Imaging Spectroradiometer (MODIS). Analyses on the relation between HAWA and precipitation was based on monthly precipitation data of the Tropical Rain Measurement Mission (TRMM 3B43) and MODIS Eight Day Maximum Snow Extent data (MOD10A2) from 2000 to 2010. We found HAWA subtype specific dependencies on precipitation conditions. A strong relation exists between perennial HAWA and snow fall (r2: 0.82) in dry austral winter months (June to August) and between temporal HAWA and precipitation (r2: 0.75) during austral summer (March to May). Annual changes in spatial extend of perennial HAWA indicate alterations in annual water supply generated from snow melt.
NASA Astrophysics Data System (ADS)
Otto, M.; Scherer, D.; Richters, J.
2011-01-01
High Altitude Wetlands of the Andes (HAWA) are unique types of wetlands within the semi-arid high Andean region. Knowledge about HAWA has been derived mainly from studies at single sites within different parts of the Andes at only small time scales. On the one hand HAWA depend on water provided by glacier streams, snow melt or precipitation. On the other hand, they are suspected to influence hydrology through water retention and vegetation growth altering stream flow velocity. We derived HAWA land cover from satellite data at regional scale and analysed changes in connection with precipitation over the last decade. Perennial and temporal HAWA subtypes can be distinguished by seasonal changes of photosynthetically active vegetation (PAV) indicating the perennial or temporal availability of water during the year. HAWA have been delineated within a region of 11 000 km2 situated in the Northwest of Lake Titicaca. The multi temporal classification method used Normalized Differenced Vegetation Index (NDVI) and Normalized Differenced Infrared Index (NDII) data derived from two Landsat ETM+ scenes at the end of austral winter (September 2000) and at the end of austral summer (May 2001). The mapping result indicates an unexpected high abundance of HAWA covering about 800 km2 of the study region (6%). Annual HAWA mapping was computed using NDVI 16-day composites of Moderate Resolution Imaging Spectroradiometer (MODIS). Analyses on the reletation between HAWA and precipitation was based on monthly precipitation data of the Tropical Rain Measurement Mission (TRMM 3B43) and MODIS Eight Day Maximum Snow Extent data (MOD10A2) from 2000 to 2010. We found HAWA subtype specific dependencies to precipitation conditions. Strong relation exists between perennial HAWA and snow fall (r2: 0.82) in dry austral winter months (June to August) and between temporal HAWA and precipitation (r2: 0.75) during austral summer (March to May). Annual spatial patterns of perennial HAWA indicated spatial alteration of water supply for PAV up to several hundred metres at a single HAWA site.
A review of hybrid implicit explicit finite difference time domain method
NASA Astrophysics Data System (ADS)
Chen, Juan
2018-06-01
The finite-difference time-domain (FDTD) method has been extensively used to simulate varieties of electromagnetic interaction problems. However, because of its Courant-Friedrich-Levy (CFL) condition, the maximum time step size of this method is limited by the minimum size of cell used in the computational domain. So the FDTD method is inefficient to simulate the electromagnetic problems which have very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method is developed. The HIE-FDTD method uses the hybrid implicit explicit difference in the direction with fine structures to avoid the confinement of the fine spatial mesh on the time step size. So this method has much higher computational efficiency than the FDTD method, and is extremely useful for the problems which have fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary are described, then some applications and important developments of this method are provided. The goal of this paper is to provide an historical overview and future prospects of the HIE-FDTD method.
Lindgren, Kristen P.; Ramirez, Jason J.; Olin, Cecilia C.; Neighbors, Clayton
2016-01-01
Drinking identity – how much individuals view themselves as drinkers– is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity’s utility and uniqueness as a predictor relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every three months over two academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit, versus, implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. PMID:27428756
Lindgren, Kristen P; Ramirez, Jason J; Olin, Cecilia C; Neighbors, Clayton
2016-09-01
Drinking identity-how much individuals view themselves as drinkers-is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity's utility and uniqueness as predictors relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every 3 months over 2 academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit versus implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Xia, Yidong
The objective this work is to develop a parallel, implicit reconstructed discontinuous Galerkin (RDG) method using Taylor basis for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. This third-order accurate RDG method is based on a hierarchical weighed essentially non- oscillatory reconstruction scheme, termed as HWENO(P1P 2) to indicate that a quadratic polynomial solution is obtained from the underlying linear polynomial DG solution via a hierarchical WENO reconstruction. The HWENO(P1P2) is designed not only to enhance the accuracy of the underlying DG(P1) method but also to ensure non-linear stability of the RDG method. In this reconstruction scheme, a quadratic polynomial (P2) solution is first reconstructed using a least-squares approach from the underlying linear (P1) discontinuous Galerkin solution. The final quadratic solution is then obtained using a Hermite WENO reconstruction, which is necessary to ensure the linear stability of the RDG method on 3D unstructured grids. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the non-linear stability of the RDG method. The parallelization in the RDG method is based on a message passing interface (MPI) programming paradigm, where the METIS library is used for the partitioning of a mesh into subdomain meshes of approximately the same size. Both multi-stage explicit Runge-Kutta and simple implicit backward Euler methods are implemented for time advancement in the RDG method. In the implicit method, three approaches: analytical differentiation, divided differencing (DD), and automatic differentiation (AD) are developed and implemented to obtain the resulting flux Jacobian matrices. The automatic differentiation is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as a computer program. By using an AD tool, the manpower can be significantly reduced for deriving the flux Jacobians, which can be quite complicated, tedious, and error-prone if done by hand or symbolic arithmetic software, depending on the complexity of the numerical flux scheme. In addition, the workload for code maintenance can also be largely reduced in case the underlying flux scheme is updated. The approximate system of linear equations arising from the Newton linearization is solved by the general minimum residual (GMRES) algorithm with lower-upper symmetric gauss-seidel (LUSGS) preconditioning. This GMRES+LU-SGS linear solver is the most robust and efficient for implicit time integration of the discretized Navier-Stokes equations when the AD-based flux Jacobians are provided other than the other two approaches. The developed HWENO(P1P2) method is used to compute a variety of well-documented compressible inviscid and viscous flow test cases on 3D hybrid grids, including some standard benchmark test cases such as the Sod shock tube, flow past a circular cylinder, and laminar flow past a at plate. The computed solutions are compared with either analytical solutions or experimental data, if available to assess the accuracy of the HWENO(P 1P2) method. Numerical results demonstrate that the HWENO(P 1P2) method is able to not only enhance the accuracy of the underlying HWENO(P1) method, but also ensure the linear and non-linear stability at the presence of strong discontinuities. An extensive study of grid convergence analysis on various types of elements: tetrahedron, prism, hexahedron, and hybrid prism/hexahedron, for a number of test cases indicates that the developed HWENO(P1P2) method is able to achieve the designed third-order accuracy of spatial convergence for smooth inviscid flows: one order higher than the underlying second-order DG(P1) method without significant increase in computing costs and storage requirements. The performance of the the developed GMRES+LU-SGS implicit method is compared with the multi-stage Runge-Kutta time stepping scheme for a number of test cases in terms of the timestep and CPU time. Numerical results indicate that the overall performance of the implicit method with AD-based Jacobians is order of magnitude better than the its explicit counterpart. Finally, a set of parallel scaling tests for both explicit and implicit methods is conducted on North Carolina State University's ARC cluster, demonstrating almost an ideal scalability of the RDG method. (Abstract shortened by UMI.)
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1977-01-01
A derivation of an explicit solution to the two point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off nominal conditions such as engine failure, abort, target switch, etc. The self starting, predictor/corrector solution involves no Newton-Rhapson iterations, numerical integration, or first guess values, and converges rapidly if physically possible. A form of this algorithm has been chosen for onboard guidance, as well as real time and preflight ground targeting and trajectory shaping for the NASA Space Shuttle Program.
Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests
NASA Astrophysics Data System (ADS)
Toth, G.; Keppens, R.; Botchev, M. A.
1998-04-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
Efficiency Study of Implicit and Explicit Time Integration Operators for Finite Element Applications
1977-07-01
cffiAciency, wherein Beta =0 provides anl exp~licit algorithm, wvhile Beta &0 provides anl implicit algorithm. Both algorithmns arc used in the same...Hlueneme CA: CO, Code C44A Port j IHuenemne, CA NAVSEC Cod,. 6034 (Library), Washington DC NAVSI*CGRUAC’I’ PWO, ’rorri Sta, OkinawaI NAVSIIIPRBFTAC Library
Fong, Kenneth N K; Howie, Dorothy R
2009-01-01
We investigated the effects of an explicit problem-solving skills training program using a metacomponential approach with 33 outpatients with moderate acquired brain injury, in the Hong Kong context. We compared an experimental training intervention with this explicit problem-solving approach, which taught metacomponential strategies, with a conventional cognitive training approach that did not have this explicit metacognitive training. We found significant advantages for the experimental group on the Metacomponential Interview measure in association with the explicit metacomponential training, but transfer to the real-life problem-solving measures was not evidenced in statistically significant findings. Small sample size, limited time of intervention, and some limitations with these tools may have been contributing factors to these results. The training program was demonstrated to have a significantly greater effect than the conventional training approach on metacomponential functioning and the component of problem representation. However, these benefits were not transferable to real-life situations.
Explicit analytical expression for the condition number of polynomials in power form
NASA Astrophysics Data System (ADS)
Rack, Heinz-Joachim
2017-07-01
In his influential papers [1-3] W. Gautschi has defined and reshaped the condition number κ∞ of polynomials Pn of degree ≤ n which are represented in power form on a zero-symmetric interval [-ω, ω]. Basically, κ∞ is expressed as the product of two operator norms: an explicit factor times an implicit one (the l∞-norm of the coefficient vector of the n-th Chebyshev polynomial of the first kind relative to [-ω, ω]). We provide a new proof, economize the second factor and express it by an explicit analytical formula.
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...
2015-06-01
In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
Dynamic symmetries and quantum nonadiabatic transitions
Li, Fuxiang; Sinitsyn, Nikolai A.
2016-05-30
Kramers degeneracy theorem is one of the basic results in quantum mechanics. According to it, the time-reversal symmetry makes each energy level of a half-integer spin system at least doubly degenerate, meaning the absence of transitions or scatterings between degenerate states if the Hamiltonian does not depend on time explicitly. Here we generalize this result to the case of explicitly time-dependent spin Hamiltonians. We prove that for a spin system with the total spin being a half integer, if its Hamiltonian and the evolution time interval are symmetric under a specifically defined time reversal operation, the scattering amplitude between anmore » arbitrary initial state and its time reversed counterpart is exactly zero. Lastly, we also discuss applications of this result to the multistate Landau–Zener (LZ) theory.« less
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS), developed as a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc., was described. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity using ADAIS of five times that for conventional manual methods of wind tunnel data analysis is routinely achieved. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.
Design and Implementation of an RTK-Based Vector Phase Locked Loop
Shafaati, Ahmad; Lin, Tao; Broumandan, Ali; Lachapelle, Gérard
2018-01-01
This paper introduces a novel double-differential vector phase-locked loop (DD-VPLL) for Global Navigation Satellite Systems (GNSS) that leverages carrier phase position solutions as well as base station measurements in the estimation of rover tracking loop parameters. The use of double differencing alleviates the need for estimating receiver clock dynamics and atmospheric delays; therefore, the navigation filter consists of the baseline dynamic states only. It is shown that using vector processing for carrier phase tracking leads to a significant enhancement in the receiver sensitivity compared to using the conventional scalar-based tracking loop (STL) and vector frequency locked loop (VFLL). The sensitivity improvement of 8 to 10 dB compared to STL, and 7 to 8 dB compared to VFLL, is obtained based on the test cases reported in the paper. Also, an increased probability of ambiguity resolution in the proposed method results in better availability for real time kinematic (RTK) applications. PMID:29533994
Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S
2018-06-21
The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
Forecasting conditional climate-change using a hybrid approach
Esfahani, Akbar Akbari; Friedel, Michael J.
2014-01-01
A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale providing self-similarity exists.
Facebook and Twitter vaccine sentiment in response to measles outbreaks.
Deiner, Michael S; Fathy, Cherie; Kim, Jessica; Niemeyer, Katherine; Ramirez, David; Ackley, Sarah F; Liu, Fengchen; Lietman, Thomas M; Porco, Travis C
2017-11-01
Social media posts regarding measles vaccination were classified as pro-vaccination, expressing vaccine hesitancy, uncertain, or irrelevant. Spearman correlations with Centers for Disease Control and Prevention-reported measles cases and differenced smoothed cumulative case counts over this period were reported (using time series bootstrap confidence intervals). A total of 58,078 Facebook posts and 82,993 tweets were identified from 4 January 2009 to 27 August 2016. Pro-vaccination posts were correlated with the US weekly reported cases (Facebook: Spearman correlation 0.22 (95% confidence interval: 0.09 to 0.34), Twitter: 0.21 (95% confidence interval: 0.06 to 0.34)). Vaccine-hesitant posts, however, were uncorrelated with measles cases in the United States (Facebook: 0.01 (95% confidence interval: -0.13 to 0.14), Twitter: 0.0011 (95% confidence interval: -0.12 to 0.12)). These findings may result from more consistent social media engagement by individuals expressing vaccine hesitancy, contrasted with media- or event-driven episodic interest on the part of individuals favoring current policy.
Cost Savings Effects of Olanzapine as Long Term Treatment for Bipolar Disorder
Zhang, Yuting
2007-01-01
Newer and more expensive drugs account for most of the recent rapid growth of spending on prescription drugs in the past nine years. But if more expensive drugs can reduce the use of other types of health care services, total health care costs might fall. In this paper, I investigate the “drug-offset” hypothesis for an atypical antipsychotic drug, olanzapine, compared to lithium, to treat bipolar disorder. I use a propensity-score method to match on observed variables. Then, using various identification strategies, namely interrupted time series, differencing strategies, and an instrument-variable approach, I find that olanzapine does not reduce spending on other types of medical care services, compared with lithium. Olanzapine users spend $330 per month more than lithium users on non-drug health care services after drug treatment and $470 more per month on total health care spending, contradicting the “drug-offset” hypothesis in this case. JEL classification: H51; I1; I18; C1; C2 PMID:18806303
Development of an upwind, finite-volume code with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Molvik, Gregory A.
1994-01-01
Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques, and a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical, and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparison with existing data.
Revisiting the relationship between managed care and hospital consolidation.
Town, Robert J; Wholey, Douglas; Feldman, Roger; Burns, Lawton R
2007-02-01
This paper analyzes whether the rise in managed care during the 1990s caused the increase in hospital concentration. We assemble data from the American Hospital Association, InterStudy and government censuses from 1990 to 2000. We employ linear regression analyses on long differenced data to estimate the impact of managed care penetration on hospital consolidation. Instrumental variable analogs of these regressions are also analyzed to control for potential endogeneity. All data are from secondary sources merged at the level of the Health Care Services Area. In 1990, the mean population-weighted hospital Herfindahl-Hirschman index (HHI) in a Health Services Area was .19. By 2000, the HHI had risen to .26. Most of this increase in hospital concentration is due to hospital consolidation. Over the same time frame HMO penetration increased three fold. However, our regression analysis strongly implies that the rise of managed care did not cause the hospital consolidation wave. This finding is robust to a number of different specifications.
Robust cubature Kalman filter for GNSS/INS with missing observations and colored measurement noise.
Cui, Bingbo; Chen, Xiyuan; Tang, Xihua; Huang, Haoqian; Liu, Xiao
2018-01-01
In order to improve the accuracy of GNSS/INS working in GNSS-denied environment, a robust cubature Kalman filter (RCKF) is developed by considering colored measurement noise and missing observations. First, an improved cubature Kalman filter (CKF) is derived by considering colored measurement noise, where the time-differencing approach is applied to yield new observations. Then, after analyzing the disadvantages of existing methods, the measurement augment in processing colored noise is translated into processing the uncertainties of CKF, and new sigma point update framework is utilized to account for the bounded model uncertainties. By reusing the diffused sigma points and approximation residual in the prediction stage of CKF, the RCKF is developed and its error performance is analyzed theoretically. Results of numerical experiment and field test reveal that RCKF is more robust than CKF and extended Kalman filter (EKF), and compared with EKF, the heading error of land vehicle is reduced by about 72.4%. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
An evaluation of four single element airfoil analytic methods
NASA Technical Reports Server (NTRS)
Freuler, R. J.; Gregorek, G. M.
1979-01-01
A comparison of four computer codes for the analysis of two-dimensional single element airfoil sections is presented for three classes of section geometries. Two of the computer codes utilize vortex singularities methods to obtain the potential flow solution. The other two codes solve the full inviscid potential flow equation using finite differencing techniques, allowing results to be obtained for transonic flow about an airfoil including weak shocks. Each program incorporates boundary layer routines for computing the boundary layer displacement thickness and boundary layer effects on aerodynamic coefficients. Computational results are given for a symmetrical section represented by an NACA 0012 profile, a conventional section illustrated by an NACA 65A413 profile, and a supercritical type section for general aviation applications typified by a NASA LS(1)-0413 section. The four codes are compared and contrasted in the areas of method of approach, range of applicability, agreement among each other and with experiment, individual advantages and disadvantages, computer run times and memory requirements, and operational idiosyncrasies.
GALEX Study of the UV Variability of Nearby Galaxies and a Deep Probe of the UV Luminosity Function
NASA Technical Reports Server (NTRS)
Schlegel, Eric
2005-01-01
The proposal has two aims - a deep exposure of NGC 300, about a factor of 10 deeper than the GALEX all-sky survey; and an examination of the UV variability. The data were received just prior to a series of proposal deadlines in early spring. A subsequent analysis delay includes a move from SAO to the University of Texas - San Antonio. Nevertheless, we have merged the data into a single deep exposure as well as undertaking a preliminary examination of the variability. No UV halo is present as detected in the GALEX observation of M83. No UV bursts are visible; however a more stringent limit will only be obtained through a differencing of the sub-images. Papers: we expect 2 papers at about 12 pages/paper to flow from this project. The first paper will report on the time variability while the second will focus on the deep UV image obtained from stacking the individual observations.
Investigation of the transient fuel preburner manifold and combustor
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Farmer, Richard C.
1989-01-01
A computational fluid dynamics (CFD) model with finite rate reactions, FDNS, was developed to study the start transient of the Space Shuttle Main Engine (SSME) fuel preburner (FPB). FDNS is a time accurate, pressure based CFD code. An upwind scheme was employed for spatial discretization. The upwind scheme was based on second and fourth order central differencing with adaptive artificial dissipation. A state of the art two-equation k-epsilon (T) turbulence model was employed for the turbulence calculation. A Pade' Rational Solution (PARASOL) chemistry algorithm was coupled with the point implicit procedure. FDNS was benchmarked with three well documented experiments: a confined swirling coaxial jet, a non-reactive ramjet dump combustor, and a reactive ramjet dump combustor. Excellent comparisons were obtained for the benchmark cases. The code was then used to study the start transient of an axisymmetric SSME fuel preburner. Predicted transient operation of the preburner agrees well with experiment. Furthermore, it was also found that an appreciable amount of unburned oxygen entered the turbine stages.
Revisiting the Relationship between Managed Care and Hospital Consolidation
Town, Robert J; Wholey, Douglas; Feldman, Roger; Burns, Lawton R
2007-01-01
Objective This paper analyzes whether the rise in managed care during the 1990s caused the increase in hospital concentration. Data Sources We assemble data from the American Hospital Association, InterStudy and government censuses from 1990 to 2000. Study Design We employ linear regression analyses on long differenced data to estimate the impact of managed care penetration on hospital consolidation. Instrumental variable analogs of these regressions are also analyzed to control for potential endogeneity. Data Collection All data are from secondary sources merged at the level of the Health Care Services Area. Principle Findings In 1990, the mean population-weighted hospital Herfindahl–Hirschman index (HHI) in a Health Services Area was .19. By 2000, the HHI had risen to .26. Most of this increase in hospital concentration is due to hospital consolidation. Over the same time frame HMO penetration increased three fold. However, our regression analysis strongly implies that the rise of managed care did not cause the hospital consolidation wave. This finding is robust to a number of different specifications. PMID:17355590
NASA Technical Reports Server (NTRS)
Bates, J. R.; Semazzi, F. H. M.; Higgins, R. W.; Barros, Saulo R. M.
1990-01-01
A vector semi-Lagrangian semi-implicit two-time-level finite-difference integration scheme for the shallow water equations on the sphere is presented. A C-grid is used for the spatial differencing. The trajectory-centered discretization of the momentum equation in vector form eliminates pole problems and, at comparable cost, gives greater accuracy than a previous semi-Lagrangian finite-difference scheme which used a rotated spherical coordinate system. In terms of the insensitivity of the results to increasing timestep, the new scheme is as successful as recent spectral semi-Lagrangian schemes. In addition, the use of a multigrid method for solving the elliptic equation for the geopotential allows efficient integration with an operation count which, at high resolution, is of lower order than in the case of the spectral models. The properties of the new scheme should allow finite-difference models to compete with spectral models more effectively than has previously been possible.
Space-based observations of megacity carbon dioxide
NASA Astrophysics Data System (ADS)
Kort, Eric A.; Frankenberg, Christian; Miller, Charles E.; Oda, Tom
2012-09-01
Urban areas now house more than half the world's population, and are estimated to contribute over 70% of global energy-related CO2 emissions. Many cities have emission reduction policies in place, but lack objective, observation-based methods for verifying their outcomes. Here we demonstrate the potential of satellite-borne instruments to provide accurate global monitoring of megacity CO2 emissions using GOSAT observations of column averaged CO2 dry air mole fraction (XCO2) collected over Los Angeles and Mumbai. By differencing observations over the megacity with those in nearby background, we observe robust, statistically significant XCO2 enhancements of 3.2 ± 1.5 ppm for Los Angeles and 2.4 ± 1.2 ppm for Mumbai, and find these enhancements can be exploited to track anthropogenic emission trends over time. We estimate that XCO2 changes as small as 0.7 ppm in Los Angeles, corresponding to a 22% change in emissions, could be detected with GOSAT at the 95% confidence level.
Gangl, Markus; Ziefle, Andrea
2015-09-01
The authors investigate the relationship between family policy and women's attachment to the labor market, focusing specifically on policy feedback on women's subjective work commitment. They utilize a quasi-experimental design to identify normative policy effects from changes in mothers' work commitment in conjunction with two policy changes that significantly extended the length of statutory parental leave entitlements in Germany. Using unique survey data from the German Socio-Economic Panel and difference-in-differences, triple-differenced, and instrumental variables estimators for panel data, they obtain consistent empirical evidence that increasing generosity of leave entitlements led to a decline in mothers' work commitment in both East and West Germany. They also probe potential mediating mechanisms and find strong evidence for role exposure and norm setting effects. Finally, they demonstrate that policy-induced shifts in mothers' preferences have contributed to. retarding women's labor force participation after childbirth in Germany, especially as far as mothers' return to full-time employment is concerned.
Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST.
Xu, X Q
2008-07-01
We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full- f ) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (psi,theta,micro) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.
Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2008-07-01
We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full- f ) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.
SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dormody, M.; Johnson, R. P.; Atwood, W. B.
2011-12-01
We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect ofmore » positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.« less
NASA Technical Reports Server (NTRS)
Castelli, Michael G.; Arnold, Steven M.
2000-01-01
Structural materials for the design of advanced aeropropulsion components are usually subject to loading under elevated temperatures, where a material's viscosity (resistance to flow) is greatly reduced in comparison to its viscosity under low-temperature conditions. As a result, the propensity for the material to exhibit time-dependent deformation is significantly enhanced, even when loading is limited to a quasi-linear stress-strain regime as an effort to avoid permanent (irreversible) nonlinear deformation. An understanding and assessment of such time-dependent effects in the context of combined reversible and irreversible deformation is critical to the development of constitutive models that can accurately predict the general hereditary behavior of material deformation. To this end, researchers at the NASA Glenn Research Center at Lewis Field developed a unique experimental technique that identifies the existence of and explicitly determines a threshold stress k, below which the time-dependent material deformation is wholly reversible, and above which irreversible deformation is incurred. This technique is unique in the sense that it allows, for the first time, an objective, explicit, experimental measurement of k. The underlying concept for the experiment is based on the assumption that the material s time-dependent reversible response is invariable, even in the presence of irreversible deformation.
Physician-assisted deaths under the euthanasia law in Belgium: a population-based survey.
Chambaere, Kenneth; Bilsen, Johan; Cohen, Joachim; Onwuteaka-Philipsen, Bregje D; Mortier, Freddy; Deliens, Luc
2010-06-15
Legalization of euthanasia and physician-assisted suicide has been heavily debated in many countries. To help inform this debate, we describe the practices of euthanasia and assisted suicide, and the use of life-ending drugs without an explicit request from the patient, in Flanders, Belgium, where euthanasia is legal. We mailed a questionnaire regarding the use of life-ending drugs with or without explicit patient request to physicians who certified a representative sample (n = 6927) of death certificates of patients who died in Flanders between June and November 2007. The response rate was 58.4%. Overall, 208 deaths involving the use of life-ending drugs were reported: 142 (weighted prevalence 2.0%) were with an explicit patient request (euthanasia or assisted suicide) and 66 (weighted prevalence 1.8%) were without an explicit request. Euthanasia and assisted suicide mostly involved patients less than 80 years of age, those with cancer and those dying at home. Use of life-ending drugs without an explicit request mostly involved patients 80 years of older, those with a disease other than cancer and those in hospital. Of the deaths without an explicit request, the decision was not discussed with the patient in 77.9% of cases. Compared with assisted deaths with the patient's explicit request, those without an explicit request were more likely to have a shorter length of treatment of the terminal illness, to have cure as a goal of treatment in the last week, to have a shorter estimated time by which life was shortened and to involve the administration of opioids. Physician-assisted deaths with an explicit patient request (euthanasia and assisted suicide) and without an explicit request occurred in different patient groups and under different circumstances. Cases without an explicit request often involved patients whose diseases had unpredictable end-of-life trajectories. Although opioids were used in most of these cases, misconceptions seem to persist about their actual life-shortening effects.
Nonlinear time series analysis of electrocardiograms
NASA Astrophysics Data System (ADS)
Bezerianos, A.; Bountis, T.; Papaioannou, G.; Polydoropoulos, P.
1995-03-01
In recent years there has been an increasing number of papers in the literature, applying the methods and techniques of Nonlinear Dynamics to the time series of electrical activity in normal electrocardiograms (ECGs) of various human subjects. Most of these studies are based primarily on correlation dimension estimates, and conclude that the dynamics of the ECG signal is deterministic and occurs on a chaotic attractor, whose dimension can distinguish between healthy and severely malfunctioning cases. In this paper, we first demonstrate that correlation dimension calculations must be used with care, as they do not always yield reliable estimates of the attractor's ``dimension.'' We then carry out a number of additional tests (time differencing, smoothing, principal component analysis, surrogate data analysis, etc.) on the ECGs of three ``normal'' subjects and three ``heavy smokers'' at rest and after mild exercising, whose cardiac rhythms look very similar. Our main conclusion is that no major dynamical differences are evident in these signals. A preliminary estimate of three to four basic variables governing the dynamics (based on correlation dimension calculations) is updated to five to six, when temporal correlations between points are removed. Finally, in almost all cases, the transition between resting and mild exercising seems to imply a small increase in the complexity of cardiac dynamics.
Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1985-01-01
An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparison of the results obtained with experimental data on three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.
Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1985-01-01
An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparison of the results obtained with experimental data on three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flow.
A comparative analysis of massed vs. distributed practice on basic math fact fluency growth rates.
Schutte, Greg M; Duhon, Gary J; Solomon, Benjamin G; Poncy, Brian C; Moore, Kathryn; Story, Bailey
2015-04-01
To best remediate academic deficiencies, educators need to not only identify empirically validated interventions but also be able to apply instructional modifications that result in more efficient student learning. The current study compared the effect of massed and distributed practice with an explicit timing intervention to evaluate the extent to which these modifications lead to increased math fact fluency on basic addition problems. Forty-eight third-grade students were placed into one of three groups with each of the groups completing four 1-min math explicit timing procedures each day across 19 days. Group one completed all four 1-min timings consecutively; group two completed two back-to-back 1-min timings in the morning and two back-to-back 1-min timings in the afternoon, and group three completed one, 1-min independent timing four times distributed across the day. Growth curve modeling was used to examine the progress throughout the course of the study. Results suggested that students in the distributed practice conditions, both four times per day and two times per day, showed significantly higher fluency growth rates than those practicing only once per day in a massed format. These results indicate that combining distributed practice with explicit timing procedures is a useful modification that enhances student learning without the addition of extra instructional time when targeting math fact fluency. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Krieger, Nancy; Waterman, Pamela D.; Kosheleva, Anna; Chen, Jarvis T.; Carney, Dana R.; Smith, Kevin W.; Bennett, Gary G.; Williams, David R.; Freeman, Elmer; Russell, Beverley; Thornhill, Gisele; Mikolowsky, Kristin; Rifkin, Rachel; Samuel, Latrice
2011-01-01
Background To date, research on racial discrimination and health typically has employed explicit self-report measures, despite their potentially being affected by what people are able and willing to say. We accordingly employed an Implicit Association Test (IAT) for racial discrimination, first developed and used in two recent published studies, and measured associations of the explicit and implicit discrimination measures with each other, socioeconomic and psychosocial variables, and smoking. Methodology/Principal Findings Among the 504 black and 501 white US-born participants, age 35–64, randomly recruited in 2008–2010 from 4 community health centers in Boston, MA, black participants were over 1.5 times more likely (p<0.05) to be worse off economically (e.g., for poverty and low education) and have higher social desirability scores (43.8 vs. 28.2); their explicit discrimination exposure was also 2.5 to 3.7 times higher (p<0.05) depending on the measure used, with over 60% reporting exposure in 3 or more domains and within the last year. Higher IAT scores for target vs. perpetrator of discrimination occurred for the black versus white participants: for “black person vs. white person”: 0.26 vs. 0.13; and for “me vs. them”: 0.24 vs. 0.19. In both groups, only low non-significant correlations existed between the implicit and explicit discrimination measures; social desirability was significantly associated with the explicit but not implicit measures. Although neither the explicit nor implicit discrimination measures were associated with odds of being a current smoker, the excess risk for black participants (controlling for age and gender) rose in models that also controlled for the racial discrimination and psychosocial variables; additional control for socioeconomic position sharply reduced and rendered the association null. Conclusions Implicit and explicit measures of racial discrimination are not equivalent and both warrant use in research on racial discrimination and health, along with data on socioeconomic position and social desirability. PMID:22125618
NASA Astrophysics Data System (ADS)
Speck, Jared
2013-07-01
In this article, we study the 1 + 3-dimensional relativistic Euler equations on a pre-specified conformally flat expanding spacetime background with spatial slices that are diffeomorphic to {R}^3. We assume that the fluid verifies the equation of state {p = c2s ρ,} where {0 ≤ cs ≤ √{1/3}} is the speed of sound. We also assume that the reciprocal of the scale factor associated with the expanding spacetime metric verifies a c s -dependent time-integrability condition. Under these assumptions, we use the vector field energy method to prove that an explicit family of physically motivated, spatially homogeneous, and spatially isotropic fluid solutions are globally future-stable under small perturbations of their initial conditions. The explicit solutions corresponding to each scale factor are analogs of the well-known spatially flat Friedmann-Lemaître-Robertson-Walker family. Our nonlinear analysis, which exploits dissipative terms generated by the expansion, shows that the perturbed solutions exist for all future times and remain close to the explicit solutions. This work is an extension of previous results, which showed that an analogous stability result holds when the spacetime is exponentially expanding. In the case of the radiation equation of state p = (1/3)ρ, we also show that if the time-integrability condition for the reciprocal of the scale factor fails to hold, then the explicit fluid solutions are unstable. More precisely, we show the existence of an open family of initial data such that (i) it contains arbitrarily small smooth perturbations of the explicit solutions' data and (ii) the corresponding perturbed solutions necessarily form shocks in finite time. The shock formation proof is based on the conformal invariance of the relativistic Euler equations when {c2s = 1/3,} which allows for a reduction to a well-known result of Christodoulou.
NASA Astrophysics Data System (ADS)
Tsai, Meng-Jung; Hsu, Chung-Yuan; Tsai, Chin-Chung
2012-04-01
Due to a growing trend of exploring scientific knowledge on the Web, a number of studies have been conducted to highlight examination of students' online searching strategies. The investigation of online searching generally employs methods including a survey, interview, screen-capturing, or transactional logs. The present study firstly intended to utilize a survey, the Online Information Searching Strategies Inventory (OISSI), to examine users' searching strategies in terms of control, orientation, trial and error, problem solving, purposeful thinking, selecting main ideas, and evaluation, which is defined as implicit strategies. Second, this study conducted screen-capturing to investigate the students' searching behaviors regarding the number of keywords, the quantity and depth of Web page exploration, and time attributes, which is defined as explicit strategies. Ultimately, this study explored the role that these two types of strategies played in predicting the students' online science information searching outcomes. A total of 103 Grade 10 students were recruited from a high school in northern Taiwan. Through Pearson correlation and multiple regression analyses, the results showed that the students' explicit strategies, particularly the time attributes proposed in the present study, were more successful than their implicit strategies in predicting their outcomes of searching science information. The participants who spent more time on detailed reading (explicit strategies) and had better skills of evaluating Web information (implicit strategies) tended to have superior searching performance.
NASA Technical Reports Server (NTRS)
Randall, David A.; Fowler, Laura D.
1999-01-01
This report summarizes the design of a new version of the stratiform cloud parameterization called Eauliq; the new version is called Eauliq NG. The key features of Eauliq NG are: (1) a prognostic fractional area covered by stratiform cloudiness, following the approach developed by M. Tiedtke for use in the ECMWF model; (2) separate prognostic thermodynamic variables for the clear and cloudy portions of each grid cell; (3) separate vertical velocities for the clear and cloudy portions of each grid cell, allowing the model to represent some aspects of observed mesoscale circulations; (4) cumulus entrainment from both the clear and cloudy portions of a grid cell, and cumulus detrainment into the cloudy portion only; and (5) the effects of the cumulus-induced subsidence in the cloudy portion of a grid cell on the cloud water and ice there. In this paper we present the mathematical framework of Eauliq NG; a discussion of cumulus effects; a new parameterization of lateral mass exchanges between clear and cloudy regions; and a theory to determine the mesoscale mass circulation, based on the hypothesis that the stratiform clouds remain neutrally buoyant through time and that the mesoscale circulations are the mechanism which makes this possible. An appendix also discusses some time-differencing methods.
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, second- to twelfth-order (3 to 13 point stencils) and Dispersion Relation Preserving schemes from 7 to 13 point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13 point DRP spatial discretization scheme of Bogey and Bailly are used. The high resolution numerics used allows for the use of relatively sparse grids. Three levels of grid resolution are examined, 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arakawa, Akio; Konor, C.S.
Two types of vertical grids are used for atmospheric models: The Lorenz (L grid) and the Charney-Phillips grid (CP grid). In this paper, problems with the L grid are pointed out that are due to the existence of an extra degree of freedom in the vertical distribution of the temperature (and the potential temperature). Then a vertical differencing of the primitive equations based on the CP grid is presented, while most of the advantages of the L grid in a hybrid {sigma}-p vetical coordinate are maintained. The discrete hydrostatic equation is constructed in such a way that it is freemore » from the vertical computational mode in the thermal field. Also, the vertical advection of the potential temperature in the discrete thermodynamic equation is constructed in such a way that it reduces to the standard (and most straightforward) vertical differencing of the quasigeostrophic equations based on the CP grid. Simulations of standing oscillations superposed on a resting atmosphere are presented using two vertically discrete models, one based on the L grid and the other on the CP grid. The comparison of the simulations shows that with the L grid a stationary vertically zigzag pattern dominates in the thermal field, while with the CP grid no such pattern is evident. Simulations of the growth of an extrapolated cyclone in a cyclic channel on a {beta} plan are also presented using two different {sigma}-coordinate models, again one with the L grid and the other with the CP grid, starting from random disturbances. 17 refs., 8 figs.« less
Megathrust splay faults at the focus of the Prince William Sound asperity, Alaska
Liberty, Lee M.; Finn, Shaun P.; Haeussler, Peter J.; Pratt, Thomas L.; Peterson, Andrew
2013-01-01
High-resolution sparker and crustal-scale air gun seismic reflection data, coupled with repeat bathymetric surveys, document a region of repeated coseismic uplift on the portion of the Alaska subduction zone that ruptured in 1964. This area defines the western limit of Prince William Sound. Differencing of vintage and modern bathymetric surveys shows that the region of greatest uplift related to the 1964 Great Alaska earthquake was focused along a series of subparallel faults beneath Prince William Sound and the adjacent Gulf of Alaska shelf. Bathymetric differencing indicates that 12 m of coseismic uplift occurred along two faults that reached the seafloor as submarine terraces on the Cape Cleare bank southwest of Montague Island. Sparker seismic reflection data provide cumulative Holocene slip estimates as high as 9 mm/yr along a series of splay thrust faults within both the inner wedge and transition zone of the accretionary prism. Crustal seismic data show that these megathrust splay faults root separately into the subduction zone décollement. Splay fault divergence from this megathrust correlates with changes in midcrustal seismic velocity and magnetic susceptibility values, best explained by duplexing of the subducted Yakutat terrane rocks above Pacific plate rocks along the trailing edge of the Yakutat terrane. Although each splay fault is capable of independent motion, we conclude that the identified splay faults rupture in a similar pattern during successive megathrust earthquakes and that the region of greatest seismic coupling has remained consistent throughout the Holocene.
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system is estimated simultaneously. The calibrated system outputs along the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
Trajectory control sensor engineering model detailed test objective
NASA Technical Reports Server (NTRS)
Dekome, Kent; Barr, Joseph Martin
1991-01-01
The concept employed in an existing Trajectory Control Sensor (TCS) breadboard is being developed into an engineering model to be considered for flight on the Shuttle as a Detailed Test Objective (DTO). The sensor design addresses the needs of Shuttle/SSF docking/berthing by providing relative range and range rate to 1500 meters as well as the perceived needs of AR&C by relative attitude measurement over the last 100 meters. Range measurement is determined using a four-tone ranging technique. The Doppler shift on the highest frequency tone will be used to provide direct measurement of range rate. Bearing rate and attitude rates will be determined through back differencing of bearing and attitude, respectively. The target consists of an isosceles triangle configuration of three optical retroreflectors, roughly one meter and one-half meter in size. After target acquisition, the sensor continually updates the positions of the three retros at a rate of about one hertz. The engineering model is expected to weigh about 25 pounds, consume 25-30 watts, and have an envelope of about 1.25 cubic feet. The following concerns were addressed during the presentation: are there any concerns with differentiating attitude and bearing to get attitude and bearing rates? Since the docking scenario has low data bandwidth, back differencing is a sufficient approximation of a perfect differentiator for this application. Could range data be obtained if there were no retroreflectors on the target vehicle? Possibly, but only at close range. It would be dependent on target characteristics.
Megathrust splay faults at the focus of the Prince William Sound asperity, Alaska
NASA Astrophysics Data System (ADS)
Liberty, Lee M.; Finn, Shaun P.; Haeussler, Peter J.; Pratt, Thomas L.; Peterson, Andrew
2013-10-01
sparker and crustal-scale air gun seismic reflection data, coupled with repeat bathymetric surveys, document a region of repeated coseismic uplift on the portion of the Alaska subduction zone that ruptured in 1964. This area defines the western limit of Prince William Sound. Differencing of vintage and modern bathymetric surveys shows that the region of greatest uplift related to the 1964 Great Alaska earthquake was focused along a series of subparallel faults beneath Prince William Sound and the adjacent Gulf of Alaska shelf. Bathymetric differencing indicates that 12 m of coseismic uplift occurred along two faults that reached the seafloor as submarine terraces on the Cape Cleare bank southwest of Montague Island. Sparker seismic reflection data provide cumulative Holocene slip estimates as high as 9 mm/yr along a series of splay thrust faults within both the inner wedge and transition zone of the accretionary prism. Crustal seismic data show that these megathrust splay faults root separately into the subduction zone décollement. Splay fault divergence from this megathrust correlates with changes in midcrustal seismic velocity and magnetic susceptibility values, best explained by duplexing of the subducted Yakutat terrane rocks above Pacific plate rocks along the trailing edge of the Yakutat terrane. Although each splay fault is capable of independent motion, we conclude that the identified splay faults rupture in a similar pattern during successive megathrust earthquakes and that the region of greatest seismic coupling has remained consistent throughout the Holocene.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Chacon, Luis; Knoll, Dana Alan
2015-07-31
A multi-rate PIC formulation was developed that employs large timesteps for slow field evolution, and small (adaptive) timesteps for particle orbit integrations. Implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω peΔt >>1, and Δx >> λ D), and requires many fewer dofs (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large scale simulations. The paper is organized as follows: Vlasov-Maxwell Particle-in-cell (PIC) methods for plasmas; Explicit, semi-implicit, and implicit time integrations; Implicit PIC formulation (Jacobian-Free Newton-Krylovmore » (JFNK) with nonlinear elimination allows different treatments of disparate scales, discrete conservation properties (energy, charge, canonical momentum, etc.)); Some numerical examples; and Summary.« less
ERIC Educational Resources Information Center
Quixal, Martí; Meurers, Detmar
2016-01-01
The paper tackles a central question in the field of Intelligent Computer-Assisted Language Learning (ICALL): How can language learning tasks be conceptualized and made explicit in a way that supports the pedagogical goals of current Foreign Language Teaching and Learning and at the same time provides an explicit characterization of the Natural…
ERIC Educational Resources Information Center
Doornwaard, Suzan M.; Bickham, David S.; Rich, Michael; ter Bogt, Tom F. M.; van den Eijnden, Regina J. J. M.
2015-01-01
Although research has repeatedly demonstrated that adolescents' use of sexually explicit Internet material (SEIM) is related to their endorsement of permissive sexual attitudes and their experience with sexual behavior, it is not clear how linkages between these constructs unfold over time. This study combined 2 types of longitudinal modeling,…
a Landsat Time-Series Stacks Model for Detection of Cropland Change
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, J.; Zhang, J.
2017-09-01
Global, timely, accurate and cost-effective cropland monitoring with a fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gases emissions, food safety, and human health. Time-series remote sensing imagery have been shown particularly potential to describe land cover dynamics. The traditional change detection techniques are often not capable of detecting land cover changes within time series that are severely influenced by seasonal difference, which are more likely to generate pseuso changes. Here,we introduced and tested LTSM ( Landsat time-series stacks model), an improved Continuous Change Detection and Classification (CCDC) proposed previously approach to extract spectral trajectories of land surface change using a dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo changes caused by phenology driven by seasonal patterns. The main idea of the method is that using all available Landsat 8 images within a year, LTSM consisting of two term harmonic function are estimated iteratively for each pixel in each spectral band .LTSM can defines change area by differencing the predicted and observed Landsat images. The LTSM approach was compared with change vector analysis (CVA) method. The results indicated that the LTSM method correctly detected the "true change" without overestimating the "false" one, while CVA pointed out "true change" pixels with a large number of "false changes". The detection of change areas achieved an overall accuracy of 92.37 %, with a kappa coefficient of 0.676.
Neill, Erica; Rossell, Susan Lee
2013-02-28
Semantic memory deficits in schizophrenia (SZ) are profound, yet there is no research comparing implicit and explicit semantic processing in the same participant sample. In the current study, both implicit and explicit priming are investigated using direct (LION-TIGER) and indirect (LION-STRIPES; where tiger is not displayed) stimuli comparing SZ to healthy controls. Based on a substantive review (Rossell and Stefanovic, 2007) and meta-analysis (Pomarol-Clotet et al., 2008), it was predicted that SZ would be associated with increased indirect priming implicitly. Further, it was predicted that SZ would be associated with abnormal indirect priming explicitly, replicating earlier work (Assaf et al., 2006). No specific hypotheses were made for implicit direct priming due to the heterogeneity of the literature. It was hypothesised that explicit direct priming would be intact based on the structured nature of this task. The pattern of results suggests (1) intact reaction time (RT) and error performance implicitly in the face of abnormal direct priming and (2) impaired RT and error performance explicitly. This pattern confirms general findings regarding implicit/explicit memory impairments in SZ whilst highlighting the unique pattern of performance specific to semantic priming. Finally, priming performance is discussed in relation to thought disorder and length of illness. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Alexander, David M; Trengove, Chris; van Leeuwen, Cees
2015-11-01
An assumption nearly all researchers in cognitive neuroscience tacitly adhere to is that of space-time separability. Historically, it forms the basis of Donders' difference method, and to date, it underwrites all difference imaging and trial-averaging of cortical activity, including the customary techniques for analyzing fMRI and EEG/MEG data. We describe the assumption and how it licenses common methods in cognitive neuroscience; in particular, we show how it plays out in signal differencing and averaging, and how it misleads us into seeing the brain as a set of static activity sources. In fact, rather than being static, the domains of cortical activity change from moment to moment: Recent research has suggested the importance of traveling waves of activation in the cortex. Traveling waves have been described at a range of different spatial scales in the cortex; they explain a large proportion of the variance in phase measurements of EEG, MEG and ECoG, and are important for understanding cortical function. Critically, traveling waves are not space-time separable. Their prominence suggests that the correct frame of reference for analyzing cortical activity is the dynamical trajectory of the system, rather than the time and space coordinates of measurements. We illustrate what the failure of space-time separability implies for cortical activation, and what consequences this should have for cognitive neuroscience.
Synchronization of spontaneous eyeblinks while viewing video stories
Nakano, Tamami; Yamamoto, Yoshiharu; Kitajo, Keiichi; Takahashi, Toshimitsu; Kitazawa, Shigeru
2009-01-01
Blinks are generally suppressed during a task that requires visual attention and tend to occur immediately before or after the task when the timing of its onset and offset are explicitly given. During the viewing of video stories, blinks are expected to occur at explicit breaks such as scene changes. However, given that the scene length is unpredictable, there should also be appropriate timing for blinking within a scene to prevent temporal loss of critical visual information. Here, we show that spontaneous blinks were highly synchronized between and within subjects when they viewed the same short video stories, but were not explicitly tied to the scene breaks. Synchronized blinks occurred during scenes that required less attention such as at the conclusion of an action, during the absence of the main character, during a long shot and during repeated presentations of a similar scene. In contrast, blink synchronization was not observed when subjects viewed a background video or when they listened to a story read aloud. The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events. PMID:19640888
NASA Astrophysics Data System (ADS)
Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.
2018-03-01
In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, have been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the formulation, in the discrete form of the problem, they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.
NASA Astrophysics Data System (ADS)
Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.
2004-11-01
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade-1 for 1960 97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade-1 (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
Green-Ampt (GA) model and its modifications are widely used for simulating infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degree of accuracy. In this study, performance of nine explicit approximations to the GA model is compared with the implicit GA model using the published data for broad range of soil classes and infiltration time. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative errors, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing the model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate followed by the PA and VA models for variety of soil classes and infiltration periods. The AE, SW, SE, and LI model also performed comparatively better. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selection of accurate and simple explicit approximate GA models for solving variety of hydrological problems.
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
Decision or no decision: how do patient-physician interactions end and what matters?
Tai-Seale, Ming; Bramson, Rachel; Bao, Xiaoming
2007-03-01
A clearly stated clinical decision can induce a cognitive closure in patients and is an important investment in the end of patient-physician communications. Little is known about how often explicit decisions are made in primary care visits. To use an innovative videotape analysis approach to assess physicians' propensity to state decisions explicitly, and to examine the factors influencing decision patterns. We coded topics discussed in 395 videotapes of primary care visits, noting the number of instances and the length of discussions on each topic, and how discussions ended. A regression analysis tested the relationship between explicit decisions and visit factors such as the nature of topics under discussion, instances of discussion, the amount of time the patient spoke, and competing demands from other topics. About 77% of topics ended with explicit decisions. Patients spoke for an average of 58 seconds total per topic. Patients spoke more during topics that ended with an explicit decision, (67 seconds), compared with 36 seconds otherwise. The number of instances of a topic was associated with higher odds of having an explicit decision (OR = 1.73, p < 0.01). Increases in the number of topics discussed in visits (OR = 0.95, p < .05), and topics on lifestyle and habits (OR = 0.60, p < .01) were associated with lower odds of explicit decisions. Although discussions often ended with explicit decisions, there were variations related to the content and dynamics of interactions. We recommend strengthening patients' voice and developing clinical tools, e.g., an "exit prescription," to improving decision making.
Nonnegative methods for bilinear discontinuous differencing of the S N equations on quadrilaterals
Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.
2016-12-22
Historically, matrix lumping and ad hoc flux fixups have been the only methods used to eliminate or suppress negative angular flux solutions associated with the unlumped bilinear discontinuous (UBLD) finite element spatial discretization of the two-dimensional S N equations. Though matrix lumping inhibits negative angular flux solutions of the S N equations, it does not guarantee strictly positive solutions. In this paper, we develop and define a strictly nonnegative, nonlinear, Petrov-Galerkin finite element method that fully preserves the bilinear discontinuous spatial moments of the transport equation. Additionally, we define two ad hoc fixups that maintain particle balance and explicitly setmore » negative nodes of the UBLD finite element solution to zero but use different auxiliary equations to fully define their respective solutions. We assess the ability to inhibit negative angular flux solutions and the accuracy of every spatial discretization that we consider using a glancing void test problem with a discontinuous solution known to stress numerical methods. Though significantly more computationally intense, the nonlinear Petrov-Galerkin scheme results in a strictly nonnegative solution and is a more accurate solution than all the other methods considered. One fixup, based on shape preserving, results in a strictly nonnegative final solution but has increased numerical diffusion relative to the Petrov-Galerkin scheme and is less accurate than the UBLD solution. The second fixup, which preserves as many spatial moments as possible while setting negative values of the unlumped solution to zero, is less accurate than the Petrov-Galerkin scheme but is more accurate than the other fixup. However, it fails to guarantee a strictly nonnegative final solution. As a result, the fully lumped bilinear discontinuous finite element solution is the least accurate method, with significantly more numerical diffusion than the Petrov-Galerkin scheme and both fixups.« less
Nonnegative methods for bilinear discontinuous differencing of the S N equations on quadrilaterals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.
Historically, matrix lumping and ad hoc flux fixups have been the only methods used to eliminate or suppress negative angular flux solutions associated with the unlumped bilinear discontinuous (UBLD) finite element spatial discretization of the two-dimensional S N equations. Though matrix lumping inhibits negative angular flux solutions of the S N equations, it does not guarantee strictly positive solutions. In this paper, we develop and define a strictly nonnegative, nonlinear, Petrov-Galerkin finite element method that fully preserves the bilinear discontinuous spatial moments of the transport equation. Additionally, we define two ad hoc fixups that maintain particle balance and explicitly setmore » negative nodes of the UBLD finite element solution to zero but use different auxiliary equations to fully define their respective solutions. We assess the ability to inhibit negative angular flux solutions and the accuracy of every spatial discretization that we consider using a glancing void test problem with a discontinuous solution known to stress numerical methods. Though significantly more computationally intense, the nonlinear Petrov-Galerkin scheme results in a strictly nonnegative solution and is a more accurate solution than all the other methods considered. One fixup, based on shape preserving, results in a strictly nonnegative final solution but has increased numerical diffusion relative to the Petrov-Galerkin scheme and is less accurate than the UBLD solution. The second fixup, which preserves as many spatial moments as possible while setting negative values of the unlumped solution to zero, is less accurate than the Petrov-Galerkin scheme but is more accurate than the other fixup. However, it fails to guarantee a strictly nonnegative final solution. As a result, the fully lumped bilinear discontinuous finite element solution is the least accurate method, with significantly more numerical diffusion than the Petrov-Galerkin scheme and both fixups.« less
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is always combined with Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
NASA Technical Reports Server (NTRS)
Gilbertsen, Noreen D.; Belytschko, Ted
1990-01-01
The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.
Explicit and implicit assessment of gender roles.
Fernández, Juan; Quiroga, M Ángeles; Escorial, Sergio; Privado, Jesús
2014-05-01
Gender roles have been assessed by explicit measures and, recently, by implicit measures. In the former case, the theoretical assumptions have been questioned by empirical results. To solve this contradiction, we carried out two concatenated studies based on a relatively well-founded theoretical and empirical approach. The first study was designed to obtain a sample of genderized activities of the domestic sphere by means of an explicit assessment. Forty-two raters (22 women and 20 men, balanced on age, sex, and level of education) took part as raters. In the second study, an implicit assessment of gender roles was carried out, focusing on the response time given to the sample activities obtained from the first study. A total of 164 adults (90 women and 74 men, mean age = 43), with experience in living with a partner and balanced on age, sex, and level of education, participated. Taken together, results show that explicit and implicit assessment converge. The current social reality shows that there is still no equity in some gender roles in the domestic sphere. These consistent results show considerable theoretical and empirical robustness, due to the double implicit and explicit assessment.
Efficient High-Order Accurate Methods using Unstructured Grids for Hydrodynamics and Acoustics
2007-08-31
Leer. On upstream differencing and godunov-type schemes for hyperbolic conservation laws. SIAM Review, 25(1):35-61, 1983. [46] F . Eleuterio Toro ...early stage [4-61. The basic idea can be surmised from simple approximation theory. If a continuous function f is to be approximated over a set of...a2f 4h4 a4ff(x+eh) = f (x)+-- + _ •-+• e +0 +... (1) where 0 < e < 1 for approximations inside the interval of width h. For a second-order approximation
CFD propels NASP propulsion progress
NASA Technical Reports Server (NTRS)
Povinelli, Louis A.; Dwoyer, Douglas L.; Green, Michael J.
1990-01-01
The most complex aerothermodynamics encountered in the National Aerospace Plane (NASP) propulsion system are associated with the fuel-mixing and combustion-reaction flows of its combustor section; adequate CFD tools must be developed to model shock-wave systems, turbulent hydrogen/air mixing, flow separation, and combustion. Improvements to existing CFD codes have involved extension from two dimensions to three, as well as the addition of finite-rate hydrogen-air chemistry. A novel CFD code for the treatment of reacting flows throughout the NASP, designated GASP, uses the most advanced upwind-differencing technology.