Sample records for exponential time differencing

  1. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrödinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth-order exponential time differencing Runge-Kutta time discretization and a fourth-order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are shown to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.

  2. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy gain of the exponential integrator methods is smaller, but they still match or outperform the best of the conventional methods tested.
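
    As a minimal illustration of the exponential-integrator idea compared in the record above (not the authors' time-dependent Kohn-Sham implementation), the sketch below propagates a small Schrödinger-type system dψ/dt = -iHψ with one matrix exponential per step; the two-level Hamiltonian, step size, and step count are assumed for demonstration.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def expm_step(psi, H, dt):
        """One exponential-integrator step for d(psi)/dt = -i H psi.
        For a time-independent H this single matrix exponential is exact;
        for a time-dependent H it corresponds to freezing H over the step."""
        return expm(-1j * H * dt) @ psi

    # Assumed two-level Hamiltonian and initial state (illustrative only)
    H = np.array([[0.0, 0.3],
                  [0.3, 1.0]])
    psi = np.array([1.0 + 0j, 0.0 + 0j])
    dt, nsteps = 0.05, 200
    for _ in range(nsteps):
        psi = expm_step(psi, H, dt)
    print("norm after propagation:", np.vdot(psi, psi).real)  # stays 1 (unitary steps)
    ```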

  3. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
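
    As a compact illustration of the exponential-time-differencing-plus-spectral structure described above (not the authors' high-order schemes, operator splitting, or constrained Willmore dynamics), the sketch below applies first-order ETD (exponential Euler) with a Fourier spectral discretization to a simple semilinear equation u_t = eps*u_xx + u - u^3 on a periodic interval; the grid size, eps, time step, and initial data are assumed.

    ```python
    import numpy as np

    # Fourier spectral setup for u_t = eps*u_xx + u - u^3 on a periodic interval (assumed problem)
    N, Lx, eps, dt = 128, 2.0 * np.pi, 0.01, 0.1
    x = Lx * np.arange(N) / N
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)
    Lhat = -eps * k**2                     # linear operator, diagonal in Fourier space

    E = np.exp(dt * Lhat)                  # exp(L*dt)
    phi1 = np.ones_like(Lhat)              # phi_1(z) = (e^z - 1)/z, with phi_1(0) = 1
    nz = Lhat != 0.0
    phi1[nz] = (E[nz] - 1.0) / (dt * Lhat[nz])

    u = 0.1 * np.cos(x)                    # assumed initial condition
    for _ in range(200):                   # first-order ETD (exponential Euler) steps
        Nhat = np.fft.fft(u - u**3)        # nonlinear term treated explicitly
        uhat = E * np.fft.fft(u) + dt * phi1 * Nhat
        u = np.fft.ifft(uhat).real
    print("u in [%.3f, %.3f]" % (u.min(), u.max()))   # relaxes toward the +/-1 wells
    ```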

  4. A Time Domain Analysis of Gust-Cascade Interaction Noise

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Hixon, R.; Sawyer, S. D.; Dyson, R. W.

    2003-01-01

    The gust response of a 2-D cascade is studied by solving the full nonlinear Euler equations employing high-order accurate spatial differencing and time-stepping techniques. The solutions exhibit the exponential decay of the two circumferential mode orders of the cutoff blade passing frequency (BPF) tone and the propagation of one circumferential mode order at 2BPF, as would be expected for the flow configuration considered. Two-frequency excitations indicate that the interaction between the frequencies and the self-interaction contribute to the amplitude of the propagating mode.

  5. EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES

    PubMed Central

    Börgers, Christoph; Nectow, Alexander R.

    2013-01-01

    Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
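
    To make the scheme discussed above concrete, here is a minimal first-order ETD (exponential Euler) update for a single Hodgkin-Huxley-style gating variable, dm/dt = (m_inf(V) - m)/tau_m(V), with the membrane potential frozen over the step. The steady-state curve, time constant, voltage, and step size are illustrative assumptions, not the paper's model; the point is that the ETD update stays bounded even when the time step underresolves the gate dynamics, whereas forward Euler does not.

    ```python
    import numpy as np

    # Illustrative (assumed) steady state and time constant for a gating variable;
    # real Hodgkin-Huxley rate functions would replace these.
    def m_inf(V):  return 1.0 / (1.0 + np.exp(-(V + 40.0) / 9.0))
    def tau_m(V):  return 0.5   # ms, assumed constant for simplicity

    def etd1_gate_step(m, V, dt):
        """First-order ETD (exponential Euler) for dm/dt = (m_inf(V) - m)/tau_m(V),
        treating V as frozen over the step.  Unconditionally stable in m."""
        e = np.exp(-dt / tau_m(V))
        return m_inf(V) + (m - m_inf(V)) * e

    # Compare with forward Euler at a step size that underresolves the gate dynamics
    dt, V = 1.0, -20.0           # dt in ms; V held fixed for the illustration
    m_etd = m_fe = 0.05
    for _ in range(10):
        m_etd = etd1_gate_step(m_etd, V, dt)
        m_fe  = m_fe + dt * (m_inf(V) - m_fe) / tau_m(V)   # forward Euler
    print(m_etd, m_fe)  # ETD stays in [0, 1]; forward Euler overshoots and oscillates
    ```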

  6. Time-marching transonic flutter solutions including angle-of-attack effects

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.; Bennett, R. M.; Whitlow, W., Jr.; Seidel, D. A.

    1982-01-01

    Transonic aeroelastic solutions based upon the transonic small perturbation potential equation were studied. Time-marching transient solutions of plunging and pitching airfoils were analyzed using a complex exponential modal identification technique, and seven alternative integration techniques for the structural equations were evaluated. The HYTRAN2 code was used to determine transonic flutter boundaries versus Mach number and angle-of-attack for NACA 64A010 and MBB A-3 airfoils. In the code, a monotone differencing method, which eliminates leading edge expansion shocks, is used to solve the potential equation. When the effect of static pitching moment upon the angle-of-attack is included, the MBB A-3 airfoil can have multiple flutter speeds at a given Mach number.

  7. A Fourier spectral-discontinuous Galerkin method for time-dependent 3-D Schrödinger-Poisson equations with discontinuous potentials

    NASA Astrophysics Data System (ADS)

    Lu, Tiao; Cai, Wei

    2008-10-01

    In this paper, we propose a high order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and superlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank-Nicolson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.

  8. Orbit determination performances using single- and double-differenced methods: SAC-C and KOMPSAT-2

    NASA Astrophysics Data System (ADS)

    Hwang, Yoola; Lee, Byoung-Sun; Kim, Haedong; Kim, Jaehoon

    2011-01-01

    In this paper, Global Positioning System-based (GPS) Orbit Determination (OD) for the KOrea-Multi-Purpose-SATellite (KOMPSAT)-2 using single- and double-differenced methods is studied. The KOMPSAT-2 orbit accuracy requirement is a 1-m positioning error, needed to generate 1-m panchromatic images. KOMPSAT-2 OD is computed using real on-board GPS data. However, the local time of the KOMPSAT-2 GPS receiver is not synchronized with the zero fractional seconds of the GPS time internally, and it continuously drifts according to the pseudorange epochs. In order to resolve this problem, an OD based on single-differenced GPS data from the KOMPSAT-2 uses the tagged time of the GPS receiver, and the accuracy of the OD result is assessed using the overlapping orbit solution between two adjacent days. The clock error of the GPS satellites in the KOMPSAT-2 single-differenced method is corrected using International GNSS Service (IGS) clock information at 5-min intervals. KOMPSAT-2 OD using both double- and single-differenced methods satisfies the requirement of 1-m accuracy in overlapping three-dimensional orbit solutions. The results of the SAC-C OD compared with JPL's POE (Precise Orbit Ephemeris) are also illustrated to demonstrate the implementation of the single- and double-differenced methods using a satellite that has independent orbit information available for validation.

  9. Stochastic model stationarization by eliminating the periodic term and its effect on time series prediction

    NASA Astrophysics Data System (ADS)

    Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan

    2017-04-01

    Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study. The methods are seasonal differencing, seasonal standardization and spectral analysis to eliminate the periodic effect on time series stationarity. First, six time series including 4 streamflow series and 2 water temperature series are stationarized. The stochastic term for these series obtained with ARIMA is subsequently modeled. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature, although its performance is quite similar to that of seasonal differencing, while spectral analysis performs the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than the monthly streamflow. The ratio of the average stochastic term to the amplitude of the periodic term for monthly streamflow and monthly water temperature was 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04, respectively. As a result, the periodic term is more dominant relative to the stochastic term in the monthly water temperature series than in the monthly streamflow series.
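
    A minimal sketch of the seasonal-differencing step evaluated above (not the full ARIMA modelling pipeline): lag-12 differencing removes the periodic term, and a forecast made on the differenced scale is mapped back by adding the observation one season earlier. The synthetic series and naive one-step forecast are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, s = 240, 12                                    # 20 years of monthly data, season length 12
    t = np.arange(n)
    # Synthetic series with a periodic term plus a stochastic term (assumed, for illustration)
    y = 10.0 + 5.0 * np.sin(2.0 * np.pi * t / s) + np.cumsum(rng.normal(0.0, 0.3, n))

    d = y[s:] - y[:-s]                                # seasonal (lag-12) differencing

    d_hat = d[-1]                                     # naive one-step forecast on the differenced scale
    y_hat = y[-s] + d_hat                             # invert the differencing: add the value one season earlier
    print("differenced length:", d.size, " forecast for next month:", round(y_hat, 2))
    ```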

  10. Efficient field-theoretic simulation of polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villet, Michael C.; Fredrickson, Glenn H.

    2014-12-14

    We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field-theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.

  11. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models

    PubMed Central

    Afifi, Akram; El-Rabbany, Ahmed

    2015-01-01

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada’s GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference. PMID:26102495

  12. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.

    PubMed

    Afifi, Akram; El-Rabbany, Ahmed

    2015-06-19

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.

  13. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation

    PubMed Central

    Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Revie, Crawford W.; Sanchez, Javier

    2013-01-01

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt–Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel. PMID:23576782
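
    A minimal sketch of two components evaluated above, lag-7 (weekly) differencing to remove day-of-week effects and an EWMA control chart on the pre-processed series; the simulated counts, injected outbreak, smoothing weight, and control limit are assumptions for illustration, not the study's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    days = 4 * 365
    dow_effect = np.tile([1.0, 1.2, 1.1, 1.0, 0.9, 0.4, 0.3], days // 7 + 1)[:days]
    counts = rng.poisson(20 * dow_effect)            # assumed baseline of daily test orders
    counts[-5:] += 25                                # injected outbreak signal

    diffed = counts[7:] - counts[:-7]                # weekly differencing removes day-of-week effects

    # EWMA control chart on the differenced series
    lam, L = 0.3, 3.0                                # smoothing weight and control limit (assumed)
    sigma = diffed[:-30].std()                       # baseline variability, excluding the outbreak window
    ewma, alarms = 0.0, []
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))   # asymptotic EWMA control limit
    for x in diffed:
        ewma = lam * x + (1.0 - lam) * ewma
        alarms.append(ewma > limit)
    print("alarm flags on the final days:", alarms[-5:])
    ```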

  14. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation.

    PubMed

    Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Revie, Crawford W; Sanchez, Javier

    2013-06-06

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt-Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel.

  15. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x-ray registration and differencing which results in more efficient compression. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  16. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  17. MACH2: A Two-Dimensional Magnetohydrodynamic Simulation Code for Complex Experimental Configurations.

    DTIC Science & Technology

    1987-09-01

    Eulerian or Lagrangian flow problems, use of real equations of state and transport properties from the Los Alamos National Laboratory SESAME package ... permissible problem geometries; time differencing; and spatial discretization, centering, and differencing of MACH2.

  18. A three dimensional multigrid multiblock multistage time stepping scheme for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.

    1991-01-01

    A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.

  19. Trend time-series modeling and forecasting with neural networks.

    PubMed

    Qi, Min; Zhang, G Peter

    2008-05-01

    Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.

  20. Numerical solution of the incompressible Navier-Stokes equations. Ph.D. Thesis - Stanford Univ., Mar. 1989

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1990-01-01

    The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.

  1. GNSS global real-time augmentation positioning: Real-time precise satellite clock estimation, prototype system construction and performance analysis

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang

    2018-01-01

    The large number of ambiguities in the un-differenced (UD) model lowers computational efficiency, which is not suitable for high-frequency (e.g., 1 Hz) real-time GNSS clock estimation. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize high-frequency updating of multi-GNSS real-time clocks, and a rigorous comparison and analysis under the same conditions is performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock-bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive accuracy measure of orbit and clock that affects the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis of the real-time augmentation message SISRE gives about 4-7 cm for GPS, about 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results show that, compared to GPS-only, GPS + BeiDou + Galileo RT-PPP can effectively shorten the convergence time by about 60%, improve the positioning accuracy by about 30% and obtain averaged RMS of 4 cm in the horizontal and 6 cm in the vertical; additionally, RT-SPP in the prototype system can achieve positioning accuracy of about 1 m averaged RMS in the horizontal and 1.5-2 m in the vertical, improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.

  2. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  3. Non-oscillatory central differencing for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1988-01-01

    Many of the recently developed high resolution schemes for hyperbolic conservation laws are based on upwind differencing. The building block for these schemes is the averaging of an appropriate Godunov solver; its time-consuming part involves the field-by-field decomposition which is required in order to identify the direction of the wind. Instead, the use of the more robust Lax-Friedrichs (LxF) solver is proposed. The main advantage is simplicity: no Riemann problems are solved and hence field-by-field decompositions are avoided. The main disadvantage is the excessive numerical viscosity typical to the LxF solver. This is compensated for by using high-resolution MUSCL-type interpolants. Numerical experiments show that the quality of results obtained by such convenient central differencing is comparable with those of the upwind schemes.
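
    For context, the sketch below shows a plain first-order (unstaggered) Lax-Friedrichs update for the inviscid Burgers equation, i.e. the central building block referred to above without the second-order MUSCL reconstruction of the full scheme; grid, time step, and initial data are assumed, and note that no Riemann problem is solved.

    ```python
    import numpy as np

    def lax_friedrichs_burgers(u, dx, dt, nsteps):
        """First-order Lax-Friedrichs scheme for u_t + (u^2/2)_x = 0 on a periodic grid.
        No Riemann solver or characteristic decomposition is needed."""
        for _ in range(nsteps):
            f = 0.5 * u**2
            up, um = np.roll(u, -1), np.roll(u, 1)       # periodic neighbours
            fp, fm = np.roll(f, -1), np.roll(f, 1)
            u = 0.5 * (up + um) - dt / (2.0 * dx) * (fp - fm)
        return u

    N = 400
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    u0 = np.sin(x) + 0.5
    u = lax_friedrichs_burgers(u0.copy(), dx, dt=0.5 * dx / 1.5, nsteps=300)  # CFL ~ 0.5
    print(u.min(), u.max())   # a shock forms; LxF smears it but stays non-oscillatory
    ```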

  4. Effects of riparian vegetation on topographic change during a large flood event, Rio Puerco, New Mexico, USA

    USGS Publications Warehouse

    Perignon, M. C.; Tucker, G.E.; Griffin, Eleanor R.; Friedman, Jonathan M.

    2013-01-01

    The spatial distribution of riparian vegetation can strongly influence the geomorphic evolution of dryland rivers during large floods. We present the results of an airborne lidar differencing study that quantifies the topographic change that occurred along a 12 km reach of the Lower Rio Puerco, New Mexico, during an extreme event in 2006. Extensive erosion of the channel banks took place immediately upstream of the study area, where tamarisk and sandbar willow had been removed. Within the densely vegetated study reach, we measure a net volumetric change of 578,050 ± ∼ 490,000 m3, with 88.3% of the total aggradation occurring along the floodplain and channel and 76.7% of the erosion focusing on the vertical valley walls. The sediment derived from the devegetated reach deposited within the first 3.6 km of the study area, with depth decaying exponentially with distance downstream. Elsewhere, floodplain sediments were primarily sourced from the erosion of valley walls. Superimposed on this pattern are the effects of vegetation and valley morphology on sediment transport. Sediment thickness is seen to be uniform among sandbar willows and highly variable within tamarisk groves. These reach-scale patterns of sedimentation observed in the lidar differencing likely reflect complex interactions of vegetation, flow, and sediment at the scale of patches to individual plants.

  5. High-precision coseismic displacement estimation with a single-frequency GPS receiver

    NASA Astrophysics Data System (ADS)

    Guo, Bofeng; Zhang, Xiaohong; Ren, Xiaodong; Li, Xingxing

    2015-07-01

    To improve the performance of Global Positioning System (GPS) in the earthquake/tsunami early warning and rapid response applications, minimizing the blind zone and increasing the stability and accuracy of both the rapid source and rupture inversion, the density of existing GPS networks must be increased in the areas at risk. For economic reasons, low-cost single-frequency receivers would be preferable to make the sparse dual-frequency GPS networks denser. When using single-frequency GPS receivers, the main problem that must be solved is the ionospheric delay, which is a critical factor when determining accurate coseismic displacements. In this study, we introduce a modified Satellite-specific Epoch-differenced Ionospheric Delay (MSEID) model to compensate for the effect of ionospheric error on single-frequency GPS receivers. In the MSEID model, the time-differenced ionospheric delays observed from a regional dual-frequency GPS network to a common satellite are fitted to a plane rather than part of a sphere, and the parameters of this plane are determined by using the coordinates of the stations. When the parameters are known, time-differenced ionospheric delays for a single-frequency GPS receiver could be derived from the observations of those dual-frequency receivers. Using these ionospheric delay corrections, coseismic displacements of a single-frequency GPS receiver can be accurately calculated based on time-differenced carrier-phase measurements in real time. The performance of the proposed approach is validated using 5 Hz GPS data collected during the 2012 Nicoya Peninsula Earthquake (Mw 7.6, 2012 September 5) in Costa Rica. This shows that the proposed approach improves the accuracy of the displacement of a single-frequency GPS station, and coseismic displacements with an accuracy of a few centimetres are achieved over a 10-min interval.
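
    A minimal sketch of the interpolation idea described above, offered as an illustration rather than the MSEID model itself: for one satellite, epoch-differenced ionospheric delays from the dual-frequency reference stations are fitted to a plane in the stations' horizontal coordinates, and the plane is then evaluated at the single-frequency rover. The station coordinates and delay values are made-up numbers.

    ```python
    import numpy as np

    def fit_delay_plane(easts, norths, ddelays):
        """Least-squares fit of epoch-differenced ionospheric delays to a plane
        d(E, N) = a0 + a1*E + a2*N over the reference stations (one satellite)."""
        A = np.column_stack([np.ones_like(easts), easts, norths])
        coeffs, *_ = np.linalg.lstsq(A, ddelays, rcond=None)
        return coeffs

    def interpolate_delay(coeffs, east, north):
        """Evaluate the fitted plane at a (single-frequency) rover position."""
        return coeffs[0] + coeffs[1] * east + coeffs[2] * north

    # Assumed toy data: reference-station positions (km) and epoch-differenced delays (m)
    easts  = np.array([0.0, 40.0, 15.0, 60.0])
    norths = np.array([0.0, 10.0, 50.0, 45.0])
    ddelay = np.array([0.012, 0.018, 0.009, 0.015])
    coeffs = fit_delay_plane(easts, norths, ddelay)
    print(interpolate_delay(coeffs, east=30.0, north=25.0))
    ```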

  6. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
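
    For context, the sketch below forms conventional between-receiver single differences and between-satellite double differences from raw observations, i.e. the undersampled reference-satellite scheme that the record contrasts with the fully redundant algorithm; the station and satellite labels and the observation values are made-up.

    ```python
    import numpy as np

    def single_difference(obs_a, obs_b):
        """Between-receiver single difference: cancels the common satellite clock error."""
        return obs_a - obs_b

    def double_difference(obs_a, obs_b, ref_sat, sats):
        """Between-receiver, between-satellite double difference relative to ref_sat:
        cancels both satellite and receiver clock errors (classic undersampled set)."""
        sd = {s: single_difference(obs_a[s], obs_b[s]) for s in sats}
        return {s: sd[s] - sd[ref_sat] for s in sats if s != ref_sat}

    # Assumed toy carrier-phase observations (metres) for two stations and four satellites
    obs_a = {"G01": 21000000.12, "G05": 22000000.43, "G07": 23000000.87, "G09": 24000000.10}
    obs_b = {"G01": 21000003.02, "G05": 22000002.91, "G07": 23000003.55, "G09": 24000002.66}
    dd = double_difference(obs_a, obs_b, ref_sat="G01", sats=obs_a.keys())
    print(dd)
    ```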

  7. Reducing numerical diffusion for incompressible flow calculations

    NASA Technical Reports Server (NTRS)

    Claus, R. W.; Neely, G. M.; Syed, S. A.

    1984-01-01

    A number of approaches for improving the accuracy of incompressible, steady-state flow calculations are examined. Two improved differencing schemes, Quadratic Upstream Interpolation for Convective Kinematics (QUICK) and Skew-Upwind Differencing (SUD), are applied to the convective terms in the Navier-Stokes equations and compared with results obtained using hybrid differencing. In a number of test calculations, it is illustrated that no single scheme exhibits superior performance for all flow situations. However, both SUD and QUICK are shown to be generally more accurate than hybrid differencing.
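
    As a toy illustration of why the choice of convective differencing matters (a 1-D advection model problem, not the hybrid, QUICK, or skew-upwind schemes of the study): first-order upwind differencing keeps a transported step profile bounded but smears it, while central differencing of the convective term is non-diffusive but produces growing oscillations when combined with explicit Euler time stepping. The grid, CFL number, and profile are assumed.

    ```python
    import numpy as np

    def advect_step(u, c, dx, dt, scheme):
        """One explicit Euler step of u_t + c*u_x = 0 (c > 0) with two choices of
        convective differencing on a periodic grid."""
        if scheme == "upwind":                         # first-order upwind: bounded but diffusive
            dudx = (u - np.roll(u, 1)) / dx
        else:                                          # second-order central: unstable with explicit Euler
            dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        return u - c * dt * dudx

    N, c = 200, 1.0
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    dx = x[1] - x[0]
    dt = 0.4 * dx / c                                  # CFL = 0.4
    u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)     # step profile
    u_up, u_ce = u0.copy(), u0.copy()
    for _ in range(100):
        u_up = advect_step(u_up, c, dx, dt, "upwind")
        u_ce = advect_step(u_ce, c, dx, dt, "central")
    print("max overshoot  upwind: %.3f   central: %.3f" % (u_up.max() - 1.0, u_ce.max() - 1.0))
    ```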

  8. Method and apparatus for rate integration supplement for attitude referencing with quaternion differencing

    NASA Technical Reports Server (NTRS)

    Rodden, John James (Inventor); Price, Xenophon (Inventor); Carrou, Stephane (Inventor); Stevens, Homer Darling (Inventor)

    2002-01-01

    A control system for providing attitude control in spacecraft. The control system comprises a primary attitude reference system, a secondary attitude reference system, and a hyper-complex number differencing system. The hyper-complex number differencing system is connectable to the primary attitude reference system and the secondary attitude reference system.

  9. Time-asymptotic solutions of the Navier-Stokes equation for free shear flows using an alternating-direction implicit method

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.

    1976-01-01

    An uncoupled, time-asymptotic alternating-direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.

  10. Performance of differenced range data types in Voyager navigation

    NASA Technical Reports Server (NTRS)

    Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.

    1982-01-01

    Voyager radio navigation made use of a differenced range data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter-to-Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.

  11. Performance of differenced range data types in Voyager navigation

    NASA Technical Reports Server (NTRS)

    Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.

    1982-01-01

    Voyager radio navigation made use of a differenced range data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter-to-Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.

  12. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wide band and narrow band (delta)VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  13. Prediction of fire growth on furniture using CFD

    NASA Astrophysics Data System (ADS)

    Pehrson, Richard David

    A fire growth calculation method has been developed that couples a computational fluid dynamics (CFD) model with bench scale cone calorimeter test data for predicting the rate of flame spread on compartment contents such as furniture. The commercial CFD code TASCflow has been applied to solve time averaged conservation equations using an algebraic multigrid solver with mass-weighted skewed upstream differencing for advection. Closure models include k-epsilon for turbulence, eddy breakup for combustion following a single-step irreversible reaction with an Arrhenius rate constant, finite difference radiation transfer, and conjugate heat transfer. Radiation properties are determined from concentrations of soot, CO2 and H2O using the narrow band model of Grosshandler and the exponential wide band curve fit model of Modak. The growth in pyrolyzing area is predicted by treating flame spread as a series of piloted ignitions based on coupled gas-fluid boundary conditions. The mass loss rate from a given surface element follows the bench scale test data for input to the combustion prediction. The fire growth model has been tested against foam-fabric mattresses and chairs burned in the furniture calorimeter. In general, agreement between model and experiment for peak heat release rate (HRR), time to peak HRR, and total energy lost is within +/-20%. Used as a proxy for the flame spread velocity, the slope of the HRR curve predicted by the model agreed with experiment within +/-20% for all but one case.

  14. Second-order variational equations for N-body simulations

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2016-07-01

    First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.

  15. Effective image differencing with convolutional neural networks for real-time transient hunting

    NASA Astrophysics Data System (ADS)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.

  16. Flux splitting algorithms for two-dimensional viscous flows with finite-rate chemistry

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun; Liou, Meng-Sing

    1989-01-01

    The Roe flux difference splitting method was extended to treat 2-D viscous flows with nonequilibrium chemistry. The derivations have avoided unnecessary assumptions or approximations. For spatial discretization, the second-order Roe upwind differencing is used for the convective terms and central differencing for the viscous terms. An upwind-based TVD scheme is applied to eliminate oscillations and obtain a sharp representation of discontinuities. A two-stage Runge-Kutta method is used to time integrate the discretized Navier-Stokes and species transport equations for the asymptotic steady solutions. The present method is then applied to two types of flows: the shock wave/boundary layer interaction problems and the jet in cross flows.

  17. Flux splitting algorithms for two-dimensional viscous flows with finite-rate chemistry

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun; Liou, Meng-Sing

    1989-01-01

    The Roe flux-difference splitting method has been extended to treat two-dimensional viscous flows with nonequilibrium chemistry. The derivations have avoided unnecessary assumptions or approximations. For spatial discretization, the second-order Roe upwind differencing is used for the convective terms and central differencing for the viscous terms. An upwind-based TVD scheme is applied to eliminate oscillations and obtain a sharp representation of discontinuities. A two-stage Runge-Kutta method is used to time integrate the discretized Navier-Stokes and species transport equations for the asymptotic steady solutions. The present method is then applied to two types of flows: the shock wave/boundary layer interaction problems and the jet in cross flows.

  18. The terminal area simulation system. Volume 1: Theoretical formulation

    NASA Technical Reports Server (NTRS)

    Proctor, F. H.

    1987-01-01

    A three-dimensional numerical cloud model was developed for the general purpose of studying convective phenomena. The model utilizes a time splitting integration procedure in the numerical solution of the compressible nonhydrostatic primitive equations. Turbulence closure is achieved by a conventional first-order diagnostic approximation. Open lateral boundaries are incorporated which minimize wave reflection and which do not induce domain-wide mass trends. Microphysical processes are governed by prognostic equations for potential temperature, water vapor, cloud droplets, ice crystals, rain, snow, and hail. Microphysical interactions are computed by numerous Orville-type parameterizations. A diagnostic surface boundary layer is parameterized assuming Monin-Obukhov similarity theory. The governing equation set is approximated on a staggered three-dimensional grid with quadratic-conservative central space differencing. Time differencing is approximated by the second-order Adams-Bashforth method. The vertical grid spacing may be either linear or stretched. The model domain may translate along with a convective cell, even at variable speeds.

  19. Gigahertz-gated InGaAs/InP single-photon detector with detection efficiency exceeding 55% at 1550 nm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comandar, L. C.; Fröhlich, B.

    We report on a gated single-photon detector based on InGaAs/InP avalanche photodiodes (APDs) with a single-photon detection efficiency exceeding 55% at 1550 nm. Our detector is gated at 1 GHz and employs the self-differencing technique for gate transient suppression. It can operate nearly dead time free, except for the one clock cycle dead time intrinsic to self-differencing, and we demonstrate a count rate of 500 Mcps. We present a careful analysis of the optimal driving conditions of the APD measured with a dead time free detector characterization setup. It is found that a shortened gate width of 360 ps together with an increased driving signal amplitude and operation at higher temperatures leads to improved performance of the detector. We achieve an afterpulse probability of 7% at 50% detection efficiency with dead time free measurement and a record efficiency for InGaAs/InP APDs of 55% at an afterpulse probability of only 10.2% with a moderate dead time of 10 ns.

  20. Interferometric observations of an artificial satellite.

    PubMed

    Preston, R A; Ergas, R; Hinteregger, H F; Knight, C A; Robertson, D S; Shapiro, I I; Whitney, A R; Rogers, A E; Clark, T A

    1972-10-27

    Very-long-baseline interferometric observations of radio signals from the TACSAT synchronous satellite, even though extending over only 7 hours, have enabled an excellent orbit to be deduced. Precision in differenced delay and delay-rate measurements reached 0.15 nanosecond (approximately 5 centimeters in equivalent differenced distance) and 0.05 picosecond per second (approximately 0.002 centimeter per second in equivalent differenced velocity), respectively. The results from this initial three-station experiment demonstrate the feasibility of using the method for accurate satellite tracking and for geodesy. Comparisons are made with other techniques.

  1. AN IMMERSED BOUNDARY METHOD FOR COMPLEX INCOMPRESSIBLE FLOWS

    EPA Science Inventory

    An immersed boundary method for time-dependent, three-dimensional, incompressible flows is presented in this paper. The incompressible Navier-Stokes equations are discretized using a low-diffusion flux splitting method for the inviscid fluxes and a second order central differenc...

  2. The study and realization of BDS un-differenced network-RTK based on raw observations

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Pengfei; Zhang, Rui; Lu, Cuixian; Liu, Jinhai; Lu, Xiaochun

    2017-06-01

    A BeiDou Navigation Satellite System (BDS) Un-Differenced (UD) Network Real Time Kinematic (URTK) positioning algorithm, which is based on raw observations, is developed in this study. Given an integer ambiguity datum, the UD integer ambiguity can be recovered from Double-Differenced (DD) integer ambiguities, thus the UD observation corrections can be calculated and interpolated for the rover station to achieve the fast positioning. As this URTK model uses raw observations instead of the ionospheric-free combinations, it is applicable for both dual- and single-frequency users to realize the URTK service. The algorithm was validated with the experimental BDS data collected at four regional stations from day of year 080 to 083 in 2016. The achieved results confirmed the high efficiency of the proposed URTK for providing the rover users a rapid and precise positioning service compared to the standard NRTK. In our test, the BDS URTK can provide a positioning service with cm level accuracy, i.e., 1 cm in the horizontal components, and 2-3 cm in the vertical component. Within the regional network, the mean convergence time for the users to fix the UD ambiguities is 2.7 s for the dual-frequency observations and of 6.3 s for the single-frequency observations after the DD ambiguity resolution. Furthermore, due to the feature of realizing URTK technology under the UD processing mode, it is possible to integrate the global Precise Point Positioning (PPP) and the local NRTK into a seamless positioning service.

  3. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
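
    A minimal sketch of the directed labelling rule described above (with illustrative thresholds and toy feature maps, not the paper's detector): image points with a strong feature response in the current frame but not in the previous one are labelled "new object", the reverse gives "vanished object", and the result is two directed change masks instead of the single undirected mask produced by plain image differencing.

    ```python
    import numpy as np

    def directed_change_masks(feat_prev, feat_curr, strong=0.6, weak=0.2):
        """Label image points from local feature-strength maps (values in [0, 1]):
        'new object'      -> strong feature now, no/weak feature before;
        'vanished object' -> strong feature before, no/weak feature now.
        The thresholds are illustrative assumptions."""
        new_mask      = (feat_curr >= strong) & (feat_prev <= weak)
        vanished_mask = (feat_prev >= strong) & (feat_curr <= weak)
        return new_mask, vanished_mask

    # Toy feature-strength maps (e.g., corner responses) for registered previous/current frames
    rng = np.random.default_rng(2)
    prev = rng.random((64, 64)) * 0.2
    curr = prev.copy()
    curr[30:34, 40:44] = 0.9          # a "recently parked vehicle" produces strong features
    new_mask, vanished_mask = directed_change_masks(prev, curr)
    print(new_mask.sum(), vanished_mask.sum())   # 16 "new object" pixels, 0 "vanished"
    ```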

  4. Pump-probe differencing technique for cavity-enhanced, noise-canceling saturation laser spectroscopy.

    PubMed

    de Vine, Glenn; McClelland, David E; Gray, Malcolm B; Close, John D

    2005-05-15

    We present an experimental technique that permits mechanical-noise-free, cavity-enhanced frequency measurements of an atomic transition and its hyperfine structure. We employ the 532-nm frequency-doubled output from a Nd:YAG laser and an iodine vapor cell. The cell is placed in a folded ring cavity (FRC) with counterpropagating pump and probe beams. The FRC is locked with the Pound-Drever-Hall technique. Mechanical noise is rejected by differencing the pump and probe signals. In addition, this differenced error signal provides a sensitive measure of differential nonlinearity within the FRC.

  5. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differential range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytic models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHugh, P.R.; Ramshaw, J.D.

    MAGMA is a FORTRAN computer code designed to simulate viscous flow in in situ vitrification melt pools. It models three-dimensional, incompressible, viscous flow and heat transfer. The momentum equation is coupled to the temperature field through the buoyancy force terms arising from the Boussinesq approximation. All fluid properties, except density, are assumed variable. Density is assumed constant except in the buoyancy force terms in the momentum equation. A simple melting model based on the enthalpy method allows the study of the melt front progression and latent heat effects. An indirect addressing scheme used in the numerical solution of the momentum equation avoids unnecessary calculations in cells devoid of liquid. Two-dimensional calculations can be performed using either rectangular or cylindrical coordinates, while three-dimensional calculations use rectangular coordinates. All derivatives are approximated by finite differences. The incompressible Navier-Stokes equations are solved using a new fully implicit iterative technique, while the energy equation is differenced explicitly in time. Spatial derivatives are written in conservative form using a uniform, rectangular, staggered mesh based on the marker and cell placement of variables. Convective terms are differenced using a weighted average of centered and donor cell differencing to ensure numerical stability. Complete descriptions of MAGMA governing equations, numerics, code structure, and code verification are provided. 14 refs.

  7. Proportionality between Doppler noise and integrated signal path electron density validated by differenced S-X range

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of observed data sample interval (Tau) where the time-scale of the observations is 15 Tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, L must be proportional to the closest approach distance a and must be a strong function of the observed data sample interval, and hence of the time-scale of the observations.

  8. Continuous non-invasive blood glucose monitoring by spectral image differencing method

    NASA Astrophysics Data System (ADS)

    Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing

    2018-01-01

    Currently, implantable enzyme electrode sensors are the main method for continuous blood glucose monitoring. However, electrochemical reactions and the significant drift caused by bioelectricity in the body reduce the accuracy of the glucose measurements, so enzyme-based glucose sensors need to be calibrated several times each day by finger-prick blood corrections, which increases the patient's pain. In this paper, we propose a method for continuous non-invasive blood glucose monitoring by spectral image differencing in the near infrared band. The method uses a high-precision CCD detector and switches filters within a very short period of time to obtain the spectral images. The spectral image differences are then extracted with a morphological method, and the dynamic change of blood glucose is reflected in the image difference data. Experiments show that this method can be used to monitor blood glucose dynamically to a certain extent.

  9. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Qiang, E-mail: jyanghkbu@gmail.com; Yang, Jiang, E-mail: qd2125@columbia.edu

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.

  10. Basic research for the Earth dynamics program

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The technique of range differencing with Lageos ranges to obtain more accurate estimates of baseline lengths and polar motion variation was studied. Differencing quasi-simultaneous range observations eliminates a great deal of orbital bias. Progress is reported on the definition and maintenance of a conventional terrestrial reference system.

  11. RESULTS FROM KINEROS STREAM CHANNEL ELEMENTS MODEL OUTPUT THROUGH AGWA DIFFERENCING 1973 AND 1997 NALC LANDCOVER DATA

    EPA Science Inventory

    Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.

  12. Change analysis in the United Arab Emirates: An investigation of techniques

    USGS Publications Warehouse

    Sohl, Terry L.

    1999-01-01

    Much of the landscape of the United Arab Emirates has been transformed over the past 15 years by massive afforestation, beautification, and agricultural programs. The "greening" of the United Arab Emirates has had environmental consequences, however, including degraded groundwater quality and possible damage to natural regional ecosystems. Personnel from the Ground-Water Research project, a joint effort between the National Drilling Company of the Abu Dhabi Emirate and the U.S. Geological Survey, were interested in studying landscape change in the Abu Dhabi Emirate using Landsat thematic mapper (TM) data. The EROS Data Center in Sioux Falls, South Dakota was asked to investigate land-cover change techniques that (1) provided locational, quantitative, and qualitative information on land-cover change within the Abu Dhabi Emirate; and (2) could be easily implemented by project personnel who were relatively inexperienced in remote sensing. A number of products were created with 1987 and 1996 Landsat TM data using change-detection techniques, including univariate image differencing, an "enhanced" image differencing, vegetation index differencing, post-classification differencing, and change-vector analysis. The different techniques provided products that varied in levels of adequacy according to the specific application and the ease of implementation and interpretation. Specific quantitative values of change were most accurately and easily provided by the enhanced image-differencing technique, while the change-vector analysis excelled at providing rich qualitative detail about the nature of a change.
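
    Two of the techniques named above, vegetation index differencing and change-vector analysis, reduce to a few array operations. The following is a minimal sketch under an assumed band ordering (band 3 = NIR, band 2 = red) and synthetic reflectance data; it is not the enhanced image-differencing product described in the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectances."""
    return (nir - red) / (nir + red + eps)

def ndvi_difference(nir1, red1, nir2, red2):
    """Vegetation index differencing: positive values indicate greening."""
    return ndvi(nir2, red2) - ndvi(nir1, red1)

def change_vector_magnitude(bands1, bands2):
    """Change-vector analysis: Euclidean length of the per-pixel spectral
    change vector; bands* have shape (n_bands, rows, cols)."""
    return np.sqrt(((bands2 - bands1) ** 2).sum(axis=0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t1 = rng.uniform(0.05, 0.40, (4, 64, 64))   # four synthetic bands, date 1
    t2 = t1 + rng.normal(0.0, 0.01, t1.shape)   # date 2, mostly unchanged
    t2[:, 20:30, 20:30] += 0.2                  # simulated afforestation patch
    d_ndvi = ndvi_difference(t1[3], t1[2], t2[3], t2[2])   # band 3=NIR, 2=red (assumed)
    cvm = change_vector_magnitude(t1, t2)
    print("max NDVI change:", round(float(d_ndvi.max()), 3),
          "max change-vector magnitude:", round(float(cvm.max()), 3))
```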

  13. Using classification and NDVI differencing methods for monitoring sparse vegetation coverage: a case study of saltcedar in Nevada, USA.

    USDA-ARS?s Scientific Manuscript database

    A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested, In the classification strategy, a p...

  14. CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1970-01-01

    The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.

  15. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
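
    A much-simplified sketch of the core idea, fitting with many log-spaced fixed time constants so that no component is missed, is given below: the mixture weights (areas) are estimated by EM and components with negligible area are discarded. The published method uses maximum-likelihood refitting and merges closely spaced components, which this sketch omits; the component count, area floor, and synthetic data are assumptions.

```python
import numpy as np

def fit_exponential_sum(dwell_times, n_init=50, area_floor=1e-3,
                        tol=1e-8, max_iter=5000):
    """Seed many components with log-spaced time constants, estimate only
    their areas (mixture weights) by EM with the taus held fixed, then
    drop components with negligible area.  The published method also
    refits and merges closely spaced components, which is omitted here."""
    t = np.asarray(dwell_times, dtype=float)
    taus = np.logspace(np.log10(t.min() / 2.0), np.log10(t.max() * 2.0), n_init)
    w = np.full(n_init, 1.0 / n_init)
    for _ in range(max_iter):
        dens = (w / taus) * np.exp(-t[:, None] / taus)   # (n_samples, n_init)
        resp = dens / dens.sum(axis=1, keepdims=True)    # E-step responsibilities
        w_new = resp.mean(axis=0)                        # M-step for the areas
        if np.abs(w_new - w).max() < tol:
            w = w_new
            break
        w = w_new
    keep = w > area_floor
    return taus[keep], w[keep] / w[keep].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    data = np.concatenate([rng.exponential(1.0, 7000),    # fast component
                           rng.exponential(20.0, 3000)])  # slow component
    for tau, area in zip(*fit_exponential_sum(data)):
        print(f"tau = {tau:8.2f}   area = {area:.3f}")
```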

  16. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.

  17. An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1991-01-01

    An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms which require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms. Computational times required for the upwind PNS code are approximately equal to those of an explicit MacCormack PNS code and existing implicit PNS solvers.

  18. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.

  19. Application of non-coherent Doppler data types for deep space navigation

    NASA Technical Reports Server (NTRS)

    Bhaskaran, Shyam

    1995-01-01

    Recent improvements in computational capability and Deep Space Network technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis is performed which analyzes the accuracy obtainable by combinations of one-way Doppler data and compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data is capable of determining the angular position of the spacecraft to fairly high accuracy, but has relatively poor sensitivity to the range. When combined with single station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard data two-way types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.

  20. The application of noncoherent Doppler data types for Deep Space Navigation

    NASA Technical Reports Server (NTRS)

    Bhaskaran, S.

    1995-01-01

    Recent improvements in computational capability and DSN technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis, which analyzes the accuracy obtainable by combinations of one-way Doppler data, is performed and compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data are capable of determining the angular position of the spacecraft to fairly high accuracy, but have relatively poor sensitivity to the range. When combined with single-station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard two-way data types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.

  1. Finite difference methods for reducing numerical diffusion in TEACH-type calculations. [Teaching Elliptic Axisymmetric Characteristics Heuristically

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.

    1985-01-01

    A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Finite Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to the criteria than the QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.

  2. Path length differencing and energy conservation of the S[sub N] Boltzmann/Spencer-Lewis equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippone, W.L.; Monahan, S.P.

    It is shown that the S[sub N] Boltzmann/Spencer-Lewis equations conserve energy locally if and only if they satisfy particle balance and diamond differencing is used in path length. In contrast, the spatial differencing schemes have no bearing on the energy balance. Energy is conserved globally if it is conserved locally and the multigroup cross sections are energy conserving. Although the coupled electron-photon cross sections generated by CEPXS conserve particles and charge, they do not precisely conserve energy. It is demonstrated that these cross sections can be adjusted such that particles, charge, and energy are conserved. Finally, since a conventional negative flux fixup destroys energy balance when applied to path length, a modified fixup scheme that does not is presented.

  3. Detection and identification of six Monilinia spp. causing brown rot using TaqMan real-time PCR from pure cultures and infected apple fruit

    USDA-ARS?s Scientific Manuscript database

    Brown rot is a severe disease affecting stone and pome fruits. This disease was recently confirmed to be caused by the following six closely related species: Monilinia fructicola, Monilinia laxa, Monilinia fructigena, Monilia polystroma, Monilia mumecola and Monilia yunnanensis. Because of differenc...

  4. Joint production and substitution in timber supply: a panel data analysis

    Treesearch

    Torjus F Bolkesjo; Joseph Buongiorno; Birger Solberg

    2010-01-01

    Supply equations for sawlog and pulpwood were developed with a panel of data from 102 Norwegian municipalities, observed from 1980 to 2000. Static and dynamic models were estimated by cross-section, time-series and panel data methods. A static model estimated by first differencing gave the best overall results in terms of theoretical expectations, pattern of residuals,...
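
    For readers unfamiliar with first differencing in panel estimation, the sketch below shows the basic mechanics: differencing each unit's series removes time-invariant fixed effects before an ordinary least-squares fit. The variable names and the synthetic two-variable panel are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd

def first_difference_ols(df, y, xs, unit_col, time_col):
    """First-differencing panel estimator: differencing within each unit
    removes time-invariant fixed effects; slopes are then fit by OLS on
    the differenced data (no intercept after differencing)."""
    df = df.sort_values([unit_col, time_col])
    d = df.groupby(unit_col)[[y] + xs].diff().dropna()
    beta, *_ = np.linalg.lstsq(d[xs].to_numpy(), d[y].to_numpy(), rcond=None)
    return dict(zip(xs, beta))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    rows = []
    for unit in range(102):                       # e.g. municipalities
        fixed_effect = rng.normal(0.0, 2.0)
        for year in range(1980, 2001):
            price = rng.uniform(1.0, 3.0)
            supply = fixed_effect + 0.8 * price + rng.normal(0.0, 0.1)
            rows.append({"unit": unit, "year": year,
                         "log_price": price, "log_supply": supply})
    panel = pd.DataFrame(rows)
    print(first_difference_ols(panel, "log_supply", ["log_price"], "unit", "year"))
```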

  5. Near Real-Time Event Detection & Prediction Using Intelligent Software Agents

    DTIC Science & Technology

    2006-03-01

    value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or...slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling ...appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite

  6. Higher-order differencing method with a multigrid approach for the solution of the incompressible flow equations at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Tzanos, Constantine P.

    1992-10-01

    A higher-order differencing scheme (Tzanos, 1990) is used in conjunction with a multigrid approach to obtain accurate solutions of the Navier-Stokes convection-diffusion equations at high Re numbers. Flow in a square cavity with a moving lid is used as a test problem. A multigrid approach based on the additive correction method (Settari and Aziz) and an iterative incomplete lower and upper solver demonstrated good performance for the whole range of Re numbers under consideration (from 1000 to 10,000) and for both uniform and nonuniform grids. It is concluded that the combination of the higher-order differencing scheme with a multigrid approach proved to be an effective technique for giving accurate solutions of the Navier-Stokes equations at high Re numbers.

  7. Gender-related asymmetric brain vasomotor response to color stimulation: a functional transcranial Doppler spectroscopy study.

    PubMed

    Njemanze, Philip C

    2010-11-30

    The present study was designed to examine the effects of color stimulation on cerebral blood mean flow velocity (MFV) in men and women. The study included 16 (8 men and 8 women) right-handed healthy subjects. The MFV was recorded simultaneously in both right and left middle cerebral arteries in Dark and white Light conditions, and during color (Blue, Yellow and Red) stimulations, and was analyzed using functional transcranial Doppler spectroscopy (fTCDS) technique. Color processing occurred within cortico-subcortical circuits. In men, wavelength-differencing of Yellow/Blue pairs occurred within the right hemisphere by processes of cortical long-term depression (CLTD) and subcortical long-term potentiation (SLTP). Conversely, in women, frequency-differencing of Blue/Yellow pairs occurred within the left hemisphere by processes of cortical long-term potentiation (CLTP) and subcortical long-term depression (SLTD). In both genders, there was luminance effect in the left hemisphere, while in men it was along an axis opposite (orthogonal) to that of chromatic effect, in women, it was parallel. Gender-related differences in color processing demonstrated a right hemisphere cognitive style for wavelength-differencing in men, and a left hemisphere cognitive style for frequency-differencing in women. There are potential applications of fTCDS technique, for stroke rehabilitation and monitoring of drug effects.

  8. An exact formulation of the time-ordered exponential using path-sums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giscard, P.-L., E-mail: p.giscard1@physics.ox.ac.uk; Lui, K.; Thwaite, S. J.

    2015-05-15

    We present the path-sum formulation for the time-ordered exponential of a time-dependent matrix. The path-sum formulation gives the time-ordered exponential as a branched continued fraction of finite depth and breadth. The terms of the path-sum have an elementary interpretation as self-avoiding walks and self-avoiding polygons on a graph. Our result is based on a representation of the time-ordered exponential as the inverse of an operator, the mapping of this inverse to sums of walks on a graph, and the algebraic structure of sets of walks. We give examples demonstrating our approach. We establish a super-exponential decay bound for the magnitude of the entries of the time-ordered exponential of sparse matrices. We give explicit results for matrices with commonly encountered sparse structures.

  9. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning system (GPS) orbits with several meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models, only a stochastic estimation algorithm is suitable, such as a sequential filter/smoother. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first order Gauss Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.

  10. The true quantum face of the "exponential" decay: Unstable systems in rest and in motion

    NASA Astrophysics Data System (ADS)

    Urbanowski, K.

    2017-12-01

    Results of theoretical studies and numerical calculations presented in the literature suggest that the survival probability P0(t) has the exponential form starting from times much smaller than the lifetime τ up to times t ⪢ τ and that P0(t) exhibits inverse power-law behavior at the late time region for times longer than the so-called crossover time T ⪢ τ (The crossover time T is the time when the late time deviations of P0(t) from the exponential form begin to dominate). More detailed analysis of the problem shows that in fact the survival probability P0(t) cannot take the pure exponential form at any time interval, including times smaller than the lifetime τ or of the order of τ, and it has an oscillating form. We also study the survival probability of moving relativistic unstable particles with definite momentum. These studies show that late time deviations of the survival probability of these particles from the exponential-like form of the decay law, that is, the transition-time region between exponential-like and non-exponential form of the survival probability, should occur much earlier than it follows from the classical standard considerations.

  11. Steganography algorithm multi pixel value differencing (MPVD) to increase message capacity and data security

    NASA Astrophysics Data System (ADS)

    Rojali, Siahaan, Ida Sri Rejeki; Soewito, Benfano

    2017-08-01

    Steganography is the art and science of hiding the secret messages so the existence of the message cannot be detected by human senses. The data concealment is using the Multi Pixel Value Differencing (MPVD) algorithm, utilizing the difference from each pixel. The development was done by using six interval tables. The objective of this algorithm is to enhance the message capacity and to maintain the data security.
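
    The core of pixel value differencing can be illustrated on a single pixel pair: the difference magnitude selects a range from an interval table, and the secret bits are encoded as a new difference inside that range. The sketch below uses one generic Wu-Tsai-style range table rather than the six tables of the MPVD scheme, and it ignores boundary-overflow handling; both are simplifying assumptions.

```python
import numpy as np

# One illustrative range table (lower, upper); the real MPVD scheme uses six.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def _range_of(d):
    for lo, hi in RANGES:
        if lo <= d <= hi:
            return lo, hi, int(np.log2(hi - lo + 1))   # capacity in bits
    raise ValueError("difference out of range")

def embed_pair(p1, p2, bits):
    """Embed up to k secret bits in one pixel pair: the new absolute
    difference encodes the secret value inside the selected range."""
    d = abs(int(p2) - int(p1))
    lo, hi, k = _range_of(d)
    secret = int(bits[:k].ljust(k, "0"), 2)
    delta = (lo + secret) - d
    if p2 >= p1:                       # split the adjustment over both pixels
        p1_new, p2_new = int(p1) - delta // 2, int(p2) + (delta + 1) // 2
    else:
        p1_new, p2_new = int(p1) + (delta + 1) // 2, int(p2) - delta // 2
    return p1_new, p2_new, k

def extract_pair(p1, p2):
    """Recover the embedded bits from a stego pixel pair."""
    d = abs(int(p2) - int(p1))
    lo, hi, k = _range_of(d)
    return format(d - lo, f"0{k}b")

if __name__ == "__main__":
    s1, s2, k = embed_pair(120, 135, "10110")
    print("stego pair:", s1, s2, "-> recovered bits:", extract_pair(s1, s2))
```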

  12. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).

  13. Filtering of Discrete-Time Switched Neural Networks Ensuring Exponential Dissipative and $l_{2}$-$l_{\infty}$ Performances.

    PubMed

    Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg

    2017-10-01

    This paper studies delay-dependent exponential dissipative and l_2-l_∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l_2-l_∞ senses. The design of the desired exponential dissipative and l_2-l_∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the desired discrete-time filter design approach.

  14. The matrix exponential in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    1987-01-01

    The primary usefulness of the presented theory is in the ability to represent the effects of high frequency linear response with accuracy, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure which truncates the high frequency response, the approximation in the exponential matrix solution is in the time domain. If the series solution for the matrix exponential is truncated, the solution becomes inaccurate only after a certain time; up to that time the solution is extremely accurate, including all high frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
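
    A minimal sketch of the idea follows, assuming a standard first-order state-space form of the structural equations: the state-transition matrix expm(A*dt) advances the free-vibration response exactly for the linear model, so finite time increments retain the high-frequency content. The two-DOF mass, damping, and stiffness values are illustrative, not the models used in the report.

```python
import numpy as np
from scipy.linalg import expm

def free_vibration_response(M, C, K, q0, v0, dt, n_steps):
    """Advance M q'' + C q' + K q = 0 with the state-transition matrix
    Phi = expm(A*dt), so each finite time increment is exact for the
    linear model and no high-frequency content is lost."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    Phi = expm(A * dt)
    z = np.concatenate([q0, v0])
    history = [z.copy()]
    for _ in range(n_steps):
        z = Phi @ z
        history.append(z.copy())
    return np.array(history)

if __name__ == "__main__":
    # two-DOF lumped approximation of a cantilever beam (illustrative numbers)
    M = np.diag([1.0, 1.0])
    K = np.array([[400.0, -200.0], [-200.0, 200.0]])
    C = 0.01 * K                        # light stiffness-proportional damping
    resp = free_vibration_response(M, C, K, q0=np.array([0.0, 0.01]),
                                   v0=np.zeros(2), dt=0.01, n_steps=500)
    print("tip displacement after 5 s:", resp[-1, 1])
```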

  15. On the effects of signal processing on sample entropy for postural control.

    PubMed

    Lubetzky, Anat V; Harel, Daphna; Lubetzky, Eyal

    2018-01-01

    Sample entropy, a measure of time series regularity, has become increasingly popular in postural control research. We are developing a virtual reality assessment of sensory integration for postural control in people with vestibular dysfunction and wished to apply sample entropy as an outcome measure. However, despite the common use of sample entropy to quantify postural sway, we found lack of consistency in the literature regarding center-of-pressure signal manipulations prior to the computation of sample entropy. We therefore wished to investigate the effect of parameters choice and signal processing on participants' sample entropy outcome. For that purpose, we compared center-of-pressure sample entropy data between patients with vestibular dysfunction and age-matched controls. Within our assessment, participants observed virtual reality scenes, while standing on floor or a compliant surface. We then analyzed the effect of: modification of the radius of similarity (r) and the embedding dimension (m); down-sampling or filtering and differencing or detrending. When analyzing the raw center-of-pressure data, we found a significant main effect of surface in medio-lateral and anterior-posterior directions across r's and m's. We also found a significant interaction group × surface in the medio-lateral direction when r was 0.05 or 0.1 with a monotonic increase in p value with increasing r in both m's. These effects were maintained with down-sampling by 2, 3, and 4 and with detrending but not with filtering and differencing. Based on these findings, we suggest that for sample entropy to be compared across postural control studies, there needs to be increased consistency, particularly of signal handling prior to the calculation of sample entropy. Procedures such as filtering, differencing or detrending affect sample entropy values and could artificially alter the time series pattern. Therefore, if such procedures are performed they should be well justified.
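
    The dependence discussed above can be reproduced with a plain sample-entropy routine by toggling the preprocessing applied to the signal. The sketch below is a textbook-style implementation (Chebyshev distance, tolerance r as a fraction of the standard deviation); the template-counting convention, parameter values, and random-walk test signal are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.05, preprocess=None):
    """Plain sample entropy (Chebyshev distance, tolerance r as a fraction
    of the series' standard deviation).  preprocess may be 'diff' (first
    differencing) or 'detrend' (remove a linear trend); changing it changes
    the value, which is the point made in the study above."""
    x = np.asarray(x, dtype=float)
    if preprocess == "diff":
        x = np.diff(x)
    elif preprocess == "detrend":
        t = np.arange(x.size)
        x = x - np.polyval(np.polyfit(t, x, 1), t)
    tol = r * x.std()
    n = x.size

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    cop = np.cumsum(rng.normal(0.0, 1.0, 2000))      # random-walk-like sway signal
    for prep in (None, "diff", "detrend"):
        print(prep, round(sample_entropy(cop, m=2, r=0.05, preprocess=prep), 3))
```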

  16. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken in time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking such as geometric distortions and artifacts at moving objects have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to that of single video frames and are useful for interactive image exploitation due to a larger scene coverage.

  17. Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.

    PubMed

    Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2014-11-25

    In order to catch up the short-term clock variation of GNSS satellites, clock corrections must be estimated and updated at a high-rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation only as a great number of ambiguities need to be simultaneously estimated. However, on the one hand better estimates are expected by including more stations, and on the other hand satellites from different GNSS systems must be processed integratively for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are always employed where ambiguities are eliminated. As the epoch-differenced method can only derive temporal clock changes which have to be aligned to the absolute clocks but always in a rather complicated way, in this paper, an efficient method for high-rate clock estimation is proposed using the concept of "carrier-range" realized by means of PPP with integer ambiguity resolution. Processing procedures for both post- and real-time processing are developed, respectively. The experimental validation shows that the computation time could be reduced to about one sixth of that of the existing methods for post-processing and less than 1 s for processing a single epoch of a network with about 200 stations in real-time mode after all ambiguities are fixed. This confirms that the proposed processing strategy will enable the high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.

  18. A numerical study of the steady scalar convective diffusion equation for small viscosity

    NASA Technical Reports Server (NTRS)

    Giles, M. B.; Rose, M. E.

    1983-01-01

    A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.
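
    The behavior described, an upwind-like treatment where convection dominates, can be contrasted with plain schemes on a model problem. The sketch below solves the steady 1-D convection-diffusion equation with either central or first-order upwind differencing of the convective term; it illustrates upwind versus central differencing in general, not the compact scheme studied in the paper.

```python
import numpy as np

def convection_diffusion_1d(n=21, u=1.0, nu=0.005, scheme="upwind"):
    """Steady 1-D model problem u*dc/dx = nu*d2c/dx2 on [0,1] with c(0)=0,
    c(1)=1.  The convective term is discretized with either central or
    first-order upwind differencing (u > 0 assumed)."""
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        # diffusion: nu*(c[i-1] - 2*c[i] + c[i+1]) / h^2
        A[i, i - 1] += nu / h**2
        A[i, i]     += -2.0 * nu / h**2
        A[i, i + 1] += nu / h**2
        # convection, moved to the left-hand side as -u*dc/dx
        if scheme == "central":
            A[i, i - 1] += u / (2.0 * h)
            A[i, i + 1] += -u / (2.0 * h)
        else:                              # upwind: use the node behind the flow
            A[i, i - 1] += u / h
            A[i, i]     += -u / h
    A[0, 0] = 1.0
    A[-1, -1] = 1.0
    b[-1] = 1.0
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    # cell Peclet number u*h/nu = 10: central differencing oscillates,
    # upwind differencing stays monotone but adds numerical diffusion
    for scheme in ("central", "upwind"):
        c = convection_diffusion_1d(scheme=scheme)
        print(scheme, "min =", round(float(c.min()), 3), "max =", round(float(c.max()), 3))
```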

  19. Short-term change detection for UAV video

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

    In recent years, there has been an increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken between several days, weeks, or even years. Examples for relevant changes we are looking for are recently parked or moved vehicles. As a pre-requisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine-registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB, see Heinze et al. (2010). In a further step we plan to incorporate more information from the video sequences to the change detection input images, e.g., by image enhancement or by along-track stereo which are available in the ABUL system.

  20. Hydrologic and Geomorphic Changes Resulting from the Onset of Episodic Glacial Lake Outburst Floods: Colonia River, Chile

    NASA Astrophysics Data System (ADS)

    Jacquet, J.; McCoy, S. W.; McGrath, D.; Nimick, D.; Friesen, B.; Fahey, M. J.; Leidich, J.; Okuinghttons, J.

    2015-12-01

    The Colonia river system, draining the eastern edge of the Northern Patagonia Icefield, Chile, has experienced a dramatic shift in flow regime from one characterized by seasonal discharge variability to one dominated by episodic glacial lake outburst floods (GLOFs). We use multi-temporal visible satellite images, high-resolution digital elevation models (DEMs) derived from stereo image pairs, and in situ observations to quantify sediment and water fluxes out of the dammed glacial lake, Lago Cachet Dos (LC2), as well as the concomitant downstream environmental change. GLOFs initiated in April 2008 and have since occurred, on average, two to three times a year. Differencing concurrent gage measurements made on the Baker River upstream and downstream of the confluence with the Colonia river finds peak GLOF discharges of ~ 3,000 m^3/s, which is ~ 4 times the median discharge of the Baker River and over 20 times the median discharge of the Colonia river. During each GLOF, ~ 200,000,000 m^3 of water evacuates from the LC2, resulting in erosion of valley-fill sediments and the delta on the upstream end of LC2. Differencing DEMs between April 2008 and February 2014 revealed that ~ 2.5 x 10^7 m^3 of sediment was eroded. Multi-temporal DEM differencing shows that erosion rates were highest initially, with > 20 vertical m of sediment removed between 2008 and 2012, and generally less than 5 m between 2012 and 2014. The downstream Colonia River Sandur also experienced geomorphic changes due to GLOFs. Using Landsat imagery to calculate the normalized difference water index (NDWI), we demonstrate that the Colonia River was in a stable configuration between 1984 and 2008. At the onset of GLOFs in April 2008, a change in channel location began and continued with each subsequent GLOF. Quantification of sediment and water fluxes due to GLOFs in the Colonia river valley provides insight on the geomorphic and environmental changes in river systems experiencing dramatic shifts in flow regime.
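
    The DEM-differencing step reduces to subtracting co-registered elevation grids and summing the change over the cell area. The following sketch assumes a vertical detection threshold and synthetic grids purely for illustration; it is not the processing chain used in the study.

```python
import numpy as np

def eroded_volume(dem_before, dem_after, cell_size, min_change=0.5):
    """DEM differencing: subtract co-registered elevation grids and convert
    negative elevation change (erosion) into a volume.  cell_size is the
    grid spacing in metres; min_change is an assumed vertical detection
    threshold used to suppress noise."""
    dz = dem_after - dem_before                 # metres; negative = erosion
    eroded_depth = np.where(dz < -min_change, -dz, 0.0)
    return eroded_depth.sum() * cell_size ** 2  # cubic metres

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    before = 100.0 + rng.normal(0.0, 0.1, (200, 200))
    after = before.copy()
    after[50:150, 50:150] -= 10.0               # 10 m of incision over a 1 km x 1 km patch
    print("eroded volume (m^3):", round(eroded_volume(before, after, cell_size=10.0)))
```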

  1. Psychophysics of time perception and intertemporal choice models

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2008-03-01

    Intertemporal choice and psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber-Fechner law, a q-exponential discount model based on Tsallis's statistics), simple hyperbolic discounting, and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fitness of the models for behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results have shown that the orders of the goodness-of-fit for both group and individual data were [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > Simple hyperbolic discounting > Exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Indications of the results for neuropsychopharmacological treatments of addiction and biophysical processing underlying temporal discounting and time perception are discussed.
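
    The model comparison described can be sketched as a least-squares fit of each candidate discount function to the indifference points, ranked by AICc. The parameterizations, the toy delays and subjective values, and the use of plain least squares (rather than the study's exact estimation procedure) below are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate discount functions V(D)/A as a function of delay D.
def exponential(D, k):          return np.exp(-k * D)
def hyperbolic(D, k):           return 1.0 / (1.0 + k * D)
def q_exponential(D, k, q):     return (1.0 + (1.0 - q) * k * D) ** (1.0 / (q - 1.0))
def power_exponential(D, k, s): return np.exp(-k * D ** s)     # Stevens' power-law time

def aicc(rss, n, n_params):
    """Akaike Information Criterion with small-sample correction for a
    least-squares fit with Gaussian errors."""
    return n * np.log(rss / n) + 2 * n_params + 2 * n_params * (n_params + 1) / (n - n_params - 1)

if __name__ == "__main__":
    delays = np.array([7.0, 30.0, 90.0, 180.0, 365.0, 1095.0, 1825.0])   # days (assumed)
    indiff = np.array([0.95, 0.85, 0.72, 0.60, 0.45, 0.28, 0.20])        # toy indifference points
    models = {
        "exponential":       (exponential,       [0.01],       (1e-6, 10.0)),
        "simple hyperbolic": (hyperbolic,        [0.01],       (1e-6, 10.0)),
        "q-exponential":     (q_exponential,     [0.01, 0.5],  ([1e-6, 0.0], [10.0, 0.999])),
        "power-exponential": (power_exponential, [0.01, 0.8],  ([1e-6, 0.1], [10.0, 3.0])),
    }
    for name, (f, p0, bounds) in models.items():
        popt, _ = curve_fit(f, delays, indiff, p0=p0, bounds=bounds)
        rss = float(np.sum((indiff - f(delays, *popt)) ** 2))
        print(f"{name:18s} params={np.round(popt, 4)}  AICc={aicc(rss, len(delays), len(popt)):.2f}")
```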

  2. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

    Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
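
    A quick way to see the Prony representation in practice is to fit log-spaced simple exponentials to the stretched exponential by non-negative least squares. This is only an illustrative sketch: the number of terms, the time grid, and the fitting criterion are assumptions, and the optimized coefficients reported in the paper are obtained by a different procedure.

```python
import numpy as np
from scipy.optimize import nnls

def prony_approximation(beta, n_terms=12, t_min=1e-3, t_max=1e3, n_samples=400):
    """Fit a Prony series sum_i w_i*exp(-t/tau_i) to the stretched
    exponential exp(-t**beta) (tau = 1) on a logarithmic time grid:
    the tau_i are fixed log-spaced values and the weights are obtained
    by non-negative least squares."""
    t = np.logspace(np.log10(t_min), np.log10(t_max), n_samples)
    target = np.exp(-t ** beta)
    taus = np.logspace(np.log10(t_min), np.log10(t_max), n_terms)
    basis = np.exp(-t[:, None] / taus)          # (n_samples, n_terms)
    weights, residual = nnls(basis, target)
    return taus, weights, residual

if __name__ == "__main__":
    for beta in (3.0 / 7.0, 1.0 / 2.0, 3.0 / 5.0):   # commonly cited critical exponents (assumed)
        taus, w, res = prony_approximation(beta)
        print(f"beta = {beta:.3f}: {np.count_nonzero(w)} nonzero terms, residual = {res:.2e}")
```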

  3. A study of pressure-based methodology for resonant flows in non-linear combustion instabilities

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.

    1992-01-01

    This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids by the pressure-based methods for the problems of fast transient flows. The observation from the present study is that for steady state flow problems, pressure-based methods can be very competitive with the density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.

  4. Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment

    NASA Astrophysics Data System (ADS)

    Barnett, D. A., Jr.

    1991-02-01

    An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S(sub 16) quadrature and P(sub 1) cross sections in the MUFT multigroup structure, the calculated solution agreed to within 18 percent with the spectral measurements and to within 24 percent with the integral measurements. Variations on the base case using a few-group energy structure and P(sub 1) and P(sub 3) cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and few-group cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.

  5. Two-dimensional CFD modeling of wave rotor flow dynamics

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.; Chima, Rodrick V.

    1994-01-01

    A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. Roe's approximate Riemann solution scheme or the computationally less expensive advection upstream splitting method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passages and the distribution of flow variables in the stationary inlet port region.

  6. Two-dimensional CFD modeling of wave rotor flow dynamics

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.; Chima, Rodrick V.

    1993-01-01

    A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. The Roe approximate Riemann solution scheme or the computationally less expensive Advection Upstream Splitting Method (AUSM) flux-splitting scheme are used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passage and the distribution of flow variables in the stationary inlet port region.

  7. Analysis of BeiDou Satellite Measurements with Code Multipath and Geometry-Free Ionosphere-Free Combinations

    PubMed Central

    Zhao, Qile; Wang, Guangxing; Liu, Zhizhao; Hu, Zhigang; Dai, Zhiqiang; Liu, Jingnan

    2016-01-01

    Using GNSS observables from some stations in the Asia-Pacific area, the carrier-to-noise ratio (CNR) and multipath combinations of the BeiDou Navigation Satellite System (BDS), as well as their variations with time and/or elevation, were investigated and compared with those of GPS and Galileo. At the same elevation, the CNR of B1 observables is the lowest among the three BDS frequencies, while that of B3 is the highest. The code multipath combinations of BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are strongly correlated with elevation, and the systematic “V” shape trends could be eliminated through between-station differencing or modeling correction. Daily periodicity was found in the geometry-free ionosphere-free (GFIF) combinations of both BDS geostationary Earth orbit (GEO) and IGSO satellites. The variation range of carrier phase GFIF combinations of GEO satellites is −2.0 to 2.0 cm. The periodicity of the carrier phase GFIF combination could be significantly mitigated through between-station differencing. Carrier phase GFIF combinations of BDS GEO and IGSO satellites might also contain delays related to the satellites. Cross-correlation suggests that the GFIF combination time series of some GEO satellites might vary according to their relative geometries with the sun. PMID:26805831
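
    A minimal sketch of the standard dual-frequency code multipath (code-minus-carrier) combination referred to above is given below; the BDS B1I/B3I carrier frequencies and the bias handling are assumptions for illustration, not details taken from the paper.

      import numpy as np

      # Assumed BDS-2 carrier frequencies (Hz): B1I and B3I.
      F_B1 = 1561.098e6
      F_B3 = 1268.520e6

      def code_multipath(P1, L1_m, L3_m, f1=F_B1, f3=F_B3):
          """Code multipath combination MP1 = P1 - (1 + 2/(g-1))*L1 + (2/(g-1))*L3.

          P1 is the pseudorange on the first frequency and L1_m, L3_m are the
          carrier phases in metres; g = (f1/f3)**2.  Geometry, clocks and the
          first-order ionosphere cancel, leaving multipath, noise, and a
          constant bias (ambiguities plus hardware delays) that is removed
          here by subtracting the arc mean.
          """
          g = (f1 / f3) ** 2
          mp = P1 - (1.0 + 2.0 / (g - 1.0)) * L1_m + (2.0 / (g - 1.0)) * L3_m
          return mp - np.mean(mp)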

  8. Analysis of BeiDou Satellite Measurements with Code Multipath and Geometry-Free Ionosphere-Free Combinations.

    PubMed

    Zhao, Qile; Wang, Guangxing; Liu, Zhizhao; Hu, Zhigang; Dai, Zhiqiang; Liu, Jingnan

    2016-01-20

    Using GNSS observables from some stations in the Asia-Pacific area, the carrier-to-noise ratio (CNR) and multipath combinations of the BeiDou Navigation Satellite System (BDS), as well as their variations with time and/or elevation, were investigated and compared with those of GPS and Galileo. At the same elevation, the CNR of B1 observables is the lowest among the three BDS frequencies, while that of B3 is the highest. The code multipath combinations of BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are strongly correlated with elevation, and the systematic "V" shape trends could be eliminated through between-station differencing or modeling correction. Daily periodicity was found in the geometry-free ionosphere-free (GFIF) combinations of both BDS geostationary Earth orbit (GEO) and IGSO satellites. The variation range of carrier phase GFIF combinations of GEO satellites is -2.0 to 2.0 cm. The periodicity of the carrier phase GFIF combination could be significantly mitigated through between-station differencing. Carrier phase GFIF combinations of BDS GEO and IGSO satellites might also contain delays related to the satellites. Cross-correlation suggests that the GFIF combination time series of some GEO satellites might vary according to their relative geometries with the sun.

  9. The effect of stochastic modeling of ionospheric effect on the various lengths of baseline determination

    NASA Astrophysics Data System (ADS)

    Kwon, J.; Yang, H.

    2006-12-01

    Although GPS provides continuous and accurate position information, there is still room for improvement in its positional accuracy, especially in medium- and long-range baseline determination. In general, for baselines longer than 50 km, the ionospheric delay is the largest source of degradation in positional accuracy. For example, the double-differenced ionospheric delay easily reaches 10 cm for a baseline length of 101 km. Therefore, many researchers have tried to mitigate or reduce this effect using various modeling methods. In this paper, the optimal stochastic modeling of the ionospheric delay as a function of baseline length is presented. The data processing has been performed by constructing a Kalman filter with states for the positions, ambiguities, and ionospheric delays in the double-differenced mode. Considering the long baseline length, both double-differenced GPS phase and code observations are used as observables, and LAMBDA has been applied to fix the ambiguities. Here, the ionospheric delay is stochastically modeled by the well-known Gaussian, first-order, and third-order Gauss-Markov processes. The parameters required in those models, such as the correlation distance and time, are determined by least-squares adjustment using ionosphere-only observables. The results and analysis from this study show the effect of the stochastic models of the ionospheric delay in terms of the baseline length, models, and parameters used. In the above example with a 101 km baseline length, it was found that the positional accuracy with appropriate ionospheric modeling (Gaussian) was about ±2 cm, whereas it reached about ±15 cm with no stochastic modeling. It is expected that the approach in this study contributes to improved positional accuracy, especially in medium- and long-range baseline determination.
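
    The sketch below shows one way a first-order Gauss-Markov model for the double-differenced ionospheric delay could enter the Kalman filter time update; the correlation time and steady-state sigma are placeholders, not the values estimated in the study.

      import numpy as np

      def gauss_markov_time_update(x, P, dt, tau=1800.0, sigma=0.10):
          """Propagate a scalar first-order Gauss-Markov state (e.g. a
          double-differenced ionospheric delay, in metres).

          x' = phi * x,  P' = phi**2 * P + Q, with
          phi = exp(-dt/tau) and Q = sigma**2 * (1 - exp(-2*dt/tau)).
          tau (correlation time, s) and sigma (steady-state std, m) are
          illustrative placeholders.
          """
          phi = np.exp(-dt / tau)
          Q = sigma ** 2 * (1.0 - np.exp(-2.0 * dt / tau))
          return phi * x, phi ** 2 * P + Q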

  10. Single-Receiver GPS Phase Bias Resolution

    NASA Technical Reports Server (NTRS)

    Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.

    2010-01-01

    Existing software has been modified to yield the benefits of integer-fixed, double-differenced GPS phase ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth. The method has parameters to account for this. In particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp.

  11. A consistent spatial differencing scheme for the transonic full-potential equation in three dimensions

    NASA Technical Reports Server (NTRS)

    Thomas, S. D.; Holst, T. L.

    1985-01-01

    A full-potential steady transonic wing flow solver has been modified so that freestream density and residual are captured in regions of constant velocity. This numerically precise freestream consistency is obtained by slightly altering the differencing scheme without affecting the implicit solution algorithm. The changes chiefly affect the fifteen metrics per grid point, which are computed once and stored. With this new method, the outer boundary condition is captured accurately, and the smoothness of the solution is especially improved near regions of grid discontinuity.

  12. Upwind differencing and LU factorization for chemical non-equilibrium Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    By means of either the Roe or the Van Leer flux-splittings for inviscid terms, in conjunction with central differencing for viscous terms in the explicit operator and the Steger-Warming splitting and lower-upper approximate factorization for the implicit operator, the present, robust upwind method for solving the chemical nonequilibrium Navier-Stokes equations yields formulas for finite-volume discretization in general coordinates. Numerical tests in the illustrative cases of a hypersonic blunt body, a ramped duct, divergent nozzle flows, and shock wave/boundary layer interactions, establish the method's efficiency.

  13. Assessment of trend and seasonality in road accident data: an Iranian case study.

    PubMed

    Razzaghi, Alireza; Bahrampour, Abbas; Baneshi, Mohammad Reza; Zolala, Farzaneh

    2013-06-01

    Road traffic accidents and their related deaths have become a major concern, particularly in developing countries. Iran has adopted a series of policies and interventions to control the high number of accidents occurring over the past few years. In this study we used a time series model to understand the trend of accidents and to ascertain the viability of applying ARIMA models to data from Taybad city. This study is a cross-sectional study. We used data from accidents occurring in Taybad between 2007 and 2011. We obtained the data from the Ministry of Health (MOH) and used the time series method with a time lag of one month. After plotting the trend, non-stationarity in variance and mean was removed using the Box-Cox transformation and differencing, respectively. The ACF and PACF plots were used to check stationarity. The traffic accidents in our study had an increasing trend over the five years of study. Based on the ACF and PACF plots obtained after applying the Box-Cox transformation and differencing, the data did not fit a time series model. Therefore, neither an ARIMA model nor seasonality was identified. Traffic accidents in Taybad have an upward trend. In addition, we expected either an AR, MA, or ARIMA model with a seasonal trend, yet this was not observed in this analysis. Several reasons may have contributed to this situation, such as uncertainty about the quality of the data, weather changes, and behavioural factors that are not taken into account by time series analysis.
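
    A minimal sketch of the stationarity workflow described above (Box-Cox transformation, differencing, then inspection of the ACF/PACF) is given below using scipy and statsmodels; the monthly counts are hypothetical, not the Taybad data.

      import numpy as np
      from scipy import stats
      from statsmodels.tsa.stattools import acf, pacf

      # Hypothetical monthly accident counts (placeholders, not the study data).
      counts = np.array([12, 15, 11, 18, 20, 17, 22, 25, 19, 24, 28, 26,
                         23, 27, 30, 29, 33, 31, 35, 34, 38, 36, 40, 41], dtype=float)

      # Box-Cox transformation to stabilise the variance (requires positive data),
      # followed by first differencing to remove the trend in the mean.
      transformed, lam = stats.boxcox(counts)
      differenced = np.diff(transformed)

      # Sample ACF/PACF of the differenced series; significant spikes would
      # suggest AR/MA orders, while their absence suggests no ARIMA structure.
      print("lambda =", round(lam, 3))
      print("ACF :", np.round(acf(differenced, nlags=6), 2))
      print("PACF:", np.round(pacf(differenced, nlags=6), 2))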

  14. Methods and Applications of Time Series Analysis. Part I. Regression, Trends, Smoothing, and Differencing.

    DTIC Science & Technology

    1980-07-01

    [Excerpt garbled in extraction.] The report covers regression, trends, smoothing, and differencing for time series; the recoverable fragments describe a rectangular pulse function with period n and a worked example (Table 4.2) giving the average bi-monthly expenses of a typical family in Kabiria (a city in northern Algeria) over the period Jan.-Feb. 1975 through Nov.-Dec. 1977, together with their Fourier coefficients.

  15. Solidification of a binary mixture

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1982-01-01

    The time-dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time-dependent Stefan problem is solved numerically using an implicit finite-differencing algorithm with the method of lines. Specifically, the temporal operator is approximated via an implicit finite difference operator, resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration at each time. Since the resulting set of differential equations forms a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.
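
    As a much simplified stand-in for the coupled Stefan problem, the method-of-lines sketch below discretizes a one-dimensional heat equation with central differences in space and integrates the resulting ODE system with an implicit (BDF) solver; it does not include the moving interface or the invariant-imbedding step.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Method-of-lines sketch: 1D heat equation T_t = kappa * T_xx with fixed
      # boundary temperatures.
      kappa = 1.0e-3
      n = 51
      x = np.linspace(0.0, 1.0, n)
      dx = x[1] - x[0]

      def rhs(t, T):
          dTdt = np.zeros_like(T)
          dTdt[1:-1] = kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
          return dTdt          # boundary values held fixed (dT/dt = 0)

      T0 = np.where(x < 0.5, 1.0, 0.0)          # initial step profile
      sol = solve_ivp(rhs, (0.0, 50.0), T0, method="BDF", t_eval=[0.0, 10.0, 50.0])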

  16. Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time

    NASA Astrophysics Data System (ADS)

    Himeoka, Yusuke; Kaneko, Kunihiko

    2017-04-01

    The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the lag time distributed among cells is skewed with a long time tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.

  17. Fluid particles only separate exponentially in the dissipation range of turbulence after extremely long times

    NASA Astrophysics Data System (ADS)

    Dhariwal, Rohit; Bragg, Andrew D.

    2018-03-01

    In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳200 τη , where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime. However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.

  18. Reply to the Discussion of Space-Time Modelling with Long-Memory Dependence: Assessing Ireland’s Wind Resource

    DTIC Science & Technology

    1988-10-01

    [Excerpt garbled in extraction.] The recoverable fragments of this reply to the discussion of "Space-Time Modelling with Long-Memory Dependence: Assessing Ireland's Wind Resource" by John Haslett mention the meteorologists' rule of thumb that climatic drift manifests itself over periods greater than 30 years, a fractionally differenced model, and the asymptotic distribution of estimates in a univariate ARIMA(p, d, q) model with |d| < 0.5 derived by Li and McLeod (1986), within whose framework the model used by Haslett and Raftery can be viewed.

  19. Discrete models for the numerical analysis of time-dependent multidimensional gas dynamics

    NASA Technical Reports Server (NTRS)

    Roe, P. L.

    1984-01-01

    A possible technique is explored for extending to multidimensional flows some of the upwind-differencing methods that are highly successful in the one-dimensional case. Emphasis is on the two-dimensional case, and the flow domain is assumed to be divided into polygonal computational elements. Inside each element, the flow is represented by a local superposition of elementary solutions consisting of plane waves not necessarily aligned with the element boundaries.

  20. On the construction and application of implicit factored schemes for conservation laws. [in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Warming, R. F.; Beam, R. M.

    1978-01-01

    Efficient, noniterative, implicit finite difference algorithms are systematically developed for nonlinear conservation laws, including purely hyperbolic systems and mixed hyperbolic-parabolic systems. Utilization of rational-fraction or Pade time differencing formulas yields a direct and natural derivation of an implicit scheme in delta form. Attention is given to the advantages of the delta form and to various properties of one- and two-dimensional algorithms.
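
    The sketch below illustrates the delta-form idea on a one-dimensional scalar analogue, implicit Euler applied to linear advection on a periodic grid, where the implicit operator acts on the solution increment and the right-hand side is the explicit residual; it is not the approximate-factorization algorithm of the paper, and the grid and time step are arbitrary choices.

      import numpy as np

      # Delta-form implicit Euler step for u_t + a u_x = 0 on a periodic grid:
      # (I + dt*Dx) * du = -dt * (Dx @ u^n),  u^{n+1} = u^n + du.
      a, n = 1.0, 100
      dx = 1.0 / n
      dt = 2.0 * dx / a                       # implicit scheme tolerates CFL > 1

      # Periodic central-difference operator: (D u)_j = u_{j+1} - u_{j-1}.
      D = np.zeros((n, n))
      for j in range(n):
          D[j, (j + 1) % n] = 1.0
          D[j, (j - 1) % n] = -1.0
      Dx = a * D / (2.0 * dx)                 # discrete residual operator

      x = np.arange(n) * dx
      u = np.exp(-200.0 * (x - 0.3) ** 2)
      lhs = np.eye(n) + dt * Dx               # implicit (delta-form) left-hand side
      for _ in range(50):
          du = np.linalg.solve(lhs, -dt * (Dx @ u))   # solve for the increment
          u = u + du                                  # delta-form update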

  1. Bi-exponential T2 analysis of healthy and diseased Achilles tendons: an in vivo preliminary magnetic resonance study and correlation with clinical score.

    PubMed

    Juras, Vladimir; Apprich, Sebastian; Szomolanyi, Pavol; Bieri, Oliver; Deligianni, Xeni; Trattnig, Siegfried

    2013-10-01

    To compare mono- and bi-exponential T2 analysis in healthy and degenerated Achilles tendons using a recently introduced magnetic resonance variable-echo-time sequence (vTE) for T2 mapping. Ten volunteers and ten patients were included in the study. A variable-echo-time sequence was used with 20 echo times. Images were post-processed with both techniques, mono- and bi-exponential [mono-exponential T2 (T2m), and the bi-exponential short (T2s) and long (T2l) T2 components]. The number of mono- and bi-exponentially decaying pixels in each region of interest was expressed as a ratio (B/M). Patients were clinically assessed with the Achilles Tendon Rupture Score (ATRS), and these values were correlated with the T2 values. The means for both T2m and T2s were statistically significantly different between patients and volunteers; however, for T2s, the P value was lower. In patients, the Pearson correlation coefficient between ATRS and T2s was -0.816 (P = 0.007). The proposed variable-echo-time sequence can be successfully used as an alternative method to UTE sequences with some added benefits, such as a short imaging time along with relatively high resolution and minimised blurring artefacts, and minimised susceptibility artefacts and chemical shift artefacts. Bi-exponential T2 calculation is superior to mono-exponential in terms of statistical significance for the diagnosis of Achilles tendinopathy. • Magnetic resonance imaging offers new insight into healthy and diseased Achilles tendons • Bi-exponential T2 calculation in Achilles tendons is more beneficial than mono-exponential • A short T2 component correlates strongly with clinical score • Variable echo time sequences successfully used instead of ultrashort echo time sequences.
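
    A minimal sketch of mono- versus bi-exponential T2 fitting with scipy is given below; the echo times and signal values are synthetic placeholders, not measured tendon data.

      import numpy as np
      from scipy.optimize import curve_fit

      def mono_exp(te, a, t2):
          return a * np.exp(-te / t2)

      def bi_exp(te, a_s, t2_s, a_l, t2_l):
          return a_s * np.exp(-te / t2_s) + a_l * np.exp(-te / t2_l)

      # Synthetic decay over 20 echo times (ms); values are illustrative only.
      rng = np.random.default_rng(0)
      te = np.linspace(0.8, 20.0, 20)
      signal = bi_exp(te, 0.6, 1.5, 0.4, 12.0) + rng.normal(0.0, 0.005, te.size)

      p_mono, _ = curve_fit(mono_exp, te, signal, p0=(1.0, 5.0))
      p_bi, _ = curve_fit(bi_exp, te, signal, p0=(0.5, 1.0, 0.5, 10.0),
                          bounds=(0, np.inf))
      print("mono-exponential T2      :", p_mono[1])
      print("bi-exponential T2s, T2l  :", p_bi[1], p_bi[3])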

  2. Automatic differentiation evaluated as a tool for rotorcraft design and optimization

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. The original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables; the new FORTRAN program calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; this method produces derivatives to machine accuracy at a cost that is comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact to within machine accuracy and, unlike derivatives obtained with finite-differencing techniques, do not depend on the selection of step size.
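
    The chain-rule mechanics that a source transformation tool like ADIFOR implements can be illustrated with a toy forward-mode (dual-number) differentiator, compared against a one-sided finite difference; this is only a conceptual sketch, not the ADIFOR approach or interface.

      import math

      class Dual:
          """Minimal forward-mode automatic differentiation via dual numbers:
          value `v` carries derivative `d` through the chain rule exactly."""
          def __init__(self, v, d=0.0):
              self.v, self.d = v, d
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.v + o.v, self.d + o.d)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
          __rmul__ = __mul__

      def dual_sin(x):
          return Dual(math.sin(x.v), math.cos(x.v) * x.d)

      def f(x):
          # example function f(x) = x*sin(x) + 3x
          return x * dual_sin(x) + 3.0 * x

      x0 = 1.2
      exact = f(Dual(x0, 1.0)).d                      # forward-mode AD derivative
      h = 1e-6
      fd = ((x0 + h) * math.sin(x0 + h) + 3.0 * (x0 + h)
            - (x0 * math.sin(x0) + 3.0 * x0)) / h     # one-sided finite difference
      print(exact, fd)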

  3. Prediction of the Thrust Performance and the Flowfield of Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Wang, T.-S.

    1990-01-01

    In an effort to improve the current solutions used in the design and analysis of liquid propulsive engines, a computational fluid dynamics (CFD) model capable of calculating the reacting flows from the combustion chamber, through the nozzle, to the external plume was developed. The Space Shuttle Main Engine (SSME), fired at sea level, was investigated as a sample case. The CFD model, FDNS, is a pressure-based, non-staggered grid, viscous/inviscid, ideal gas/real gas, reactive code. An adaptive upwinding differencing scheme is employed for the spatial discretization. The upwind scheme is based on fourth-order central differencing with fourth-order damping for smooth regions, and second-order central differencing with second-order damping for shock capturing. It is equipped with a CHMQGM equilibrium chemistry algorithm and a PARASOL finite-rate chemistry algorithm using the point implicit method. The computed flow results and performance compared well with those of other standard codes and with engine hot-fire test data. In addition, a transient nozzle flowfield calculation was also performed to demonstrate the ability of FDNS to capture the flow separation during the startup process.

  4. Ice Sheet Change Detection by Satellite Image Differencing

    NASA Technical Reports Server (NTRS)

    Bindschadler, Robert A.; Scambos, Ted A.; Choi, Hyeungu; Haran, Terry M.

    2010-01-01

    Differencing of digital satellite image pairs highlights subtle changes in near-identical scenes of Earth surfaces. Using the mathematical relationships relevant to photoclinometry, we examine the effectiveness of this method for the study of localized ice sheet surface topography changes using numerical experiments. We then test these results by differencing images of several regions in West Antarctica, including some where changes have previously been identified in altimeter profiles. The technique works well with coregistered images having low noise, high radiometric sensitivity, and near-identical solar illumination geometry. Clouds and frosts detract from resolving surface features. The ETM(plus) sensor on Landsat-7, ALI sensor on EO-1, and MODIS sensor on the Aqua and Terra satellite platforms all have potential for detecting localized topographic changes such as shifting dunes, surface inflation and deflation features associated with sub-glacial lake fill-drain events, or grounding line changes. Availability and frequency of MODIS images favor this sensor for wide application, and using it, we demonstrate both qualitative identification of changes in topography and quantitative mapping of slope and elevation changes.
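
    A minimal sketch of the differencing step itself is shown below: two coregistered arrays are differenced after a simple scene-wide brightness adjustment. The coregistration, radiometric calibration, and photoclinometric conversion of the difference into slope and elevation change described in the abstract are assumed to have been done elsewhere.

      import numpy as np

      def difference_images(img_t1, img_t2):
          """Difference two coregistered, radiometrically matched image arrays.

          Assumes the scenes have comparable brightness and near-identical
          solar geometry, as the abstract requires; positive values then
          indicate brightening between acquisitions.
          """
          a = np.asarray(img_t1, dtype=float)
          b = np.asarray(img_t2, dtype=float)
          # remove a simple scene-wide brightness offset before differencing
          return (b - b.mean()) - (a - a.mean())

      # toy example: a flat surface with a small "dune" that has shifted
      base = np.zeros((50, 50))
      scene1, scene2 = base.copy(), base.copy()
      scene1[20:25, 20:25] += 1.0
      scene2[20:25, 23:28] += 1.0
      change = difference_images(scene1, scene2)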

  5. Exponential stability of impulsive stochastic genetic regulatory networks with time-varying delays and reaction-diffusion

    DOE PAGES

    Cao, Boqiang; Zhang, Qimin; Ye, Ming

    2016-11-29

    We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.

  6. Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.

    NASA Astrophysics Data System (ADS)

    Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof

    2014-05-01

    A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSMs) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e., nadir view and ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracies at the sub-meter and sub-decimeter level, respectively. Airphoto datasets were processed with the SFM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.

  7. Rapid, Quantitative Assessment of Submerged Cultural Resource Degradation Using Repeat Video Surveys and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Mertes, J. R.; Zant, C. N.; Gulley, J. D.; Thomsen, T. L.

    2017-08-01

    Monitoring, managing and preserving submerged cultural resources (SCR) such as shipwrecks can involve time consuming detailed physical surveys, expensive side-scan sonar surveys, the study of photomosaics and even photogrammetric analysis. In some cases, surveys of SCR have produced 3D models, though these models have not typically been used to document patterns of site degradation over time. In this study, we report a novel approach for quantifying degradation and changes to SCR that relies on diver-acquired video surveys, generation of 3D models from data acquired at different points in time using structure from motion, and differencing of these models. We focus our study on the shipwreck S.S. Wisconsin, which is located roughly 10.2 km southeast of Kenosha, Wisconsin, in Lake Michigan. We created two digital elevation models of the shipwreck using surveys performed during the summers of 2006 and 2015 and differenced these models to map spatial changes within the wreck. Using orthomosaics and difference map data, we identified a change in degradation patterns. Degradation was anecdotally believed to be caused by inward collapse, but maps indicated a pattern of outward collapse of the hull structure, which has resulted in large scale shifting of material in the central upper deck. In addition, comparison of the orthomosaics with the difference map clearly shows movement of objects, degradation of smaller pieces and in some locations, an increase in colonization of mussels.

  8. On the gap between an empirical distribution and an exponential distribution of waiting times for price changes in a financial market

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya

    2007-03-01

    We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data indicate that trades in financial markets do not follow a Poisson process and the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
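
    The sketch below fits both a Weibull and an exponential distribution to a set of waiting times with scipy and compares their log-likelihoods; the data are synthetic placeholders rather than the exchange-rate tick data analyzed in the paper.

      import numpy as np
      from scipy import stats

      # Synthetic waiting times: drawn from a Weibull distribution with shape < 1,
      # i.e. clearly non-exponential (placeholders for the tick-by-tick data).
      waits = stats.weibull_min.rvs(0.6, scale=20.0, size=5000, random_state=0)

      # Fit both candidate distributions with the location fixed at zero.
      k, _, lam = stats.weibull_min.fit(waits, floc=0)
      _, mean_exp = stats.expon.fit(waits, floc=0)

      # Compare log-likelihoods: the larger value indicates the better fit.
      ll_weibull = np.sum(stats.weibull_min.logpdf(waits, k, loc=0, scale=lam))
      ll_expon = np.sum(stats.expon.logpdf(waits, loc=0, scale=mean_exp))
      print("Weibull shape k =", round(k, 3))
      print("logL Weibull vs exponential:", ll_weibull, ll_expon)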

  9. A Rapidly Prototyped Vegetation Dryness Index Evaluated for Wildfire Risk Assessment at Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Ross, Kenton; Graham, William; Prados, Don; Spruce, Joseph

    2007-01-01

    MVDI, which effectively involves the differencing of NDMI and NDVI, appears to display increased noise that is consistent with a differencing technique. This effect masks finer variations in vegetation moisture, preventing MVDI from fulfilling the requirement of giving decision makers insight into spatial variation of fire risk. MVDI shows dependencies on land cover and phenology which also argue against its use as a fire risk proxy in an area of diverse and fragmented land covers. The conclusion of the rapid prototyping effort is that MVDI should not be implemented for SSC decision support.
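
    Taking the abstract's description literally, i.e. MVDI as effectively the difference of NDMI and NDVI, a minimal sketch is given below; the band choices and reflectance values are assumptions for illustration. Differencing two noisy band ratios in this way roughly adds their noise variances (for independent errors), which is consistent with the increased noise noted above.

      import numpy as np

      def ndvi(nir, red):
          return (nir - red) / (nir + red)

      def ndmi(nir, swir):
          return (nir - swir) / (nir + swir)

      def mvdi(nir, red, swir):
          # Assumes the literal reading of the abstract: MVDI as the difference
          # of NDMI and NDVI.  Band arrays are surface reflectances.
          return ndmi(nir, swir) - ndvi(nir, red)

      # Illustrative reflectance values for a single pixel
      nir, red, swir = np.array([0.45]), np.array([0.08]), np.array([0.20])
      print(mvdi(nir, red, swir))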

  10. Relative motion using analytical differential gravity

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1988-01-01

    This paper presents a new approach to the computation of the motion of one satellite relative to another. The trajectory of the reference satellite is computed accurately subject to geopotential perturbations. This precise trajectory is used as a reference in computing the position of a nearby body, or bodies. The problem that arises in this approach is differencing nearly equal terms in the geopotential model, especially as the separation of the reference and nearby bodies approaches zero. By developing closed form expressions for differences in higher order and degree geopotential terms, the numerical problem inherent in the differencing approach is eliminated.

  11. SCISEAL: A CFD code for analysis of fluid dynamic forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh; Przekwas, Andrzej

    1994-01-01

    A viewgraph presentation is made of the objectives, capabilities, and test results of the computer code SCISEAL. Currently, the seal code has: a finite volume, pressure-based integration scheme; colocated variables with strong conservation approach; high-order spatial differencing, up to third-order; up to second-order temporal differencing; a comprehensive set of boundary conditions; a variety of turbulence models and surface roughness treatment; moving grid formulation for arbitrary rotor whirl; rotor dynamic coefficients calculated by the circular whirl and numerical shaker methods; and small perturbation capabilities to handle centered and eccentric seals.

  12. Three-dimensional simulation of vortex breakdown

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Salas, M. D.

    1990-01-01

    The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in conservation form, cast in a generalized coordinate system, is solved numerically to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.

  13. Computer program documentation: Raw-to-processed SINDA program (RTOPHS) user's guide

    NASA Technical Reports Server (NTRS)

    Damico, S. J.

    1980-01-01

    Use of the Raw-to-Processed SINDA (System Improved Numerical Differencing Analyzer) Program, RTOPHS, which provides a means of making the temperature prediction data on binary HSTFLO and HISTRY units generated by SINDA available to engineers in an easy-to-use format, is discussed. The program accomplishes this by reading the HISTRY unit and, according to user input instructions, extracting the desired times and temperature prediction data and writing them to a word-addressable drum file.

  14. Impacts of Ocean Waves on the Atmospheric Surface Layer: Simulations and Observations

    DTIC Science & Technology

    2008-06-06

    [Excerpt garbled in extraction.] The recoverable fragments state that the equations for energy and pressure described in Section 4 are solved using a mixed finite-difference/pseudospectral scheme with third-order Runge-Kutta time stepping, similar to that in the authors' DNS code (Sullivan and McWilliams 2002; Sullivan et al. 2000); the mixed scheme requires the solution of a Poisson equation, with a spatial discretization that is pseudospectral along coordinate lines and second-order finite difference in the vertical.

  15. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    NASA Astrophysics Data System (ADS)

    Cheong, Chin Wen

    2008-02-01

    This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.

  16. Category 3: Sound Generation by Interacting with a Gust

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2004-01-01

    The cascade-gust interaction problem is solved employing a time-domain approach. The purpose of this problem is to test the ability of a CFD/CAA code to accurately predict the unsteady aerodynamic and aeroacoustic response of a single airfoil to a two-dimensional, periodic vortical gust. The nonlinear, time-dependent Euler equations are solved using higher-order spatial differencing and time marching techniques. The solutions indicate the generation and propagation of the expected mode orders for the given configuration and flow conditions. The blade passing frequency (BPF) is cut off for this cascade, while the higher harmonic, 2BPF and 3BPF, modes are cut on.

  17. Hand-held UXO Discriminator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperikova, E.; Smith, J.T.; Kappler, K.N.

    2010-04-01

    With prior funding (UX-1225, MM-0437, and MM-0838), we have successfully designed and built a cart-mounted Berkeley UXO Discriminator (BUD) and demonstrated its performance at various test sites (e.g., Gasperikova et al., 2007, 2009). It is a multi-transmitter multi-receiver active electromagnetic system that is able to discriminate UXO from scrap at a single measurement position, hence eliminating the requirement for very accurate sensor location. The cart-mounted system comprises three orthogonal transmitters and eight pairs of differenced receivers (Smith et al., 2007). Receiver coils are located on symmetry lines through the center of the system and see identical fields during the on-time of the pulse in all of the transmitter coils. They can then be wired in opposition to produce zero output during the on-time of the pulses in the three orthogonal transmitters. Moreover, this configuration dramatically reduces noise in the measurements by canceling the background electromagnetic fields (these fields are uniform over the scale of the receiver array and are consequently nulled by the differencing operation) and by canceling the noise contributed by the tilt of the receivers in the Earth's magnetic field, and therefore greatly enhances receiver sensitivity to the gradients of the target.

  18. An Exponential Growth Learning Trajectory: Students' Emerging Understanding of Exponential Growth through Covariation

    ERIC Educational Resources Information Center

    Ellis, Amy B.; Ozgur, Zekiye; Kulow, Torrey; Dogan, Muhammed F.; Amidon, Joel

    2016-01-01

    This article presents an Exponential Growth Learning Trajectory (EGLT), a trajectory identifying and characterizing middle grade students' initial and developing understanding of exponential growth as a result of an instructional emphasis on covariation. The EGLT explicates students' thinking and learning over time in relation to a set of tasks…

  19. Compact exponential product formulas and operator functional derivative

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.

  20. On improving the iterative convergence properties of an implicit approximate-factorization finite difference algorithm. [considering transonic flow

    NASA Technical Reports Server (NTRS)

    Desideri, J. A.; Steger, J. L.; Tannehill, J. C.

    1978-01-01

    The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps can be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.

  1. Efficient entanglement distribution over 200 kilometers.

    PubMed

    Dynes, J F; Takesue, H; Yuan, Z L; Sharpe, A W; Harada, K; Honjo, T; Kamada, H; Tadanaga, O; Nishida, Y; Asobe, M; Shields, A J

    2009-07-06

    Here we report the first demonstration of entanglement distribution over a record distance of 200 km which is of sufficient fidelity to realize secure communication. In contrast to previous entanglement distribution schemes, we use detection elements based on practical avalanche photodiodes (APDs) operating in a self-differencing mode. These APDs are low-cost, compact and easy to operate requiring only electrical cooling to achieve high single photon detection efficiency. The self-differencing APDs in combination with a reliable parametric down-conversion source demonstrate that entanglement distribution over ultra-long distances has become both possible and practical. Consequently the outlook is extremely promising for real world entanglement-based communication between distantly separated parties.

  2. On the geodetic applications of simultaneous range-differencing to LAGEOS

    NASA Technical Reports Server (NTRS)

    Pablis, E. C.

    1982-01-01

    The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network-satellite pass configurations. Least-squares approximation with monomials and with Chebyshev polynomials is compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observation show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but further reduces the effects of model biases on the results as compared with a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.

  3. Proceedings of the Conference on the Design of Experiments in Army Research Development and Testing (32nd)

    DTIC Science & Technology

    1987-06-01

    [Excerpt garbled in extraction.] The recoverable fragments discuss the number of series, among the 63 studied, that were identified as a particular ARIMA form and were "best" modeled by a particular technique (illustrated in Figure 1); they note that the integrated autoregressive-moving average model, denoted ARIMA(p, d, q), results from combining d-th order differencing with an autoregressive-moving average process, and they list session topics including Design of Experiments, Data Analysis and Modeling, Theory and Probabilistic Inference, Fuzzy Statistics, Forecasting and Prediction, and Small Sample inference.

  4. CFD in the 1980's from one point of view

    NASA Technical Reports Server (NTRS)

    Lomax, Harvard

    1991-01-01

    The present interpretive treatment of the development history of CFD in the 1980s gives attention to advancements in such algorithmic techniques as flux Jacobian-based upwind differencing, total variation-diminishing and essentially nonoscillatory schemes, multigrid methods, unstructured grids, and nonrectangular structured grids. At the same time, computational turbulence research gave attention to turbulence modeling on the bases of increasingly powerful supercomputers and meticulously constructed databases. The major future developments in CFD will encompass such capabilities as structured and unstructured three-dimensional grids.

  5. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  6. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  7. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  8. Compact exponential product formulas and operator functional derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, M.

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.

  9. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    NASA Astrophysics Data System (ADS)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation for coupling the solid-earth to an ice-sheet compartment in an earth-system model, the dependency of the initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.365-246.X.2005.02553.x

  10. Two-stage unified stretched-exponential model for time-dependence of threshold voltage shift under positive-bias-stresses in amorphous indium-gallium-zinc oxide thin-film transistors

    NASA Astrophysics Data System (ADS)

    Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In

    2017-08-01

    In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time dependence of the threshold voltage shift (ΔVTH) under long-term positive-bias stresses compared to the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔVTH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
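
    For reference, the sketch below evaluates the traditional single-stage stretched-exponential threshold-voltage-shift model that the paper compares against; the two-stage unified model itself is not reproduced here, and the parameter values are illustrative only.

      import numpy as np

      def stretched_exponential_dvth(t, dv_max, tau, beta):
          """Traditional stretched-exponential threshold-voltage shift:
          dVth(t) = dv_max * (1 - exp(-(t/tau)**beta)).

          This is the single-stage model the paper compares against; the
          parameters below are illustrative, not fitted device values.
          """
          return dv_max * (1.0 - np.exp(-(t / tau) ** beta))

      t = np.logspace(0, 5, 6)                 # stress times from 1 s to 1e5 s
      print(stretched_exponential_dvth(t, dv_max=2.0, tau=1.0e4, beta=0.4))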

  11. Multigrid for hypersonic viscous two- and three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.

    1991-01-01

    The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.

  12. Global Positioning System Time Transfer Receiver (GPS/TTR) prototype design and initial test evaluation

    NASA Technical Reports Server (NTRS)

    Oaks, J.; Frank, A.; Falvey, S.; Lister, M.; Buisson, J.; Wardrip, C.; Warren, H.

    1982-01-01

    Time transfer equipment and techniques used with the Navigation Technology Satellites were modified and extended for use with the Global Positioning System (GPS) satellites. A prototype receiver was built and field tested. The receiver uses the GPS L1 link at 1575 MHz with C/A code only to resolve a measured range to the satellite. A theoretical range is computed from the satellite ephemeris transmitted in the data message and the user's coordinates. Results of user offset from GPS time are obtained by differencing the measured and theoretical ranges and applying calibration corrections. Results of the first field test evaluation of the receiver are presented.

  13. Multi-transmitter multi-receiver null coupled systems for inductive detection and characterization of metallic objects

    NASA Astrophysics Data System (ADS)

    Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen

    2007-03-01

    Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receivers pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument sitting, eliminating effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.

  14. Exponentially decaying interaction potential of cavity solitons

    NASA Astrophysics Data System (ADS)

    Anbardan, Shayesteh Rahmani; Rimoldi, Cristina; Kheradmand, Reza; Tissoni, Giovanna; Prati, Franco

    2018-03-01

    We analyze the interaction of two cavity solitons in an optically injected vertical cavity surface emitting laser above threshold. We show that they experience an attractive force even when their distance is much larger than their diameter, and eventually they merge. Since the merging time depends exponentially on the initial distance, we suggest that the attraction could be associated with an exponentially decaying interaction potential, similarly to what is found for hydrophobic materials. We also show that the merging time is simply related to the characteristic times of the laser, photon lifetime, and carrier lifetime.

  15. A numerical study of the axisymmetric Couette-Taylor problem using a fast high-resolution second-order central scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kupferman, R.

    The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.

  16. Response functions of free mass gravitational wave antennas

    NASA Technical Reports Server (NTRS)

    Estabrook, F. B.

    1985-01-01

    The work of Gursel, Linsay, Spero, Saulson, Whitcomb and Weiss (1984) on the response of a free-mass interferometric antenna is extended. Starting from first principles, the earlier work derived the response of a 2-arm gravitational wave antenna to plane polarized gravitational waves. Equivalent formulas (generalized slightly to allow for arbitrary elliptical polarization) are obtained by a simple differencing of the '3-pulse' Doppler response functions of two 1-arm antennas. A '4-pulse' response function is found, with quite complicated angular dependences for arbitrary incident polarization. The differencing method can as readily be used to write exact response functions ('3n+1 pulse') for antennas having multiple passes or more arms.

  17. Tracking and Data Relay Satellite System (TDRSS) Support of User Spacecraft without TDRSS Transponders

    NASA Technical Reports Server (NTRS)

    Jackson, James A.; Marr, Greg C.; Maher, Michael J.

    1995-01-01

    NASA GSFC VNS TSG personnel have proposed the use of TDRSS to obtain telemetry and/or S-band one-way return Doppler tracking data for spacecraft which do not have TDRSS-compatible transponders and therefore were never considered candidates for TDRSS support. For spacecraft with less stable local oscillators (LO), one-way return Doppler tracking data is typically of poor quality. It has been demonstrated using UARS, WIND, and NOAA-J tracking data that the simultaneous use of two TDRSS spacecraft can yield differenced one-way return Doppler data of high quality which is usable for orbit determination by differencing away the effects of oscillator instability.

  18. New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays.

    PubMed

    Zhang, Guodong; Zeng, Zhigang; Hu, Junhao

    2018-01-01

    This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponentially attractive sets and positive invariant sets are also presented here. In addition, the newly proposed results complement and extend the earlier publications on conventional or memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of the obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Three-dimensional time dependent computation of turbulent flow

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Reynolds, W. C.; Ferziger, J. H.

    1975-01-01

    The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large scale field. This gives rise to additional second order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth order differencing scheme in space and a second order Adams-Bashforth predictor for explicit time stepping. The results are compared with experiments, and statistical information is extracted from the computer-generated data.
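
    The space/time combination described here (fourth-order spatial differencing with a second-order Adams-Bashforth predictor) can be illustrated on a much simpler problem. The sketch below applies it to one-dimensional periodic advection; the test equation, grid, wave speed, and step sizes are my own assumptions and merely stand in for the full turbulence computation.

```python
import numpy as np

# 1-D periodic advection u_t + c*u_x = 0, used only to illustrate the
# fourth-order central space differencing + Adams-Bashforth-2 combination.
N = 128
c = 1.0
dx = 2 * np.pi / N
dt = 0.01
x = np.arange(N) * dx
u = np.sin(x)

def rhs(u):
    # Fourth-order central difference approximation of u_x on a periodic grid.
    ux = (-np.roll(u, -2) + 8 * np.roll(u, -1)
          - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)
    return -c * ux

f_old = rhs(u)
u = u + dt * f_old                  # one Euler step to start the two-step method
for _ in range(1000):
    f_new = rhs(u)
    u, f_old = u + dt * (1.5 * f_new - 0.5 * f_old), f_new   # Adams-Bashforth 2

# After 1001 steps the exact solution is sin(x - c*t); report the max error.
print(np.max(np.abs(u - np.sin(x - c * (1001 * dt)))))
```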

  20. The computation of dynamic fractional difference parameter for S&P500 index

    NASA Astrophysics Data System (ADS)

    Pei, Tan Pei; Cheong, Chin Wen; Galagedera, Don U. A.

    2015-10-01

    This study evaluates the time-varying long memory behavior of the S&P500 volatility index using dynamic fractional difference parameters. The time-varying fractional difference parameter captures the dynamics of long memory in the volatility series before and after the subprime mortgage crisis triggered in the U.S. The results show an increasing trend in the S&P500 long memory volatility for the pre-crisis period. However, the onset of the Lehman Brothers event reduces the predictability of the volatility series, followed by a slight fluctuation of the fractional differencing parameters. After that, the U.S. financial market becomes more informationally efficient and follows a non-stationary random process.
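
    The fractional difference operator (1 - L)^d behind such a parameter d expands into binomial weights that are applied to the lagged series. A minimal sketch of that expansion follows; it assumes a fixed d purely for illustration, whereas the study estimates a time-varying parameter.

```python
import numpy as np

def frac_diff_weights(d, n_lags):
    """Weights of (1 - L)^d up to n_lags, via the recursive binomial expansion."""
    w = [1.0]
    for k in range(1, n_lags + 1):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d):
    """Apply the fractional difference of order d to a 1-D series."""
    series = np.asarray(series, dtype=float)
    w = frac_diff_weights(d, len(series) - 1)
    out = np.empty_like(series)
    for t in range(len(series)):
        # Convolve the weights with the series values up to time t.
        out[t] = np.dot(w[:t + 1], series[t::-1])
    return out

# d = 0.4 is just an illustrative value of the long-memory parameter.
print(frac_diff([1.0, 1.1, 1.3, 1.2, 1.4], d=0.4))
```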

  1. Performance of time-series methods in forecasting the demand for red blood cell transfusion.

    PubMed

    Pereira, Arturo

    2004-05-01

    Planning of future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test for the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rate for the three methods was 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
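
    A Holt-Winters seasonal forecast of the kind compared above can be sketched in a few lines. The example below uses statsmodels' ExponentialSmoothing as one readily available implementation on a synthetic monthly series; the data, trend, seasonality, and the +/- 10 percent accuracy check are illustrative stand-ins, not the paper's data or configuration.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand with trend and yearly seasonality (illustrative only).
rng = np.random.default_rng(0)
months = np.arange(120)
demand = 900 + 2 * months + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 30, 120)

train, test = demand[:108], demand[108:]
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(12)

# Fraction of forecasted months within +/-10% of the "real" demand,
# mirroring the accuracy measure used in the abstract.
within_10pct = np.mean(np.abs(forecast - test) / test <= 0.10)
print(f"months within +/-10%: {within_10pct:.0%}")
```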

  2. Time-ordered exponential on the complex plane and Gell-Mann-Low formula as a mathematical theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Futakuchi, Shinichiro; Usui, Kouta

    2016-04-15

    The time-ordered exponential representation of a complex time evolution operator in the interaction picture is studied. Using the complex time evolution, we prove the Gell-Mann-Low formula under certain abstract conditions, in a mathematically rigorous manner. We apply the abstract results to quantum electrodynamics with cutoffs.

  3. Exponential propagators for the Schrödinger equation with a time-dependent potential.

    PubMed

    Bader, Philipp; Blanes, Sergio; Kopylov, Nikita

    2018-06-28

    We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators that are tailored for Hamiltonians of the said structure, showing a considerably improved performance. We obtain new fourth- and sixth-order CF propagators as well as a novel sixth-order propagator that incorporates a double commutator that only depends on coordinates, so this term can be considered as cost-free. The algorithms require the computation of the action of exponentials on a vector similar to the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.
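
    The building block shared by these propagators is the action of a matrix exponential on a state vector, which the paper evaluates with the Lanczos method. The sketch below illustrates an exponential midpoint step using SciPy's Krylov-based expm_multiply as a stand-in for a hand-rolled Lanczos routine; the toy tridiagonal Hamiltonian, grid, and time step are my own assumptions, not the paper's examples.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Toy 1-D Hamiltonian H(t) = kinetic part + time-dependent potential (illustrative).
n, dx, dt = 200, 0.1, 0.01
kinetic = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) * (-0.5 / dx**2)
x = (np.arange(n) - n / 2) * dx

def hamiltonian(t):
    # Harmonic potential with a weak sinusoidal modulation in time.
    return kinetic + diags(0.5 * x**2 * (1 + 0.1 * np.sin(t)))

# Exponential midpoint step: psi(t+dt) ~= exp(-i*dt*H(t+dt/2)) psi(t),
# with the exponential applied to the vector by a Krylov method.
psi = np.exp(-x**2).astype(complex)
psi /= np.linalg.norm(psi)
t = 0.0
for _ in range(100):
    psi = expm_multiply(-1j * dt * hamiltonian(t + dt / 2), psi)
    t += dt

print(np.linalg.norm(psi))  # norm stays ~1 for this unitary propagation
```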

  4. How exponential are FREDs?

    NASA Astrophysics Data System (ADS)

    Schaefer, Bradley E.; Dyson, Samuel E.

    1996-08-01

    A common Gamma-Ray Burst light-curve shape is the "FRED" or "fast-rise exponential-decay." But how exponential is the tail? Are they merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex/multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.

  5. The GFZ real-time GNSS precise positioning service system and its adaption for COMPASS

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Zhang, Hongping; Nischan, Thomas; Wickert, Jens

    2013-03-01

    Motivated by the IGS real-time Pilot Project, GFZ has been developing its own real-time precise positioning service for various applications. An operational system at GFZ is now broadcasting real-time orbits, clocks, global ionospheric model, uncalibrated phase delays and regional atmospheric corrections for standard PPP, PPP with ambiguity fixing, single-frequency PPP and regional augmented PPP. To avoid developing various algorithms for different applications, we proposed a uniform algorithm and implemented it into our real-time software. In the new processing scheme, we employed un-differenced raw observations with atmospheric delays as parameters, which are properly constrained by real-time derived global ionospheric model or regional atmospheric corrections and by the empirical characteristics of the atmospheric delay variation in time and space. The positioning performance in terms of convergence time and ambiguity fixing depends mainly on the quality of the received atmospheric information and the spatial and temporal constraints. The un-differenced raw observation model can not only integrate PPP and NRTK into a seamless positioning service, but also syncretize these two techniques into a unique model and algorithm. Furthermore, it is suitable for both dual-frequency and single-frequency receivers. Based on the real-time data streams from IGS, EUREF and SAPOS reference networks, we can provide services of global precise point positioning (PPP) with 5-10 cm accuracy, PPP with ambiguity-fixing of 2-5 cm accuracy, PPP using a single-frequency receiver with accuracy better than 50 cm and PPP with regional augmentation for instantaneous ambiguity resolution of 1-3 cm accuracy. We adapted the system for current COMPASS to provide PPP service. COMPASS observations from a regional network of nine stations are used for precise orbit determination and clock estimation in simulated real-time mode, and the orbit and clock products are applied for real-time precise point positioning. The simulated real-time PPP service confirms that real-time positioning services with accuracy at the dm-level and even cm-level are achievable with COMPASS only.

  6. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days respectively.
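
    The core step, fitting an exponential density to the wait times and checking the fit, is easy to sketch. The example below does this on synthetic wait times drawn from an exponential with a 7-day mean; the data are purely illustrative and not the Kp record analyzed in the paper.

```python
import numpy as np
from scipy import stats

# Synthetic wait times (days) between threshold-exceeding storms, illustrative only.
rng = np.random.default_rng(1)
waits = rng.exponential(scale=7.12, size=500)

# Fit an exponential density with the origin fixed at zero and test the fit.
loc, scale = stats.expon.fit(waits, floc=0)
ks = stats.kstest(waits, "expon", args=(0, scale))
print(f"mean wait ~ {scale:.2f} days, KS p-value = {ks.pvalue:.2f}")
```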

  7. Progress in multi-dimensional upwind differencing

    NASA Technical Reports Server (NTRS)

    Vanleer, Bram

    1992-01-01

    Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.

  8. Differenced Range Versus Integrated Doppler (DRVID) ionospheric analysis of metric tracking in the Tracking and Data Relay Satellite System (TDRSS)

    NASA Technical Reports Server (NTRS)

    Radomski, M. S.; Doll, C. E.

    1995-01-01

    The Differenced Range (DR) Versus Integrated Doppler (ID) (DRVID) method exploits the opposition of high-frequency signal versus phase retardation by plasma media to obtain information about the plasma's corruption of simultaneous range and Doppler spacecraft tracking measurements. Thus, DR Plus ID (DRPID) is an observable independent of plasma refraction, while actual DRVID (DR minus ID) measures the time variation of the path electron content independently of spacecraft motion. The DRVID principle has been known since 1961. It has been used to observe interplanetary plasmas, is implemented in Deep Space Network tracking hardware, and has recently been applied to single-frequency Global Positioning System user navigation. This paper discusses exploration at the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) of DRVID synthesized from simultaneous two-way range and Doppler tracking for low Earth-orbiting missions supported by the Tracking and Data Relay Satellite System (TDRSS). The paper presents comparisons of actual DR and ID residuals and relates those comparisons to predictions of the Bent model. The complications due to the pilot tone influence on relayed Doppler measurements are considered. Further use of DRVID to evaluate ionospheric models is discussed, as is use of DRPID in reducing dependence on ionospheric modeling in orbit determination.
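
    The sum and difference combinations described above can be illustrated with synthetic data: the plasma term enters differenced range and integrated Doppler with opposite signs, so DR plus ID cancels it while DR minus ID isolates it. The numbers, the sinusoidal plasma term, and the factor of one half (a scaling convention for readability) are illustrative only.

```python
import numpy as np

# Illustrative: geometric range change plus a plasma term that enters
# differenced range and integrated Doppler with opposite signs.
t = np.linspace(0, 600, 61)               # seconds
geometric = 5.0 * t                        # m, spacecraft motion along the line of sight
plasma = 0.3 * np.sin(2 * np.pi * t / 300)  # m, path electron-content variation

dr = geometric + plasma   # differenced range: retarded by the plasma
id_ = geometric - plasma  # integrated Doppler: advanced by the plasma

drpid = 0.5 * (dr + id_)  # plasma-free observable (recovers the geometric change)
drvid = 0.5 * (dr - id_)  # pure plasma term, independent of spacecraft motion

print(np.allclose(drpid, geometric), np.allclose(drvid, plasma))
```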

  9. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions

    NASA Technical Reports Server (NTRS)

    Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.; hide

    2008-01-01

    Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.

  10. Adaptive Kalman filter based on variance component estimation for the prediction of ionospheric delay in aiding the cycle slip repair of GNSS triple-frequency signals

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Yao, Yifei; Wang, Qianxin

    2018-01-01

    In order to incorporate the time smoothness of ionospheric delay to aid the cycle slip detection, an adaptive Kalman filter is developed based on variance component estimation. The correlations between measurements at neighboring epochs are fully considered in developing a filtering algorithm for colored measurement noise. Within this filtering framework, epoch-differenced ionospheric delays are predicted. Using this prediction, the potential cycle slips are repaired for triple-frequency signals of global navigation satellite systems. Cycle slips are repaired in a stepwise manner, i.e., first for two extra-wide-lane combinations and then for the third frequency. In the estimation for the third frequency, a stochastic model is followed in which the correlations between the ionospheric delay prediction errors and the errors in the epoch-differenced phase measurements are considered. The implementation details of the proposed method are tabulated. A real BeiDou Navigation Satellite System data set is used to check the performance of the proposed method. Most cycle slips, whether trivial or nontrivial, can be estimated in float values with satisfactorily high accuracy and their integer values can hence be correctly obtained by simple rounding. To be more specific, all manually introduced nontrivial cycle slips are correctly repaired.
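
    The prediction of a slowly varying ionospheric delay by a Kalman filter can be sketched with a scalar random-walk model. The example below is a minimal illustration only: the noise variances are placeholders, and the variance-component adaptation and colored-noise handling that are central to the paper are not reproduced.

```python
import numpy as np

# Scalar random-walk Kalman filter for a slowly varying ionospheric delay.
# Noise variances are illustrative; the paper tunes them by variance
# component estimation and accounts for time-correlated measurement noise.
rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0, 0.002, 200))   # slowly varying delay (m)
meas = truth + rng.normal(0, 0.01, 200)        # noisy geometry-free observable

q, r = 0.002**2, 0.01**2   # process and measurement variances
x, p = 0.0, 1.0            # state estimate and its variance
pred = np.empty_like(meas)
for k, z in enumerate(meas):
    p = p + q                  # predict (random-walk dynamics)
    pred[k] = x                # predicted delay used to bridge the epochs
    g = p / (p + r)            # Kalman gain
    x = x + g * (z - x)        # update with the new measurement
    p = (1 - g) * p

print(f"rms prediction error: {np.sqrt(np.mean((pred - truth)**2)):.4f} m")
```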

  11. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm, suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting, and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time stepping scheme to accelerate convergence to steady-state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of performance, as well as demonstration of flexibility.

  12. Global exponential stability for switched memristive neural networks with time-varying delays.

    PubMed

    Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia

    2016-08-01

    This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
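
    The core idea of representing a noisy exponential in a low-dimensional Legendre basis can be sketched with NumPy's Legendre utilities. The example below maps the time axis onto [-1, 1], projects a noisy exponential onto a low-order Legendre basis, and reconstructs a denoised version; the expansion order, noise level, and time constant are illustrative, and the parameter-retrieval step of the paper is not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 500)
signal = 2.0 * np.exp(-t / 1.3)
noisy = signal + rng.normal(0, 0.2, t.size)

# Map time onto [-1, 1] and expand in a low-dimensional Legendre basis.
x = 2 * (t - t.min()) / (t.max() - t.min()) - 1
coeffs = L.legfit(x, noisy, deg=8)     # projection onto Legendre polynomials
smoothed = L.legval(x, coeffs)         # reconstruction acts as a lowpass filter

print(f"rms error raw: {np.std(noisy - signal):.3f}, "
      f"Legendre-filtered: {np.sqrt(np.mean((smoothed - signal)**2)):.3f}")
```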

  14. Can Mapping Algorithms Based on Raw Scores Overestimate QALYs Gained by Treatment? A Comparison of Mappings Between the Roland-Morris Disability Questionnaire and the EQ-5D-3L Based on Raw and Differenced Score Data.

    PubMed

    Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E

    2017-05-01

    Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predictions in the EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent from improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.

  15. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in-depth in view of that they have good modelling performance for some applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is quite necessary to utilize a discrete-time model which is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain the sufficient conditions on its global exponential periodicity and exponential stability. Simulation results of several numerical examples are delineated to illustrate the theoretical results and an application on associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to system parameters are also investigated.

  17. Exponential synchronization of delayed neutral-type neural networks with Lévy noise under non-Lipschitz condition

    NASA Astrophysics Data System (ADS)

    Ma, Shuo; Kang, Yanmei

    2018-04-01

    In this paper, the exponential synchronization of stochastic neutral-type neural networks with time-varying delay and Lévy noise under non-Lipschitz condition is investigated for the first time. Using the general Itô's formula and the nonnegative semi-martingale convergence theorem, we derive general sufficient conditions of two kinds of exponential synchronization for the drive system and the response system with adaptive control. Numerical examples are presented to verify the effectiveness of the proposed criteria.

  18. Correlating the stretched-exponential and super-Arrhenius behaviors in the structural relaxation of glass-forming liquids.

    PubMed

    Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg

    2011-04-20

    Following the report of a single-exponential activation behavior behind the super-Arrhenius structural relaxation of glass-forming liquids in our preceding paper, we find that the non-exponentiality in the structural relaxation of glass-forming liquids is straightforwardly determined by the relaxation time, and could be calculated from the measured relaxation data. Comparisons between the calculated and measured non-exponentialities for typical glass-forming liquids, from fragile to intermediate, convincingly support the present analysis. Hence the origin of the non-exponentiality and its correlation with liquid fragility become clearer.

  19. Thermal modeling of a cryogenic turbopump for space shuttle applications.

    NASA Technical Reports Server (NTRS)

    Knowles, P. J.

    1971-01-01

    Thermal modeling of a cryogenic pump and a hot-gas turbine in a turbopump assembly proposed for the Space Shuttle is described in this paper. A model, developed by identifying the heat-transfer regimes and incorporating their dependencies into a turbopump system model, included heat transfer for two-phase cryogen, hot-gas (200 R) impingement on turbine blades, gas impingement on rotating disks and parallel plate fluid flow. The 'thermal analyzer' program employed to develop this model was the TRW Systems Improved Numerical Differencing Analyzer (SINDA). This program uses finite differencing with lumped parameter representation for each node. Also discussed are model development, simulations of turbopump startup/shutdown operations, and the effects of varying turbopump parameters on the thermal performance.

  20. Analysis of airfoil transitional separation bubbles

    NASA Technical Reports Server (NTRS)

    Davis, R. L.; Carter, J. E.

    1984-01-01

    A previously developed local inviscid-viscous interaction technique for the analysis of airfoil transitional separation bubbles, ALESEP (Airfoil Leading Edge Separation) has been modified to utilize a more accurate windward finite difference procedure in the reversed flow region, and a natural transition/turbulence model has been incorporated for the prediction of transition within the separation bubble. Numerous calculations and experimental comparisons are presented to demonstrate the effects of the windward differencing scheme and the natural transition/turbulence model. Grid sensitivity and convergence capabilities of this inviscid-viscous interaction technique are briefly addressed. A major contribution of this report is that with the use of windward differencing, a second, counter-rotating eddy has been found to exist in the wall layer of the primary separation bubble.

  1. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    PubMed

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
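
    The rescaling step itself is short to write down: each ISI is rescaled by the model's integrated intensity, and an accurate model yields unit-mean exponential rescaled times that can be checked with a KS test. The sketch below does this in continuous time for a constant-rate model on synthetic spikes; the rate and data are illustrative, and the discrete-time corrections that are the subject of the paper are not included.

```python
import numpy as np
from scipy import stats

# Synthetic spike train from a constant-rate Poisson process (illustrative).
rng = np.random.default_rng(4)
rate = 20.0                                   # spikes/s assumed by the "model"
isis = rng.exponential(1.0 / rate, size=1000)
spike_times = np.cumsum(isis)

# Time rescaling: integrate the model intensity over each ISI.
# For a constant rate the integral is simply rate * ISI.
rescaled = rate * np.diff(np.concatenate(([0.0], spike_times)))

# If the model is correct, the rescaled ISIs are unit-mean exponential.
ks = stats.kstest(rescaled, "expon")
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.2f}")
```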

  3. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  4. A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems

    NASA Astrophysics Data System (ADS)

    Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.

    2010-09-01

    We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. Residence time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates largely from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusions of energy and the phase of the oscillations of the reaction coordinate. Rapid diffusions of energy and the phase generally give rise to the exponential decay of residence time distribution, while slow diffusions give rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.

  5. New exponential synchronization criteria for time-varying delayed neural networks with discontinuous activations.

    PubMed

    Cai, Zuowei; Huang, Lihong; Zhang, Lingling

    2015-05-01

    This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks not only with continuous activations but also with discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results have a leading significance in the design of synchronized neural network circuits involving discontinuous factors and time-varying delays. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Exponential Acceleration of VT Seismicity in the Years Prior to Major Eruptions of Basaltic Volcanoes

    NASA Astrophysics Data System (ADS)

    Lengline, O.; Marsan, D.; Got, J.; Pinel, V.

    2007-12-01

    The evolution of the seismicity at three basaltic volcanoes (Kilauea, Mauna-Loa and Piton de la Fournaise) is analysed during phases of magma accumulation. We show that the VT seismicity during these time-periods is characterized by an exponential increase at long time scales (years). Such an exponential acceleration can be explained by a model of seismicity forced by the replenishment of a magmatic reservoir. The increase in stress in the edifice caused by this replenishment is modeled. This stress history leads to a cumulative number of damage events, i.e., VT earthquakes, following the same exponential increase as found for seismicity. A long-term seismicity precursor is thus detected at basaltic volcanoes. Although this precursory signal is not able to predict the onset times of future eruptions (as no diverging point is present in the model), it may help mitigate volcanic hazards.

  7. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they wait for the next busy period to begin. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Also, numerical illustrations are added to visualize the effect of various parameters.

  8. Low Dissipative High Order Shock-Capturing Methods Using Characteristic-Based Filters

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sandham, N. D.; Djomehri, M. J.

    1998-01-01

    An approach which closely maintains the non-dissipative nature of classical fourth or higher-order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM-like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.

  9. Low Dissipative High Order Shock-Capturing Methods using Characteristic-Based Filters

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sandham, N. D.; Djomehri, M. J.

    1998-01-01

    An approach which closely maintains the non-dissipative nature of classical fourth or higher-order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM-like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.

  10. Thermal instability in post-flare plasmas

    NASA Technical Reports Server (NTRS)

    Antiochos, S. K.

    1976-01-01

    The cooling of post-flare plasmas is discussed and the formation of loop prominences is explained as due to a thermal instability. A one-dimensional model was developed for active loop prominences. Only the motion and heat fluxes parallel to the existing magnetic fields are considered. The relevant size scales and time scales are such that single-fluid MHD equations are valid. The effects of gravity, the geometry of the field and conduction losses to the chromosphere are included. A computer code was constructed to solve the model equations. Basically, the system is treated as an initial value problem (with certain boundary conditions at the chromosphere-corona transition region), and a two-step time differencing scheme is used.

  11. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904

  12. On the Time-Dependent Analysis of Gamow Decay

    ERIC Educational Resources Information Center

    Durr, Detlef; Grummt, Robert; Kolb, Martin

    2011-01-01

    Gamow's explanation of the exponential decay law uses complex "eigenvalues" and exponentially growing "eigenfunctions". This raises the question of how Gamow's description fits into the quantum mechanical description of nature, which is based on real eigenvalues and square integrable wavefunctions. Observing that the time evolution of any…

  13. [Application of exponential smoothing method in prediction and warning of epidemic mumps].

    PubMed

    Shi, Yun-ping; Ma, Jia-qi

    2010-06-01

    The aim was to analyze the daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. Mumps in 2008 was predicted and warned against by calculating a 7-day moving summation of the daily reported mumps cases during 2005-2008 to remove the weekend effect, and by applying exponential smoothing to the data from 2005 to 2007. The Holt-Winters exponential smoothing model performed well: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timeliness rate was 80%. It is practicable to use the exponential smoothing method to provide early warning of epidemic mumps.
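
    The two preprocessing and smoothing steps described above (a 7-day moving summation followed by exponential smoothing) are easy to sketch. In the example below the daily counts, the smoothing constant, and the 25% alert rule are illustrative placeholders, not the paper's actual data or warning thresholds.

```python
import numpy as np

# Illustrative daily mumps counts with an artificial outbreak; none of the
# numbers, thresholds, or the alert rule come from the paper.
rng = np.random.default_rng(5)
daily_cases = rng.poisson(12, size=365)
daily_cases[200:230] += rng.poisson(10, size=30)

# 7-day moving summation to remove the weekend effect in daily reporting.
weekly_sum = np.convolve(daily_cases, np.ones(7), mode="valid")

# Simple exponential smoothing gives a one-step-ahead baseline; flag windows
# where the observed 7-day sum exceeds the baseline by more than 25%.
alpha, level, alerts = 0.3, weekly_sum[0], []
for day, y in enumerate(weekly_sum[1:], start=1):
    if y > 1.25 * level:
        alerts.append(day)
    level = alpha * y + (1 - alpha) * level   # exponential smoothing update

print(f"first alert at moving-window index {alerts[0]}" if alerts else "no alert")
```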

  14. A nonstationary Poisson point process describes the sequence of action potentials over long time scales in lateral-superior-olive auditory neurons.

    PubMed

    Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C

    1994-01-01

    The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)

  15. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.

    2017-01-01

    Background: Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods: Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The patterns of the total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results: The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while the “none of the time” response was not related to this exponential pattern. Discussion: The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560

  16. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.

  17. On the effect of using the Shapiro filter to smooth winds on a sphere

    NASA Technical Reports Server (NTRS)

    Takacs, L. L.; Balgovind, R. C.

    1984-01-01

    Spatial differencing schemes which are neither enstrophy conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that using a 16th-order Shapiro (1979) filter on the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.

  18. Black hole evolution by spectral methods

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence E.; Scheel, Mark A.; Teukolsky, Saul A.; Carlson, Eric D.; Cook, Gregory B.

    2000-10-01

    Current methods of evolving a spacetime containing one or more black holes are plagued by instabilities that prohibit long-term evolution. Some of these instabilities may be due to the numerical method used, traditionally finite differencing. In this paper, we explore the use of a pseudospectral collocation (PSC) method for the evolution of a spherically symmetric black hole spacetime in one dimension using a hyperbolic formulation of Einstein's equations. We demonstrate that our PSC method is able to evolve a spherically symmetric black hole spacetime forever without enforcing constraints, even if we add dynamics via a Klein-Gordon scalar field. We find that, in contrast with finite-differencing methods, black hole excision is a trivial operation using PSC applied to a hyperbolic formulation of Einstein's equations. We discuss the extension of this method to three spatial dimensions.

  19. Best-Practice Criteria for Practical Security of Self-Differencing Avalanche Photodiode Detectors in Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Koehler-Sidki, A.; Dynes, J. F.; Lucamarini, M.; Roberts, G. L.; Sharpe, A. W.; Yuan, Z. L.; Shields, A. J.

    2018-04-01

    Fast-gated avalanche photodiodes (APDs) are the most commonly used single photon detectors for high-bit-rate quantum key distribution (QKD). Their robustness against external attacks is crucial to the overall security of a QKD system, or even an entire QKD network. We investigate the behavior of a gigahertz-gated, self-differencing (In,Ga)As APD under strong illumination, a tactic Eve often uses to bring detectors under her control. Our experiment and modeling reveal that the negative feedback by the photocurrent safeguards the detector from being blinded through reducing its avalanche probability and/or strengthening the capacitive response. Based on this finding, we propose a set of best-practice criteria for designing and operating fast-gated APD detectors to ensure their practical security in QKD.

  20. Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.

    PubMed

    Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M

    2017-05-16

    Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, the dependence upon direct current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC voltage electrowetting actuation. With the increase of the rise time of the input exponential driving voltage, the originally underdamped system response can be damped, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We have also performed numerical simulations of the lens actuation with input exponential driving voltage to understand the variation in the dynamics of the liquid-liquid interface with various input rise times. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner diameter lens, we achieved a response time improvement of 29% when compared to the fastest response obtained using single-exponential driving voltage. The technique shows great promise for applications that require fast response times.

  1. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.

  2. Automated Topographic Change Detection via Dem Differencing at Large Scales Using The Arcticdem Database

    NASA Astrophysics Data System (ADS)

    Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.

    2016-12-01

    In the last decade, high resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological and anthropogenic processes. However, the trade-off between spatial coverage and spatial-temporal resolution has limited detailed, process-level change detection over large (i.e. continental) scales. The ArcticDEM project utilized over 300,000 Worldview image pairs to produce a nearly 100% coverage elevation model (above 60°N), offering the first polar, high spatial resolution (2-8 m by region) dataset, often with multiple repeats in areas of particular interest to geoscientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized, automated workflow that requires minimal input from the end user about a study site, and utilizes cloud computing resources to provide a temporally sorted and differenced dataset, ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science, without having to manage thousands of files, or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration, enabling cross-comparisons between research projects.

  3. Study of structural change in volcanic and geothermal areas using seismic tomography

    NASA Astrophysics Data System (ADS)

    Mhana, Najwa; Foulger, Gillian; Julian, Bruce; Peirce, Christine

    2014-05-01

    Long Valley caldera is a large silicic volcano. It has been in a state of volcanic and seismic unrest since 1978. Further escalation of this unrest could pose a threat to the 5,000 residents and the tens of thousands of tourists who visit the area. We have studied the crustal structure beneath a 28 km × 16 km area using seismic tomography. We performed tomographic inversions for the years 2009 and 2010 with a view to differencing them with the 1997 result, both to look for structural changes with time and to assess whether repeat tomography is capable of determining changes in structure in volcanic and geothermal reservoirs. If so, it might provide a useful tool for monitoring physical changes in volcanoes and exploited geothermal reservoirs. Up to 600 earthquakes, selected from the best-quality events, were used for the inversion. The inversions were performed using the program simulps12 [Thurber, 1983]. Our initial results show that changes in both Vp and Vs were consistent with the migration of CO2 into the upper 2 km or so. Our ongoing work will also invert pairs of years simultaneously using a new program, tomo4d [Julian and Foulger, 2010]. This program inverts for the differences in structure between two epochs, so it can provide a more reliable measure of structural change than simply differencing the results of individual years.

  4. Stability in Cohen Grossberg-type bidirectional associative memory neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Song, Qiankun

    2006-07-01

    In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using the analysis method, inequality technique and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature, and the boundedness and differentiability of the activation functions and differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.

  5. Impulsive effect on global exponential stability of BAM fuzzy cellular neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Li, Kelin

    2010-02-01

    In this article, a class of impulsive bidirectional associative memory (BAM) fuzzy cellular neural networks (FCNNs) with time-varying delays is formulated and investigated. By employing delay differential inequality and M-matrix theory, some sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point for impulsive BAM FCNNs with time-varying delays are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and applications of BAM FCNNs. An example is given to show the effectiveness of the results obtained here.

  6. Preparation of an exponentially rising optical pulse for efficient excitation of single atoms in free space.

    PubMed

    Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian

    2012-08-01

    We report on a simple method to prepare optical pulses with exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first on a radio frequency carrier, and then on a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single (87)Rb atom.

  7. Prediction and control of chaotic processes using nonlinear adaptive networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.D.; Barnes, C.W.; Flake, G.W.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  8. A general algorithm using finite element method for aerodynamic configurations at low speeds

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.

    1975-01-01

    A finite element algorithm for numerical simulation of two-dimensional, incompressible, viscous flows was developed. The Navier-Stokes equations are suitably modelled to facilitate direct solution for the essential flow parameters. Leap-frog time differencing and Galerkin minimization of these model equations yield the finite element algorithm. The finite elements are triangular with bicubic shape functions approximating the solution space. The finite element matrices are unsymmetrically banded to facilitate savings in storage. An unsymmetric L-U decomposition is performed on the finite element matrices to obtain the solution for the boundary value problem.

  9. Computational design of the basic dynamical processes of the UCLA general circulation model

    NASA Technical Reports Server (NTRS)

    Arakawa, A.; Lamb, V. R.

    1977-01-01

    The 12-layer UCLA general circulation model encompassing troposphere and stratosphere (and superjacent 'sponge layer') is described. Prognostic variables are: surface pressure, horizontal velocity, temperature, water vapor and ozone in each layer, planetary boundary layer (PBL) depth, temperature, moisture and momentum discontinuities at PBL top, ground temperature and water storage, and mass of snow on ground. The selection of space finite-difference schemes for homogeneous incompressible flow, with/without a free surface, nonlinear two-dimensional nondivergent flow, enstrophy conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time differencing schemes is discussed.

  10. Implicit time-marching solution of the Navier-Stokes equations for thrust reversing and thrust vectoring nozzle flows

    NASA Technical Reports Server (NTRS)

    Imlay, S. T.

    1986-01-01

    An implicit finite volume method is investigated for the solution of the compressible Navier-Stokes equations for flows within thrust reversing and thrust vectoring nozzles. Thrust reversing nozzles typically have sharp corners, and the rapid expansion and large turning angles near these corners are shown to cause unacceptable time step restrictions when conventional approximate factorization methods are used. In this investigation these limitations are overcome by using second-order upwind differencing and line Gauss-Seidel relaxation. This method is implemented with a zonal mesh so that flows through complex nozzle geometries may be efficiently calculated. Results are presented for five nozzle configurations including two with time varying geometries. Three cases are compared with available experimental data and the results are generally acceptable.

  11. Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.

    ERIC Educational Resources Information Center

    Mandell, Marvin B.; Bretschneider, Stuart I.

    1984-01-01

    The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)

  12. Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow

    NASA Technical Reports Server (NTRS)

    Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.

    1977-01-01

    An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first time derivative at each time step. Fourth-order accurate space differencing is used.

  13. Application of Krylov exponential propagation to fluid dynamics equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef; Semeraro, David

    1991-01-01

    An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in the computation of an exponential matrix-vector product similar to the one above but of a much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially of an explicit nature but which have good stability properties.
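
    A minimal sketch of the computational kernel described above, assuming a dense NumPy matrix and a fixed Krylov dimension m: an Arnoldi process builds an orthonormal basis V and a small Hessenberg matrix H, and exp(A)v is approximated by beta*V_m*exp(H_m)*e_1. This is a generic illustration, not the authors' implementation; in practice A would be a large sparse operator and m much smaller than the system size.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_v(A, v, m=20):
    """Approximate exp(A) @ v via an m-dimensional Krylov (Arnoldi) projection."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalisation
            H[i, j] = w @ V[:, i]
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # exp(A) v  is approximated by  beta * V_m exp(H_m) e1, with H_m the small projected matrix
    return beta * V[:, :m] @ expm(H[:m, :m]) @ e1

# quick check against the dense matrix exponential on a small random system
rng = np.random.default_rng(0)
A = -np.eye(200) + 0.1 * rng.standard_normal((200, 200))
v = rng.standard_normal(200)
exact = expm(A) @ v
print(np.linalg.norm(krylov_expm_v(A, v, m=30) - exact) / np.linalg.norm(exact))
```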

  14. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
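
    The conditional-probability calculation described above can be sketched for a two-component mixed exponential repose-time model; the mixture weight and rates below are placeholders chosen for illustration only, not the values fitted for Medicine Lake volcano.

```python
import numpy as np

def mixed_exp_cdf(t, p, lam1, lam2):
    """CDF of a two-component mixed exponential distribution of repose times."""
    return p * (1 - np.exp(-lam1 * t)) + (1 - p) * (1 - np.exp(-lam2 * t))

def conditional_eruption_prob(s, dt, p, lam1, lam2):
    """P(eruption within dt years | s years have elapsed since the last eruption)."""
    F = mixed_exp_cdf
    return (F(s + dt, p, lam1, lam2) - F(s, p, lam1, lam2)) / (1 - F(s, p, lam1, lam2))

# illustrative parameters only: a short-repose component within episodes and a
# long-repose component between episodes
p, lam1, lam2 = 0.7, 1 / 200.0, 1 / 4000.0
print(conditional_eruption_prob(s=950.0, dt=1.0, p=p, lam1=lam1, lam2=lam2))
```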

  15. First off-time treatment prostate-specific antigen kinetics predicts survival in intermittent androgen deprivation for prostate cancer.

    PubMed

    Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier

    2016-01-01

    Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for prostate cancer (PCa) of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. The ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetic patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. An exponential PSA kinetic during the first OFTP was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
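
    A hedged sketch of fitting the three candidate OFTP patterns and comparing them by residual sum of squares, using scipy.optimize.curve_fit; the time points and PSA values below are invented for illustration and the snippet is not the study's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical off-treatment PSA measurements (months, ng/ml)
t = np.array([1.0, 3.0, 6.0, 9.0, 12.0])
psa = np.array([0.5, 0.9, 2.1, 4.8, 11.0])

exponential = lambda t, lam, a: lam * np.exp(a * t)   # PSA(t) = lambda * e^(a t)
linear      = lambda t, a: a * t                      # PSA(t) = a * t
power_law   = lambda t, a, c: a * t**c                # PSA(t) = a * t^c

for name, f, p0 in [("exponential", exponential, (0.3, 0.3)),
                    ("linear", linear, (1.0,)),
                    ("power law", power_law, (0.5, 1.5))]:
    params, _ = curve_fit(f, t, psa, p0=p0, maxfev=10000)
    rss = np.sum((psa - f(t, *params))**2)            # residual sum of squares
    print(f"{name}: params={np.round(params, 3)}, RSS={rss:.2f}")
```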

  16. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.

  17. Possible stretched exponential parametrization for humidity absorption in polymers.

    PubMed

    Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O

    2009-04-01

    Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.

  18. Design of a 9-loop quasi-exponential waveform generator

    NASA Astrophysics Data System (ADS)

    Banerjee, Partha; Shukla, Rohit; Shyam, Anurag

    2015-12-01

    In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. However, if a number of sinusoidal waveforms of decreasing time period, generated in an L-C-R circuit, are combined within the first quarter cycle of the time period, a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform exhibits a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design details of the magnetic switches. In the experiment, an output current of 26 kA has been achieved, and it is shown how well the experimentally obtained output current profile matches the numerically computed output.

  20. Anomalous yet Brownian.

    PubMed

    Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve

    2009-09-08

    We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.

  1. Global exponential stability of BAM neural networks with time-varying delays: The discrete-time case

    NASA Astrophysics Data System (ADS)

    Raja, R.; Marshal Anthoni, S.

    2011-02-01

    This paper deals with the problem of stability analysis for a class of discrete-time bidirectional associative memory (BAM) neural networks with time-varying delays. By employing the Lyapunov functional and linear matrix inequality (LMI) approach, new sufficient conditions are proposed for the global exponential stability of discrete-time BAM neural networks. The proposed LMI-based results can be easily checked by the LMI control toolbox. Moreover, an example is also provided to demonstrate the effectiveness of the proposed method.

  2. Verification of the exponential model of body temperature decrease after death in pigs.

    PubMed

    Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyñski, Janusz; Penkowski, Michal

    2005-09-01

    The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The improvement of the precision of time of death estimation by the reconstruction of an individual curve on the basis of two dead body temperature measurements taken 1 h apart or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
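
    The single-exponential cooling model referred to above can be fitted and then inverted to estimate time since death, as in the sketch below; the ambient temperature, temperature readings, and starting values are hypothetical, and this is not the authors' estimation procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

T_ambient = 21.0  # assumed ambient temperature, deg C

def single_exp(t, T0, k):
    """Single-exponential (Newtonian) cooling toward the ambient temperature."""
    return T_ambient + (T0 - T_ambient) * np.exp(-k * t)

# hypothetical eyeball-temperature readings (hours after death, deg C)
t_h = np.array([1.25, 2.0, 3.0, 5.0, 8.0, 12.0])
temp = np.array([35.0, 33.2, 31.0, 27.6, 24.6, 22.5])

(T0_fit, k_fit), _ = curve_fit(single_exp, t_h, temp, p0=(37.0, 0.2))

# estimate time since death for a new reading by inverting the fitted curve
T_obs = 30.0
t_est = -np.log((T_obs - T_ambient) / (T0_fit - T_ambient)) / k_fit
print(f"k = {k_fit:.3f} 1/h, estimated time since death = {t_est:.1f} h")
```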

  3. A method of real-time detection for distant moving obstacles by monocular vision

    NASA Astrophysics Data System (ADS)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for the detection of distant moving obstacles, such as cars and bicycles, by a monocular camera cooperating with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to give an alarm and keep away from them. A frame differencing method is applied to find obstacles after compensating for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for the detection of distant moving obstacles in real time.
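
    A minimal frame-differencing sketch with OpenCV is shown below; it omits the ego-motion compensation and per-obstacle confidence levels described in the abstract, and the video filename and thresholds are placeholders.

```python
import cv2

cap = cv2.VideoCapture("road_scene.mp4")     # hypothetical input video
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read input video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                      # frame differencing
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)              # merge fragmented blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 100:                         # ignore small noise regions
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    prev_gray = gray
```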

  4. Generalized three-dimensional experimental lightning code (G3DXL) user's manual

    NASA Technical Reports Server (NTRS)

    Kunz, Karl S.

    1986-01-01

    Information concerning the programming, maintenance and operation of the G3DXL computer program is presented and the theoretical basis for the code is described. The program computes time domain scattering fields and surface currents and charges induced by a driving function on and within a complex scattering object which may be perfectly conducting or a lossy dielectric. This is accomplished by modeling the object with cells within a three-dimensional, rectangular problem space, enforcing the appropriate boundary conditions and differencing Maxwell's equations in time. In the present version of the program, the driving function can be either the field radiated by a lightning strike or a direct lightning strike. The F-106 B aircraft is used as an example scattering object.

  5. Error reduction program: A progress report

    NASA Technical Reports Server (NTRS)

    Syed, S. A.

    1984-01-01

    Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.

  6. Bi-temporal analysis of landscape changes in the easternmost mediterranean deltas using binary and classified change information.

    PubMed

    Alphan, Hakan

    2013-03-01

    The aim of this study is (1) to quantify landscape changes in the easternmost Mediterranean deltas using a bi-temporal binary change detection approach and (2) to analyze relationships between conservation/management designations and various categories of change that indicate the type, degree and severity of human impact. For this purpose, image differencing and ratioing were applied to Landsat TM images of 1984 and 2006. A total of 136 candidate change images including normalized difference vegetation index (NDVI) and principal component analysis (PCA) difference images were tested to understand the performance of bi-temporal pre-classification analysis procedures in the Mediterranean delta ecosystems. Results showed that visible image algebra provided higher accuracies than did NDVI and PCA differencing. On the other hand, Band 5 differencing had one of the lowest change detection performances. Seven superclasses of change were identified using from/to change categories between the earlier and later dates. These classes were used to understand the spatial character of anthropogenic impacts in the study area and derive qualitative and quantitative change information within and outside of the conservation/management areas. Change analysis indicated that natural site and wildlife reserve designations fell short of protecting sand dunes from agricultural expansion in the west. The east of the study area, however, was exposed to the least human impact owing to the fact that nature conservation status kept human interference at a minimum. Implications of these changes were discussed and solutions were proposed to deal with management problems leading to environmental change.
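
    The NDVI-differencing variant of the pre-classification approach can be sketched as below, with random arrays standing in for co-registered Landsat TM bands and a one-standard-deviation change threshold chosen purely for illustration.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-infrared bands."""
    return (nir - red) / (nir + red + 1e-9)

# stand-in reflectance rasters; in practice these are co-registered Landsat TM bands
rng = np.random.default_rng(0)
red_1984, nir_1984 = rng.random((2, 512, 512))
red_2006, nir_2006 = rng.random((2, 512, 512))

delta = ndvi(red_2006, nir_2006) - ndvi(red_1984, nir_1984)   # NDVI differencing
# flag pixels more than one standard deviation from the mean difference as change
change_mask = np.abs(delta - delta.mean()) > delta.std()
print(f"fraction of pixels flagged as change: {change_mask.mean():.3f}")
```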

  7. Precise Tracking of the Magellan and Pioneer Venus Orbiters by Same-Beam Interferometry. Part 2: Orbit Determination Analysis

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Border, J. S.; Nandi, S.; Zukor, K. S.

    1993-01-01

    A new radio metric positioning technique has demonstrated improved orbit determination accuracy for the Magellan and Pioneer Venus Orbiter orbiters. The new technique, known as Same-Beam Interferometry (SBI), is applicable to the positioning of multiple planetary rovers, landers, and orbiters which may simultaneously be observed in the same beamwidth of Earth-based radio antennas. Measurements of carrier phase are differenced between spacecraft and between receiving stations to determine the plane-of-sky components of the separation vector(s) between the spacecraft. The SBI measurements complement the information contained in line-of-sight Doppler measurements, leading to improved orbit determination accuracy. Orbit determination solutions have been obtained for a number of 48-hour data arcs using combinations of Doppler, differenced-Doppler, and SBI data acquired in the spring of 1991. Orbit determination accuracy is assessed by comparing orbit solutions from adjacent data arcs. The orbit solution differences are shown to agree with expected orbit determination uncertainties. The results from this demonstration show that the orbit determination accuracy for Magellan obtained by using Doppler plus SBI data is better than the accuracy achieved using Doppler plus differenced-Doppler by a factor of four and better than the accuracy achieved using only Doppler by a factor of eighteen. The orbit determination accuracy for Pioneer Venus Orbiter using Doppler plus SBI data is better than the accuracy using only Doppler data by 30 percent.

  8. Global exponential stability of BAM neural networks with time-varying delays and diffusion terms

    NASA Astrophysics Data System (ADS)

    Wan, Li; Zhou, Qinghua

    2007-11-01

    The stability property of bidirectional associative memory (BAM) neural networks with time-varying delays and diffusion terms is considered. By using the method of variation of parameters and an inequality technique, delay-independent sufficient conditions to guarantee the uniqueness and global exponential stability of the equilibrium solution of such networks are established.

  9. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.

  10. Exponential stability preservation in semi-discretisations of BAM networks with nonlinear impulses

    NASA Astrophysics Data System (ADS)

    Mohamad, Sannay; Gopalsamy, K.

    2009-01-01

    This paper demonstrates the reliability of a discrete-time analogue in preserving the exponential convergence of a bidirectional associative memory (BAM) network that is subject to nonlinear impulses. The analogue derived from a semi-discretisation technique with the value of the time-step fixed is treated as a discrete-time dynamical system while its exponential convergence towards an equilibrium state is studied. Thereby, a family of sufficiency conditions governing the network parameters and the impulse magnitude and frequency is obtained for the convergence. As special cases, one can obtain from our results, those corresponding to the non-impulsive discrete-time BAM networks and also those corresponding to continuous-time (impulsive and non-impulsive) systems. A relation between the Lyapunov exponent of the non-impulsive system and that of the impulsive system involving the size of the impulses and the inter-impulse intervals is obtained.

  11. Forecast of Frost Days Based on Monthly Temperatures

    NASA Astrophysics Data System (ADS)

    Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.

    2009-04-01

    Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first one is a stochastic model, an autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during one month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They present the same seasonal behavior (moving average differenced model) and different non-seasonal parts: autoregressive model (Model 1), moving average differenced model (Model 2), and autoregressive and moving average model (Model 3). At the same time, the results point out that the minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through the years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures showed the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the damage that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
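
    The two-step idea (a seasonal ARIMA forecast of the monthly average minimum temperature, followed by a normal model of daily minima to convert that forecast into expected frost days) can be sketched with statsmodels as below; the simulated series, model orders, and daily standard deviation are placeholders rather than the fitted Madrid models.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from statsmodels.tsa.statespace.sarimax import SARIMAX

# hypothetical monthly series of average minimum temperature (deg C)
rng = np.random.default_rng(1)
months = pd.date_range("1990-01", periods=240, freq="MS")
seasonal = -8.0 * np.cos(2 * np.pi * months.month.to_numpy() / 12)
tminav = pd.Series(5.0 + seasonal + rng.normal(0, 1.5, len(months)), index=months)

# seasonal ARIMA in the spirit of Model 1: an AR non-seasonal part and a
# seasonally differenced MA seasonal part with a 12-month period
model = SARIMAX(tminav, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
forecast = model.forecast(steps=12)

# second step: daily minima assumed normal around the monthly mean with a
# station-specific standard deviation; expected frost days = P(tdmin < 0) * days
sigma_daily = 3.0
days_in_month = forecast.index.days_in_month
expected_fd = norm.cdf(0.0, loc=forecast.values, scale=sigma_daily) * days_in_month
print(np.round(expected_fd, 1))
```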

  12. Tidewater dynamics at Store Glacier, West Greenland from daily repeat UAV surveys

    NASA Astrophysics Data System (ADS)

    Ryan, Jonathan; Hubbard, Alun; Toberg, Nick; Box, Jason; Todd, Joe; Christoffersen, Poul; Snooke, Neal

    2017-04-01

    A significant component of the Greenland ice sheet's mass wastage, and hence its contribution to sea level rise, is attributed to the acceleration and dynamic thinning at its tidewater margins. To improve understanding of the rapid mass loss processes occurring at large tidewater glaciers, we conducted a suite of daily repeat aerial surveys across the terminus of Store Glacier, a large outlet draining the western Greenland Ice Sheet, from May to July 2014 (https://www.youtube.com/watch?v=-y8kauAVAfE). The unmanned aerial vehicles (UAVs) were equipped with digital cameras, which, in combination with onboard GPS, enabled production of high spatial resolution orthophotos and digital elevation models (DEMs) using standard structure-from-motion techniques. These data provide insight into the short-term dynamics of Store Glacier surrounding the break-up of the sea-ice mélange that occurred between 4 and 7 June. Feature tracking of the orthophotos reveals that the mean speed of the terminus is 16-18 m per day, which was independently verified against a high temporal resolution time-series derived from an expendable/telemetric GPS deployed at the terminus. Differencing the surface area of successive orthophotos enables quantification of daily calving rates, which significantly increase just after mélange break-up. Likewise, by differencing the bulk freeboard volume of icebergs through time we could also constrain the magnitude and variation of submarine melt. We calculate a mean submarine melt rate of 0.18 m per day throughout the spring period, with relatively little supraglacial runoff and no active meltwater plumes to stimulate fjord circulation and upwelling of deeper, warmer water masses. Finally, we relate calving rates to the zonation and depth of water-filled crevasses, which were prominent across parts of the terminus from June onwards.

  13. How bootstrap can help in forecasting time series with more than one seasonal pattern

    NASA Astrophysics Data System (ADS)

    Cordeiro, Clara; Neves, M. Manuela

    2012-09-01

    The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The Boot.EXPOS algorithm, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For the case of more than one seasonal pattern, double seasonal Holt-Winters and exponential smoothing methods were developed. The new challenge was to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to the one used in the Boot.EXPOS procedure. The performance of such a partnership is illustrated for some well-known data sets available in software.
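
    A rough sketch of an exponential-smoothing-plus-bootstrap forecast in the spirit of Boot.EXPOS, restricted to the single-seasonal case, using the statsmodels Holt-Winters implementation and a simple residual-resampling loop; this is a generic reconstruction under those assumptions, not the authors' algorithm.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def boot_expos_sketch(y, season_length, horizon, n_boot=200, seed=0):
    """Exponential smoothing + residual bootstrap forecast (illustrative only)."""
    rng = np.random.default_rng(seed)
    fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                               seasonal_periods=season_length).fit()
    fitted = np.asarray(fit.fittedvalues)
    resid = np.asarray(y) - fitted
    paths = []
    for _ in range(n_boot):
        # resample residuals, rebuild a pseudo-series, refit and forecast
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)
        refit = ExponentialSmoothing(y_star, trend="add", seasonal="add",
                                     seasonal_periods=season_length).fit()
        paths.append(np.asarray(refit.forecast(horizon)))
    paths = np.asarray(paths)
    return paths.mean(axis=0), np.percentile(paths, [2.5, 97.5], axis=0)

# toy monthly series with a single 12-month seasonal pattern
t = np.arange(120)
y = 50 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 12) \
    + np.random.default_rng(1).normal(0, 2, t.size)
point_forecast, interval = boot_expos_sketch(y, season_length=12, horizon=12)
```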

  14. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.

  15. The mechanism of double-exponential growth in hyper-inflation

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Takayasu, M.; Takayasu, H.

    2002-05-01

    Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which, price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we introduce the general coarse-graining technique in physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers’ actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.

  16. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = A·e^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A·x^B + C and the general geometric growth equation y = A·k^(Bt) + C.
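
    A non-iterative illustration of the same general idea (differencing the data to remove the offset C, then two linear fits) is sketched below for equally spaced, low-noise samples; it is not the NASA algorithm itself, only a simple stand-in showing why no iteration is needed.

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C for equally spaced samples.

    Sketch only: finite differences remove the constant C, a log-linear fit
    estimates B, and a final linear least-squares solve recovers A and C.
    """
    dy = np.diff(y)                                    # dy[i] = A*(e^(B*dt)-1)*e^(B*t[i])
    B = np.polyfit(t[:-1], np.log(np.abs(dy)), 1)[0]   # slope of log|dy| versus t
    X = np.column_stack([np.exp(B * t), np.ones_like(t)])
    A, C = np.linalg.lstsq(X, y, rcond=None)[0]        # model is linear in A and C
    return A, B, C

t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-0.8 * t) + 1.5
print(np.round(fit_exponential(t, y), 3))              # recovers (2.0, -0.8, 1.5)
```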

  17. Exponential approximation for daily average solar heating or photolysis. [of stratospheric ozone layer

    NASA Technical Reports Server (NTRS)

    Cogley, A. C.; Borucki, W. J.

    1976-01-01

    When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.

  18. How does temperature affect forest "fungus breath"? Diurnal non-exponential temperature-respiration relationship, and possible longer-term acclimation in fungal sporocarps

    Treesearch

    Erik A. Lilleskov

    2017-01-01

    Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...

  19. A statistical study of decaying kink oscillations detected using SDO/AIA

    NASA Astrophysics Data System (ADS)

    Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.

    2016-01-01

    Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of C_k = (1330 ± 50) km s^-1. The main body of the data corresponds to kink speeds in the range C_k = (800-3300) km s^-1. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.

  20. Spectral analysis based on fast Fourier transformation (FFT) of surveillance data: the case of scarlet fever in China.

    PubMed

    Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X

    2014-03-01

    Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations for time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamic of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validities and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
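
    A minimal periodogram sketch of the FFT-based procedure, applied to a synthetic monthly incidence series (the numbers below are simulated, not the Chinese scarlet fever counts):

```python
import numpy as np

# hypothetical monthly incidence series, 2004-2010 (84 months)
rng = np.random.default_rng(2)
t = np.arange(84)
incidence = (100 + 40 * np.sin(2 * np.pi * t / 12)
             + 15 * np.sin(2 * np.pi * t / 6)
             + rng.normal(0, 5, t.size))

x = incidence - incidence.mean()          # remove the zero-frequency component
power = np.abs(np.fft.rfft(x)) ** 2       # periodogram via the FFT
freqs = np.fft.rfftfreq(t.size, d=1.0)    # cycles per month

# report the two dominant periods (in months), skipping the DC bin
order = np.argsort(power[1:])[::-1] + 1
for k in order[:2]:
    print(f"period = {1.0 / freqs[k]:.1f} months, power = {power[k]:.0f}")
```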

  1. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  2. Evaluating channel morphologic changes and bed-material transport using airborne lidar, upper Colorado River, Rocky Mountain National Park, Colorado

    NASA Astrophysics Data System (ADS)

    Mangano, Joseph F.

    A debris flow associated with the 2003 breach of Grand Ditch in Rocky Mountain National Park, Colorado provided an opportunity to determine controls on channel geomorphic responses following a large sedimentation event. Due to the remote site location and high spatial and temporal variability of processes controlling channel response, repeat airborne lidar surveys in 2004 and 2012 were used to capture conditions along the upper Colorado River and tributary Lulu Creek i) one year following the initial debris flow, and ii) following two bankfull flows (2009 and 2010) and a record-breaking long duration, high intensity snowmelt runoff season (2011). Locations and volumes of aggradation and degradation were determined using lidar differencing. Channel and valley metrics measured from the lidar surveys included water surface slope, valley slope, changes in bankfull width, sinuosity, braiding index, channel migration, valley confinement, height above the water surface along the floodplain, and longitudinal profiles. Reaches of aggradation and degradation along the upper Colorado River are influenced by valley confinement and local controls. Aggradational reaches occurred predominantly in locations where the valley was unconfined and valley slope remained constant through the length of the reach. Channel avulsions, migration, and changes in sinuosity were common in all unconfined reaches, whether aggradational or degradational. Bankfull width in both aggradational and degradational reaches showed greater changes closer to the sediment source, with the magnitude of change decreasing downstream. Local variations in channel morphology, site specific channel conditions, and the distance from the sediment source influence the balance of transport supply and capacity and, therefore, locations of aggradation, degradation, and associated morphologic changes. Additionally, a complex response initially seen in repeat cross-sections is broadly supported by lidar differencing, although the differencing captures only the net change over eight years and not annual changes. Lidar differencing shows great promise because it reveals vertical and horizontal trends in morphologic changes at a high resolution over a large area. Repeat lidar surveys were also used to create a sediment budget along the upper Colorado River by means of the morphologic inverse method. In addition to the geomorphic changes detected by lidar, several levels of attrition of the weak clasts within debris flow sediment were applied to the sediment budget to reduce gaps in expected inputs and outputs. Bed-material estimates using the morphologic inverse method were greater than field-measured transport estimates, but the two were within an order of magnitude. Field measurements and observations are critical for robust interpretation of the lidar-based analyses because applying lidar differencing without field control may not identify local controls on valley and channel geometry and sediment characteristics. The final sediment budget helps define variability in bed-material transport and constrain transport rates through the site, which will be beneficial for restoration planning. The morphologic inverse method approach using repeat lidar surveys appears promising, especially if lidar resolution is similar between sequential surveys.
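
    The core lidar-differencing step (a DEM of difference with a level-of-detection threshold, summed into aggradation and degradation volumes) can be sketched as follows; the grids, cell size, and detection threshold are hypothetical stand-ins for the 2004 and 2012 surveys.

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, cell_size, min_detect=0.15):
    """DEM differencing with a simple level-of-detection threshold.

    Sketch only: two co-registered elevation grids (m) and the raster cell
    size (m); min_detect masks changes smaller than the assumed combined
    survey uncertainty.
    """
    dod = dem_new - dem_old
    dod = np.where(np.abs(dod) < min_detect, 0.0, dod)    # ignore sub-noise change
    cell_area = cell_size ** 2
    aggradation = dod[dod > 0].sum() * cell_area          # deposited volume, m^3
    degradation = -dod[dod < 0].sum() * cell_area         # eroded volume, m^3
    return dod, aggradation, degradation

# toy grids standing in for the 2004 and 2012 lidar DEMs
rng = np.random.default_rng(3)
dem_2004 = rng.normal(2700.0, 1.0, (200, 200))
dem_2012 = dem_2004 + rng.normal(0.0, 0.3, (200, 200))
dod, agg, deg = dem_of_difference(dem_2012, dem_2004, cell_size=1.0)
print(f"aggradation {agg:.0f} m^3, degradation {deg:.0f} m^3")
```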

  3. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.

  4. Navier-Stokes Aerodynamic Simulation of the V-22 Osprey on the Intel Paragon MPP

    NASA Technical Reports Server (NTRS)

    Vadyak, Joseph; Shrewsbury, George E.; Narramore, Jim C.; Montry, Gary; Holst, Terry; Kwak, Dochan (Technical Monitor)

    1995-01-01

    The paper will describe the development of a general three-dimensional multiple grid zone Navier-Stokes flowfield simulation program (ENS3D-MPP) designed for efficient execution on the Intel Paragon Massively Parallel Processor (MPP) supercomputer, and the subsequent application of this method to the prediction of the viscous flowfield about the V-22 Osprey tiltrotor vehicle. The flowfield simulation code solves the thin-layer or full Navier-Stokes equations for viscous flow modeling, or the Euler equations for inviscid flow modeling, on a structured multi-zone mesh. In the present paper only viscous simulations will be shown. The governing difference equations are solved using a time marching implicit approximate factorization method with either TVD upwind or central differencing used for the convective terms and central differencing used for the viscous diffusion terms. Steady state or time accurate solutions can be calculated. The present paper will focus on steady state applications, although time accurate solution analysis is the ultimate goal of this effort. Laminar viscosity is calculated using Sutherland's law and the Baldwin-Lomax two layer algebraic turbulence model is used to compute the eddy viscosity. The simulation method uses an arbitrary block, curvilinear grid topology. An automatic grid adaptation scheme is incorporated which concentrates grid points in regions of high density gradient. A variety of user-specified boundary conditions are available. This paper will present the application of the scalable and superscalable versions to the steady state viscous flow analysis of the V-22 Osprey using a multiple zone global mesh. The mesh consists of a series of sheared Cartesian grid blocks with polar grids embedded within to better simulate the wing tip mounted nacelle. MPP solutions will be shown in comparison to equivalent Cray C-90 results and also in comparison to experimental data. Discussions on meshing considerations, wall clock execution time, load balancing, and scalability will be provided.

  5. Graphical analysis for gel morphology II. New mathematical approach for stretched exponential function with β>1

    NASA Astrophysics Data System (ADS)

    Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu

    2005-10-01

    A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed and the secondary shrinking of gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical concepts for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, and a new distribution function of the characteristic time is deduced.

  6. On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains

    NASA Astrophysics Data System (ADS)

    de Wekker, Stephan F. J.; Whiteman, C. David

    2006-06-01

    Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
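
    As a rough illustration of the fitting procedure described above, the following Python sketch fits the exponential cumulative-cooling form and recovers the time constant (the time needed to reach 63.2% of the total nighttime cooling). The synthetic temperature series and parameter values are assumptions for demonstration only, not the study's soundings.

      import numpy as np
      from scipy.optimize import curve_fit

      def cumulative_cooling(t, c_total, tau):
          # Exponential approach to the total nighttime cooling; tau is the time
          # constant, i.e. the time needed to reach 63.2% of c_total.
          return c_total * (1.0 - np.exp(-t / tau))

      t = np.arange(0.0, 13.0)                                  # hours after sunset
      obs = cumulative_cooling(t, 9.0, 5.0) + np.random.default_rng(1).normal(0.0, 0.2, t.size)

      (c_total, tau), _ = curve_fit(cumulative_cooling, t, obs, p0=(8.0, 4.0))
      print(f"total cooling ~ {c_total:.1f} K, time constant ~ {tau:.1f} h")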

  7. Earth orientation from lunar laser range-differencing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Leick, A.

    1978-01-01

    For the optimal use of high precision lunar laser ranging (LLR), an investigation regarding a clear definition of the underlying coordinate systems, identification of estimable quantities, favorable station geometry and optimal observation schedule is given.

  8. Combination of GPS and GLONASS IN PPP algorithms and its effect on site coordinates determination

    NASA Astrophysics Data System (ADS)

    Hefty, J.; Gerhatova, L.; Burgan, J.

    2011-10-01

    Precise Point Positioning (PPP) approach using the un-differenced code and phase GPS observations, precise orbits and satellite clocks is an important alternative to the analyses based on double differences. We examine the extension of the PPP method by introducing the GLONASS satellites into the processing algorithms. The procedures are demonstrated on the software package ABSOLUTE developed at the Slovak University of Technology. Partial results, like ambiguities and receiver clocks obtained from separate solutions of the two GNSS are mutually compared. Finally, the coordinate time series from combination of GPS and GLONASS observations are compared with GPS-only solutions.

  9. Combining Thermal And Structural Analyses

    NASA Technical Reports Server (NTRS)

    Winegar, Steven R.

    1990-01-01

    Computer code makes programs compatible so stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when no node-to-element correlation exists between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.

  10. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.

  11. Intrinsic imperfection of self-differencing single-photon detectors harms the security of high-speed quantum cryptography systems

    NASA Astrophysics Data System (ADS)

    Jiang, Mu-Sheng; Sun, Shi-Hai; Tang, Guang-Zhao; Ma, Xiang-Chun; Li, Chun-Yan; Liang, Lin-Mei

    2013-12-01

    Thanks to the high-speed self-differencing single-photon detector (SD-SPD), the secret key rate of quantum key distribution (QKD), which can, in principle, offer unconditionally secure private communications between two users (Alice and Bob), can exceed 1 Mbit/s. However, the SD-SPD may contain loopholes, which can be exploited by an eavesdropper (Eve) to hack into the unconditional security of the high-speed QKD systems. In this paper, we show that the SD-SPD can be remotely controlled by Eve to obtain the full key information without being discovered, and proof-of-principle experiments are demonstrated. We point out that this loophole is introduced directly by the operating principle of the SD-SPD; thus, it cannot be removed unless active countermeasures are applied by the legitimate parties.

  12. Computations of the three-dimensional flow and heat transfer within a coolant passage of a radial turbine blade

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Roelke, R. J.; Steinthorsson, E.

    1991-01-01

    A numerical code is developed for computing three-dimensional, turbulent, compressible flow within coolant passages of turbine blades. The code is based on a formulation of the compressible Navier-Stokes equations in a rotating frame of reference in which the velocity dependent variable is specified with respect to the rotating frame instead of the inertial frame. The algorithm employed to obtain solutions to the governing equation is a finite-volume LU algorithm that allows convection, source, as well as diffusion terms to be treated implicitly. In this study, all convection terms are upwind differenced by using flux-vector splitting, and all diffusion terms are centrally differenced. This paper describes the formulation and algorithm employed in the code. Some computed solutions for the flow within a coolant passage of a radial turbine are also presented.

  13. Improved Spatial Differencing Scheme for 2-D DOA Estimation of Coherent Signals with Uniform Rectangular Arrays.

    PubMed

    Shi, Junpeng; Hu, Guoping; Sun, Fenggang; Zong, Binfeng; Wang, Xin

    2017-08-24

    This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we only perform difference-operation on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward only ISD (FO-ISD) and forward backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with the existing spatial smoothing techniques, the proposed scheme can use more data information of the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD substantially improve the estimation performance compared with existing methods, in both white and colored noise conditions.

  14. Improved Spatial Differencing Scheme for 2-D DOA Estimation of Coherent Signals with Uniform Rectangular Arrays

    PubMed Central

    Hu, Guoping; Zong, Binfeng; Wang, Xin

    2017-01-01

    This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we only perform difference-operation on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward only ISD (FO-ISD) and forward backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with the existing spatial smoothing techniques, the proposed scheme can use more data information of the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD substantially improve the estimation performance compared with existing methods, in both white and colored noise conditions. PMID:28837115

  15. Gravitational Microlensing Observations of Two New Exoplanets Using the Deep Impact High Resolution Instrument

    NASA Astrophysics Data System (ADS)

    Barry, Richard K.; Bennett, D. P.; Klaasen, K.; Becker, A. C.; Christiansen, J.; Albrow, M.

    2014-01-01

    We have worked to characterize two exoplanets newly detected from the ground: OGLE-2012-BLG-0406 and OGLE-2012-BLG-0838, using microlensing observations of the Galactic Bulge recently obtained by NASA’s Deep Impact (DI) spacecraft, in combination with ground data. These observations of the crowded Bulge fields from Earth and from an observatory at a distance of ~1 AU have permitted the extraction of a microlensing parallax signature - critical for breaking exoplanet model degeneracies. For this effort, we used DI’s High Resolution Instrument, launched with a permanent defocus aberration due to an error in cryogenic testing. We show how the effects of a very large, chromatic PSF can be reduced in differencing photometry. We also compare two approaches to differencing photometry - one of which employs the Bramich algorithm and another using the Fruchter & Hook drizzle algorithm.

  16. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  17. Non-Markovian Infection Spread Dramatically Alters the Susceptible-Infected-Susceptible Epidemic Threshold in Networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.; van de Bovenkamp, R.

    2013-03-01

    Most studies on susceptible-infected-susceptible epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterize and precisely compute the epidemic threshold in susceptible-infected-susceptible Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs and lattices, the average steady-state fraction of infected nodes is simulated from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold significantly increases with the power exponents α. Hence, real epidemics that violate the exponential or Markovian assumption can behave seriously differently than anticipated based on Markov theory.

  18. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme

    PubMed Central

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093

  19. Five-Year Wilkinson Microwave Anisotropy Probe Observations: Beam Maps and Window Functions

    NASA Astrophysics Data System (ADS)

    Hill, R. S.; Weiland, J. L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C. L.; Halpern, M.; Page, L.; Dunkley, J.; Gold, B.; Jarosik, N.; Kogut, A.; Limon, M.; Nolta, M. R.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.

    2009-02-01

    Cosmology and other scientific results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of ~2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of ~1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of ~2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are ~0.5%. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.

  20. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    PubMed

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  1. Proposal for a standardised identification of the mono-exponential terminal phase for orally administered drugs.

    PubMed

    Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte

    2008-04-01

    The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability for the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method) to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates for AUC(0-inf) compared with the ARS algorithm. It was concluded that the TTT method is a most reasonable tool to be used as a standardised method in pharmacokinetic analysis especially bioequivalence studies to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
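
    A minimal sketch of the TTT idea is given below, under the assumption that the terminal phase is taken as all samples collected at or after two times t(max) and that the usual non-compartmental tail extrapolation C_last/lambda_z is used; the concentration-time data and these conventions are hypothetical illustrations, not details from the paper.

      import numpy as np

      # Hypothetical oral concentration-time data (mg/L versus hours), not from the paper.
      t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])
      c = np.array([1.2, 2.8, 3.5, 3.1, 2.6, 1.8, 1.2, 0.55, 0.07])

      t_max = t[np.argmax(c)]
      terminal = t >= 2.0 * t_max            # TTT rule: terminal phase starts at two times t(max)
      slope, _ = np.polyfit(t[terminal], np.log(c[terminal]), 1)
      lambda_z = -slope                      # terminal rate constant from a log-linear fit

      # Trapezoidal AUC to the last sample plus the usual tail extrapolation (an assumed convention).
      auc_0_tlast = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)
      auc_0_inf = auc_0_tlast + c[-1] / lambda_z
      print(f"lambda_z = {lambda_z:.3f} 1/h, AUC(0-inf) = {auc_0_inf:.2f} mg*h/L")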

  2. Optimality of cycle time and inventory decisions in a two echelon inventory system with exponential price dependent demand under credit period

    NASA Astrophysics Data System (ADS)

    Krugon, Seelam; Nagaraju, Dega

    2017-05-01

    This work describes and proposes a two-echelon inventory system in a supply chain, where the manufacturer offers a credit period to the retailer under exponential price dependent demand. The model is framed with demand expressed as an exponential function of the retailer’s unit selling price. A mathematical model is formulated to demonstrate the optimality of cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The major objective of the paper is to incorporate the trade credit concept from the manufacturer to the retailer with exponential price dependent demand. The retailer would like to delay the payments to the manufacturer. In the first stage, the retailer and manufacturer cost expressions are written as functions of ordering cost, carrying cost, and transportation cost. In the second stage, the manufacturer and retailer expressions are combined. A MATLAB program is written to derive the optimality of cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. Managerial insights can be drawn from the derived optimality criteria. From the research findings, it is evident that the total cost of the supply chain decreases with increasing credit period under exponential price dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.

  3. Universality of accelerating change

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Shlesinger, Michael F.

    2018-03-01

    On large time scales the progress of human technology follows an exponential growth trend that is termed accelerating change. The exponential growth trend is commonly considered to be the amalgamated effect of consecutive technology revolutions - where the progress carried in by each technology revolution follows an S-curve, and where the aging of each technology revolution drives humanity to push for the next technology revolution. Thus, as a collective, mankind is the 'intelligent designer' of accelerating change. In this paper we establish that the exponential growth trend - and only this trend - emerges universally, on large time scales, from systems that combine together two elements: randomness and amalgamation. Hence, the universal generation of accelerating change can be attained by systems with no 'intelligent designer'.

  4. Statistical analysis of low level atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Tieleman, H. W.; Chen, W. W. L.

    1974-01-01

    The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from the above fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and also on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low frequency components in the time series. The calculated results for each of the anemometers used are represented in graphical or tabulated form.
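
    One common realization of such a moving-average and differencing high-pass filter is sketched below in Python; the window length and the synthetic velocity record are illustrative assumptions, not the study's actual processing parameters.

      import numpy as np

      def moving_average_highpass(x, window):
          # Subtract a centred moving average from the series to remove the trend
          # and low-frequency components (a simple high-pass filter).
          trend = np.convolve(x, np.ones(window) / window, mode="same")
          return x - trend

      def first_difference(x):
          # Differencing further suppresses slow drifts left after the moving average.
          return np.diff(x)

      rng = np.random.default_rng(2)
      u = np.cumsum(rng.normal(size=4096)) * 0.01 + rng.normal(size=4096)  # synthetic velocity record
      fluctuations = first_difference(moving_average_highpass(u, window=257))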

  5. Influence of flaps and engines on aircraft wake vortices

    DOT National Transportation Integrated Search

    1974-09-01

    Although previous investigations have shown that the nature of aircraft wake vortices depends on the aircraft type and flap configuration, the causes for these differences have not been clearly identified. In this Note we show that observed differenc...

  6. Exponential evolution: implications for intelligent extraterrestrial life.

    PubMed

    Russell, D A

    1983-01-01

    Some measures of biologic complexity, including maximal levels of brain development, are exponential functions of time through intervals of 10^6 to 10^9 yrs. Biological interactions apparently stimulate evolution but physical conditions determine the time required to achieve a given level of complexity. Trends in brain evolution suggest that other organisms could attain human levels within approximately 10^7 yrs. The number (N) and longevity (L) terms in appropriate modifications of the Drake Equation, together with trends in the evolution of biological complexity on Earth, could provide rough estimates of the prevalence of life forms at specified levels of complexity within the Galaxy. If life occurs throughout the cosmos, exponential evolutionary processes imply that higher intelligence will soon (10^9 yrs) become more prevalent than it now is. Changes in the physical universe become less rapid as time increases from the Big Bang. Changes in biological complexity may be most rapid at such later times. This lends a unique and symmetrical importance to early and late universal times.

  7. Deadline rush: a time management phenomenon and its mathematical description.

    PubMed

    König, Cornelius J; Kleinmann, Martin

    2005-01-01

    A typical time management phenomenon is the rush before a deadline. Behavioral decision making research can be used to predict how behavior changes before a deadline. People are likely not to work on a project with a deadline in the far future because they generally discount future outcomes. Only when the deadline is close are people likely to work. On the basis of recent intertemporal choice experiments, the authors argue that a hyperbolic function should provide a more accurate description of the deadline rush than an exponential function predicted by an economic model of discounted utility. To show this, the fit of the hyperbolic and the exponential function were compared with data sets that describe when students study for exams. As predicted, the hyperbolic function fit the data significantly better than the exponential function. The implication for time management decisions is that they are most likely to be inconsistent over time (i.e., people make a plan how to use their time but do not follow it).
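
    The comparison described above can be reproduced in outline by fitting both discount functions to effort-versus-time data and comparing the residuals. The Python sketch below uses hypothetical study-effort data and simple one-parameter forms of the two functions; the authors' actual data sets and parameterizations may differ.

      import numpy as np
      from scipy.optimize import curve_fit

      def exponential_discount(t, k):
          return np.exp(-k * t)

      def hyperbolic_discount(t, k):
          return 1.0 / (1.0 + k * t)

      # Hypothetical data: days remaining before an exam and relative daily study effort,
      # scaled so that effort near the deadline approaches 1.
      days_before = np.array([28.0, 21.0, 14.0, 10.0, 7.0, 5.0, 3.0, 2.0, 1.0, 0.5])
      effort = np.array([0.04, 0.05, 0.08, 0.11, 0.16, 0.22, 0.35, 0.48, 0.70, 0.85])

      for name, f in [("exponential", exponential_discount), ("hyperbolic", hyperbolic_discount)]:
          (k,), _ = curve_fit(f, days_before, effort, p0=(0.1,))
          sse = np.sum((effort - f(days_before, k)) ** 2)
          print(f"{name}: k = {k:.3f}, sum of squared errors = {sse:.4f}")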

  8. Comparison of the interfacial energy and pre-exponential factor calculated from the induction time and metastable zone width data based on classical nucleation theory

    NASA Astrophysics Data System (ADS)

    Shiau, Lie-Ding

    2016-09-01

    The pre-exponential factor and interfacial energy obtained from the metastable zone width (MSZW) data using the integral method proposed by Shiau and Lu [1] are compared in this study with those obtained from the induction time data using the conventional method (ti ∝ 1/J) for three crystallization systems, including potassium sulfate in water in a 200 mL vessel, borax decahydrate in water in a 100 mL vessel and butyl paraben in ethanol in a 5 mL tube. The results indicate that the pre-exponential factor and interfacial energy calculated from the induction time data based on classical nucleation theory are consistent with those calculated from the MSZW data using the same detection technique for the studied systems.

  9. `Un-Darkening' the Cosmos: New laws of physics for an expanding universe

    NASA Astrophysics Data System (ADS)

    George, William

    2017-11-01

    Dark matter is believed to exist because Newton's Laws are inconsistent with the visible matter in galaxies. Dark energy is necessary to explain the universe expansion. Earlier work (also available from www.turbulence-online.com) suggested that the equations themselves might be in error because they implicitly assume that time is measured in linear increments. This presentation couples the possible non-linearity of time with an expanding universe. Maxwell's equations for an expanding universe with constant speed of light are shown to be invariant only if time itself is non-linear. Both linear and exponential expansion rates are considered. A linearly expanding universe corresponds to logarithmic time, while exponential expansion corresponds to exponentially varying time. Revised Newton's laws using either lead to different definitions of mass and kinetic energy, both of which appear time-dependent if expressed in linear time, and they provide the possibility of explaining the astronomical observations without either dark matter or dark energy. We would never have noticed the differences on Earth, since the leading term in both expansions is linear in δ/t_o, where t_o is the current age.

  10. Exponential localization of Wannier functions in insulators.

    PubMed

    Brouder, Christian; Panati, Gianluca; Calandra, Matteo; Mourougane, Christophe; Marzari, Nicola

    2007-01-26

    The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. Our proof relies on the equivalence between the existence of analytic quasi-Bloch functions and the nullity of the Chern numbers (or of the Hall current) for the system under consideration. The same equivalence implies that Chern insulators cannot display exponentially localized Wannier functions. An explicit condition for the reality of the Wannier functions is identified.

  11. Note: Attenuation motion of acoustically levitated spherical rotor

    NASA Astrophysics Data System (ADS)

    Lü, P.; Hong, Z. Y.; Yin, J. F.; Yan, N.; Zhai, W.; Wang, H. P.

    2016-11-01

    Here we observe the attenuation motion of spherical rotors levitated by near-field acoustic radiation force and analyze the factors that affect the duration time of free rotation. It is found that the rotating speed of freely rotating rotor decreases exponentially with respect to time. The time constant of exponential attenuation motion depends mainly on the levitation height, the mass of rotor, and the depth of concave ultrasound emitter. Large levitation height, large mass of rotor, and small depth of concave emitter are beneficial to increase the time constant and hence extend the duration time of free rotation.

  12. Note: Attenuation motion of acoustically levitated spherical rotor.

    PubMed

    Lü, P; Hong, Z Y; Yin, J F; Yan, N; Zhai, W; Wang, H P

    2016-11-01

    Here we observe the attenuation motion of spherical rotors levitated by near-field acoustic radiation force and analyze the factors that affect the duration time of free rotation. It is found that the rotating speed of freely rotating rotor decreases exponentially with respect to time. The time constant of exponential attenuation motion depends mainly on the levitation height, the mass of rotor, and the depth of concave ultrasound emitter. Large levitation height, large mass of rotor, and small depth of concave emitter are beneficial to increase the time constant and hence extend the duration time of free rotation.

  13. Nonlinear stability of the 1D Boltzmann equation in a periodic box

    NASA Astrophysics Data System (ADS)

    Wu, Kung-Chien

    2018-05-01

    We study the nonlinear stability of the Boltzmann equation in the 1D periodic box whose size is set by the Knudsen number. The convergence rate differs between the small-time and large-time regions and is exponential in the large-time region; moreover, the exponential rate depends on the size of the domain (the Knudsen number). This problem is highly nonlinear, and hence more careful analysis is needed to control the nonlinear term.

  14. Understanding Exponential Growth: As Simple as a Drop in a Bucket.

    ERIC Educational Resources Information Center

    Goldberg, Fred; Shuman, James

    1984-01-01

    Provides procedures for a simple laboratory activity on exponential growth and its characteristic doubling time. The equipment needed consists of a large plastic bucket, an eyedropper, a stopwatch, an assortment of containers and graduated cylinders, and a supply of water. (JN)

  15. Memory behaviors of entropy production rates in heat conduction

    NASA Astrophysics Data System (ADS)

    Li, Shu-Nan; Cao, Bing-Yang

    2018-02-01

    Based on the relaxation time approximation and first-order expansion, memory behaviors in heat conduction are found between the macroscopic and Boltzmann-Gibbs-Shannon (BGS) entropy production rates with exponentially decaying memory kernels. In the frameworks of classical irreversible thermodynamics (CIT) and BGS statistical mechanics, the memory dependency on the integrated history is unidirectional, while for the extended irreversible thermodynamics (EIT) and BGS entropy production rates, the memory dependences are bidirectional and coexist with the linear terms. When macroscopic and microscopic relaxation times satisfy a specific relationship, the entropic memory dependences will be eliminated. There also exist initial effects in entropic memory behaviors, which decay exponentially. The second-order term is also discussed, which can be understood as the global non-equilibrium degree. The effects of the second-order term consist of three parts: memory dependency, initial value and linear term. The corresponding memory kernels are still exponential and the initial effects of the global non-equilibrium degree also decay exponentially.

  16. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores.
    Program summary:
    Program title: NDL (Numerical Differentiation Library)
    Catalogue identifier: AEDG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 73 030
    No. of bytes in distributed program, including test data, etc.: 630 876
    Distribution format: tar.gz
    Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP
    Computer: Distributed systems (clusters), shared memory systems
    Operating system: Linux, Solaris
    Has the code been vectorised or parallelized?: Yes
    RAM: The library uses O(N) internal storage, N being the dimension of the problem
    Classification: 4.9, 4.14, 6.5
    Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems.
    Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries.
    Restrictions: The library uses only double precision arithmetic.
    Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed.
    Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
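
    For orientation, a central-difference gradient of the kind such a library provides can be sketched in a few lines of Python; the step-size heuristic shown (balancing truncation against round-off error) is a common textbook choice and not necessarily the one implemented in NDL.

      import numpy as np

      def central_gradient(f, x):
          # O(h^2) central differences; the step follows a common heuristic that
          # balances truncation and round-off error (cube root of machine epsilon,
          # scaled by the size of each coordinate).
          x = np.asarray(x, dtype=float)
          h = np.cbrt(np.finfo(float).eps) * np.maximum(np.abs(x), 1.0)
          grad = np.empty_like(x)
          for i in range(x.size):
              step = np.zeros_like(x)
              step[i] = h[i]
              grad[i] = (f(x + step) - f(x - step)) / (2.0 * h[i])
          return grad

      rosenbrock = lambda v: (1.0 - v[0]) ** 2 + 100.0 * (v[1] - v[0] ** 2) ** 2
      print(central_gradient(rosenbrock, [1.2, 1.0]))   # compare with the analytic gradient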

  17. Well hydraulics in pumping tests with exponentially decayed rates of abstraction in confined aquifers

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen

    2017-05-01

    Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models, and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model considering an exponentially decreasing pumping rate started at a certain (higher) rate and eventually stabilized at a certain (lower) rate for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during intermediate pumping stage, which has never been seen before in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function was bounded by two asymptotic curves of the constant-rate tests with rates equaling to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on such characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using the genetic algorithm.
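
    A sketch of the kind of variable-rate drawdown calculation involved: the pumping rate decays exponentially from a starting rate to a stabilized rate, and the drawdown is built by stepwise superposition of Theis solutions (a standard convolution treatment, not the authors' semi-analytical model, and without wellbore storage). All parameter values below are assumed for illustration.

      import numpy as np
      from scipy.special import exp1          # Theis well function W(u) = E1(u)

      T_aq, S, r = 1.0e-3, 1.0e-4, 30.0       # assumed transmissivity (m^2/s), storativity, radius (m)
      Q0, Qf, tau = 2.0e-3, 5.0e-4, 3600.0    # assumed start rate, stabilized rate (m^3/s), decay time (s)

      def pumping_rate(t):
          # Exponentially decaying rate that starts at Q0 and stabilizes at Qf.
          return Qf + (Q0 - Qf) * np.exp(-t / tau)

      def drawdown(t, n_steps=400):
          # Stepwise superposition of Theis solutions for the variable rate.
          tk = np.linspace(0.0, t, n_steps + 1)
          Qk = pumping_rate(tk)
          s = Qk[0] / (4.0 * np.pi * T_aq) * exp1(r**2 * S / (4.0 * T_aq * t))
          for k in range(1, n_steps):
              dQ = Qk[k] - Qk[k - 1]
              s += dQ / (4.0 * np.pi * T_aq) * exp1(r**2 * S / (4.0 * T_aq * (t - tk[k])))
          return s

      print(f"drawdown after 6 h: {drawdown(6 * 3600.0):.2f} m")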

  18. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model.

    PubMed

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for ≥15% decrease, and stabilization for ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI) as tested by Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
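
    An exponential decay model of the type described can be illustrated by fitting follow-up volumes to a relaxation toward a plateau; the functional form and the data below are assumptions for demonstration, not the study's actual parameterization or measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def volume_model(t, v_plateau, v0, k):
          # Exponential relaxation from the baseline volume v0 toward a plateau v_plateau.
          return v_plateau + (v0 - v_plateau) * np.exp(-k * t)

      # Hypothetical follow-up volumes (cm^3) at months after radiosurgery.
      t_months = np.array([0.0, 4.0, 10.0, 20.0, 36.0])
      volume = np.array([2.10, 1.95, 1.45, 1.15, 1.02])

      (v_plateau, v0, k), _ = curve_fit(volume_model, t_months, volume, p0=(1.0, 2.0, 0.1))
      print(f"predicted final volume ~ {v_plateau:.2f} cm^3 "
            f"({100.0 * (v_plateau - v0) / v0:+.0f}% change from baseline)")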

  19. Exponential H(infinity) synchronization of general discrete-time chaotic neural networks with or without time delays.

    PubMed

    Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin

    2010-08-01

    This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using Lyapunov-Krasovskii (or Lyapunov) functionals, state feedback controllers are established to not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving the convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network, so that the H(infinity) synchronization controller can be designed in a unified way. Finally, some illustrative examples with their simulations have been utilized to demonstrate the effectiveness of the proposed methods.

  20. Multiple types of synchronization analysis for discontinuous Cohen-Grossberg neural networks with time-varying delays.

    PubMed

    Li, Jiarong; Jiang, Haijun; Hu, Cheng; Yu, Zhiyong

    2018-03-01

    This paper is devoted to the exponential synchronization, finite time synchronization, and fixed-time synchronization of Cohen-Grossberg neural networks (CGNNs) with discontinuous activations and time-varying delays. A discontinuous feedback controller and a novel adaptive feedback controller are designed to realize global exponential synchronization, finite time synchronization and fixed-time synchronization by adjusting the values of the parameters ω in the controller. Furthermore, the settling time of the fixed-time synchronization derived in this paper is less conservative and more accurate. Finally, some numerical examples are provided to show the effectiveness and flexibility of the results derived in this paper.

  1. Improved result on stability analysis of discrete stochastic neural networks with time delay

    NASA Astrophysics Data System (ADS)

    Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng

    2009-04-01

    This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of the linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.

  2. Flow of 3D Eyring-Powell fluid by utilizing Cattaneo-Christov heat flux model and chemical processes over an exponentially stretching surface

    NASA Astrophysics Data System (ADS)

    Hayat, Tanzila; Nadeem, S.

    2018-03-01

    This paper examines the three dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first intensifies to its maximum value and then gradually declines to zero, which shows the occurrence of the Sparrow-Gregg hill (SGH) phenomenon. Also, for higher values of the strength of reaction parameters, the concentration profile decreases.

  3. Linear or Exponential Number Lines

    ERIC Educational Resources Information Center

    Stafford, Pat

    2011-01-01

    Having decided to spend some time looking at one's understanding of numbers, the author was inspired by "Alex's Adventures in Numberland," by Alex Bellos to look at one's innate appreciation of number. Bellos quotes research studies suggesting that an individual's natural appreciation of numbers is more likely to be exponential rather…

  4. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement

    PubMed Central

    Gustman, Alan L.; Steinmeier, Thomas L.

    2012-01-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946

  5. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement.

    PubMed

    Gustman, Alan L; Steinmeier, Thomas L

    2012-06-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used.

  6. Monitoring of the permeable pavement demonstration site at Edison Environmental Center

    EPA Science Inventory

    The EPA’s Urban Watershed Management Branch has installed an instrumented, working full-scale 110-space pervious pavement parking lot and has been monitoring several environmental stressors and runoff. This parking lot demonstration site has allowed the investigation of differenc...

  7. Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.

    PubMed

    Capozziello, S; Lambiase, G; Saridakis, E N

    2017-01-01

    We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f ( T ) gravity. The three most studied viable f ( T ) models, namely the power law, the exponential and the square-root exponential are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameters space they always satisfy the BBN bounds. We conclude that viable f ( T ) models can successfully satisfy the BBN constraints.

  8. Non-extensive quantum statistics with particle-hole symmetry

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Shen, K. M.; Zhang, B. W.

    2015-06-01

    Based on Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been used for some time (Teweldeberhan et al., 2003; Silva et al., 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even terms, and demonstrate how earlier suggestions, such as the κ- and q-exponential, behave in this respect.

  9. Very slow lava extrusion continued for more than five years after the 2011 Shinmoedake eruption observed from SAR interferometry

    NASA Astrophysics Data System (ADS)

    Ozawa, T.; Miyagi, Y.

    2017-12-01

    Shinmoe-dake, located in SW Japan, erupted in January 2011 and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since then. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term, due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the deformation after that, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. Inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant almost settled within one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data. This suggests that this component has been due to deflation of a shallow magma source with excess pressure. In this study, we found the possibility that the long-term component also decayed exponentially; this component may be due to deflation of a deep source or to delayed vesiculation.

  10. Discrete-time BAM neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  11. Discrete-time bidirectional associative memory neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liang, J.; Cao, J.; Ho, D. W. C.

    2005-02-01

    Based on the linear matrix inequality (LMI) approach, some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent and some are delay-independent; they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.

  12. A kinetic study of solar wind electrons in the transition region from collision dominated to collisionless flow

    NASA Technical Reports Server (NTRS)

    Lie-Svendsen, O.; Leer, E.

    1995-01-01

    We have studied the evolution of the velocity distribution function of a test population of electrons in the solar corona and inner solar wind region, using a recently developed kinetic model. The model solves the time dependent, linear transport equation, with a Fokker-Planck collision operator to describe Coulomb collisions between the 'test population' and a thermal background of charged particles, using a finite differencing scheme. The model provides information on how non-Maxwellian features develop in the distribution function in the transition region from collision dominated to collisionless flow. By taking moments of the distribution the evolution of higher order moments, such as the heat flow, can be studied.

  13. Three-dimensional control of crystal growth using magnetic fields

    NASA Astrophysics Data System (ADS)

    Dulikravich, George S.; Ahuja, Vineet; Lee, Seungsoo

    1993-07-01

    Two coupled systems of partial differential equations governing three-dimensional laminar viscous flow undergoing solidification or melting under the influence of arbitrarily oriented externally applied magnetic fields have been formulated. The model accounts for arbitrary temperature dependence of physical properties including latent heat release, effects of Joule heating, magnetic field forces, and mushy region existence. On the basis of this model a numerical algorithm has been developed and implemented using central differencing on a curvilinear boundary-conforming grid and Runge-Kutta explicit time-stepping. The numerical results clearly demonstrate possibilities for active and practically instantaneous control of melt/solid interface shape, the solidification/melting front propagation speed, and the amount and location of solid accrued.

  14. Artificial dissipation and central difference schemes for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1987-01-01

    An artificial dissipation model, including boundary treatment, that is employed in many central difference schemes for solving the Euler and Navier-Stokes equations is discussed. Modifications of this model such as the eigenvalue scaling suggested by upwind differencing are examined. Multistage time stepping schemes with and without a multigrid method are used to investigate the effects of changes in the dissipation model on accuracy and convergence. Improved accuracy for inviscid and viscous airfoil flow is obtained with the modified eigenvalue scaling. Slower convergence rates are experienced with the multigrid method using such scaling. The rate of convergence is improved by applying a dissipation scaling function that depends on mesh cell aspect ratio.

  15. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
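
    The following is a minimal sketch, not the authors' hydrologic model, contrasting a first-order, explicit, fixed-step Euler integration with an adaptive-step reference solution on a toy linear-reservoir water balance; the storm forcing, recession constant and step sizes are invented for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy lumped water balance: dS/dt = P(t) - k*S (linear reservoir).
        # P, k and the step sizes are illustrative assumptions, not the paper's model.
        k = 0.8                                                   # recession constant [1/d]
        P = lambda t: 10.0 * np.exp(-((t - 5.0) / 1.5) ** 2)      # storm pulse [mm/d]
        rhs = lambda t, S: P(t) - k * S

        t_end, S0 = 20.0, 5.0

        # First-order, explicit, fixed-step Euler (cheap but error-prone).
        def euler_fixed(dt):
            t, S = 0.0, S0
            while t < t_end - 1e-12:
                S += dt * rhs(t, S)
                t += dt
            return S

        # Adaptive-step reference solution.
        ref = solve_ivp(rhs, (0.0, t_end), [S0], rtol=1e-8, atol=1e-10).y[0, -1]

        for dt in (1.0, 0.5, 0.1):
            print(f"dt={dt:4.1f}  Euler error = {abs(euler_fixed(dt) - ref):.4f} mm")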

  16. Effect of local minima on adiabatic quantum optimization.

    PubMed

    Amin, M H S

    2008-04-04

    We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.

  17. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.

  18. On non-exponential cosmological solutions with two factor spaces of dimensions m and 1 in the Einstein-Gauss-Bonnet model with a Λ-term

    NASA Astrophysics Data System (ADS)

    Ernazarov, K. K.

    2017-12-01

    We consider a (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with the cosmological Λ-term. We restrict the metrics to be diagonal ones and find, for a certain Λ = Λ(m), a class of cosmological solutions with non-exponential time dependence of two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of the m-dimensional subspace and tends asymptotically to an isotropic solution with exponential dependence of the scale factors.

  19. On exponential stability of linear Levin-Nohel integro-differential equations

    NASA Astrophysics Data System (ADS)

    Tien Dung, Nguyen

    2015-02-01

    The aim of this paper is to investigate the exponential stability for linear Levin-Nohel integro-differential equations with time-varying delays. To the best of our knowledge, the exponential stability for such equations has not yet been discussed. In addition, since we do not require that the kernel and delay are continuous, our results improve those obtained in Becker and Burton [Proc. R. Soc. Edinburgh, Sect. A: Math. 136, 245-275 (2006)]; Dung [J. Math. Phys. 54, 082705 (2013)]; and Jin and Luo [Comput. Math. Appl. 57(7), 1080-1088 (2009)].

  20. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
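
    The following is a toy sketch of the difference between the semi-implicit (SI) and fully implicit (FI) treatments described above, applied to an assumed scalar relaxation problem with a temperature-dependent coefficient (not the Compton-scattering Fokker-Planck operator itself): the SI step freezes the coefficient at its beginning-of-time-step value, while the FI step evaluates it at the end of the step and therefore requires a nonlinear solve.

        import numpy as np
        from scipy.optimize import brentq

        # Toy relaxation problem dT/dt = -k(T) * (T - Teq) with k(T) = T**3.
        # (Illustrative stand-in for a temperature-dependent coefficient.)
        Teq = 1.0
        k = lambda T: T ** 3

        def semi_implicit(T, dt):
            # Backward Euler with the coefficient frozen at the beginning of the step.
            return (T + dt * k(T) * Teq) / (1.0 + dt * k(T))

        def fully_implicit(T, dt):
            # Coefficient at its end-of-step value: solve T1 = T - dt*k(T1)*(T1 - Teq).
            g = lambda T1: T1 - T + dt * k(T1) * (T1 - Teq)
            return brentq(g, min(T, Teq) - 1e-9, max(T, Teq) + 1e-9)

        T_si = T_fi = 4.0
        dt = 0.5                      # deliberately large step
        for _ in range(10):
            T_si, T_fi = semi_implicit(T_si, dt), fully_implicit(T_fi, dt)
            print(f"SI: {T_si:8.4f}   FI: {T_fi:8.4f}")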

  1. FBST for Cointegration Problems

    NASA Astrophysics Data System (ADS)

    Diniz, M.; Pereira, C. A. B.; Stern, J. M.

    2008-11-01

    In order to estimate causal relations, time series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To solve the problem, one can work with differenced series or use multivariate models like VAR or VEC models. In this case, the analysed series are going to present a long-run relation, i.e. a cointegration relation. Even though the Bayesian literature about inference on VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection tests in multivariate (VAR/VEC) time series models and shows how to implement it using data sets available in the literature as well as simulated data sets. A standard non-informative prior is assumed.

  2. Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1997-01-01

    A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrids are considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, D.P.

    This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. OMEGA employs a 3D mesh of wedge-shaped elements (triangles when viewed from above) that adapt with time. Because wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model near geographic features such as coastlines, where material properties change discontinuously, is illustrated. Temporal adaptivity was used additionally to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including manipulation of huge data bases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.

  4. Development of an efficient procedure for calculating the aerodynamic effects of planform variation

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Geller, E. W.

    1981-01-01

    Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly can not be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure the perturbed loading can be calculated in about one-tenth the time of that for the base solution.

  5. Modulation of cosmic microwave background polarization with a warm rapidly rotating half-wave plate on the Atacama B-Mode Search instrument.

    PubMed

    Kusaka, A; Essinger-Hileman, T; Appel, J W; Gallardo, P; Irwin, K D; Jarosik, N; Nolta, M R; Page, L A; Parker, L P; Raghunathan, S; Sievers, J L; Simon, S M; Staggs, S T; Visnjic, K

    2014-02-01

    We evaluate the modulation of cosmic microwave background polarization using a rapidly rotating, half-wave plate (HWP) on the Atacama B-Mode Search. After demodulating the time-ordered-data (TOD), we find a significant reduction of atmospheric fluctuations. The demodulated TOD is stable on time scales of 500-1000 s, corresponding to frequencies of 1-2 mHz. This facilitates recovery of cosmological information at large angular scales, which are typically available only from balloon-borne or satellite experiments. This technique also achieves a sensitive measurement of celestial polarization without differencing the TOD of paired detectors sensitive to two orthogonal linear polarizations. This is the first demonstration of the ability to remove atmospheric contamination at these levels from a ground-based platform using a rapidly rotating HWP.
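
    The following is a rough numpy sketch of the demodulation idea: a timestream containing a polarization signal modulated at four times the HWP rotation frequency plus an unmodulated atmospheric drift is mixed down and low-pass filtered. The signal model, sample rate and block-averaging filter are illustrative assumptions, not the ABS pipeline.

        import numpy as np

        # Synthetic time-ordered data: polarization signal modulated at 4*f_HWP,
        # plus slow atmospheric drift and white noise (all values illustrative).
        rng = np.random.default_rng(0)
        fs, f_hwp = 200.0, 2.5          # sample rate [Hz], HWP rotation rate [Hz]
        t = np.arange(0, 600, 1.0 / fs)
        Q, U = 1.0e-3, -0.5e-3          # "sky" polarization (arbitrary units)
        chi = 2.0 * np.pi * f_hwp * t   # HWP angle
        drift = 0.2 * np.sin(2 * np.pi * 0.01 * t)      # unmodulated atmosphere
        tod = Q * np.cos(4 * chi) + U * np.sin(4 * chi) + drift \
              + 1.0e-3 * rng.standard_normal(t.size)

        # Lock-in demodulation: mix down at 4*chi, then low-pass by block averaging.
        def lowpass(x, n=2000):
            return x[: x.size // n * n].reshape(-1, n).mean(axis=1)

        Q_est = lowpass(2.0 * tod * np.cos(4 * chi))
        U_est = lowpass(2.0 * tod * np.sin(4 * chi))
        print(Q_est.mean(), U_est.mean())   # ~1e-3 and ~-0.5e-3; the drift is rejected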

  6. Experimental Magnetohydrodynamic Energy Extraction from a Pulsed Detonation

    DTIC Science & Technology

    2015-03-01

    experimental data taken in this thesis will follow voltage profiles similar to Fig. 2. Notice the initial section in Fig. 2 shows exponential decay consistent...equal that time constant. The exponential curves in Fig. 2 show how changing the time constant can change the charge and/or discharge rate of the...see Fig. 1), at a sampling rate of 1 MHz. Shielded wire and a common ground were used throughout the DAQ system to avoid capacitive issues in the

  7. Evo-SETI: A Mathematical Tool for Cladistics, Evolution, and SETI.

    PubMed

    Maccone, Claudio

    2017-04-06

    The discovery of new exoplanets makes us wonder where each new exoplanet stands along its way to develop life as we know it on Earth. Our Evo-SETI Theory is a mathematical way to face this problem. We describe cladistics and evolution by virtue of a few statistical equations based on lognormal probability density functions (pdf) in time. We call b-lognormal a lognormal pdf starting at instant b (birth). Then, the lifetime of any living being becomes a suitable b-lognormal in time. Next, our "Peak-Locus Theorem" translates cladistics: each species created by evolution is a b-lognormal whose peak lies on the exponentially growing number of living species. This exponential is the mean value of a stochastic process called "Geometric Brownian Motion" (GBM). Past mass extinctions were all lows of this GBM. In addition, the Shannon Entropy (with a reversed sign) of each b-lognormal is the measure of how evolved that species is, and we call it EvoEntropy. The "molecular clock" is re-interpreted as the EvoEntropy straight line in time whenever the mean value is exactly the GBM exponential. We were also able to extend the Peak-Locus Theorem to any mean value other than the exponential. For example, we derive in this paper for the first time the EvoEntropy corresponding to the Markov-Korotayev (2007) "cubic" evolution: a curve of logarithmic increase.
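
    The following is a small sketch, with invented parameter values, of the two ingredients named above: a b-lognormal (a lognormal pdf in t - b, starting at the birth instant b) and the exponential mean value of a Geometric Brownian Motion.

        import numpy as np

        # b-lognormal: a lognormal pdf in (t - b), defined for t > b (birth instant).
        def b_lognormal(t, b, mu, sigma):
            t = np.asarray(t, dtype=float)
            out = np.zeros_like(t)
            m = t > b
            x = t[m] - b
            out[m] = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
                     / (x * sigma * np.sqrt(2 * np.pi))
            return out

        # Mean value of a Geometric Brownian Motion: exponential growth N0*exp(mu_gbm*t).
        gbm_mean = lambda t, N0, mu_gbm: N0 * np.exp(mu_gbm * t)

        ts = np.linspace(0.0, 10.0, 101)
        print(b_lognormal(ts, b=2.0, mu=0.5, sigma=0.4).max())
        print(gbm_mean(ts, N0=1.0, mu_gbm=0.3)[-1])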

  8. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run-and-tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is classically described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic response for the distribution of run times is not exponential, but a heavy-tailed power-law decay, which is at odds with the motility findings. We investigate the consequences of the motor statistics on the macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model considering bacterial dwelling times on the surfaces related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when considering the power-law run time distribution. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is considered. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
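
    The following is a toy sketch contrasting run-time samples drawn from an exponential distribution with a mean near 1 s and from a heavy-tailed power law; the exponent and lower cutoff are assumptions for illustration, not fitted values from the experiments.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Exponential run times with ~1 s mean (classical run-and-tumble picture).
        runs_exp = rng.exponential(scale=1.0, size=n)

        # Heavy-tailed run times: Pareto-type power law p(t) ~ t**-(1+a) for t > t_min.
        # Exponent a and t_min are illustrative assumptions.
        a, t_min = 1.2, 0.1
        runs_pl = t_min * (1.0 + rng.pareto(a, size=n))

        for name, r in [("exponential", runs_exp), ("power law", runs_pl)]:
            print(f"{name:11s}  mean={r.mean():7.3f}  P(run > 10 s)={np.mean(r > 10):.4f}")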

  9. SINDA, Systems Improved Numerical Differencing Analyzer

    NASA Technical Reports Server (NTRS)

    Fink, L. C.; Pan, H. M. Y.; Ishimoto, T.

    1972-01-01

    A computer program has been written to analyze a group of 100-node areas and then provide for summation of any number of 100-node areas to obtain a temperature profile. SINDA program options offer the user a variety of methods for solution of thermal analog models presented in network format.

  10. Field programmable analog array based on current differencing transconductance amplifiers and its application to high-order filter

    NASA Astrophysics Data System (ADS)

    He, Haizhen; Luo, Rongming; Hu, Zhenhua; Wen, Lei

    2017-07-01

    A current-mode field programmable analog array (FPAA) is presented in this paper. The proposed FPAA consists of 9 configurable analog blocks (CABs), which are based on current differencing transconductance amplifiers (CDTA) and trans-impedance amplifiers (TIA). The proposed CABs interconnect through global lines. These global lines contain some bridge switches, which are used to reduce the parasitic capacitance effectively. A high-order current-mode low-pass and band-pass filter with transmission zeros, based on the simulation of general passive RLC ladder prototypes, is proposed and mapped onto the FPAA structure in order to demonstrate the versatility of the FPAA. These filters exhibit good performance in bandwidth: the filter cutoff frequency can be tuned from 1.2 MHz to 40 MHz. The proposed FPAA is simulated in a standard Charted 0.18 μm CMOS process with ±1.2 V power supply to confirm the presented theory, and the results show good agreement with the theoretical analysis.

  11. Integration of Landsat TM and SPOT HRG Images for Vegetation Change Detection in the Brazilian Amazon

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Moran, Emilio

    2009-01-01

    Traditional change detection approaches have been proven to be difficult in detecting vegetation changes in the moist tropical regions with multitemporal images. This paper explores the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data for vegetation change detection in the Brazilian Amazon. A principal component analysis was used to integrate TM and HRG panchromatic data. Vegetation change/non-change was detected with the image differencing approach based on the TM and HRG fused image and the corresponding TM image. A rule-based approach was used to classify the TM and HRG multispectral images into thematic maps with three coarse land-cover classes: forest, non-forest vegetation, and non-vegetation lands. A hybrid approach combining image differencing and post-classification comparison was used to detect vegetation change trajectories. This research indicates promising vegetation change techniques, especially for vegetation gain and loss, even if very limited reference data are available. PMID:19789721

  12. Method of resolving radio phase ambiguity in satellite orbit determination

    NASA Technical Reports Server (NTRS)

    Councelman, Charles C., III; Abbot, Richard I.

    1989-01-01

    For satellite orbit determination, the most accurate observable available today is microwave radio phase, which can be differenced between observing stations and between satellites to cancel both transmitter- and receiver-related errors. For maximum accuracy, the integer cycle ambiguities of the doubly differenced observations must be resolved. To perform this ambiguity resolution, a bootstrapping strategy is proposed. This strategy requires the tracking stations to have a wide ranging progression of spacings. By conventional 'integrated Doppler' processing of the observations from the most widely spaced stations, the orbits are determined well enough to permit resolution of the ambiguities for the most closely spaced stations. The resolution of these ambiguities reduces the uncertainty of the orbit determination enough to enable ambiguity resolution for more widely spaced stations, which further reduces the orbital uncertainty. In a test of this strategy with six tracking stations, both the formal and the true errors of determining Global Positioning System satellite orbits were reduced by a factor of 2.

  13. Numerical and Experimental Studies of the Natural Convection Flow Within a Horizontal Cylinder Subjected to a Uniformly Cold Wall Boundary Condition. Ph.D. Thesis - Va. Poly. Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Stewart, R. B.

    1972-01-01

    Numerical solutions are obtained for the quasi-compressible Navier-Stokes equations governing the time-dependent natural convection flow within a horizontal cylinder. The early-time flow development and wall heat transfer are obtained after imposing a uniformly cold wall boundary condition on the cylinder. Solutions are also obtained for the case of a time-varying cold wall boundary condition. Windward explicit differencing is used for the numerical solutions. The viscous truncation error associated with this scheme is controlled so that first-order accuracy is maintained in time and space. The results encompass a range of Grashof numbers from 8.34 × 10⁴ to 7 × 10⁷, which is within the laminar flow regime for gravitationally driven fluid flows. Experiments within a small-scale instrumented horizontal cylinder revealed the time development of the temperature distribution across the boundary layer and also the decay of wall heat transfer with time.

  14. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    PubMed

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristics (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
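
    The following is a minimal sketch of fitting the stretched exponential signal model S(b) = S0·exp(-(b·DDC)^α) to a six-b-value acquisition; the b values, noise level and true parameters are synthetic assumptions, not the study's data.

        import numpy as np
        from scipy.optimize import curve_fit

        # Stretched exponential signal model: S(b) = S0 * exp(-(b * DDC)**alpha).
        def stretched_exp(b, S0, DDC, alpha):
            return S0 * np.exp(-(b * DDC) ** alpha)

        # Synthetic acquisition with six b values [s/mm^2] (illustrative, not the study's).
        b = np.array([0, 50, 200, 500, 800, 1000], dtype=float)
        true = dict(S0=1.0, DDC=1.2e-3, alpha=0.75)
        rng = np.random.default_rng(1)
        signal = stretched_exp(b, **true) + 0.01 * rng.standard_normal(b.size)

        popt, _ = curve_fit(stretched_exp, b, signal,
                            p0=(1.0, 1.0e-3, 0.8),
                            bounds=([0.1, 1e-5, 0.1], [2.0, 1e-2, 1.0]))
        print(dict(zip(("S0", "DDC", "alpha"), popt)))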

  15. Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    1986-01-01

    Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
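
    The following is a short sketch of the two recommended methods, single exponential smoothing and Brown's one-parameter linear (double) exponential smoothing; the circulation counts and smoothing constant are invented for illustration.

        # Single exponential smoothing and Brown's one-parameter linear (double)
        # exponential smoothing; the circulation counts and alpha are illustrative.
        circ = [1200, 1260, 1310, 1280, 1350, 1400, 1380, 1450]   # yearly circulations
        alpha = 0.3

        def single_es(x, alpha):
            s = x[0]
            for xt in x[1:]:
                s = alpha * xt + (1 - alpha) * s
            return s                     # one-step-ahead forecast

        def brown_linear_es(x, alpha, horizon=1):
            s1 = s2 = x[0]
            for xt in x[1:]:
                s1 = alpha * xt + (1 - alpha) * s1       # first smoothing
                s2 = alpha * s1 + (1 - alpha) * s2       # second smoothing
            level = 2 * s1 - s2
            trend = alpha / (1 - alpha) * (s1 - s2)
            return level + trend * horizon

        print("SES forecast:  ", round(single_es(circ, alpha)))
        print("Brown forecast:", round(brown_linear_es(circ, alpha)))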

  16. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.

  17. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
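
    The following is a scalar sketch of why an exponential-fitted step helps for kinetics-like stiffness: an exponential (Rosenbrock-)Euler step is compared with classical explicit Euler on an assumed stiff linear relaxation problem, not on an actual reaction mechanism.

        import numpy as np

        # Stiff scalar test problem y' = -k*(y - g(t)) mimicking fast equilibration.
        k = 1000.0
        g = lambda t: np.cos(t)
        f = lambda t, y: -k * (y - g(t))

        def phi1(z):
            # phi1(z) = (exp(z) - 1)/z, with the z -> 0 limit handled.
            return np.expm1(z) / z if z != 0.0 else 1.0

        def step_exp_euler(t, y, h):
            # Exponential (Rosenbrock-)Euler: y+ = y + h*phi1(h*J)*f(t, y), J = df/dy.
            J = -k
            return y + h * phi1(h * J) * f(t, y)

        def step_explicit_euler(t, y, h):
            return y + h * f(t, y)

        h, t, y_exp, y_cls = 0.01, 0.0, 2.0, 2.0   # h*k = 10: outside Euler's stability
        for _ in range(200):
            y_exp = step_exp_euler(t, y_exp, h)
            y_cls = step_explicit_euler(t, y_cls, h)
            t += h
        print("exponential Euler:", y_exp, "  explicit Euler:", y_cls)  # latter blows up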

  18. Fractional Stability of Trunk Acceleration Dynamics of Daily-Life Walking: Toward a Unified Concept of Gait Stability

    PubMed Central

    Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.

    2017-01-01

    Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400

  19. Orbit Determination of the Thermosphere, Ionosphere, Mesosphere, Energetics and Dynamics (TIMED) Mission Using Differenced One-way Doppler (DOWD)Tracking Data from the Tracking and Data Relay Satellite System (TDRSS)

    NASA Technical Reports Server (NTRS)

    Marr, Greg C.; Maher, Michael; Blizzard, Michael; Showell, Avanaugh; Asher, Mark; Devereux, Will

    2004-01-01

    Over an approximately 48-hour period from September 26 to 28,2002, the Thermosphere, Ionosphere, Mesosphere, Energetics and Dynamics (TIMED) mission was intensively supported by the Tracking and Data Relay Satellite System (TDRSS). The TIMED satellite is in a nearly circular low-Earth orbit with a semimajor axis of approximately 7000 km and an inclination of approximately 74 degrees. The objective was to provide TDRSS tracking support for orbit determination (OD) to generate a definitive ephemeris of 24-hour duration or more with a 3-sigma position error no greater than 100 meters, and this tracking campaign was successful. An ephemeris was generated by Goddard Space Flight Center (GSFC) personnel using the TDRSS tracking data and was compared with an ephemeris generated by the Johns Hopkins University's Applied Physics Lab (APL) using TIMED Global Positioning System (GPS) data. Prior to the tracking campaign OD error analysis was performed to justify scheduling the TDRSS support.

  20. Analyzing a stochastic time series obeying a second-order differential equation.

    PubMed

    Lehle, B; Peinke, J

    2015-06-01

    The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Also processes that obey a stochastically forced second-order differential equation can be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we will analyze these errors and we will propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed to a given time series.

  1. On new non-modal hydrodynamic stability modes and resulting non-exponential growth rates - a Lie symmetry approach

    NASA Astrophysics Data System (ADS)

    Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan

    2016-11-01

    Classical hydrodynamic stability theory for laminar shear flows, no matter whether considering long-term stability or transient growth, is based on the normal-mode ansatz, or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flows seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from the normal mode. However, very recently it was discovered that most of the classical canonical shear flows such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, the Lamb-Oseen vortex or the asymptotic suction boundary layer admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.

  2. Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis

    NASA Astrophysics Data System (ADS)

    Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.

    2013-04-01

    We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.

  3. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  4. Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery

    USGS Publications Warehouse

    Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.

    2016-01-01

    Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ± 8−9cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to quantitatively monitor surface change over time relative to management activities.
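
    The following is a minimal numpy sketch of the DEM-differencing idea: subtract two co-registered elevation grids, mask changes below the detection limit (on the order of the ±8-9 cm reported above), and convert the remaining change to erosion and deposition volumes; the grids and cell size here are synthetic.

        import numpy as np

        # Two co-registered DEMs (metres); synthetic stand-ins for the pre/post surveys.
        rng = np.random.default_rng(42)
        dem_before = rng.normal(1500.0, 2.0, size=(400, 400))
        dem_after = dem_before + rng.normal(0.0, 0.05, size=dem_before.shape)
        dem_after[100:150, 200:260] -= 0.20          # a patch of simulated erosion

        cell_area = 0.12 * 0.12                      # 12-cm ground sampling distance [m^2]
        detection_limit = 0.09                       # +/- 9 cm minimum detectable change [m]

        dod = dem_after - dem_before                 # DEM of difference
        dod[np.abs(dod) < detection_limit] = 0.0     # mask sub-threshold change

        erosion_m3 = -dod[dod < 0].sum() * cell_area
        deposition_m3 = dod[dod > 0].sum() * cell_area
        print(f"erosion:    {erosion_m3:8.2f} m^3")
        print(f"deposition: {deposition_m3:8.2f} m^3")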

  5. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N² scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  6. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

    We consider exponential methods of second order in time in order to integrate the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at symmetry, symplecticity and approximation of invariants of the proposed methods. This allows integration up to long times with reasonable accuracy. Computational efficiency is also our aim. Therefore, we perform numerical computations in order to compare the methods considered, and we conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
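
    The following is a compact sketch of one explicit second-order Lawson (integrating-factor) step for the 1D cubic Schrödinger equation on a periodic Fourier grid, with a simple rescaling step as one possible reading of "projection on the norm of the solution"; the grid, sign conventions and parameters are assumptions, not the authors' setup.

        import numpy as np

        # 1D cubic NLS  i u_t = -u_xx + q |u|^2 u  on a periodic grid, advanced with a
        # second-order explicit Lawson (integrating-factor Heun) step.
        Nx, Lbox, q = 256, 2 * np.pi * 10, -1.0
        x = np.linspace(0.0, Lbox, Nx, endpoint=False)
        kx = 2 * np.pi * np.fft.fftfreq(Nx, d=Lbox / Nx)
        Lhat = -1j * kx ** 2                          # linear part in Fourier space
        Nl = lambda u: -1j * q * np.abs(u) ** 2 * u   # nonlinear part in physical space

        def lawson2_step(u, h):
            E = np.exp(h * Lhat)                      # exact linear propagator
            uhat, N0hat = np.fft.fft(u), np.fft.fft(Nl(u))
            u_pred = np.fft.ifft(E * (uhat + h * N0hat))          # Lawson-Euler predictor
            unew_hat = E * uhat + 0.5 * h * (E * N0hat + np.fft.fft(Nl(u_pred)))
            return np.fft.ifft(unew_hat)

        u = np.exp(-(x - Lbox / 2) ** 2) * np.exp(2j * x)         # smooth pulse
        norm0 = np.linalg.norm(u)
        h = 1e-3
        for _ in range(1000):
            u = lawson2_step(u, h)
            u *= norm0 / np.linalg.norm(u)   # rescale back onto the initial norm
        print("max |u| after t = 1:", np.abs(u).max())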

  7. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model

    PubMed Central

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Abstract Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for ≤15% decrease, and stabilization for ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18–87 months) follow-up period (mean volume change of −43.3%). Volume regression (mean decrease of −50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of −3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI) as tested by Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
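
    The following is a small sketch, with invented follow-up volumes, of fitting a single exponential decay toward a final volume, V(t) = Vf + (V0 - Vf)·exp(-t/τ), in the spirit of the exponential fitting model described above.

        import numpy as np
        from scipy.optimize import curve_fit

        # Exponential decay toward a final volume: V(t) = Vf + (V0 - Vf) * exp(-t / tau).
        def vol_model(t, V0, Vf, tau):
            return Vf + (V0 - Vf) * np.exp(-t / tau)

        # Invented follow-up data (months, cm^3); not the study's measurements.
        t_mo = np.array([0.0, 4.0, 10.0, 20.0, 36.0])
        vol = np.array([3.2, 3.0, 2.4, 1.9, 1.7])

        popt, _ = curve_fit(vol_model, t_mo, vol, p0=(3.2, 1.5, 10.0))
        V0, Vf, tau = popt
        print(f"fitted V0={V0:.2f} cm^3, Vf={Vf:.2f} cm^3, tau={tau:.1f} months")
        print("predicted volume at 60 months:", round(vol_model(60.0, *popt), 2), "cm^3")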

  8. Fundamental Flux Equations for Fracture-Matrix Interactions with Linear Diffusion

    NASA Astrophysics Data System (ADS)

    Oldenburg, C. M.; Zhou, Q.; Rutqvist, J.; Birkholzer, J. T.

    2017-12-01

    The conventional dual-continuum models are only applicable for late-time behavior of pressure propagation in fractured rock, while discrete-fracture-network models may explicitly deal with matrix blocks at high computational expense. To address these issues, we developed a unified-form diffusive flux equation for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular matrix blocks (squares, cubes, rectangles, and rectangular parallelepipeds) by partitioning the entire dimensionless-time domain (Zhou et al., 2017a, b). For each matrix block, this flux equation consists of the early-time solution up until a switch-over time after which the late-time solution is applied to create continuity from early to late time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the coefficients dependent on dimensionless area-to-volume ratio and aspect ratios for rectangular blocks. For the late-time solutions, one exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic blocks. The time-partitioning method was also used for calculating pressure/concentration/temperature distribution within a matrix block. The approximate solution contains an error-function solution for early times and an exponential solution for late times, with relative errors less than 0.003. These solutions form the kernel of multirate and multidimensional hydraulic, solute and thermal diffusion in fractured reservoirs.

  9. Volcanic activity at Etna volcano, Sicily, Italy between June 2011 and March 2017 studied with TanDEM-X SAR interferometry

    NASA Astrophysics Data System (ADS)

    Kubanek, J.; Raible, B.; Westerhaus, M.; Heck, B.

    2017-12-01

    High-resolution and up-to-date topographic data are of high value in volcanology and can be used in a variety of applications such as volcanic flow modeling or hazard assessment. Furthermore, time series of topographic data can provide valuable insights into the dynamics of an ongoing eruption. Differencing topographic data acquired at different times makes it possible to derive the areal coverage of lava, flow volumes, and lava extrusion rates, the most important parameters during ongoing eruptions for estimating hazard potential, yet the most difficult to determine. However, topographic data acquisition and provision is a challenge. Very often, high-resolution data only exist within a small spatial extent, or the available data are already outdated when the final product is provided. This is especially true for very dynamic landscapes, such as volcanoes. The bistatic TanDEM-X radar satellite mission enables for the first time the repeated generation of up-to-date and high-resolution digital elevation models (DEMs) using the interferometric phase. The repeated acquisition of TanDEM-X data facilitates the generation of a time series of DEMs. Differencing DEMs generated from bistatic TanDEM-X data over time can contribute to monitoring topographic changes at active volcanoes and can help to estimate magmatic ascent rates. Here, we use the bistatic TanDEM-X data to investigate the activity of Etna volcano in Sicily, Italy. Etna's activity is characterized by lava fountains and lava flows with ash plumes from four major summit crater areas. Especially the newest crater, the New South East Crater (NSEC), which formed in 2011, has been highly active in recent years. Over one hundred bistatic TanDEM-X data pairs were acquired between January 2011 and March 2017 in StripMap mode, covering episodes of lava fountaining and lava flow emplacement at Etna's NSEC and its surrounding area. Generating a DEM from every bistatic data pair enables us to assess the areal extent of the lava flows and to calculate lava flow volumes and lava extrusion rates. TanDEM-X data have been acquired at Etna during almost every overflight of the TanDEM-X satellite mission, resulting in a high temporal resolution of DEMs and giving highly valuable insights into Etna's volcanic activity over the last six years.

  10. Space Monitoring of urban sprawl

    NASA Astrophysics Data System (ADS)

    Nole, G.; Lanorte, A.; Murgante, B.; Lasaponara, R.

    2012-04-01

    During the last few decades, in many regions throughout the world, abandonment of agricultural land has induced a high concentration of people in densely populated urban areas. The deep social, economic and environmental changes have caused strong and extensive land cover changes. This is regarded as a pressing issue that calls for a clear understanding of the ongoing trends and future urban expansion. The main issues of great importance in modelling urban growth include spatial and temporal dynamics, scale dynamics, and man-induced land use changes. Although urban growth is perceived as necessary for a sustainable economy, uncontrolled or sprawling urban growth can cause various problems, such as the loss of open space, landscape alteration, environmental pollution, traffic congestion, infrastructure pressure, and other social and economic issues. To face these drawbacks, continuous monitoring of the evolution of urban growth, in terms of type and extent of changes over time, is essential for supporting planners and decision makers in future urban planning. A critical point for understanding and monitoring urban expansion processes is the availability of both (i) time-series data sets and (ii) updated information on the current urban spatial structure, in order to define and locate the evolution trends. In such a context, an effective contribution can be offered by satellite remote sensing technologies, which are able to provide both historical data archives and up-to-date imagery. Satellite technologies represent a cost-effective means for obtaining useful data that can be easily and systematically updated for the whole globe. Nowadays medium-resolution satellite images, such as Landsat TM or ASTER, can be downloaded free of charge from the NASA web site. The use of satellite imagery along with robust data analysis techniques can serve monitoring and planning purposes, as these enable the reporting of ongoing trends of urban growth at a detailed level. Nevertheless, the exploitation of satellite Earth Observation in the field of urban growth monitoring is relatively new, although during the last three decades great efforts have been addressed to the application of remote sensing in detecting land use and land cover changes using a number of data analyses, such as: (i) spectral enhancement based on vegetation index differencing, principal component analysis, image differencing and visual interpretation and/or classification, (ii) post-classification change differencing and a combination of image enhancement and post-classification comparison, (iii) mixture analysis, (iv) artificial neural networks, (v) landscape metrics (patchiness and map density) and (vi) the integration of geographical information system and remote sensing data. In this paper a comparison of the methods listed before is carried out using satellite time series made up of Landsat MSS, TM, ETM+ and ASTER data for some test areas selected in the south of Italy and in Cairo, in order to extract and quantify urban sprawl and its spatial and temporal feature patterns.

  11. pth moment exponential stability of stochastic memristor-based bidirectional associative memory (BAM) neural networks with time delays.

    PubMed

    Wang, Fen; Chen, Yuanlong; Liu, Meichun

    2018-02-01

    Stochastic memristor-based bidirectional associative memory (BAM) neural networks with time delays play an increasingly important role in the design and implementation of neural network systems. Under the framework of Filippov solutions, the issues of the pth moment exponential stability of stochastic memristor-based BAM neural networks are investigated. By using the stochastic stability theory, Itô's differential formula and Young inequality, the criteria are derived. Meanwhile, with Lyapunov approach and Cauchy-Schwarz inequality, we derive some sufficient conditions for the mean square exponential stability of the above systems. The obtained results improve and extend previous works on memristor-based or usual neural networks dynamical systems. Four numerical examples are provided to illustrate the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
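
    The following is a toy sketch of the event-driven idea for an integrate-and-fire neuron with an exponential synaptic current: when the membrane and synaptic time constants are chosen in a 2:1 ratio, the closed-form trajectory becomes a quadratic in x = exp(-t/τm), so the next threshold crossing follows from polynomial root finding; all parameter values are illustrative assumptions.

        import numpy as np

        # Event-driven update for a leaky integrate-and-fire neuron with an exponential
        # synaptic current. With tau_m = 2*tau_s the membrane trajectory is a quadratic
        # in x = exp(-t / tau_m), so threshold crossings reduce to polynomial roots.
        tau_s, tau_m = 5.0, 10.0          # ms, with tau_m = 2 * tau_s by construction
        V_th = 1.0                        # spike threshold (arbitrary units)

        def next_spike_time(V0, I0):
            """Exact time of the next threshold crossing, or None if no spike occurs."""
            a = 1.0 / tau_m - 1.0 / tau_s
            # V(t) = V0*exp(-t/tau_m) + (I0/a)*(exp(-t/tau_s) - exp(-t/tau_m))
            #      = (I0/a)*x**2 + (V0 - I0/a)*x   with x = exp(-t/tau_m)
            coeffs = [I0 / a, V0 - I0 / a, -V_th]
            roots = [r.real for r in np.roots(coeffs)
                     if abs(r.imag) < 1e-12 and 0.0 < r.real < 1.0]
            if not roots:
                return None
            x = max(roots)                # largest admissible x = earliest crossing
            return -tau_m * np.log(x)

        print(next_spike_time(V0=0.2, I0=0.5))   # ~2.4 ms
        print(next_spike_time(V0=0.2, I0=0.05))  # None: current too weak to spike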

  13. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) by the aid of systems with interval parameters which are established by using the concept of Filippov solution. New intermittent controller and adaptive controller with logarithmic quantization are structured to deal with the difficulties induced by time-varying delays, interval parameters as well as stochastic perturbations, simultaneously. Moreover, not only control cost can be reduced but also communication channels and bandwidth are saved by using these controllers. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Master-slave exponential synchronization of delayed complex-valued memristor-based neural networks via impulsive control.

    PubMed

    Li, Xiaofan; Fang, Jian-An; Li, Huiyuan

    2017-09-01

    This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Global exponential stability of inertial memristor-based neural networks with time-varying delays and impulses.

    PubMed

    Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong

    2017-11-01

    In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of the inertial neural networks and memristor. Impulses with and without delays are considered when modeling the inertial neural networks simultaneously, which are of great practical significance in the current study. Some sufficient conditions are derived under the framework of the Lyapunov stability method, as well as an extended Halanay differential inequality and a new delay impulsive differential inequality, which depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. On the analytical determination of relaxation modulus of viscoelastic materials by Prony's interpolation method

    NASA Technical Reports Server (NTRS)

    Rodriguez, Pedro I.

    1986-01-01

    A computer implementation to Prony's curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
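
    A compact sketch of the classical Prony procedure described above: equally spaced samples are first used to solve a linear-prediction system for the polynomial whose roots encode the exponential constants, and a final least-squares step recovers the amplitudes. Function and variable names are illustrative, and no attempt is made to reproduce the graphics-based resampling step of the paper.

```python
import numpy as np

def prony(y, dt, p):
    """Fit y(t) ~ sum_i a_i * exp(b_i * t) to N equally spaced samples (N >= 2p)."""
    n = len(y)
    # Linear prediction: y[k+p] + c_1*y[k+p-1] + ... + c_p*y[k] = 0 (least squares for c).
    rows = np.array([y[k + p - 1::-1][:p] for k in range(n - p)])
    c, *_ = np.linalg.lstsq(rows, -y[p:], rcond=None)
    # Roots of z^p + c_1 z^{p-1} + ... + c_p encode the exponential constants b_i.
    z = np.roots(np.concatenate(([1.0], c)))
    b = np.log(z.astype(complex)) / dt
    # Final least-squares solve for the amplitudes a_i (Vandermonde system in z).
    vand = np.vander(z, n, increasing=True).T
    a, *_ = np.linalg.lstsq(vand, y.astype(complex), rcond=None)
    return a, b

t = np.arange(0, 5, 0.05)
y = 2.0 * np.exp(-1.5 * t) + 0.5 * np.exp(-0.3 * t)
a, b = prony(y, dt=0.05, p=2)
print(np.real(a), np.real(b))   # ~ [2.0, 0.5] and [-1.5, -0.3] (in some order)
```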

  17. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  18. An explicit asymptotic model for the surface wave in a viscoelastic half-space based on applying Rabotnov's fractional exponential integral operators

    NASA Astrophysics Data System (ADS)

    Wilde, M. V.; Sergeeva, N. V.

    2018-05-01

    An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov's integral operators are used for describing of material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. Padè approximation is constructed to unite short-time and long-time models. The form of this approximation allows to formulate the explicit model using a fractional exponential Rabotnov's integral operator with parameters depending on the properties of surface wave. The applicability of derived models is studied by comparing with the exact solutions of a model problem. It is revealed that the model based on Padè approximation is highly effective for all the possible time domains.

  19. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  20. Humans Can Adopt Optimal Discounting Strategy under Real-Time Constraints

    PubMed Central

    Schweighofer, N; Shishida, K; Han, C. E; Okamoto, Y; Tanaka, S. C; Yamawaki, S; Doya, K

    2006-01-01

    Critical to our many daily choices between larger delayed rewards and smaller, more immediate rewards are the shape and the steepness of the function that discounts rewards with time. Although research in artificial intelligence favors exponential discounting in uncertain environments, studies with humans and animals have consistently shown hyperbolic discounting. We investigated how humans perform in a reward decision task with temporal constraints, in which each choice affects the time remaining for later trials, and in which the delays vary at each trial. We demonstrated that most of our subjects adopted exponential discounting in this experiment. Further, we confirmed analytically that exponential discounting, with a decay rate comparable to that used by our subjects, maximized the total reward gain in our task. Our results suggest that the particular shape and steepness of temporal discounting is determined by the task that the subject is facing, and question the notion of hyperbolic reward discounting as a universal principle. PMID:17096592
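
    For concreteness, the toy computation below contrasts the two discount functions discussed above: exponential, V = r·exp(-k·d), and hyperbolic, V = r/(1 + k·d). The reward size, delays, and decay rate are made up for illustration.

```python
import numpy as np

reward, k = 10.0, 0.1          # illustrative reward size and decay rate (per second)
delays = np.array([1.0, 5.0, 20.0, 60.0])

exponential = reward * np.exp(-k * delays)   # constant proportional devaluation per unit time
hyperbolic = reward / (1 + k * delays)       # steep early discounting, heavy tail later

for d, e, h in zip(delays, exponential, hyperbolic):
    print(f"delay {d:5.1f}s  exponential {e:6.3f}  hyperbolic {h:6.3f}")
```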

  1. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.

  2. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
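
    The snippet below illustrates, under stated assumptions, the mimicry issue described above: samples drawn from a two-component exponential mixture are fitted with a power law by maximum likelihood, and a Kolmogorov-Smirnov statistic quantifies how acceptable the "incorrect" model looks. The mixture weights and time constants are invented for illustration and are not the empirical sleep/wake values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two-component exponential mixture standing in for bout durations (minutes).
n = 5000
comp = rng.random(n) < 0.7
samples = np.where(comp, rng.exponential(2.0, n), rng.exponential(30.0, n))

# Maximum-likelihood power-law fit above a cutoff x_min (Clauset-style estimator).
x_min = 1.0
tail = samples[samples >= x_min]
alpha = 1.0 + tail.size / np.sum(np.log(tail / x_min))

# Goodness of fit of the "incorrect" power-law model via the KS statistic.
# scipy's pareto(b, scale=x_min) has density ~ x**-(b+1), so b = alpha - 1.
ks = stats.kstest(tail, stats.pareto(b=alpha - 1.0, scale=x_min).cdf)
print(f"fitted exponent alpha = {alpha:.2f}, KS statistic = {ks.statistic:.3f}")
```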

  3. Lichen ecology and diversity of a sagebrush steppe in Oregon: 1977 to the present

    USDA-ARS?s Scientific Manuscript database

    A lichen checklist is presented of 141 species from the Lawrence Memorial Grassland Preserve and nearby lands in Wasco County, Oregon, based on collections made in the 1970s and 1990s. Collections include epiphytic, lignicolous, saxicolous, muscicolous and terricolous species. To evaluate differenc...

  4. Interdependence of PRECIS Role Operators: A Quantitative Analysis of Their Associations.

    ERIC Educational Resources Information Center

    Mahapatra, Manoranjan; Biswas, Subal Chandra

    1986-01-01

    Analyzes associations among different role operators quantitatively by taking input strings from 200 abstracts, each related to subject fields of taxation, genetic psychology, and Shakespearean drama, and subjecting them to the Chi-square test. Significant associations by other differencing operators and connectives are discussed. A schema of role…

  5. Comparing fire severity models from post-fire and pre/post-fire differenced imagery

    USDA-ARS?s Scientific Manuscript database

    Wildland fires are common in rangelands worldwide. The potential for high severity fires to affect long-term changes in rangelands is considerable, and for this reason assessing fire severity shortly after the fire is critical. Such assessments are typically carried out following Burned Area Emergen...

  6. Continuous electrocardiogram reveals differences in the short-term cardiotoxic response of Wistar-Kyoto and spontaneously hypertensive rats to doxorubicin

    EPA Science Inventory

    Electrocardiography (ECG) is one of the standard technologies used to monitor and assess cardiac function, and provide insight into the mechanisms driving myocardial pathology. Increased understanding of the effects of cardiovascular disease on rat ECG may help make ECG assessmen...

  7. Using Approximate Dynamic Programming to Solve the Stochastic Demand Military Inventory Routing Problem with Direct Delivery

    DTIC Science & Technology

    due to the dangers of utilizing convoy operations. However, enemy actions, austere conditions, and inclement weather pose a significant risk to a...squares temporal differencing for policy evaluation. We construct a representative problem instance based on an austere combat environment in order to

  8. Evo-SETI: A Mathematical Tool for Cladistics, Evolution, and SETI

    PubMed Central

    Maccone, Claudio

    2017-01-01

    The discovery of new exoplanets makes us wonder where each new exoplanet stands along its way to develop life as we know it on Earth. Our Evo-SETI Theory is a mathematical way to face this problem. We describe cladistics and evolution by virtue of a few statistical equations based on lognormal probability density functions (pdf) in the time. We call b-lognormal a lognormal pdf starting at instant b (birth). Then, the lifetime of any living being becomes a suitable b-lognormal in the time. Next, our “Peak-Locus Theorem” translates cladistics: each species created by evolution is a b-lognormal whose peak lies on the exponentially growing number of living species. This exponential is the mean value of a stochastic process called “Geometric Brownian Motion” (GBM). Past mass extinctions were all-lows of this GBM. In addition, the Shannon Entropy (with a reversed sign) of each b-lognormal is the measure of how evolved that species is, and we call it EvoEntropy. The “molecular clock” is re-interpreted as the EvoEntropy straight line in the time whenever the mean value is exactly the GBM exponential. We were also able to extend the Peak-Locus Theorem to any mean value other than the exponential. For example, we derive in this paper for the first time the EvoEntropy corresponding to the Markov-Korotayev (2007) “cubic” evolution: a curve of logarithmic increase. PMID:28383497

  9. Compressed exponential relaxation in liquid silicon: Universal feature of the crossover from ballistic to diffusive behavior in single-particle dynamics

    NASA Astrophysics Data System (ADS)

    Morishita, Tetsuya

    2012-07-01

    We report a first-principles molecular-dynamics study of the relaxation dynamics in liquid silicon (l-Si) over a wide temperature range (1000-2200 K). We find that the intermediate scattering function for l-Si exhibits a compressed exponential decay above 1200 K including the supercooled regime, which is in stark contrast to that for normal "dense" liquids which typically show stretched exponential decay in the supercooled regime. The coexistence of particles having ballistic-like motion and those having diffusive-like motion is demonstrated, which accounts for the compressed exponential decay in l-Si. An attempt to elucidate the crossover from the ballistic to the diffusive regime in the "time-dependent" diffusion coefficient is made and the temperature-independent universal feature of the crossover is disclosed.
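
    As a small illustration of the functional form discussed above, the sketch below fits a Kohlrausch-type function f(t) = exp[-(t/tau)^beta] to synthetic decay data and reports whether the fitted exponent beta exceeds 1 (compressed) or falls below 1 (stretched). The synthetic data and noise level are invented and do not come from the simulations in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def kww(t, tau, beta):
    """Kohlrausch decay: compressed if beta > 1, stretched if beta < 1."""
    return np.exp(-(t / tau) ** beta)

rng = np.random.default_rng(2)
t = np.linspace(0.05, 3.0, 60)
data = kww(t, tau=1.0, beta=1.6) + 0.01 * rng.standard_normal(t.size)  # synthetic F(k, t)

popt, _ = curve_fit(kww, t, data, p0=(1.0, 1.0), bounds=([1e-3, 0.1], [10.0, 3.0]))
print(f"tau = {popt[0]:.2f}, beta = {popt[1]:.2f}  ->",
      "compressed" if popt[1] > 1 else "stretched")
```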

  10. A nanostructured surface increases friction exponentially at the solid-gas interface.

    PubMed

    Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E; Prashanthi, Kovur; Thundat, Thomas

    2016-09-06

    According to Stokes' law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. Nanostructured resonator thus allows discrimination of otherwise narrow range of gaseous viscosity making dissipation an ideal parameter for analysis of a gaseous media. We attribute the observed exponential enhancement to the stochastic nature of interactions of many coupled nanostructures with the gas media.

  11. A nanostructured surface increases friction exponentially at the solid-gas interface

    NASA Astrophysics Data System (ADS)

    Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E.; Prashanthi, Kovur; Thundat, Thomas

    2016-09-01

    According to Stokes’ law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. Nanostructured resonator thus allows discrimination of otherwise narrow range of gaseous viscosity making dissipation an ideal parameter for analysis of a gaseous media. We attribute the observed exponential enhancement to the stochastic nature of interactions of many coupled nanostructures with the gas media.

  12. Matrix exponential-based closures for the turbulent subgrid-scale stress tensor.

    PubMed

    Li, Yi; Chevillard, Laurent; Eyink, Gregory; Meneveau, Charles

    2009-01-01

    Two approaches for closing the turbulence subgrid-scale stress tensor in terms of matrix exponentials are introduced and compared. The first approach is based on a formal solution of the stress transport equation in which the production terms can be integrated exactly in terms of matrix exponentials. This formal solution of the subgrid-scale stress transport equation is shown to be useful to explore special cases, such as the response to constant velocity gradient, but neglecting pressure-strain correlations and diffusion effects. The second approach is based on an Eulerian-Lagrangian change of variables, combined with the assumption of isotropy for the conditionally averaged Lagrangian velocity gradient tensor and with the recent fluid deformation approximation. It is shown that both approaches lead to the same basic closure in which the stress tensor is expressed as the matrix exponential of the resolved velocity gradient tensor multiplied by its transpose. Short-time expansions of the matrix exponentials are shown to provide an eddy-viscosity term and particular quadratic terms, and thus allow a reinterpretation of traditional eddy-viscosity and nonlinear stress closures. The basic feasibility of the matrix-exponential closure is illustrated by implementing it successfully in large eddy simulation of forced isotropic turbulence. The matrix-exponential closure employs the drastic approximation of entirely omitting the pressure-strain correlation and other nonlinear scrambling terms. But unlike eddy-viscosity closures, the matrix exponential approach provides a simple and local closure that can be derived directly from the stress transport equation with the production term, and using physically motivated assumptions about Lagrangian decorrelation and upstream isotropy.
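
    A minimal sketch of the basic closure described above: the subgrid stress is modeled, up to a scalar prefactor, as the matrix exponential of the resolved velocity gradient times its transpose. The prefactor, time scale, and sample gradient tensor are placeholders; a real LES implementation would supply the resolved gradient field and calibrate the coefficient.

```python
import numpy as np
from scipy.linalg import expm

def matrix_exponential_stress(grad_u, tau, coeff=1.0):
    """Closure of the form tau_sgs = coeff * expm(tau*A) @ expm(tau*A).T,
    with A the resolved velocity gradient tensor and tau a decorrelation time scale."""
    e = expm(tau * grad_u)
    return coeff * e @ e.T

# Illustrative (trace-free) resolved velocity gradient tensor.
A = np.array([[0.1, 0.4, 0.0],
              [-0.2, -0.3, 0.1],
              [0.0, 0.2, 0.2]])
print(matrix_exponential_stress(A, tau=0.5))

# Short-time expansion: expm(tau*A) @ expm(tau*A).T ~ I + tau*(A + A.T) + O(tau^2),
# i.e. an eddy-viscosity-like term proportional to the resolved strain rate.
```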

  13. Pore‐Scale Hydrodynamics in a Progressively Bioclogged Three‐Dimensional Porous Medium: 3‐D Particle Tracking Experiments and Stochastic Transport Modeling

    PubMed Central

    Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.

    2018-01-01

    Abstract Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3‐D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean‐squared displacements, are found to be non‐Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184

  14. Pore-Scale Hydrodynamics in a Progressively Bioclogged Three-Dimensional Porous Medium: 3-D Particle Tracking Experiments and Stochastic Transport Modeling

    NASA Astrophysics Data System (ADS)

    Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2018-03-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.

  15. Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue

    PubMed Central

    Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.

    2004-01-01

    The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/mol//K, were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry also under the assumption of a two-state transition, Tm = 57.0°C, ΔHm= 51.4 kcal/mol, and ΔCp = 730 cal/mol//K. PMID:15454455

  16. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually represented by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been proposed to simulate precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is less clear-cut when the simulated time series are used to drive a hydrological model; nonetheless, the mixed exponential distribution again appears as the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
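
    As a hedged sketch of the distribution comparison described above, the code below fits three of the candidate models (exponential, gamma, Weibull) to synthetic wet-day precipitation amounts with scipy and ranks them by log-likelihood and KS statistic; the mixed exponential and hybrid exponential/Pareto models would need a custom EM fit and are omitted. The synthetic data are illustrative, not the Quebec station records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic wet-day amounts (mm): a mixture of light and heavy rainfall days.
amounts = np.concatenate([rng.exponential(3.0, 1500), rng.exponential(15.0, 500)])

candidates = {"exponential": stats.expon, "gamma": stats.gamma, "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(amounts, floc=0.0)          # fix the location at zero for amounts
    loglik = np.sum(dist.logpdf(amounts, *params))
    ks = stats.kstest(amounts, dist(*params).cdf).statistic
    print(f"{name:12s}  log-likelihood {loglik:10.1f}  KS {ks:.3f}")
```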

  17. A fourth order accurate finite difference scheme for the computation of elastic waves

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first order system of equations for the velocities and stresses. The differencing is fourth order accurate in the spatial derivatives and second order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth order scheme requires from two-thirds to one-half the resolution of a typical second order scheme to give comparable accuracy.
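
    For reference, here is a minimal sketch of a fourth-order-accurate central difference for a spatial first derivative (the standard five-point stencil), compared against the usual second-order formula on a smooth test function. It illustrates only the spatial-accuracy idea, not the velocity-stress scheme of the paper.

```python
import numpy as np

def d1_second_order(f, h):
    """Second-order central difference (interior points only)."""
    return (f[2:] - f[:-2]) / (2 * h)

def d1_fourth_order(f, h):
    """Fourth-order central difference: (-f[i+2] + 8f[i+1] - 8f[i-1] + f[i-2]) / (12h)."""
    return (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)

h = 0.05
x = np.arange(0.0, 2 * np.pi + h, h)
f = np.sin(x)

err2 = np.max(np.abs(d1_second_order(f, h) - np.cos(x[1:-1])))
err4 = np.max(np.abs(d1_fourth_order(f, h) - np.cos(x[2:-2])))
print(f"max error: 2nd order {err2:.2e}, 4th order {err4:.2e}")
```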

  18. Assessment of an Unstructured-Grid Method for Predicting 3-D Turbulent Viscous Flows

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.

    1996-01-01

    A method Is presented for solving turbulent flow problems on three-dimensional unstructured grids. Spatial discretization Is accomplished by a cell-centered finite-volume formulation using an accurate lin- ear reconstruction scheme and upwind flux differencing. Time is advanced by an implicit backward- Euler time-stepping scheme. Flow turbulence effects are modeled by the Spalart-Allmaras one-equation model, which is coupled with a wall function to reduce the number of cells in the sublayer region of the boundary layer. A systematic assessment of the method is presented to devise guidelines for more strategic application of the technology to complex problems. The assessment includes the accuracy In predictions of skin-friction coefficient, law-of-the-wall behavior, and surface pressure for a flat-plate turbulent boundary layer, and for the ONERA M6 wing under a high Reynolds number, transonic, separated flow condition.

  19. Assessment of an Unstructured-Grid Method for Predicting 3-D Turbulent Viscous Flows

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.

    1996-01-01

    A method is presented for solving turbulent flow problems on three-dimensional unstructured grids. Spatial discretization is accomplished by a cell-centered finite-volume formulation using an accurate linear reconstruction scheme and upwind flux differencing. Time is advanced by an implicit backward-Euler time-stepping scheme. Flow turbulence effects are modeled by the Spalart-Allmaras one-equation model, which is coupled with a wall function to reduce the number of cells in the sublayer region of the boundary layer. A systematic assessment of the method is presented to devise guidelines for more strategic application of the technology to complex problems. The assessment includes the accuracy in predictions of skin-friction coefficient, law-of-the-wall behavior, and surface pressure for a flat-plate turbulent boundary layer, and for the ONERA M6 wing under a high Reynolds number, transonic, separated flow condition.

  20. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  1. A new look at atmospheric carbon dioxide

    NASA Astrophysics Data System (ADS)

    Hofmann, David J.; Butler, James H.; Tans, Pieter P.

    Carbon dioxide is increasing in the atmosphere and is of considerable concern in global climate change because of its greenhouse gas warming potential. The rate of increase has accelerated since measurements began at Mauna Loa Observatory in 1958, where carbon dioxide increased from less than 1 part per million per year (ppm yr^-1) prior to 1970 to more than 2 ppm yr^-1 in recent years. Here we show that the anthropogenic component (atmospheric value reduced by the pre-industrial value of 280 ppm) of atmospheric carbon dioxide has been increasing exponentially with a doubling time of about 30 years since the beginning of the industrial revolution (~1800). Even during the 1970s, when fossil fuel emissions dropped sharply in response to the "oil crisis" of 1973, the anthropogenic atmospheric carbon dioxide level continued increasing exponentially at Mauna Loa Observatory. Since the growth rate (time derivative) of an exponential has the same characteristic lifetime as the function itself, the carbon dioxide growth rate is also doubling at the same rate. This explains the observation that the linear growth rate of carbon dioxide has more than doubled in the past 40 years. The accelerating growth rate is simply the outcome of exponential growth in carbon dioxide with a nearly constant doubling time of about 30 years (about 2%/yr) and appears to have tracked human population since the pre-industrial era.
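
    The arithmetic behind the "doubling time of about 30 years" statement is a one-liner: for exponential growth, the continuous growth rate is ln 2 divided by the doubling time, and the derivative of an exponential doubles on the same schedule. The short sketch below verifies the roughly 2%/yr figure and projects the anthropogenic component forward; the 280 ppm pre-industrial baseline is taken from the abstract, while the present-day excess is a placeholder value.

```python
import numpy as np

doubling_time = 30.0                         # years, from the abstract
growth_rate = np.log(2) / doubling_time      # continuous rate ~ 0.023 per year
print(f"growth rate ~ {100 * growth_rate:.2f} %/yr")

preindustrial = 280.0                        # ppm, stated in the abstract
anthropogenic_now = 100.0                    # ppm, placeholder present-day excess
years_ahead = np.array([10, 20, 30])
projected = preindustrial + anthropogenic_now * np.exp(growth_rate * years_ahead)
print(projected)                             # the excess doubles after 30 years
```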

  2. H∞ control problem of linear periodic piecewise time-delay systems

    NASA Astrophysics Data System (ADS)

    Xie, Xiaochen; Lam, James; Li, Panshuo

    2018-04-01

    This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.

  3. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.

  4. Elastically driven intermittent microscopic dynamics in soft solids

    NASA Astrophysics Data System (ADS)

    Bouzid, Mehdi; Colombo, Jader; Barbosa, Lucas Vieira; Del Gado, Emanuela

    2017-06-01

    Soft solids with tunable mechanical response are at the core of new material technologies, but a crucial limit for applications is their progressive aging over time, which dramatically affects their functionalities. The generally accepted paradigm is that such aging is gradual and its origin is in slower than exponential microscopic dynamics, akin to the ones in supercooled liquids or glasses. Nevertheless, time- and space-resolved measurements have provided contrasting evidence: dynamics faster than exponential, intermittency and abrupt structural changes. Here we use 3D computer simulations of a microscopic model to reveal that the timescales governing stress relaxation, respectively, through thermal fluctuations and elastic recovery are key for the aging dynamics. When thermal fluctuations are too weak, stress heterogeneities frozen-in upon solidification can still partially relax through elastically driven fluctuations. Such fluctuations are intermittent, because of strong correlations that persist over the timescale of experiments or simulations, leading to faster than exponential dynamics.

  5. Exponential bound in the quest for absolute zero

    NASA Astrophysics Data System (ADS)

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  6. Exponential bound in the quest for absolute zero.

    PubMed

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  7. Phenomenology of stochastic exponential growth

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
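
    A small Euler-Maruyama sketch of the model class discussed above: Langevin growth with linear drift and multiplicative noise x^beta, where beta = 1 recovers geometric Brownian motion. It only illustrates how the mean-rescaled spread behaves for two values of beta; the parameters are arbitrary and no claim is made about matching the bacterial data.

```python
import numpy as np

def grow(beta, n_traj=2000, n_steps=400, dt=0.01, mu=1.0, sigma=0.3, x0=1.0, seed=4):
    """Euler-Maruyama for dx = mu*x*dt + sigma*x**beta*dW (beta = 1 is GBM)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_traj, x0)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_traj) * np.sqrt(dt)
        x = x + mu * x * dt + sigma * x ** beta * dw
        x = np.maximum(x, 1e-12)          # keep trajectories positive
    return x

for beta in (1.0, 0.5):
    x = grow(beta)
    r = x / x.mean()                      # mean-rescaled sizes at the final time
    print(f"beta = {beta}: rescaled std = {r.std():.3f}")
```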

  8. Weak associations between the daily number of suicide cases and amount of daily sunlight.

    PubMed

    Seregi, Bernadett; Kapitány, Balázs; Maróti-Agóts, Ákos; Rihmer, Zoltán; Gonda, Xénia; Döme, Péter

    2017-02-06

    Several environmental factors with periodic changes in intensity during the calendar year have been put forward to explain the increase in suicide frequency during spring and summer. In the current study we investigated the effect of averaged daily sunshine duration of periods with different lengths and 'lags' (i.e. the number of days between the last day of the period for which the averaged sunshine duration was calculated and the day of suicide) on suicide risk. We obtained data on daily numbers of suicide cases and daily sunshine duration in Hungary from 1979 to 2013. In order to remove the seasonal components from the two time series (i.e. numbers of suicides and sunshine hours) we used the differencing method. Pearson correlations (n = 22,950) were calculated to reveal associations between sunshine duration and suicide risk. The final sample consisted of 122,116 suicide cases. Regarding the entire investigated period, after differencing, sunshine duration and the number of suicides on the same days showed a distinctly weak, but highly significant positive correlation in the total sample (r = 0.067; p = 1.17×10^-13). Positive significant correlations (p < 0.0001) between suicide risk on the index day and averaged sunshine duration in the previous days (up to 11 days) were also found in the total sample. Our results from a large sample strongly support the hypothesis that sunshine has a prompt, but very weak increasing effect on the risk of suicide (especially violent cases among males). The main limitation is that possible confounding factors were not controlled for. Copyright © 2016 Elsevier Inc. All rights reserved.
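
    The differencing step described above can be sketched in a few lines: both daily series are first-differenced to strip out the shared seasonal cycle, and a Pearson correlation is then computed on the differenced series. The synthetic data below (a seasonal cycle plus noise) are placeholders for the suicide counts and sunshine hours.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
days = np.arange(3650)
season = np.sin(2 * np.pi * days / 365.25)

sunshine = 8 + 4 * season + rng.normal(0, 1.5, days.size)                  # hours/day (synthetic)
suicides = 10 + 2 * season + 0.05 * sunshine + rng.poisson(3, days.size)   # counts (synthetic)

# The raw correlation is dominated by the shared seasonal cycle ...
r_raw, _ = pearsonr(sunshine, suicides)
# ... so difference both series before correlating them.
r_diff, p_diff = pearsonr(np.diff(sunshine), np.diff(suicides))
print(f"raw r = {r_raw:.3f}, differenced r = {r_diff:.3f} (p = {p_diff:.2g})")
```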

  9. Array-based satellite phase bias sensing: theory and GPS/BeiDou/QZSS results

    NASA Astrophysics Data System (ADS)

    Khodabandeh, A.; Teunissen, P. J. G.

    2014-09-01

    Single-receiver integer ambiguity resolution (IAR) is a measurement concept that makes use of network-derived non-integer satellite phase biases (SPBs), among other corrections, to recover and resolve the integer ambiguities of the carrier-phase data of a single GNSS receiver. If it is realized, the very precise integer ambiguity-resolved carrier-phase data would then contribute to the estimation of the receiver’s position, thus making (near) real-time precise point positioning feasible. Proper definition and determination of the SPBs take a leading part in developing the idea of single-receiver IAR. In this contribution, the concept of array-based between-satellite single-differenced (SD) SPB determination is introduced, which is aimed to reduce the code-dominated precision of the SD-SPB corrections. The underlying model is realized by giving the role of the local reference network to an array of antennas, mounted on rigid platforms, that are separated by short distances so that the same ionospheric delay is assumed to be experienced by all the antennas. To that end, a closed-form expression of the array-aided SD-SPB corrections is presented, thereby proposing a simple strategy to compute the SD-SPBs. After resolving double-differenced ambiguities of the array’s data, the variance of the SD-SPB corrections is shown to be reduced by a factor equal to the number of antennas. This improvement in precision is also affirmed by numerical results of the three GNSSs GPS, BeiDou and QZSS. Experimental results demonstrate that the integer-recovered ambiguities converge to integers faster, upon increasing the number of antennas aiding the SD-SPB corrections.

  10. Time Correlations in Mode Hopping of Coupled Oscillators

    NASA Astrophysics Data System (ADS)

    Heltberg, Mathias L.; Krishna, Sandeep; Jensen, Mogens H.

    2017-05-01

    We study the dynamics in a system of coupled oscillators when Arnold tongues overlap. By varying the initial conditions, the deterministic system can be attracted to different limit cycles. Adding noise, the mode hopping between different states becomes a dominating part of the dynamics. We simplify the system through a Poincaré section, and derive a 1D model to describe the dynamics. We explain that for some parameter values of the external oscillator, the time distribution of occupancy in a state is exponential and thus memoryless. In the general case, on the other hand, it is a sum of exponential distributions, characteristic of a system with time correlations.

  11. Flows in a tube structure: Equation on the graph

    NASA Astrophysics Data System (ADS)

    Panasenko, Grigory; Pileckas, Konstantin

    2014-08-01

    The steady-state Navier-Stokes equations in thin structures lead to an elliptic second order equation for the macroscopic pressure on a graph. At the nodes of the graph the pressure satisfies Kirchhoff-type junction conditions. In the non-steady case the problem for the macroscopic pressure on the graph becomes nonlocal in time. In the paper we study the existence and uniqueness of a solution to such a one-dimensional model on the graph for a pipe-wise network. We also prove the exponential decay of the solution with respect to the time variable in the case when the data decay exponentially with respect to time.

  12. On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases

    NASA Astrophysics Data System (ADS)

    Khodabandeh, A.; Teunissen, P. J. G.

    2018-06-01

    Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.

  13. On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases

    NASA Astrophysics Data System (ADS)

    Khodabandeh, A.; Teunissen, P. J. G.

    2017-11-01

    Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.

  14. CFD analyses of combustor and nozzle flowfields

    NASA Astrophysics Data System (ADS)

    Tsuei, Hsin-Hua; Merkle, Charles L.

    1993-11-01

    The objectives of the research are to improve design capabilities for low thrust rocket engines through understanding of the detailed mixing and combustion processes. A Computational Fluid Dynamic (CFD) technique is employed to model the flowfields within the combustor, nozzle, and near plume field. The computational modeling of the rocket engine flowfields requires the application of the complete Navier-Stokes equations, coupled with species diffusion equations. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as a coordinated part of an ongoing experimental program at NASA LeRC. The numerical procedure employs both time-marching and time-accurate algorithms, using LU approximate factorization in time and flux-split upwind differencing in space. The analysis addresses the integrity of fuel film cooling along the wall, its effectiveness in mixing with the core flow (including unsteady large-scale effects), the resultant impact on performance, and the expansion of the near plume to the finite-pressure altitude chamber.

  15. Computational simulation of the creep-rupture process in filamentary composite materials

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.

  16. Discrete Variational Approach for Modeling Laser-Plasma Interactions

    NASA Astrophysics Data System (ADS)

    Reyes, J. Paxon; Shadwick, B. A.

    2014-10-01

    The traditional approach for fluid models of laser-plasma interactions begins by approximating fields and derivatives on a grid in space and time, leading to difference equations that are manipulated to create a time-advance algorithm. In contrast, by introducing the spatial discretization at the level of the action, the resulting Euler-Lagrange equations have particular differencing approximations that will exactly satisfy discrete versions of the relevant conservation laws. For example, applying a spatial discretization in the Lagrangian density leads to continuous-time, discrete-space equations and exact energy conservation regardless of the spatial grid resolution. We compare the results of two discrete variational methods using the variational principles from Chen and Sudan and Brizard. Since the fluid system conserves energy and momentum, the relative errors in these conserved quantities are well-motivated physically as figures of merit for a particular method. This work was supported by the U. S. Department of Energy under Contract No. DE-SC0008382 and by the National Science Foundation under Contract No. PHY-1104683.

  17. Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Kumagai, Yoshiaki

    2010-04-01

    In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations among the three time scales, namely real time (physical time), tick time and C-fluctuation time, in foreign exchange markets are analyzed. The data set used consists of trading prices of foreign exchange rates: US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The accuracy of the data is one minute, and data within a minute are recorded in order of transaction. The series of instantaneous velocities of C-fluctuation time flow are exponentially distributed for small C when measured in real time, and for very small C when measured in tick time. When the market is volatile, for larger C, the series of instantaneous velocities are also exponentially distributed.

  18. Rapid growth of seed black holes in the early universe by supra-exponential accretion.

    PubMed

    Alexander, Tal; Natarajan, Priyamvada

    2014-09-12

    Mass accretion by black holes (BHs) is typically capped at the Eddington rate, when radiation's push balances gravity's pull. However, even exponential growth at the Eddington-limited e-folding time t_E ~ a few × 0.01 billion years is too slow to grow stellar-mass BH seeds into the supermassive luminous quasars that are observed when the universe is 1 billion years old. We propose a dynamical mechanism that can trigger supra-exponential accretion in the early universe, when a BH seed is bound in a star cluster fed by the ubiquitous dense cold gas flows. The high gas opacity traps the accretion radiation, while the low-mass BH's random motions suppress the formation of a slowly draining accretion disk. Supra-exponential growth can thus explain the puzzling emergence of supermassive BHs that power luminous quasars so soon after the Big Bang. Copyright © 2014, American Association for the Advancement of Science.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
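
    As an illustration of the early-time/late-time switchover idea, the sketch below uses the classical slab (1-D matrix block) series and its textbook limits rather than the fitted three-term polynomials of the paper: the early-time branch 2*sqrt(tau/pi), the late-time leading term 1 - (8/pi^2)*exp(-pi^2*tau/4), and an assumed switchover at dimensionless time 0.2 (within the 0.157-0.229 range quoted above).

```python
import numpy as np

def slab_uptake_exact(tau, n_terms=200):
    """Exact fractional diffusive uptake of a slab matrix block,
    tau = D*t/L**2 (L = half-thickness), from the infinite exponential series."""
    tau = np.atleast_1d(tau)
    k = (2 * np.arange(n_terms) + 1) ** 2 * np.pi ** 2
    return 1.0 - np.sum(8.0 / k * np.exp(-np.outer(tau, k) / 4.0), axis=1)

def slab_uptake_approx(tau, tau_switch=0.2):
    """Unified-form approximation: square-root-of-time branch at early times,
    leading exponential term at late times, switched at tau_switch."""
    tau = np.atleast_1d(tau)
    early = 2.0 * np.sqrt(tau / np.pi)
    late = 1.0 - (8.0 / np.pi ** 2) * np.exp(-np.pi ** 2 * tau / 4.0)
    return np.where(tau < tau_switch, early, late)

tau = np.linspace(0.001, 1.0, 500)
err = np.abs(slab_uptake_approx(tau) - slab_uptake_exact(tau))
print(f"max |approx - exact| on [0.001, 1]: {err.max():.4f}")
```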

  20. Fatty acid composition of intramuscular fat from pastoral yak and Tibetan sheep

    USDA-ARS?s Scientific Manuscript database

    Fatty acid (FA) composition of intramuscular fat from mature male yak (n=6) and mature Tibetan sheep (n=6) grazed on the same pasture in the Qinghai-Tibetan Plateau was analyzed by gas chromatograph/mass spectrometer to characterize fat composition of these species and to evaluate possible differenc...

  1. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    NASA Astrophysics Data System (ADS)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology to meet various needs, adverse actions that are difficult to avoid are also emerging. One such action is data theft. Therefore, this study discusses cryptography and steganography, which aim to overcome these problems. This study uses the Modified Vigenere Cipher, Least Significant Bit, and Dictionary Based Compression methods. To determine performance, the Peak Signal to Noise Ratio (PSNR) method is used as an objective measure and the Mean Opinion Score (MOS) method as a subjective measure; the performance of this study is also compared with other methods such as Spread Spectrum and Pixel Value Differencing. After this comparison, it can be concluded that the proposed approach provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with MSE values in the range 0.0191622-0.05275 and PSNR in the range 60.909 to 65.306 for a hidden file size of 18 kb, and MOS values in the range 4.214 to 4.722, i.e., image quality approaching very good.
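
    For readers unfamiliar with the embedding step, the following is a minimal sketch of least-significant-bit steganography and a basic MSE/PSNR check only; it omits the Modified Vigenere Cipher and Dictionary Based Compression stages of the study, and the function names and cover image are illustrative.

```python
import numpy as np

def lsb_embed(pixels: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bits of a uint8 image array."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("message too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(pixels.shape)

def lsb_extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Recover `n_bytes` hidden bytes from the image LSBs."""
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = lsb_embed(cover.copy(), b"secret")
assert lsb_extract(stego, 6) == b"secret"

# A simple objective quality check between cover and stego images.
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse) if mse > 0 else float("inf")
print(f"MSE = {mse:.6f}, PSNR = {psnr:.2f} dB")
```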

  2. Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying

    2011-01-01

    Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of classified impervious surface images with the matched filtering method, were used to detect urban expansion. An impervious surface image classified with the hybrid method was used to modify the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images with the matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean-spectral images. PCA is not a good method for urban change detection in this study. Shadows and high spectral variation within the impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706

  3. Mass Loss of Larsen B Tributary Glaciers (Antarctic Peninsula) Unabated Since 2002

    NASA Technical Reports Server (NTRS)

    Berthier, Etienne; Scambos, Ted; Shuman, Christopher A.

    2012-01-01

    Ice mass loss continues at a high rate among the large glacier tributaries of the Larsen B Ice Shelf following its disintegration in 2002. We evaluate recent mass loss by mapping elevation changes between 2006 and 2010/11 using differencing of digital elevation models (DEMs). The measurement accuracy of these elevation changes is confirmed by a null test, subtracting DEMs acquired within a few weeks. The overall 2006-2010/11 mass loss rate (9.0 ± 2.1 Gt a-1) is similar to the 2001/02-2006 rate (8.8 ± 1.6 Gt a-1), derived using DEM differencing and laser altimetry. This unchanged overall loss masks a varying pattern of thinning and ice loss for individual glacier basins. On Crane Glacier, the thinning pulse, initially greatest near the calving front, is now broadening and migrating upstream. The largest losses are now observed for the Hektoria-Green glacier basin, having increased by 33% since 2006. Our method has enabled us to resolve large residual uncertainties in the Larsen B sector and confirm its state of ongoing rapid mass loss.

  4. SP mountain data analysis

    NASA Technical Reports Server (NTRS)

    Rawson, R. F.; Hamilton, R. E.; Liskow, C. L.; Dias, A. R.; Jackson, P. L.

    1981-01-01

    An analysis of synthetic aperture radar data of SP Mountain was undertaken to demonstrate the use of digital image processing techniques to aid in geologic interpretation of SAR data. These data were collected with the ERIM X- and L-band airborne SAR using like- and cross-polarizations. The resulting signal films were used to produce computer compatible tapes, from which four-channel imagery was generated. Slant range-to-ground range and range-azimuth-scale corrections were made in order to facilitate image registration; intensity corrections were also made. Manual interpretation of the imagery showed that L-band represented the geology of the area better than X-band. Several differences between the various images were also noted. Further digital analysis of the corrected data was done for enhancement purposes. This analysis included application of an MSS differencing routine and development of a routine for removal of relief displacement. It was found that accurate registration of the SAR channels is critical to the effectiveness of the differencing routine. Use of the relief displacement algorithm on the SP Mountain data demonstrated the feasibility of the technique.

  5. Solutions of the Taylor-Green Vortex Problem Using High-Resolution Explicit Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2013-01-01

    A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central-differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailley's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, 12th- and DRP spatial differencing schemes, the effect of the solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
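
    As a small illustration of explicit central differencing of the kind used for the spatial derivatives, the sketch below implements the standard 4th-order central stencil on a periodic grid; it is not the WRLES implementation and does not reproduce the 13-point DRP coefficients.

```python
import numpy as np

def ddx_central4(f: np.ndarray, dx: float) -> np.ndarray:
    """4th-order explicit central difference of a periodic 1-D field:
    f'(x_i) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 dx)."""
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * dx)

# Accuracy check on a smooth periodic function.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(ddx_central4(np.sin(x), dx) - np.cos(x)))
print(f"max error of 4th-order stencil on sin(x): {err:.2e}")
```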

  6. Numerical simulation of axisymmetric turbulent flow in combustors and diffusors. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Yung, Chain Nan

    1988-01-01

    A method for predicting turbulent flow in combustors and diffusers is developed. The Navier-Stokes equations, incorporating a turbulence kappa-epsilon model equation, were solved in a nonorthogonal curvilinear coordinate system. The solution applied the finite volume method to discretize the differential equations and utilized the SIMPLE algorithm iteratively to solve the differenced equations. A zonal grid method, wherein the flow field was divided into several subsections, was developed. This approach permitted different computational schemes to be used in the various zones. In addition, grid generation was made a simpler task. However, treatment of the zonal boundaries required special handling. Boundary overlap and interpolating techniques were used, and an adjustment of the flow variables was required to assure conservation of mass, momentum and energy fluxes. The numerical accuracy was assessed using different finite differencing methods, i.e., hybrid, quadratic upwind and skew upwind, to represent the convection terms. Flows in different geometries of combustors and diffusers were simulated; the results were compared with experimental data, and good agreement was obtained.

  7. New insights into soil temperature time series modeling: linear or nonlinear?

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram

    2018-03-01

    Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with the two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh Theor Appl Climatol 118:465-479, (2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with individual and hybrid nonlinear models in predicting DST time series indicates the lowest Akaike Information Criterion (AIC) index value, which considers model simplicity and accuracy simultaneously at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.

  8. Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.

    PubMed

    Okubo, T; Shibata, H; Takishima, T

    1983-07-01

    By means of a mathematical model, we have studied a way to correct the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow patterns: constant, exponential and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A time correction less inclusive than this, e.g. lag time only or lag time plus 50% response time, gives an overestimation, and a correction larger than this results in underestimation. The magnitude of error is dependent on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curve does not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at a rather fast flow rate.

  9. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients towards probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are quite established probability models in this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for some alternative sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
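
    A minimal sketch of the three-parameter generalized (exponentiated) exponential distribution follows, using the standard form F(t) = [1 - exp(-(t - mu)/sigma)]^alpha for t > mu; the closed-form hazard and the conditional-probability calculation illustrate the points made above, but the parameter values are illustrative, not the fitted Himalayan-catalogue values.

```python
import numpy as np

def gen_exp_cdf(t, mu, sigma, alpha):
    """CDF of the three-parameter generalized (exponentiated) exponential
    distribution: F(t) = [1 - exp(-(t - mu)/sigma)]**alpha for t > mu."""
    z = np.clip((np.asarray(t, dtype=float) - mu) / sigma, 0.0, None)
    return (1.0 - np.exp(-z)) ** alpha

def gen_exp_pdf(t, mu, sigma, alpha):
    z = np.clip((np.asarray(t, dtype=float) - mu) / sigma, 0.0, None)
    return (alpha / sigma) * (1.0 - np.exp(-z)) ** (alpha - 1.0) * np.exp(-z)

def gen_exp_hazard(t, mu, sigma, alpha):
    """Hazard h(t) = f(t)/(1 - F(t)); unlike the gamma case, it is available
    in closed form even when the shape parameter alpha is not an integer."""
    return gen_exp_pdf(t, mu, sigma, alpha) / (1.0 - gen_exp_cdf(t, mu, sigma, alpha))

# Conditional probability of an event within the next w years given an elapsed
# time T since the last event (illustrative parameters, not fitted values).
mu, sigma, alpha = 0.0, 25.0, 1.8
T, w = 17.0, 10.0
p_cond = ((gen_exp_cdf(T + w, mu, sigma, alpha) - gen_exp_cdf(T, mu, sigma, alpha))
          / (1.0 - gen_exp_cdf(T, mu, sigma, alpha)))
print(f"P(event within {w:.0f} yr | {T:.0f} yr elapsed) = {p_cond:.3f}")
print(f"hazard at T = {T:.0f} yr: {gen_exp_hazard(T, mu, sigma, alpha):.4f} per yr")
```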

  10. A Spectral Lyapunov Function for Exponentially Stable LTV Systems

    NASA Technical Reports Server (NTRS)

    Zhu, J. Jim; Liu, Yong; Hang, Rui

    2010-01-01

    This paper presents the formulation of a Lyapunov function for an exponentially stable linear time-varying (LTV) system using a well-defined PD-spectrum and the associated PD-eigenvectors. It provides a bridge between the first and second methods of Lyapunov for stability assessment, and will find significant applications in the analysis and control law design for LTV systems and linearizable nonlinear time-varying systems.

  11. Exponential integration algorithms applied to viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicit algorithms) give outstanding results, even for very large time steps.
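
    The sketch below illustrates the basic idea behind such linear exponential integrators on a stiff scalar test problem (the exponential Euler scheme, which treats the linear part exactly); it is only a generic illustration, not one of the four algorithms assessed in the paper.

```python
import numpy as np

def phi1(z):
    """phi_1(z) = (exp(z) - 1)/z, with the z -> 0 limit handled."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) > 1e-8, np.expm1(z) / np.where(z == 0.0, 1.0, z), 1.0)

def exponential_euler(lam, g, y0, t_end, h):
    """Integrate y' = lam*y + g(t, y): the stiff linear part is treated exactly
    through exp(lam*h); the remaining term is frozen over each step."""
    y, t = y0, 0.0
    for _ in range(int(round(t_end / h))):
        y = np.exp(lam * h) * y + h * phi1(lam * h) * g(t, y)
        t += h
    return y

# Stiff test problem y' = -1000*y + 1000*cos(t) - sin(t), y(0) = 1,
# whose exact solution is y(t) = cos(t).
lam = -1000.0
g = lambda t, y: 1000.0 * np.cos(t) - np.sin(t)
y_num = exponential_euler(lam, g, 1.0, t_end=1.0, h=0.01)
print(f"error at t = 1: {abs(y_num - np.cos(1.0)):.2e}")
# Note: an explicit forward Euler step would be unstable at this step size,
# since |1 + lam*h| = 9 > 1.
```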

  12. Adiabatic approximation with exponential accuracy for many-body systems and quantum computation

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Rezakhani, Ali T.; Hamma, Alioscia

    2009-10-01

    We derive a version of the adiabatic theorem that is especially suited for applications in adiabatic quantum computation, where it is reasonable to assume that the adiabatic interpolation between the initial and final Hamiltonians is controllable. Assuming that the Hamiltonian is analytic in a finite strip around the real-time axis, that some number of its time derivatives vanish at the initial and final times, and that the target adiabatic eigenstate is nondegenerate and separated by a gap from the rest of the spectrum, we show that one can obtain an error between the final adiabatic eigenstate and the actual time-evolved state which is exponentially small in the evolution time, where this time itself scales as the square of the norm of the time derivative of the Hamiltonian divided by the cube of the minimal gap.

  13. Proportional Feedback Control of Energy Intake During Obesity Pharmacotherapy.

    PubMed

    Hall, Kevin D; Sanghvi, Arjun; Göbel, Britta

    2017-12-01

    Obesity pharmacotherapies result in an exponential time course for energy intake whereby large early decreases dissipate over time. This pattern of declining drug efficacy to decrease energy intake results in a weight loss plateau within approximately 1 year. This study aimed to elucidate the physiology underlying the exponential decay of drug effects on energy intake. Placebo-subtracted energy intake time courses were examined during long-term obesity pharmacotherapy trials for 14 different drugs or drug combinations within the theoretical framework of a proportional feedback control system regulating human body weight. Assuming each obesity drug had a relatively constant effect on average energy intake and did not affect other model parameters, our model correctly predicted that long-term placebo-subtracted energy intake was linearly related to early reductions in energy intake according to a prespecified equation with no free parameters. The simple model explained about 70% of the variance between drug studies with respect to the long-term effects on energy intake, although a significant proportional bias was evident. The exponential decay over time of obesity pharmacotherapies to suppress energy intake can be interpreted as a relatively constant effect of each drug superimposed on a physiological feedback control system regulating body weight. © 2017 The Obesity Society.

  14. Rate laws of the self-induced aggregation kinetics of Brownian particles

    NASA Astrophysics Data System (ADS)

    Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra

    2016-03-01

    In this paper we study the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of a multiplicative noise, so there is an interplay between the two drift terms. This interplay may qualitatively account for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, the variation of cluster number with time is fitted well by a mono-exponentially decaying function of time. For the additive-noise-driven case, the decrease of the cluster number can be described by a power law, whereas for the process driven by multiplicative colored noise, the cluster number decays multi-exponentially. We have also explored how the rate constant (in the case where the cluster number decays mono-exponentially) depends on the strength of the interference of the noises and on their intensity, and how the structure factor at long times depends on the strength of the cross-correlation (CC) between the additive and the multiplicative noises.

  15. The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Lin, Yangzheng; Fichthorn, Kristen A.

    2017-10-01

    We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β 2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins and find that these are sometimes, but not always, exponential and we characterize the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
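
    For context, the following is a minimal sketch of a conventional rejection-free KMC step of the kind that LSKMC accelerates; the three-site rate table is an illustrative stand-in for the eleven binding sites and sixteen hops of the Ga/GaAs system, and no superbasin (absorbing Markov chain) machinery is included.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmc_trajectory(rates, start, n_steps):
    """Conventional rejection-free kinetic Monte Carlo on a discrete set of
    binding sites. `rates[i]` is a dict {j: rate_ij} of hop rates out of site i.
    Returns the visited sites and the cumulative (exponentially sampled) times."""
    site, t = start, 0.0
    sites, times = [site], [0.0]
    for _ in range(n_steps):
        targets = list(rates[site].keys())
        k = np.array([rates[site][j] for j in targets])
        k_tot = k.sum()
        t += rng.exponential(1.0 / k_tot)                       # residence time
        site = targets[rng.choice(len(targets), p=k / k_tot)]   # pick the hop
        sites.append(site)
        times.append(t)
    return sites, times

# Illustrative 3-site system: sites 0 and 1 form a fast "superbasin";
# site 2 is reached only through a slow exit hop.
rates = {0: {1: 1e3, 2: 1e-1},
         1: {0: 1e3},
         2: {0: 1e-1}}
sites, times = kmc_trajectory(rates, start=0, n_steps=10000)
print(f"simulated time after 10000 KMC steps: {times[-1]:.2f}")
```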

  16. Performance Assessment of Two GPS Receivers on Space Shuttle

    NASA Technical Reports Server (NTRS)

    Schroeder, Christine A.; Schutz, Bob E.

    1996-01-01

    Space Shuttle STS-69 was launched on September 7, 1995, carrying the Wake Shield Facility (WSF-02) among its payloads. The mission included two GPS receivers: a Collins 3M receiver onboard the Endeavour and an Osborne flight TurboRogue, known as the TurboStar, onboard the WSF-02. Two of the WSF-02 GPS Experiment objectives were to: (1) assess the ability to use GPS in a relative satellite positioning mode using the receivers on Endeavour and WSF-02; and (2) assess the performance of the receivers to support high precision orbit determination at the 400 km altitude. Three ground tests of the receivers were conducted in order to characterize the respective receivers. The analysis of the tests utilized the Double Differencing technique. A similar test in orbit was conducted during STS-69 while the WSF-02 was held by the Endeavour robot arm for a one hour period. In these tests, biases were observed in the double difference pseudorange measurements, implying that biases up to 140 m exist which do not cancel in double differencing. These biases appear to exist in the Collins receiver, but their effect can be mitigated by including measurement bias parameters to accommodate them in an estimation process. An additional test was conducted in which the orbit of the combined Endeavour/WSF-02 was determined independently with each receiver. These one hour arcs were based on forming double differences with 13 TurboRogue receivers in the global IGS network and estimating pseudorange biases for the Collins. Various analyses suggest the TurboStar overall orbit accuracy is about one to two meters for this period, based on double differenced phase residuals of 34 cm. These residuals indicate the level of unmodeled forces on Endeavour produced by gravitational and nongravitational effects. The rms differences between the two independently determined orbits are better than 10 meters, thereby demonstrating the accuracy of the Collins-determined orbit at this level as well as the accuracy of the relative positioning using these two receivers.

  17. An improved analysis of gravity drainage experiments for estimating the unsaturated soil hydraulic functions

    NASA Astrophysics Data System (ADS)

    Sisson, James B.; van Genuchten, Martinus Th.

    1991-04-01

    The unsaturated hydraulic properties are important parameters in any quantitative description of water and solute transport in partially saturated soils. Currently, most in situ methods for estimating the unsaturated hydraulic conductivity (K) are based on analyses that require estimates of the soil water flux and the pressure head gradient. These analyses typically involve differencing of field-measured pressure head (h) and volumetric water content (θ) data, a process that can significantly amplify instrumental and measurement errors. More reliable methods result when differencing of field data can be avoided. One such method is based on estimates of the gravity drainage curve K'(θ) = dK/dθ which may be computed from observations of θ and/or h during the drainage phase of infiltration drainage experiments assuming unit gradient hydraulic conditions. The purpose of this study was to compare estimates of the unsaturated soil hydraulic functions on the basis of different combinations of field data θ, h, K, and K'. Five different data sets were used for the analysis: (1) θ-h, (2) K-θ, (3) K'-θ (4) K-θ-h, and (5) K'-θ-h. The analysis was applied to previously published data for the Norfolk, Troup, and Bethany soils. The K-θ-h and K'-θ-h data sets consistently produced nearly identical estimates of the hydraulic functions. The K-θ and K'-θ data also resulted in similar curves, although results in this case were less consistent than those produced by the K-θ-h and K'-θ-h data sets. We conclude from this study that differencing of field data can be avoided and hence that there is no need to calculate soil water fluxes and pressure head gradients from inherently noisy field-measured θ and h data. The gravity drainage analysis also provides results over a much broader range of hydraulic conductivity values than is possible with the more standard instantaneous profile analysis, especially when augmented with independently measured soil water retention data.

  18. An exactly solvable, spatial model of mutation accumulation in cancer

    NASA Astrophysics Data System (ADS)

    Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej

    2016-12-01

    One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration rates. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.

  19. Weblog patterns and human dynamics with decreasing interest

    NASA Astrophysics Data System (ADS)

    Guo, J.-L.; Fan, C.; Guo, Z.-H.

    2011-06-01

    In order to describe the phenomenon that people's interest in doing something is high in the beginning but gradually decreases until reaching a balance, a model describing the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with both exponential and power-law features, i.e., a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a special power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavioral dynamics.

  20. Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.

    PubMed

    van Elburg, Ronald A J; van Ooyen, Arjen

    2009-07-01

    An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.

  1. Analysis of two production inventory systems with buffer, retrials and different production rates

    NASA Astrophysics Data System (ADS)

    Jose, K. P.; Nair, Salini S.

    2017-09-01

    This paper compares two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β; however, a higher production rate αβ (α > 1) is used at the beginning of production. The higher production rate reduces the loss of customers when the inventory level approaches zero. Demand from customers arrives according to a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which retrials occur with exponentially distributed inter-retrial times. The two models differ in the capacity of the buffer. The aim is to find the minimum value of the total cost by varying different parameters and to compare the efficiency of the models; the optimum value of α corresponding to the minimum total cost is an important quantity. The matrix analytic method is used to find an algorithmic solution to the problem. Several numerical and graphical illustrations are also provided.

  2. Fast dynamics in glass-forming polymers revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colmenero, J.; Arbe, A.; Mijangos, C.

    1997-12-31

    The so-called fast dynamics of glass-forming systems as observed by time-of-flight (TOF) neutron scattering techniques is revisited. TOF results corresponding to several glass-forming polymers with different chemical microstructure and glass-transition temperature are presented together with the theoretical framework proposed by the authors to interpret these results. The main conclusion is that the TOF data can be explained in terms of quasiharmonic vibrations and the particular short-time behavior of the segmental dynamics. The segmental dynamics display in the very short time range (t ≈ 2 ps) a crossover from a simple exponential behavior towards a non-exponential regime. The first exponential decay, which is controlled by C-C rotational barriers, can be understood as a trace of the behavior of the system in the absence of the effects (correlations, cooperativity, memory effects, ...) which characterize the dense supercooled liquid-like state against the normal liquid state. The non-exponential regime at t > 2 ps corresponds to what is usually understood as α and β relaxations. Some implications of these results are also discussed.

  3. Precise Point Positioning Using Triple GNSS Constellations in Various Modes

    PubMed Central

    Afifi, Akram; El-Rabbany, Ahmed

    2016-01-01

    This paper introduces a new dual-frequency precise point positioning (PPP) model, which combines the observations from three different global navigation satellite system (GNSS) constellations, namely GPS, Galileo, and BeiDou. Combining measurements from different GNSS systems introduces additional biases, including inter-system bias and hardware delays, which require rigorous modelling. Our model is based on the un-differenced and between-satellite single-difference (BSSD) linear combinations. BSSD linear combination cancels out some receiver-related biases, including receiver clock error and non-zero initial phase bias of the receiver oscillator. Forming the BSSD linear combination requires a reference satellite, which can be selected from any of the GPS, Galileo, and BeiDou systems. In this paper three BSSD scenarios are tested; each considers a reference satellite from a different GNSS constellation. Natural Resources Canada’s GPSPace PPP software is modified to enable a combined GPS, Galileo, and BeiDou PPP solution and to handle the newly introduced biases. A total of four data sets collected at four different IGS stations are processed to verify the developed PPP model. Precise satellite orbit and clock products from the International GNSS Service Multi-GNSS Experiment (IGS-MGEX) network are used to correct the GPS, Galileo, and BeiDou measurements in the post-processing PPP mode. A real-time PPP solution is also obtained, which is referred to as RT-PPP in the sequel, through the use of the IGS real-time service (RTS) for satellite orbit and clock corrections. However, only GPS and Galileo observations are used for the RT-PPP solution, as the RTS-IGS satellite products are not presently available for BeiDou system. All post-processed and real-time PPP solutions are compared with the traditional un-differenced GPS-only counterparts. It is shown that combining the GPS, Galileo, and BeiDou observations in the post-processing mode improves the PPP convergence time by 25% compared with the GPS-only counterpart, regardless of the linear combination used. The use of BSSD linear combination improves the precision of the estimated positioning parameters by about 25% in comparison with the GPS-only PPP solution. Additionally, the solution convergence time is reduced to 10 minutes for the BSSD model, which represents about 50% reduction, in comparison with the GPS-only PPP solution. The GNSS RT-PPP solution, on the other hand, shows a similar convergence time and precision to the GPS-only counterpart. PMID:27240376

  4. Precise Point Positioning Using Triple GNSS Constellations in Various Modes.

    PubMed

    Afifi, Akram; El-Rabbany, Ahmed

    2016-05-28

    This paper introduces a new dual-frequency precise point positioning (PPP) model, which combines the observations from three different global navigation satellite system (GNSS) constellations, namely GPS, Galileo, and BeiDou. Combining measurements from different GNSS systems introduces additional biases, including inter-system bias and hardware delays, which require rigorous modelling. Our model is based on the un-differenced and between-satellite single-difference (BSSD) linear combinations. BSSD linear combination cancels out some receiver-related biases, including receiver clock error and non-zero initial phase bias of the receiver oscillator. Forming the BSSD linear combination requires a reference satellite, which can be selected from any of the GPS, Galileo, and BeiDou systems. In this paper three BSSD scenarios are tested; each considers a reference satellite from a different GNSS constellation. Natural Resources Canada's GPSPace PPP software is modified to enable a combined GPS, Galileo, and BeiDou PPP solution and to handle the newly introduced biases. A total of four data sets collected at four different IGS stations are processed to verify the developed PPP model. Precise satellite orbit and clock products from the International GNSS Service Multi-GNSS Experiment (IGS-MGEX) network are used to correct the GPS, Galileo, and BeiDou measurements in the post-processing PPP mode. A real-time PPP solution is also obtained, which is referred to as RT-PPP in the sequel, through the use of the IGS real-time service (RTS) for satellite orbit and clock corrections. However, only GPS and Galileo observations are used for the RT-PPP solution, as the RTS-IGS satellite products are not presently available for BeiDou system. All post-processed and real-time PPP solutions are compared with the traditional un-differenced GPS-only counterparts. It is shown that combining the GPS, Galileo, and BeiDou observations in the post-processing mode improves the PPP convergence time by 25% compared with the GPS-only counterpart, regardless of the linear combination used. The use of BSSD linear combination improves the precision of the estimated positioning parameters by about 25% in comparison with the GPS-only PPP solution. Additionally, the solution convergence time is reduced to 10 minutes for the BSSD model, which represents about 50% reduction, in comparison with the GPS-only PPP solution. The GNSS RT-PPP solution, on the other hand, shows a similar convergence time and precision to the GPS-only counterpart.
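
    The sketch below is a toy numerical illustration of the between-satellite single-difference (BSSD) combination described above: differencing observations of two satellites tracked by the same receiver cancels receiver-common terms such as the receiver clock error and the initial receiver phase bias. The observation values and satellite IDs are made up, and none of the GPSPace processing is reproduced.

```python
import numpy as np

def between_satellite_single_difference(obs, ref_sat):
    """Form BSSD observations: for each satellite s != ref_sat, return
    obs[s] - obs[ref_sat]. Receiver-related terms common to all satellites
    (receiver clock error, initial receiver phase bias) cancel in the difference."""
    return {s: v - obs[ref_sat] for s, v in obs.items() if s != ref_sat}

# Toy example: undifferenced observations (in metres) contaminated by a common
# receiver clock error of 30 m and a receiver phase bias of 0.05 m.
geometric_range = {"G05": 21_000_123.40, "G12": 22_345_678.90, "E11": 23_111_222.30}
recv_clock, recv_bias = 30.0, 0.05
obs = {s: r + recv_clock + recv_bias for s, r in geometric_range.items()}

bssd = between_satellite_single_difference(obs, ref_sat="G05")
truth = {s: geometric_range[s] - geometric_range["G05"] for s in bssd}
print(all(np.isclose(bssd[s], truth[s]) for s in bssd))   # True: receiver terms cancelled
```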

  5. Rounded stretched exponential for time relaxation functions.

    PubMed

    Powles, J G; Heyes, D M; Rickayzen, G; Evans, W A B

    2009-12-07

    A rounded stretched exponential function is introduced, C(t) = exp{(τ0/τE)^β [1 - (1 + (t/τ0)^2)^(β/2)]}, where t is time, and τ0 and τE are two relaxation times. This expression can be used to represent the relaxation function of many real dynamical processes, as at long times, t > τ0, the function converges to a stretched exponential with normalizing relaxation time τE, yet its expansion is even or symmetric in time, which is a statistical mechanical requirement. This expression fits well the shear stress relaxation function for model soft-sphere fluids near coexistence, with τE
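
    The quoted relaxation function translates directly into a short routine; the sketch below evaluates it and checks numerically that, for t much larger than τ0, it tracks the stretched exponential exp[-(t/τE)^β] up to a constant factor. Parameter values are illustrative.

```python
import numpy as np

def rounded_stretched_exponential(t, tau0, tauE, beta):
    """C(t) = exp{ (tau0/tauE)**beta * [1 - (1 + (t/tau0)**2)**(beta/2)] }.
    Even in t, and for t >> tau0 it behaves like a stretched exponential
    exp[-(t/tauE)**beta] up to a constant factor."""
    t = np.asarray(t, dtype=float)
    return np.exp((tau0 / tauE) ** beta * (1.0 - (1.0 + (t / tau0) ** 2) ** (beta / 2.0)))

tau0, tauE, beta = 0.1, 1.0, 0.6
t = np.array([5.0, 10.0, 20.0])
ratio = rounded_stretched_exponential(t, tau0, tauE, beta) / np.exp(-(t / tauE) ** beta)
print(ratio)   # tends to the constant exp((tau0/tauE)**beta) at long times
```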

  6. Are infant mortality rate declines exponential? The general pattern of 20th century infant mortality rate decline

    PubMed Central

    Bishai, David; Opuni, Marjorie

    2009-01-01

    Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best fitting value of lambda for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same year GDP per capita against Box Cox transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is neither best modelled as logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
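
    A schematic version of the transformation-selection step is sketched below: for each candidate Box-Cox exponent lambda, the transformed series is regressed on time and the profile log-likelihood (including the Jacobian term) is evaluated, then compared against the lambda = 0 (logarithmic) and lambda = 1 (linear) nulls. The synthetic series and grid are illustrative assumptions, not the paper's 18-country data or exact likelihood machinery.

```python
import numpy as np
from scipy import stats

def boxcox_trend_loglik(lmb, y, t):
    """Profile log-likelihood of a Box-Cox transformed linear time trend:
    boxcox(y, lmb) = a + b*t + normal error, including the Jacobian term."""
    z = stats.boxcox(y, lmbda=lmb)
    resid = z - np.polyval(np.polyfit(t, z, 1), t)
    return -0.5 * y.size * np.log(np.mean(resid ** 2)) + (lmb - 1.0) * np.sum(np.log(y))

# Synthetic IMR-like decline (illustrative, not the paper's 18-country data).
years = np.arange(1900, 2000).astype(float)
rng = np.random.default_rng(0)
imr = 150.0 * np.exp(-0.03 * (years - 1900)) * np.exp(rng.normal(0.0, 0.05, years.size))

lams = np.linspace(-1.0, 1.5, 251)
ll = np.array([boxcox_trend_loglik(lmb, imr, years) for lmb in lams])
lam_hat = lams[np.argmax(ll)]
print(f"best-fitting lambda = {lam_hat:.2f}")
# Likelihood-ratio style checks against the two conventional nulls.
print(f"2*(LL_hat - LL[lambda=0]) = {2 * (ll.max() - boxcox_trend_loglik(0.0, imr, years)):.1f}")
print(f"2*(LL_hat - LL[lambda=1]) = {2 * (ll.max() - boxcox_trend_loglik(1.0, imr, years)):.1f}")
```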

  7. Compact continuous-variable entanglement distillation.

    PubMed

    Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A

    2012-02-10

    We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module-an entanglement distillery-comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.

  8. Investigation of non-Gaussian effects in the Brazilian option market

    NASA Astrophysics Data System (ADS)

    Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.

    2018-04-01

    An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.

  9. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.

  10. Fine Grained Chaos in AdS2 Gravity

    NASA Astrophysics Data System (ADS)

    Haehl, Felix M.; Rozali, Moshe

    2018-03-01

    Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time u^*. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes two-dimensional anti-de Sitter space (AdS2) gravity and the low-energy dynamics of the Sachdev-Ye-Kitaev model. We identify a particular set of 2k-point functions, characterized as being both "maximally braided" and "k-out of time order," which exhibit exponential growth until progressively longer time scales u^{*(k)} ~ (k-1)u^*. We suggest an interpretation as scrambling of increasingly fine grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.

  11. Fine Grained Chaos in AdS_{2} Gravity.

    PubMed

    Haehl, Felix M; Rozali, Moshe

    2018-03-23

    Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time û_*. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes two-dimensional anti-de Sitter space (AdS_2) gravity and the low-energy dynamics of the Sachdev-Ye-Kitaev model. We identify a particular set of 2k-point functions, characterized as being both "maximally braided" and "k-out of time order," which exhibit exponential growth until progressively longer time scales û_*^{(k)} ∼ (k-1)û_*. We suggest an interpretation as scrambling of increasingly fine grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.

  12. On the origin of non-exponential fluorescence decays in enzyme-ligand complex

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, Jakub; Kierdaszuk, Borys

    2004-05-01

    Complex fluorescence decays have usually been analyzed with the aid of a multi-exponential model, but the interpretation of the individual exponential terms has not been adequately characterized. In such cases the intensity decays have also been analyzed in terms of a continuous lifetime distribution, as a consequence of the interaction of the fluorophore with its environment, conformational heterogeneity, or their dynamical nature. We show that the non-exponential fluorescence decay of enzyme-ligand complexes may result from time-dependent energy transport. The latter, in our opinion, may be accounted for by electron transport from the protein tyrosines to their neighboring residues. We introduce a time-dependent hopping rate of the form v(t) ~ (a + bt)^(-1). This in turn leads to a luminescence decay function of the form I(t) = I0 exp(-t/τ1)(1 + lt/γτ2)^(-γ). Such a decay function provides good fits to highly complex fluorescence decays. The power-like tail implies a time hierarchy in the energy migration process due to the hierarchical energy-level structure. Moreover, such a power-like term is a manifestation of the so-called Tsallis nonextensive statistics and is suitable for the description of systems with long-range interactions, memory effects, as well as fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied in the analysis of the fluorescence decays of a tyrosine protein, i.e. the enzyme purine nucleoside phosphorylase from E. coli in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate).

  13. Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation

    NASA Astrophysics Data System (ADS)

    Katore, S. D.; Hatkar, S. P.; Baxi, R. J.

    2016-12-01

    We study a hypersurface homogeneous space-time in the framework of the f(R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of field equations are obtained for exponential and power law volumetric expansions. We also solve the field equations by assuming the proportionality relation between the shear scalar (σ) and the expansion scalar (θ). It is observed that in the exponential model, the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.

  14. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
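
    The contrast between a Padé-based matrix exponential and a purely polynomial approximation can be sketched on a small semi-discrete heat equation, as below; scipy's expm serves as the Padé-type reference, a truncated Taylor series stands in for the polynomial approach, and the grid, step size and truncation order are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Semi-discrete 1-D heat equation u' = A u on a uniform grid (Dirichlet BCs).
n = 50
dx = 1.0 / (n + 1)
dt = 1e-4
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx ** 2

def expm_polynomial(M, order=20):
    """Truncated Taylor (polynomial) approximation to the matrix exponential:
    exp(M) ~ sum_{k=0..order} M^k / k!. Only matrix products are used here
    (matrix-vector products if applied directly to a state vector), and no
    linear systems have to be solved."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, order + 1):
        term = term @ M / k
        result = result + term
    return result

u0 = np.sin(np.pi * np.arange(1, n + 1) * dx)   # smooth initial condition
u_pade = expm(dt * A) @ u0                      # Pade-based propagator (scipy)
u_poly = expm_polynomial(dt * A) @ u0           # polynomial propagator
print(f"max difference between the two propagated solutions: "
      f"{np.max(np.abs(u_pade - u_poly)):.2e}")
```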

  15. Time-splitting combined with exponential wave integrator fourier pseudospectral method for Schrödinger-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Liao, Feng; Zhang, Luming; Wang, Shanshan

    2018-02-01

    In this article, we formulate an efficient and accurate numerical method for approximations of the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are based on: (i) the applications of a time-splitting Fourier spectral method for Schrödinger-like equation in SBq system, (ii) the utilizations of exponential wave integrator Fourier pseudospectral for spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to fast Fourier transform. The numerical examples are presented to show the efficiency and accuracy of our method.
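
    As a simplified illustration of the time-splitting Fourier spectral ingredient (for the Schrödinger-like part only, with no Boussinesq coupling or exponential wave integrator), the sketch below advances a cubic Schrödinger-type equation with Strang splitting; the equation coefficients, grid and initial pulse are assumptions made for the example.

```python
import numpy as np

def strang_step(u, k, dt):
    """One Strang time-splitting Fourier pseudospectral step for
    i u_t = -u_xx + |u|^2 u on a periodic domain: a half step of the
    nonlinear part (exact, since |u| is unchanged by it), a full linear
    step applied diagonally in Fourier space, then another half nonlinear step."""
    u = u * np.exp(-0.5j * dt * np.abs(u) ** 2)
    u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))
    u = u * np.exp(-0.5j * dt * np.abs(u) ** 2)
    return u

# Periodic grid and a smooth initial pulse (illustrative parameters).
n, L = 256, 20.0 * np.pi
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
u = np.exp(-x ** 2) * np.exp(1j * x)

dt, n_steps = 1e-3, 1000
mass0 = np.sum(np.abs(u) ** 2)
for _ in range(n_steps):
    u = strang_step(u, k, dt)
mass1 = np.sum(np.abs(u) ** 2)
print(f"relative change in discrete mass after {n_steps} steps: {abs(mass1 - mass0) / mass0:.2e}")
```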

  16. Optical solver of combinatorial problems: nanotechnological approach.

    PubMed

    Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor

    2013-09-01

    We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising venue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential sized masks with exponential space complexity produced in polynomial time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscaled density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.

  17. Treatment of late time instabilities in finite difference EMP scattering codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.T.; Arman, S.; Holland, R.

    1982-12-01

    Time-domain solutions to the finite-differenced Maxwell's equations give rise to several well-known nonphysical propagation anomalies. In particular, when a radiative electric-field look back scheme is employed to terminate the calculation, a high-frequency, growing, numerical instability is introduced. This paper describes the constraints made on the mesh to minimize this instability, and a technique of applying an absorbing sheet to damp out this instability without altering the early time solution. Also described are techniques to extend the data record in the presence of high-frequency noise through application of a low-pass digital filter and the fitting of a damped sinusoid to the late-time tail of the data record. An application of these techniques is illustrated with numerical models of the FB-111 aircraft and the B-52 aircraft in the in-flight refueling configuration using the THREDE finite difference computer code. Comparisons are made with experimental scale model measurements with agreement typically on the order of 3 to 6 dB near the fundamental resonances.

  18. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe

    2017-08-01

    The free and open access to all archived Landsat images in 2008 has completely changed the way of using Landsat data. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series, including frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories, including thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial were analyzed. Moreover, some of the widely-used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing these applications into two categories, change target and change agent detection.
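
    Of the six algorithm categories listed, differencing followed by thresholding is the simplest; the sketch below applies it to a synthetic two-date, single-band example, flagging pixels whose difference lies more than k standard deviations from the scene mean. The data, band and threshold are illustrative.

```python
import numpy as np

def difference_change_map(img_t1, img_t2, k=3.0):
    """Band differencing followed by thresholding: flag pixels whose
    difference deviates from the scene mean by more than k standard deviations."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    z = (diff - diff.mean()) / diff.std()
    return np.abs(z) > k

# Synthetic two-date example: stable background plus one patch of true change.
rng = np.random.default_rng(0)
t1 = rng.normal(0.3, 0.02, size=(100, 100))       # an NDVI-like band at date 1
t2 = t1 + rng.normal(0.0, 0.02, size=t1.shape)    # date 2: noise only ...
t2[40:50, 40:50] -= 0.2                           # ... except a changed patch

change = difference_change_map(t1, t2, k=3.0)
print(f"flagged pixels: {change.sum()} (true changed patch: {10 * 10})")
```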

  19. Exponential quantum spreading in a class of kicked rotor systems near high-order resonances

    NASA Astrophysics Data System (ADS)

    Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin

    2013-11-01

    Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.

  20. Similarity solutions for unsteady flow behind an exponential shock in a self-gravitating non-ideal gas with azimuthal magnetic field

    NASA Astrophysics Data System (ADS)

    Nath, G.; Pathak, R. P.; Dutta, Mrityunjoy

    2018-01-01

    Similarity solutions for the flow of a non-ideal gas behind a strong exponential shock driven out by a piston (cylindrical or spherical) moving with time according to an exponential law are obtained. Solutions are obtained, in both cases, when the flow between the shock and the piston is isothermal or adiabatic. Similarity solutions exist only when the surrounding medium is of constant density. The effects of variation of the ambient magnetic field, non-idealness of the gas, adiabatic exponent and gravitational parameter are worked out in detail. It is shown that an increase in the non-idealness of the gas, the adiabatic exponent of the gas, or the presence of a magnetic field has a decaying effect on the shock wave. Consideration of the isothermal flow and the self-gravitational field increases the shock strength. Also, the consideration of isothermal flow or the presence of a magnetic field removes the singularity in the density distribution, which arises in the case of adiabatic flow. The results of our study may be used to interpret measurements carried out by spacecraft in the solar wind and in the neighborhood of the Earth's magnetosphere.

  1. Mathematical Modeling of Extinction of Inhomogeneous Populations

    PubMed Central

    Karev, G.P.; Kareva, I.

    2016-01-01

    Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent of each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
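
    As a hedged illustration of how a sub-exponential law can give a finite life duration (an assumed illustrative form, not necessarily the authors' exact equations), one commonly used variant replaces the exponential decay law with a fractional power of the population size:

```latex
% One common sub-exponential decay law (an assumption for illustration):
% replace dN/dt = -kN by
\frac{dN}{dt} = -k\,N^{q}, \qquad 0 < q < 1,
% which integrates to
N(t) = \left(N_0^{\,1-q} - k\,(1-q)\,t\right)^{\!\frac{1}{1-q}},
% so the population reaches zero at the finite extinction time
T_{\mathrm{ext}} = \frac{N_0^{\,1-q}}{k\,(1-q)}.
```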

  2. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with both solid and fluid phases, and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
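
    A minimal sketch of the core idea of fitting a signal that contains both a Gaussian and an exponential decay component; the time axis, amplitudes, and decay constants are placeholders, and the authors' full inversion over relaxation-time distributions is not reproduced.

```python
# Hedged sketch: single Gaussian component (solid-like) plus single exponential
# component (fluid-like). All parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sge_decay(t, a_g, t_g, a_e, t_e):
    """Sum of a Gaussian decay and an exponential decay."""
    return a_g * np.exp(-(t / t_g) ** 2) + a_e * np.exp(-t / t_e)

t = np.linspace(0, 5.0, 500)                                  # ms, assumed time axis
data = sge_decay(t, 0.6, 0.3, 0.4, 1.5) + 0.01 * np.random.randn(t.size)
popt, _ = curve_fit(sge_decay, t, data, p0=[0.5, 0.2, 0.5, 1.0])
a_g_hat, t_g_hat, a_e_hat, t_e_hat = popt
```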

  3. Satellite mapping of Nile Delta coastal changes

    NASA Technical Reports Server (NTRS)

    Blodget, H. W.; Taylor, P. T.; Roark, J. H.

    1989-01-01

    Multitemporal, multispectral scanner (MSS) landsat data have been used to monitor erosion and sedimentation along the Rosetta Promontory of the Nile Delta. These processes have accelerated significantly since the completion of the Aswan High Dam in 1964. Digital differencing of four MSS data sets, using standard algorithms, shows that changes observed over a single year period generally occur as strings of single mixed pixels along the coast. Therefore, these can only be used qualitatively to indicate areas where changes occur. Areas of change recorded over a multi-year period are generally larger and thus identified by clusters of pixels; this reduces errors introduced by mixed pixels. Satellites provide a synoptic perspective utilizing data acquired at frequent time intervals. This permits multiple year monitoring of delta evolution on a regional scale.

  4. Algorithmic Extensions of Low-Dispersion Scheme and Modeling Effects for Acoustic Wave Simulation. Revised

    NASA Technical Reports Server (NTRS)

    Kaushik, Dinesh K.; Baysal, Oktay

    1997-01-01

    Accurate computation of acoustic wave propagation may be more efficiently performed when their dispersion relations are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.

  5. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from an exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog, γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
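
    The sketch below illustrates, under stated assumptions, how a gamma distribution and an exponential (Poisson-process) distribution could be fit and compared on a set of waiting times; the synthetic data and SciPy fitting calls are stand-ins, not the tsunami catalog analysis itself.

```python
# Hedged sketch: gamma vs. exponential fits to waiting times.
import numpy as np
from scipy import stats

# Stand-in waiting-time data (not the tsunami catalog)
waiting_times = stats.gamma.rvs(a=0.7, scale=1.0, size=2000, random_state=1)

# Gamma fit; the shape parameter plays the role of the exponent gamma above
shape, loc_g, scale_g = stats.gamma.fit(waiting_times, floc=0)

# Exponential fit as the Poisson-process benchmark
loc_e, scale_e = stats.expon.fit(waiting_times, floc=0)

# Compare log-likelihoods to quantify the excess of short waiting times
ll_gamma = np.sum(stats.gamma.logpdf(waiting_times, shape, loc_g, scale_g))
ll_expon = np.sum(stats.expon.logpdf(waiting_times, loc_e, scale_e))
```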

  6. Child-Led and Interest-Inspired Learning, Home Education, Learning Differences and the Impact of Regulation

    ERIC Educational Resources Information Center

    Liberto, Giuliana

    2016-01-01

    Research into the impact of non-consultative home education regulatory change in New South Wales (NSW), Australia, identified clear benefits of a child-led, interest-inspired approach to learning and a negative impact on student learning and well-being outcomes, particularly for learning-differenced children, of restricted practice freedom.…

  7. Sediment Grain Size Measurements: Is There a Difference Between Digested and Un-digested Samples? And Does the Organic Carbon of the Sample Play a Role

    EPA Science Inventory

    Grain size is a physical measurement commonly made in the analysis of many benthic systems. Grain size influences benthic community composition, can influence contaminant loading and can indicate the energy regime of a system. We have recently investigated the relationship betw...

  8. Field comparison of the installation and cost of placement of epoxy-coated and MMFX 2 steel deck reinforcement : establishing a baseline for future deck monitoring.

    DOT National Transportation Integrated Search

    2009-01-01

    As part of the Innovative Bridge Research and Construction Program (IBRCP), this study was conducted to use the full-scale construction project of the Route 123 Bridge over the Occoquan River in Northern Virginia to identify and compare any differenc...

  9. Races of Heliconius erato (Nymphalidae: Heliconiinae) found on different sides of the Andes show wing size differences

    USDA-ARS?s Scientific Manuscript database

    Differences in wing size in geographical races of Heliconius erato distributed over the western and eastern sides of the Andes are reported on here. Individuals from the eastern side of the Andes are statistically larger in size than the ones on the western side of the Andes. A statistical differenc...

  10. Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1972-01-01

    A new and versatile method has been developed to supplement or replace use of the original CINDA thermal analyzer program in order to take advantage of the improved systems software and machine speeds of third generation computers. The CINDA-3G program options offer a variety of methods for the solution of thermal analog models presented in network format.

  11. Approximate solutions for diffusive fracture-matrix transfer: Application to storage of dissolved CO2 in fractured rocks

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...

    2017-01-05

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they solely depend on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
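
    A hedged sketch of the piecewise early-time/late-time idea for the simplest isotropic case, a plane slab (classical Crank coefficients, dimensionless time tau = D*t/l^2 with l the half-thickness); only the leading sqrt(tau) term is kept for early times, and the authors' fitted three-term polynomials and exact switchover values are not reproduced.

```python
# Hedged sketch of a unified early/late approximation for diffusive transfer
# from a slab; coefficients are the classical slab results, not the paper's fits.
import numpy as np

def fractional_transfer_early(tau):
    """Leading early-time behaviour, proportional to sqrt(tau)."""
    return 2.0 * np.sqrt(tau / np.pi)

def fractional_transfer_late(tau, n_terms=1):
    """Late-time exponential series; one term usually suffices for a slab."""
    n = np.arange(n_terms)
    terms = 8.0 / ((2 * n + 1) ** 2 * np.pi ** 2) * \
        np.exp(-((2 * n + 1) ** 2) * np.pi ** 2 * np.asarray(tau)[..., None] / 4.0)
    return 1.0 - terms.sum(axis=-1)

def fractional_transfer(tau, tau_switch=0.2):
    """Unified approximation with a switchover near tau ~ 0.2 (assumed)."""
    tau = np.asarray(tau, dtype=float)
    return np.where(tau < tau_switch,
                    fractional_transfer_early(tau),
                    fractional_transfer_late(tau))
```

    Near tau of about 0.2 the two branches nearly coincide for the slab, which is consistent with the optimal switchover range quoted in the abstract.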

  12. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enables robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM) and particular attention has been paid to model the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322

  13. Theoretical analysis of exponential transversal method of lines for the diffusion equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salazar, A.; Raydan, M.; Campo, A.

    1996-12-31

    Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL) with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method exhibits a very small truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.

  14. Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.

    PubMed

    Liu, Meiqin

    2009-09-01

    This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.

  15. Regimes of stability and scaling relations for the removal time in the asteroid belt: a simple kinetic model and numerical tests

    NASA Astrophysics Data System (ADS)

    Cubrovic, Mihailo

    2005-02-01

    We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called “stable chaos” (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.

  16. On the non-exponentiality of the dielectric Debye-like relaxation of monoalcohols

    NASA Astrophysics Data System (ADS)

    Arrese-Igor, S.; Alegría, A.; Colmenero, J.

    2017-03-01

    We have investigated the Debye-like relaxation in a series of monoalcohols (MAs) by broadband dielectric spectroscopy and thermally stimulated depolarization current techniques in order to get further insight into the time dispersion of this intriguing relaxation. Results indicate that the Debye-like relaxation of MAs is not always of exponential type and conforms well to a dispersion of Cole-Davidson type. Apart from the already reported non-exponentiality of the Debye-like relaxation in 2-hexyl-1-decanol and 2-butyl-1-octanol, a detailed analysis of the dielectric permittivity of 5-methyl-3-heptanol shows that this MA also presents some extent of dispersion on its Debye-like relaxation which strongly depends on the temperature. Results suggest that the non-exponential character of the Debye-like relaxation might be a general characteristic in the case of not so intense Debye-like relaxations relative to the α relaxation. Finally, we briefly discuss the T-dependence and possible origin of the observed dispersion.

  17. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 3; Constant Stress and Cyclic Stress Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.

  18. Measurement of cellular copper levels in Bacillus megaterium during exponential growth and sporulation.

    PubMed

    Krueger, W B; Kolodziej, B J

    1976-01-01

    Both atomic absorption spectrophotometry (AAS) and neutron activation analysis have been utilized to determine cellular Cu levels in Bacillus megaterium ATCC 19213. Both methods were selected for their sensitivity to detection of nanogram quantities of Cu. Data from both methods demonstrated identical patterns of Cu uptake during exponential growth and sporulation. Late exponential phase cells contained less Cu than postexponential t2 cells while t5 cells contained amounts equivalent to exponential cells. The t11 phase-bright forespore-containing cells had a higher Cu content than those of earlier time periods, and the free spores had the highest Cu content. Analysis of the culture medium by AAS corroborated these data by showing concomitant Cu uptake during exponential growth and into t2 postexponential phase of sporulation. From t2 to t4, Cu egressed from the cells followed by a secondary uptake during the maturation of phase-dark forespores into phase-bright forespores (t6-t9).

  19. Bi-periodicity evoked by periodic external inputs in delayed Cohen-Grossberg-type bidirectional associative memory networks

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Wang, Yanyan

    2010-05-01

    In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.

  20. New exponential stability criteria for stochastic BAM neural networks with impulses

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Samidurai, R.; Anthoni, S. M.

    2010-10-01

    In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotony and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed for achieving the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.

  1. Bound-preserving modified exponential Runge-Kutta discontinuous Galerkin methods for scalar hyperbolic equations with stiff source terms

    NASA Astrophysics Data System (ADS)

    Huang, Juntao; Shu, Chi-Wang

    2018-05-01

    In this paper, we develop bound-preserving modified exponential Runge-Kutta (RK) discontinuous Galerkin (DG) schemes to solve scalar hyperbolic equations with stiff source terms by extending the idea in Zhang and Shu [43]. Exponential strong stability preserving (SSP) high order time discretizations are constructed and then modified to overcome the stiffness and preserve the bound of the numerical solutions. It is also straightforward to extend the method to two dimensions on rectangular and triangular meshes. Even though we only discuss the bound-preserving limiter for DG schemes, it can also be applied to high order finite volume schemes, such as weighted essentially non-oscillatory (WENO) finite volume schemes as well.

  2. Global exponential stability and lag synchronization for delayed memristive fuzzy Cohen-Grossberg BAM neural networks with impulses.

    PubMed

    Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar

    2018-02-01

    This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Evaluation of the matrix exponential for use in ground-water-flow and solute-transport simulations; theoretical framework

    USGS Publications Warehouse

    Umari, A.M.; Gorelick, S.M.

    1986-01-01

    It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time marching method increase with the number of nodes in the discretized problem. The potential greater accuracy of the META method and the associated greater reliability through use of the matrix condition number have to be weighed against the increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
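
    A minimal sketch of the matrix-exponential time-advancement idea for a small linear system dh/dt = A h + b; the matrix, source vector, and target time are arbitrary stand-ins, not a real groundwater model or the META algorithms themselves.

```python
# Hedged sketch: advance a spatially discretized linear system to any future
# time in one step using the matrix exponential. A, b, h0 are illustrative.
import numpy as np
from scipy.linalg import expm, solve

A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -2.0]])            # discretized flow operator (assumed)
b = np.array([0.0, 0.0, 1.0])               # boundary/source term (assumed)
h0 = np.zeros(3)                            # initial heads

def head_at(t):
    """h(t) = expm(A t) (h0 - h_ss) + h_ss, with steady state h_ss = -A^{-1} b."""
    h_ss = solve(A, -b)
    return expm(A * t) @ (h0 - h_ss) + h_ss

h_future = head_at(50.0)                    # jump straight to t = 50 (arbitrary units)
```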

  4. Personality influences temporal discounting preferences: behavioral and brain evidence.

    PubMed

    Manning, Joshua; Hedden, Trey; Wickens, Nina; Whitfield-Gabrieli, Susan; Prelec, Drazen; Gabrieli, John D E

    2014-09-01

    Personality traits are stable predictors of many life outcomes that are associated with important decisions that involve tradeoffs over time. Therefore, a fundamental question is how tradeoffs over time vary from person to person in relation to stable personality traits. We investigated the influence of personality, as measured by the Five-Factor Model, on time preferences and on neural activity engaged by intertemporal choice. During functional magnetic resonance imaging (fMRI), participants made choices between smaller-sooner and larger-later monetary rewards. For each participant, we estimated a constant-sensitivity discount function that dissociates impatience (devaluation of future consequences) from time sensitivity (consistency with rational, exponential discounting). Overall, higher neuroticism was associated with a relatively greater preference for immediate rewards and higher conscientiousness with a relatively greater preference for delayed rewards. Specifically, higher conscientiousness correlated positively with lower short-term impatience and more exponential time preferences, whereas higher neuroticism (lower emotional stability) correlated positively with higher short-term impatience and less exponential time preferences. Cognitive-control and reward brain regions were more activated when higher conscientiousness participants selected a smaller-sooner reward and, conversely, when higher neuroticism participants selected a larger-later reward. The greater activations that occurred when choosing rewards that contradicted personality predispositions may reflect the greater recruitment of mental resources needed to override those predispositions. These findings reveal that stable personality traits fundamentally influence how rewards are chosen over time. Copyright © 2014 Elsevier Inc. All rights reserved.
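
    For reference, one standard constant-sensitivity parameterization (an assumption here; the authors' exact form may differ) separates impatience from time sensitivity as follows:

```latex
% Constant-sensitivity discount function (Ebert-Prelec form assumed):
% a reward delayed by time t is discounted by
D(t) = \exp\!\big(-(a\,t)^{\,b}\big), \qquad a > 0,\ b > 0,
% where a reflects impatience (devaluation of future consequences) and b
% reflects time sensitivity: b = 1 recovers rational exponential discounting,
% while b < 1 produces the over-weighting of short delays described above.
```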

  5. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between Autoregressive Moving Average (ARIMA) model and Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods in making the forecasts with the different number of data sources and the different length of forecasting period. For this purpose, the data from The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) in comparison to Great Britain Pound (GBP) and also The Price of SMR 20 Rubber Type (cents/kg) with three different time series are used in the comparison process. Then, the forecasting accuracy of each model is measured by examining the prediction error produced, using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range of one point to another as in the time series for Exchange Rates. On the contrary, Exponential Smoothing Method can produce better forecasting for Exchange Rates that has a narrow range of one point to another for its time series, while it cannot produce a better prediction for a longer forecasting period.
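
    A hedged sketch of this kind of comparison using statsmodels; the synthetic series, model orders, and forecast horizon are placeholders, not the palm oil, exchange rate, or rubber data described above.

```python
# Hedged sketch: ARIMA vs. exponential smoothing, scored with MSE, MAD, MAPE.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.5, 1.0, 200))          # stand-in time series
train, test = series[:180], series[180:]

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))
es_fc = ExponentialSmoothing(train, trend="add").fit().forecast(len(test))

def mse(y, f):  return np.mean((y - f) ** 2)
def mad(y, f):  return np.mean(np.abs(y - f))
def mape(y, f): return np.mean(np.abs((y - f) / y)) * 100

scores = {name: (mse(test, f), mad(test, f), mape(test, f))
          for name, f in [("ARIMA", arima_fc), ("ExpSmoothing", es_fc)]}
```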

  6. Properties of single NMDA receptor channels in human dentate gyrus granule cells

    PubMed Central

    Lieberman, David N; Mody, Istvan

    1999-01-01

    Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by l-aspartate (250–500 nm) in the presence of saturating glycine (8 μm). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nm) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons. Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689

  7. Total Motion Across the East African Rift Viewed From the Southwest Indian Ridge

    NASA Astrophysics Data System (ADS)

    Royer, J.; Gordon, R. G.

    2005-05-01

    The Nubian plate is known to have been separating from the Somalian plate along the East African Rift since Oligocene time. Recent works have shown that the spreading rates and spreading directions since 11 Ma along the Southwest Indian Ridge (SWIR) record Nubia-Antarctica motion west of the Andrew Bain Fracture Zone complex (ABFZ; between 25E and 35E) and Somalia-Antarctica motion east of it. Nubia-Somalia motion can be determined by differencing Nubia-Antarctica and Somalia-Antarctica motion. To estimate the total motion across the East African Rift, we estimated and differenced Nubia-Antarctica motion and Somalia-Antarctica motion for times that preceded the initiation of Nubia-Somalia motion. We analyze anomalies 24n.3o (53 Ma), 21o (48 Ma), 18o (40 Ma) and 13o (34 Ma). Preliminary results show that the poles of the finite rotations that describe the Nubia-Somalia motions cluster near 30E, 42S. Angles of rotation range from 2.7 to 4.0 degrees. The uncertainty regions are large. The lower estimate predicts a total extension of 245 km at the latitude of the Ethiopian rift (41E, 9N) in a direction N104, perpendicular to the mean trend of the rift. Assuming an age of 34 Ma for the initiation of rifting, the average rate of motion would be 7 mm/a, near the 9 mm/a deduced from present-day geodetic measurements [e.g. synthesis of Fernandes et al., 2004]. Although these results require further analysis, particularly on the causes of the large uncertainties, they represent the first independent estimate of the total extension across the rift. Among other remaining questions are the following: How significant are the differences between these estimates and those for younger chrons (5 or 6; 11 and 20 Ma, respectively), i.e. is the start of extension datable? Is the region east of the ABFZ part of the Somalian plate or does it form a distinct component plate of Somalia, as postulated by Hartnady (2004)? How has motion between two or more component plates within the African composite plate affected estimates of India-Eurasia motion and of Pacific-North America motion?
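
    A rough sketch of the rotation "differencing" step under one common convention (which may not match the authors' reconstruction software); the Euler pole locations and angles are placeholders, not the chron 13o-24n.3o rotations discussed above.

```python
# Hedged sketch: compose Nubia-Antarctica with the inverse of Somalia-Antarctica
# to obtain a Nubia-Somalia finite rotation. Poles and angles are placeholders.
import numpy as np
from scipy.spatial.transform import Rotation as R

def finite_rotation(pole_lat, pole_lon, angle_deg):
    """Rotation by angle_deg about the Euler pole at (pole_lat, pole_lon)."""
    lat, lon = np.radians([pole_lat, pole_lon])
    axis = np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
    return R.from_rotvec(np.radians(angle_deg) * axis)

nub_ant = finite_rotation(10.0, -40.0, 8.0)    # Nubia relative to Antarctica (assumed)
som_ant = finite_rotation(5.0, -30.0, 7.0)     # Somalia relative to Antarctica (assumed)

# Difference the two stage rotations (convention assumed: apply Nubia-Antarctica,
# then undo Somalia-Antarctica)
nub_som = som_ant.inv() * nub_ant
rotvec = nub_som.as_rotvec()
angle = np.degrees(np.linalg.norm(rotvec))
axis = rotvec / np.linalg.norm(rotvec)
pole_lat = np.degrees(np.arcsin(axis[2]))
pole_lon = np.degrees(np.arctan2(axis[1], axis[0]))
```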

  8. Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays.

    PubMed

    Huang, Haiying; Du, Qiaosheng; Kang, Xibing

    2013-11-01

    In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. First, the existence of an equilibrium point for the addressed neural networks is studied. By utilizing the Lyapunov stability theory, stochastic analysis theory and linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee the neural networks to be globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results. © 2013 ISA. Published by ISA. All rights reserved.

  9. Unusually large Stokes shift for a near-infrared emitting DNA-stabilized silver nanocluster

    NASA Astrophysics Data System (ADS)

    Ammitzbøll Bogh, Sidsel; Carro-Temboury, Miguel R.; Cerretani, Cecilia; Swasey, Steven M.; Copp, Stacy M.; Gwinn, Elisabeth G.; Vosch, Tom

    2018-04-01

    In this paper we present a new near-IR emitting silver nanocluster (NIR-DNA-AgNC) with an unusually large Stokes shift between absorption and emission maximum (211 nm or 5600 cm-1). We studied the effect of viscosity and temperature on the steady state and time-resolved emission. The time-resolved results on NIR-DNA-AgNC show that the relaxation dynamics slow down significantly with increasing viscosity of the solvent. In high viscosity solution, the spectral relaxation stretches well into the nanosecond scale. As a result of this slow spectral relaxation in high viscosity solutions, a multi-exponential fluorescence decay time behavior is observed, in contrast to the more mono-exponential decay in low viscosity solution.

  10. Exponential lag function projective synchronization of memristor-based multidirectional associative memory neural networks via hybrid control

    NASA Astrophysics Data System (ADS)

    Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao

    2018-03-01

    This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the controllers are carefully designed to confirm the process of different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.

  11. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time interval in Beijing subway traffic by analyzing the smart card transaction data, and then deduced the probability distribution function of entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. It is speculated that, for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0 the distribution cannot be a pure power law but must carry an exponential cutoff, a point that may have been overlooked in previous studies.
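
    The sketch below shows one way, under stated assumptions, to fit a power law with exponential cutoff, p(t) proportional to t^(-alpha) exp(-t/tau), to inter-event times by maximum likelihood; the synthetic intervals and starting values are placeholders, not the Beijing smart card data.

```python
# Hedged sketch: maximum-likelihood fit of a truncated power law with
# exponential cutoff on [t_min, inf), assuming alpha < 1 as discussed above.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaincc, gamma as gamma_fn

def neg_log_like(params, t, t_min):
    alpha, tau = params
    if tau <= 0 or not (0 < alpha < 1):
        return np.inf
    a = 1.0 - alpha
    # Normalization of t**(-alpha)*exp(-t/tau) over [t_min, inf)
    z = tau ** a * gammaincc(a, t_min / tau) * gamma_fn(a)
    return -np.sum(-alpha * np.log(t) - t / tau - np.log(z))

rng = np.random.default_rng(2)
intervals = rng.exponential(scale=5.0, size=5000) + 0.01   # stand-in inter-event times
res = minimize(neg_log_like, x0=[0.5, 10.0],
               args=(intervals, intervals.min()), method="Nelder-Mead")
alpha_hat, tau_hat = res.x
```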

  12. Effective equilibrium picture in the xy model with exponentially correlated noise

    NASA Astrophysics Data System (ADS)

    Paoluzzi, Matteo; Marconi, Umberto Marini Bettolo; Maggi, Claudio

    2018-02-01

    We study the effect of exponentially correlated noise on the xy model in the limit of small correlation time, discussing the order-disorder transition in the mean field and the topological transition in two dimensions. We map the steady states of the nonequilibrium dynamics into an effective equilibrium theory. In the mean field, the critical temperature increases with the noise correlation time τ, indicating that memory effects promote ordering. This finding is confirmed by numerical simulations. The topological transition temperature in two dimensions remains untouched. However, finite-size effects induce a crossover in the vortices proliferation that is confirmed by numerical simulations.

  13. Recognizing Physisorption and Chemisorption in Carbon Nanotubes Gas Sensors by Double Exponential Fitting of the Response.

    PubMed

    Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio

    2016-05-19

    Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from the fitting of the data, has allowed us to identify separately chemisorption and physisorption processes on the CNTs.
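
    A minimal sketch of a double-exponential fit of a sensor response, with a fast and a slow time constant that could tentatively be associated with the two adsorption processes; all numbers are placeholders rather than the measured CNT data.

```python
# Hedged sketch: fit a response curve with two exponential components.
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2, v0):
    """Baseline plus two saturating exponential terms (fast and slow)."""
    return v0 + a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

t = np.linspace(0, 300, 600)                                  # s, assumed
response = double_exp(t, 0.8, 5.0, 0.4, 80.0, 0.1) + 0.01 * np.random.randn(t.size)
popt, _ = curve_fit(double_exp, t, response, p0=[1.0, 10.0, 0.5, 100.0, 0.0])
a1, tau1, a2, tau2, v0 = popt                                  # tau1 ~ fast, tau2 ~ slow
```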

  14. Effective equilibrium picture in the xy model with exponentially correlated noise.

    PubMed

    Paoluzzi, Matteo; Marconi, Umberto Marini Bettolo; Maggi, Claudio

    2018-02-01

    We study the effect of exponentially correlated noise on the xy model in the limit of small correlation time, discussing the order-disorder transition in the mean field and the topological transition in two dimensions. We map the steady states of the nonequilibrium dynamics into an effective equilibrium theory. In the mean field, the critical temperature increases with the noise correlation time τ, indicating that memory effects promote ordering. This finding is confirmed by numerical simulations. The topological transition temperature in two dimensions remains untouched. However, finite-size effects induce a crossover in the vortices proliferation that is confirmed by numerical simulations.

  15. Time-resolved photoluminescence investigation of (Mg,Zn)O alloy growth on a non-polar plane

    NASA Astrophysics Data System (ADS)

    Mohammed Ali, Mohammed Jassim; Chauveau, J. M.; Bretagnon, T.

    2018-04-01

    Exciton recombination dynamics in a ZnMgO alloy have been studied by time-resolved photoluminescence as a function of temperature. At low temperature, localisation effects of the exciton are found to play a significant role. The photoluminescence (PL) decays are bi-exponential. The short lifetime has a constant value, whereas the long lifetime depends on temperature. For temperatures higher than 100 K the decays become mono-exponential. The PL decays are dominated by non-radiative processes at temperatures above 150 K. The temperature dependence of the PL lifetime is analysed using a model including localisation effects and non-radiative recombination.

  16. In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.

    PubMed

    Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A

    2018-04-01

    Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this is a theory that has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress slower than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow slower compared to larger ones. By showing that lung cancer conforms to exponential growth we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
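
    For reference, the volume doubling time used in such nodule management guidelines follows directly from the exponential-growth assumption validated here (V1 and V2 are nodule volumes measured a time interval Delta t apart):

```latex
% Volume doubling time under exponential growth:
\mathrm{VDT} = \frac{\Delta t \,\ln 2}{\ln\!\left(V_2 / V_1\right)}
```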

  17. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant

    PubMed Central

    2013-01-01

    The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for all-helical proteins HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation. PMID:24348206

  18. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant.

    PubMed

    Banushkina, Polina V; Krivov, Sergei V

    2013-12-10

    The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for all-helical proteins HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation.

  19. Efficient full decay inversion of MRS data with a stretched-exponential approximation of the T2* distribution

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.

    2012-08-01

    We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface-nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter, and helps to determine correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. The accuracy of the forward response is investigated by comparing a MRS forward response with responses from three other approaches outlining significant differences between the three approaches. Altogether, a full MRS forward response is calculated in about 20 s and scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined through synthetic data and through a field example, which demonstrate the capability of the scheme. The results of the field example agree well with the information from a borehole at the site.
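
    A minimal sketch of fitting a stretched-exponential (Kohlrausch) decay with a single relaxation time and a stretching exponent; the gate times, amplitudes, and noise level are assumptions, not the authors' inversion scheme.

```python
# Hedged sketch: stretched-exponential fit of a decaying NMR-like signal.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, v0, t2, c):
    """v0 * exp(-(t/t2)**c); c is the stretching exponent (c = 1 is mono-exponential)."""
    return v0 * np.exp(-(t / t2) ** c)

t = np.linspace(1e-3, 0.5, 200)                      # s, assumed gate centres
signal = stretched_exp(t, 120.0, 0.15, 0.7) + 2.0 * np.random.randn(t.size)  # stand-in
popt, _ = curve_fit(stretched_exp, t, signal, p0=[100.0, 0.1, 1.0])
v0_hat, t2_hat, c_hat = popt
```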

  20. A Long-Lived Oscillatory Space-Time Correlation Function of Two Dimensional Colloids

    NASA Astrophysics Data System (ADS)

    Kim, Jeongmin; Sung, Bong June

    2014-03-01

    Diffusion of a colloid in solution has drawn significant attention for a century. A well-known behavior of the colloid is called Brownian motion: the particle displacement probability distribution (PDPD) is Gaussian and the mean-square displacement (MSD) is linear with time. However, recent simulation and experimental studies have revealed heterogeneous dynamics of colloids near glass transitions or in complex environments such as entangled actin: the PDPD exhibits an exponential tail at large displacements instead of being Gaussian at all length scales. More interestingly, the PDPD remains exponential even when the MSD is still linear with time. This calls for a fresh look at colloidal diffusion in complex environments. In this work, we study heterogeneous dynamics of two dimensional (2D) colloids using molecular dynamics simulations. Unlike in three dimensions, 2D solids do not follow the Lindemann melting criterion. The Kosterlitz-Thouless-Halperin-Nelson-Young theory predicts two-step phase transitions with an intermediate phase, the hexatic phase, between isotropic liquids and solids. Near the solid-hexatic transition, the PDPD shows interesting oscillatory behavior between a central Gaussian part and an exponential tail. The oscillatory behavior persists for up to 12 times the translational relaxation time, even after the system enters the Fickian regime. We also show that multi-layered kinetic clusters account for the heterogeneous dynamics of 2D colloids with the long-lived anomalous oscillatory PDPD.

  1. Heterogeneous characters modeling of instant message services users’ online behavior

    PubMed Central

    Fang, Yajun; Horn, Berthold

    2018-01-01

    Research on temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal distribution of power-law, exponential distribution, piecewise power-law, etc. With the occurrences of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat service, the top two popular instant messaging services in China, and present a new finding that when the value of statistical unit T is set to 0.001s, the inter-event time distribution follows a piecewise distribution of exponential and power-law, indicating the heterogeneous character of IM services users’ online behavior in different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. Then we develop a combination model of exponential model and interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service is different in two cities, which is correlated with the popularity of the services. Our research is useful for the application of information diffusion, prediction of economic development of cities, and so on. PMID:29734327

  2. Heterogeneous characters modeling of instant message services users' online behavior.

    PubMed

    Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E

    2018-01-01

    Research on temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal distribution of power-law, exponential distribution, piecewise power-law, etc. With the occurrences of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to QQ and WeChat service, the top two popular instant messaging services in China, and present a new finding that when the value of statistical unit T is set to 0.001s, the inter-event time distribution follows a piecewise distribution of exponential and power-law, indicating the heterogeneous character of IM services users' online behavior in different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. Then we develop a combination model of exponential model and interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service is different in two cities, which is correlated with the popularity of the services. Our research is useful for the application of information diffusion, prediction of economic development of cities, and so on.

  3. Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warshaw, S I

    2001-07-15

    In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
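
    As one concrete, hedged example of the class of pulses discussed (the monograph's exact pulse shapes and transform conventions may differ), a unipolar pulse with exponential rise and fall and its Fourier transform can be written as:

```latex
% A unipolar pulse with exponential rise and fall (\beta > \alpha > 0), using the
% convention F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt:
f(t) = A\left(e^{-\alpha t} - e^{-\beta t}\right)\theta(t)
\quad\Longrightarrow\quad
F(\omega) = A\left(\frac{1}{\alpha + i\omega} - \frac{1}{\beta + i\omega}\right),
% where \theta(t) is the unit step; the rise is governed by \beta and the fall
% by \alpha.
```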

  4. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng

    The production of lignocellulose-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose, for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in the El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as the sole carbon source. Whole-transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and absence of lignin, with samples taken at three different times during growth: the beginning of exponential phase, mid-exponential phase, and the beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases was most up-regulated under lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future.

  5. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium.

    PubMed

    Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng; Mitchell, Hugh; Gaffrey, Matt; Orr, Galya; DeAngelis, Kristen M

    2017-01-01

    The production of lignocellulose-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose, for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in the El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as the sole carbon source. Whole-transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and absence of lignin, with samples taken at three different times during growth: the beginning of exponential phase, mid-exponential phase, and the beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases was most up-regulated under lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future.

  6. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium

    PubMed Central

    Chaput, Gina; Markillie, Lye Meng; Mitchell, Hugh; Gaffrey, Matt; Orr, Galya; DeAngelis, Kristen M.

    2017-01-01

    The production of lignocellulose-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose, for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in the El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as the sole carbon source. Whole-transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and absence of lignin, with samples taken at three different times during growth: the beginning of exponential phase, mid-exponential phase, and the beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases was most up-regulated under lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future. PMID:29049419

  7. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium

    DOE PAGES

    Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng; ...

    2017-10-19

    The production of lignocellulose-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose, for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in the El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as the sole carbon source. Whole-transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and absence of lignin, with samples taken at three different times during growth: the beginning of exponential phase, mid-exponential phase, and the beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases was most up-regulated under lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future.

  8. Evaluating the potential of Landsat TM/ETM+ imagery for assessing fire severity in Alaskan black spruce forests

    Treesearch

    Elizabeth E. Hoy; Nancy H.F. French; Merritt R. Turetsky; Simon N. Trigg; Eric S. Kasischke

    2008-01-01

    Satellite remotely sensed data of fire disturbance offers important information; however, current methods to study fire severity may need modifications for boreal regions. We assessed the potential of the differenced Normalized Burn Ratio (dNBR) and other spectroscopic indices and image transforms derived from Landsat TM/ETM+ data for mapping fire severity in Alaskan...
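
    The differenced Normalized Burn Ratio referred to here is built from the standard NBR, computed from near-infrared and shortwave-infrared reflectance (Landsat TM/ETM+ bands 4 and 7) and differenced between pre- and post-fire scenes. The sketch below simply states those definitions on toy arrays; the reflectance values are made up for illustration and larger dNBR values generally indicate more severe burning.

      import numpy as np

      def nbr(nir, swir):
          """Normalized Burn Ratio from NIR and SWIR reflectance."""
          nir, swir = np.asarray(nir, float), np.asarray(swir, float)
          return (nir - swir) / (nir + swir + 1e-12)   # epsilon guards against division by zero

      def dnbr(nir_pre, swir_pre, nir_post, swir_post):
          """differenced NBR: pre-fire NBR minus post-fire NBR."""
          return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

      # Toy 2x2 scene with one severely burned pixel (values are illustrative only).
      print(dnbr(nir_pre=[[0.45, 0.50], [0.48, 0.52]], swir_pre=[[0.20, 0.22], [0.21, 0.20]],
                 nir_post=[[0.18, 0.49], [0.47, 0.51]], swir_post=[[0.35, 0.22], [0.22, 0.21]]))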

  9. Viking S-band Doppler RMS phase fluctuations used to calibrate the mean 1976 equatorial corona

    NASA Technical Reports Server (NTRS)

    Berman, A. L.; Wackley, J. A.

    1977-01-01

    Viking S-band Doppler RMS phase fluctuations (noise) and comparisons of Viking Doppler noise to Viking differenced S-X range measurements are used to construct a mean equatorial electron density model for 1976. Using Pioneer Doppler noise results (at high heliographic latitudes, also from 1976), an equivalent nonequatorial electron density model is approximated.

  10. Finite difference methods for the solution of unsteady potential flows

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.

    1982-01-01

    Various problems which are confronted in the development of an unsteady finite difference potential code are reviewed mainly in the context of what is done for a typical small disturbance and full potential method. The issues discussed include choice of equations, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three dimensional rotor calculations, are demonstrated.

  11. Modeling of multi-strata forest fire severity using Landsat TM data

    Treesearch

    Q. Meng; R.K. Meentemeyer

    2011-01-01

    Most fire severity studies use field measures of the composite burn index (CBI) to represent forest fire severity and fit relationships between CBI and the Landsat-imagery-derived differenced normalized burn ratio (dNBR) to predict and map fire severity at unsampled locations. However, less attention has been paid to multi-strata forest fire severity, which...

  12. Vegetation, topography and daily weather influenced burn severity in central Idaho and western Montana forests

    Treesearch

    Donovan S. Birch; Penelope Morgan; Crystal A. Kolden; John T. Abatzoglou; Gregory K. Dillon; Andrew T. Hudak; Alistair M. S. Smith

    2015-01-01

    Burn severity as inferred from satellite-derived differenced Normalized Burn Ratio (dNBR) is useful for evaluating fire impacts on ecosystems but the environmental controls on burn severity across large forest fires are both poorly understood and likely to be different than those influencing fire extent. We related dNBR to environmental variables including vegetation,...

  13. Model Specifications for Estimating Labor Market Returns to Associate Degrees: How Robust Are Fixed Effects Estimates? A CAPSEE Working Paper

    ERIC Educational Resources Information Center

    Belfield, Clive; Bailey, Thomas

    2017-01-01

    Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…

  14. Application of an Upwind High Resolution Finite-Differencing Scheme and Multigrid Method in Steady-State Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Yang, Cheng I.; Guo, Yan-Hu; Liu, C.- H.

    1996-01-01

    The analysis and design of a submarine propulsor requires the ability to predict the characteristics of both laminar and turbulent flows to a higher degree of accuracy. This report presents results of certain benchmark computations based on an upwind, high-resolution, finite-differencing Navier-Stokes solver. The purpose of the computations is to evaluate the ability, the accuracy and the performance of the solver in the simulation of detailed features of viscous flows. Features of interest include flow separation and reattachment, surface pressure and skin friction distributions. Those features are particularly relevant to the propulsor analysis. Test cases with a wide range of Reynolds numbers are selected; therefore, the effects of the convective and the diffusive terms of the solver can be evaluated separately. Test cases include flows over bluff bodies, such as circular cylinders and spheres, at various low Reynolds numbers, flows over a flat plate with and without turbulence effects, and turbulent flows over axisymmetric bodies with and without propulsor effects. Finally, to enhance the iterative solution procedure, a full approximation scheme V-cycle multigrid method is implemented. Preliminary results indicate that the method significantly reduces the computational effort.
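
    The solver described above is far more elaborate than can be reproduced here, but the basic idea of upwind differencing can be sketched on the 1-D linear advection equation u_t + c u_x = 0: the spatial derivative is biased toward the direction the information comes from. The minimal example below illustrates only that idea, under its own grid and CFL assumptions; it is not the benchmark Navier-Stokes code of the report.

      import numpy as np

      # First-order upwind scheme for u_t + c*u_x = 0 on a periodic domain.
      c, L, N = 1.0, 1.0, 200
      dx = L / N
      dt = 0.4 * dx / c                        # CFL number of 0.4 keeps the explicit scheme stable
      x = np.linspace(0.0, L, N, endpoint=False)
      u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial Gaussian pulse

      for _ in range(int(round(0.4 / dt))):
          # for c > 0 the upwind neighbour is the one to the left
          u = u - c * dt / dx * (u - np.roll(u, 1))

      print(u.max(), x[np.argmax(u)])          # the pulse has advected right and diffused slightly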

  15. An RGB colour image steganography scheme using overlapping block-based pixel-value differencing

    PubMed Central

    Pal, Arup Kumar

    2017-01-01

    This paper presents a steganographic scheme based on the RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique is not applicable to embed the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e. red, green and blue. Hence, in the proposed scheme, initially the three colour components are represented into two overlapping blocks like the combination of red and green colour components, while another one is the combination of green and blue colour components, respectively. Later, the PVD technique is employed on each block independently to embed the secret data. The two overlapping blocks are readjusted to attain the modified three colour components. The notion of overlapping blocks has improved the embedding capacity of the cover image. The scheme has been tested on a set of colour images and satisfactory results have been achieved in terms of embedding capacity and upholding the acceptable visual quality of the stego-image. PMID:28484623
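
    The following sketch shows the basic pixel-value differencing step on a single pixel pair, in the spirit of the classical PVD idea: the pair's difference selects a quantization range, the range width sets how many secret bits the pair carries, and the pixel values are adjusted so their new difference encodes those bits. The range table, the use of a single grayscale pair rather than overlapping RGB blocks, and the omission of overflow checks are all simplifying assumptions, not the scheme of the paper.

      import math

      # Illustrative quantization ranges with widths 8, 8, 16, 32, 64, 128 (an assumed table).
      RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

      def embed_pair(p1, p2, bits):
          """Embed the leading secret bits into one pixel pair; return the new pair and bits used."""
          d = abs(p2 - p1)
          lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
          n = int(math.log2(hi - lo + 1))            # capacity of this pair in bits
          b = int(bits[:n].ljust(n, "0"), 2)         # next n secret bits as an integer
          m = (lo + b) - d                           # required change in the difference
          if p2 >= p1:
              return (p1 - math.ceil(m / 2), p2 + math.floor(m / 2)), n
          return (p1 + math.ceil(m / 2), p2 - math.floor(m / 2)), n

      print(embed_pair(100, 112, "10110"))           # d = 12 falls in [8, 15], so 3 bits are embedded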

  16. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which takes a form similar to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  17. SELECTION OF BURST-LIKE TRANSIENTS AND STOCHASTIC VARIABLES USING MULTI-BAND IMAGE DIFFERENCING IN THE PAN-STARRS1 MEDIUM-DEEP SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, S.; Gezari, S.; Heinis, S.

    2015-03-20

    We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g {sub P1}, r {sub P1}, i {sub P1}, and z {sub P1}. We use three deterministic light-curve models to fit BL transients; a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-out-one cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL, and 2262 SV, with a purity of 95.00% for AGNs, and 90.97% for SN based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.

  18. Performance and state-space analyses of systems using Petri nets

    NASA Technical Reports Server (NTRS)

    Watson, James Francis, III

    1992-01-01

    The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state-space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows that the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.

  19. Observing Bridge Dynamic Deflection in Green Time by Information Technology

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi

    2018-01-01

    As traditional surveying methods are of limited use for observing bridge dynamic deflection, information technology is adopted to observe bridge dynamic deflection in green time. In this study, digital cameras photograph the bridge in red time to obtain a zero image; a series of successive images is then photographed in green time. Deformation point targets are identified and located by the Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing the successive images with the zero image. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in the X, Z and comprehensive directions, and those of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels in the X, Z and comprehensive directions in these tests. The maximal bridge deflection is 44.16 mm, which is less than the 75 mm bridge deflection tolerance value. The approach can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time, and can provide data support for on-site decisions about bridge structural safety.

  20. Real-time kinematic PPP GPS for structure monitoring applied on the Severn Suspension Bridge, UK

    NASA Astrophysics Data System (ADS)

    Tang, Xu; Roberts, Gethin Wyn; Li, Xingxing; Hancock, Craig Matthew

    2017-09-01

    GPS is widely used for monitoring large civil engineering structures in real time or near real time. In this paper the use of PPP GPS for monitoring large structures is investigated. The bridge deformation results estimated using double-differenced (DD) measurements are used as the truth against which the performance of kinematic PPP in a real-time scenario for bridge monitoring is assessed. The tower datasets, with millimetre-level movement, and the suspension cable dataset, with centimetre/decimetre-level movement, were processed by both PPP and DD data processing methods. The consistency of the tower PPP time series indicated that the wet tropospheric delay is the major obstacle to extracting small deflections. The results for the suspension cable survey points indicate that an ionosphere-free linear combination is adequate for capturing bridge deformation with the kinematic PPP model, and the frequency-domain analysis yields very similar results using either PPP or DD. This gives evidence that PPP can be used as an alternative to DD for large-structure monitoring when DD is difficult or impossible because of long baselines, power outages or natural disasters. The PPP residual tropospheric wet delays can be applied to improve the extraction of small movements.

  1. The Deep Lens Survey : Real--time Optical Transient and Moving Object Detection

    NASA Astrophysics Data System (ADS)

    Becker, Andy; Wittman, David; Stubbs, Chris; Dell'Antonio, Ian; Loomba, Dinesh; Schommer, Robert; Tyson, J. Anthony; Margoniner, Vera; DLS Collaboration

    2001-12-01

    We report on the real-time optical transient program of the Deep Lens Survey (DLS). Meeting the DLS core science weak-lensing objective requires repeated visits to the same part of the sky, 20 visits for 63 sub-fields in 4 filters, on a 4-m telescope. These data are reduced in real-time, and differenced against each other on all available timescales. Our observing strategy is optimized to allow sensitivity to transients on several minute, one day, one month, and one year timescales. The depth of the survey allows us to detect and classify both moving and stationary transients down to ~ 25th magnitude, a relatively unconstrained region of astronomical variability space. All transients and moving objects, including asteroids, Kuiper belt (or trans-Neptunian) objects, variable stars, supernovae, 'unknown' bursts with no apparent host, orphan gamma-ray burst afterglows, as well as airplanes, are posted on the web in real-time for use by the community. We emphasize our sensitivity to detect and respond in real-time to orphan afterglows of gamma-ray bursts, and present one candidate orphan in the field of Abell 1836. See http://dls.bell-labs.com/transients.html.

  2. Research on the exponential growth effect on network topology: Theoretical and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; You, Zongjun

    An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially through the establishment of new companies and of contacts with existing firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degree of the scale-free network follows the power-law distribution p(k) ~ k^(-γ) with γ = 2β + 1, where the parameter β satisfies 0.5 ≤ β ≤ 1. We also find that preferential attachment takes place in a dynamic local world whose size is in direct proportion to the size of the whole network. The paper further gives analytical results for non-preferential attachment with exponential growth on random networks, and computer simulations of the model illustrate these analytical results. Through investigations of the enterprises, the paper first presents the distribution of the IC industry and the composition of the industrial chain and service chain; the corresponding networks of the industrial chain and service chain, and a correlation analysis of the whole IC industry, are then presented. Based on complex network theory, an analysis and comparison of the industrial chain network and the service chain network in the Yangtze River Delta are provided.

  3. In vivo chlorine and sodium MRI of rat brain at 21.1 T.

    PubMed

    Schepkin, Victor D; Elumalai, Malathy; Kitchen, Jason A; Qian, Chunqi; Gor'kov, Peter L; Brey, William W

    2014-02-01

    MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these nuclei, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. MRI of (35)Cl and (23)Na was performed and relaxation times were measured in vivo in normal rats (n = 3) and in rats with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. The T1 relaxation curve of chlorine in the normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FIDs) of chlorine and sodium in vivo were bi-exponential with similar rapidly decaying components of [Formula: see text] ms and [Formula: see text] ms, respectively. The effects of a small acquisition matrix and of bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. The study modeled a dramatic effect of the bi-exponential decay on MRI results. The increased chlorine concentration revealed in glioma (~1.5 times that of normal brain) is consistent with the hypothesis that chlorine is important for tumor progression.
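
    A generic bi-exponential fit of the kind referred to above can be sketched with a standard least-squares routine. The synthetic time constants and fractions below merely echo the quoted 35Cl values in order to generate test data; the model and numbers are illustrative assumptions, not a reconstruction of the relaxometry pipeline used in the study.

      import numpy as np
      from scipy.optimize import curve_fit

      # Two-component (bi-exponential) decay with fast fraction f.
      def biexp(t, a, f, t_fast, t_slow):
          return a * (f * np.exp(-t / t_fast) + (1.0 - f) * np.exp(-t / t_slow))

      t = np.linspace(0.1, 100.0, 200)                                 # ms
      rng = np.random.default_rng(1)
      y = biexp(t, 1.0, 0.7, 4.8, 24.4) + 0.005 * rng.standard_normal(t.size)

      popt, _ = curve_fit(biexp, t, y, p0=(1.0, 0.5, 2.0, 30.0),
                          bounds=([0.0, 0.0, 0.1, 1.0], [2.0, 1.0, 20.0, 200.0]))
      print(popt)    # recovered amplitude, fast fraction, fast and slow time constants (ms)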

  4. The art of spacecraft design: A multidisciplinary challenge

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Levine, M.; Austel, L.

    1989-01-01

    Actual design turn-around time has become shorter due to the use of optimization techniques which have been introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor's series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, the current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied for a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.

  5. A single-stage flux-corrected transport algorithm for high-order finite-volume methods

    DOE PAGES

    Chaplin, Christopher; Colella, Phillip

    2017-05-08

    We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.

  6. Human detection in sensitive security areas through recognition of omega shapes using MACH filters

    NASA Astrophysics Data System (ADS)

    Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert

    2015-03-01

    Human detection has gained considerable importance in aggravated security scenarios over recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. A larger accumulation of humans than the number of personnel authorized to visit a security controlled area must be effectively detected, promptly alarmed and immediately monitored. A framework involving a novel combination of some existing techniques allows an immediate detection of an undesirable crowd in a region under observation. Frame differencing provides a clear visibility of moving objects while highlighting those objects in each frame acquired by a real time camera. Training of a correlation pattern recognition based filter on desired shapes such as elliptical representations of human faces (variants of an Omega Shape) yields correct detections. The inherent ability of correlation pattern recognition filters caters for angular rotations in the target object and renders a decision on whether the number of persons in the monitored area exceeds the allowed figure.
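
    Taken on its own, the frame-differencing step mentioned above reduces to thresholding the absolute difference of consecutive grayscale frames; the MACH correlation filtering that follows it is not reproduced here. The toy example below is a minimal sketch of that differencing step only, with an arbitrary threshold.

      import numpy as np

      def motion_mask(prev_frame, curr_frame, threshold=25):
          """Mark pixels whose grayscale value changed appreciably between two frames."""
          diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
          return (diff > threshold).astype(np.uint8)

      # A bright 2x2 "object" moves one pixel to the right between frames.
      prev = np.zeros((6, 6), dtype=np.uint8)
      curr = np.zeros((6, 6), dtype=np.uint8)
      prev[2:4, 1:3] = 200
      curr[2:4, 2:4] = 200
      print(motion_mask(prev, curr))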

  7. Computation of incompressible viscous flows through artificial heart devices with moving boundaries

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Rogers, Stuart; Kwak, Dochan; Chang, I-Dee

    1991-01-01

    The extension of computational fluid dynamics techniques to artificial heart flow simulations is illustrated. Unsteady incompressible Navier-Stokes equations written in 3-D generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudo compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The efficiency and robustness of the time accurate formulation of the algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated with experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapping grid embedding scheme are used, respectively. Steady state solutions for the flow through a tilting disk heart valve was compared against experimental measurements. Good agreement was obtained. The flow computation during the valve opening and closing is carried out to illustrate the moving boundary capability.

  8. Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets

    PubMed Central

    Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda

    2013-01-01

    Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626

  9. Treatment of late time instabilities in finite-difference EMP scattering codes

    NASA Astrophysics Data System (ADS)

    Simpson, L. T.; Holland, R.; Arman, S.

    1982-12-01

    Constraints applicable to a finite difference mesh for solution of Maxwell's equations are defined. The equations are applied in the time domain for computing electromagnetic coupling to complex structures, e.g., rectangular, cylindrical, or spherical. In a spatially varying grid, the amplitude growth of high frequency waves becomes exponential through multiple reflections from the outer boundary in late-time solutions. The exponential growth of the numerical noise exceeds the value of the real signal. The correction technique employs an absorbing surface and a radiating boundary, along with tailored selection of the grid mesh size. High frequency noise is removed through use of a low-pass digital filter, a linear least squares fit is made to the low frequency filtered response, and the original, filtered, and fitted data are merged to preserve the high frequency early-time response.

  10. Lithium diffusion in polyether ether ketone and polyimide stimulated by in situ electron irradiation and studied by the neutron depth profiling method

    NASA Astrophysics Data System (ADS)

    Vacik, J.; Hnatowicz, V.; Attar, F. M. D.; Mathakari, N. L.; Dahiwale, S. S.; Dhole, S. D.; Bhoraskar, V. N.

    2014-10-01

    Diffusion of lithium from a LiCl aqueous solution into polyether ether ketone (PEEK) and polyimide (PI) assisted by in situ irradiation with 6.5 MeV electrons was studied by the neutron depth profiling method. The number of the Li atoms was found to be roughly proportional to the diffusion time. Regardless of the diffusion time, the measured depth profiles in PEEK exhibit a nearly exponential form, indicating achievement of a steady-state phase of a diffusion-reaction process specified in the text. The form of the profiles in PI is more complex and it depends strongly on the diffusion time. For the longer diffusion time, the profile consists of near-surface bell-shaped part due to Fickian-like diffusion and deeper exponential part.

  11. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first and second order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than exponential decay at intermediate times, and approaches zero at long times for all five integrators. As friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results at low friction limits. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within range of theoretical predictions and nonequilibrium studies.

  12. Daily water and sediment discharges from selected rivers of the eastern United States; a time-series modeling approach

    USGS Publications Warehouse

    Fitzgerald, Michael G.; Karlinger, Michael R.

    1983-01-01

    Time-series models were constructed for analysis of daily runoff and sediment discharge data from selected rivers of the Eastern United States. Logarithmic transformation and first-order differencing of the data sets were necessary to produce second-order, stationary time series and remove seasonal trends. Cyclic models accounted for less than 42 percent of the variance in the water series and 31 percent in the sediment series. Analysis of the apparent oscillations of given frequencies occurring in the data indicates that frequently occurring storms can account for as much as 50 percent of the variation in sediment discharge. Components of the frequency analysis indicate that a linear representation is reasonable for the water-sediment system. Models that incorporate lagged water discharge as input prove superior to univariate techniques in modeling and prediction of sediment discharges. The random component of the models includes errors in measurement and model hypothesis and indicates no serial correlation. An index of sediment production within or between drain-gage basins can be calculated from model parameters.
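
    The logarithmic transformation and first-order differencing used above to obtain a stationary series can be sketched in a few lines; the synthetic daily discharge below (a seasonal cycle plus skewed noise) merely stands in for the gauged records and is not the USGS data.

      import numpy as np

      rng = np.random.default_rng(2)
      days = np.arange(365)
      # Synthetic daily discharge: seasonal cycle plus skewed noise (illustrative only).
      q = 50.0 + 30.0 * np.sin(2.0 * np.pi * days / 365.0) + rng.lognormal(0.0, 0.5, 365)

      log_q = np.log(q)              # logarithmic transformation
      d_log_q = np.diff(log_q)       # first-order differencing removes the slow seasonal trend

      print(log_q.std(), d_log_q.std())   # the differenced series varies far less than the log series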

  13. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1991-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step Δt is restricted by the CFL-like condition Δt < Const · N^(-2), where N is the number of spatial degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully explicit spectral approximations in the nonperiodic case.

  14. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1990-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step Δt is restricted by the CFL-like condition Δt < Const · N^(-2), where N is the number of spatial degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully explicit spectral approximations in the nonperiodic case.

  15. Algorithms in Discrepancy Theory and Lattices

    NASA Astrophysics Data System (ADS)

    Ramadas, Harishchandra

    This thesis deals with algorithmic problems in discrepancy theory and lattices, and is based on two projects I worked on while at the University of Washington in Seattle. A brief overview is provided in Chapter 1 (Introduction). Chapter 2 covers joint work with Avi Levy and Thomas Rothvoss in the field of discrepancy minimization. A well-known theorem of Spencer shows that any set system with n sets over n elements admits a coloring of discrepancy O(√n). While the original proof was non-constructive, recent progress brought polynomial time algorithms by Bansal, Lovett and Meka, and Rothvoss. All those algorithms are randomized, even though Bansal's algorithm admitted a complicated derandomization. We propose an elegant deterministic polynomial time algorithm that is inspired by Lovett-Meka as well as the Multiplicative Weight Update method. The algorithm iteratively updates a fractional coloring while controlling the exponential weights that are assigned to the set constraints. A conjecture by Meka suggests that Spencer's bound can be generalized to symmetric matrices. We prove that n x n matrices that are block diagonal with block size q admit a coloring of discrepancy O(√n · √log q). Bansal, Dadush and Garg recently gave a randomized algorithm to find a vector x with entries in {-1, 1} with ‖Ax‖_∞ ≤ O(√log n) in polynomial time, where A is any matrix whose columns have length at most 1. We show that our method can be used to deterministically obtain such a vector. In Chapter 3, we discuss a result in the broad area of lattices and integer optimization, in joint work with Rebecca Hoberg, Thomas Rothvoss and Xin Yang. The number balancing (NBP) problem is the following: given real numbers a_1, ..., a_n in [0, 1], find two disjoint subsets I_1, I_2 of [n] so that the difference |Σ_{i∈I_1} a_i - Σ_{i∈I_2} a_i| of their sums is minimized. An application of the pigeonhole principle shows that there is always a solution where the difference is at most O(√n/2^n). Finding the minimum, however, is NP-hard. In polynomial time, the differencing algorithm by Karmarkar and Karp from 1982 can produce a solution with difference at most n^(-Θ(log n)), but no further improvement has been made since then. We show a relationship between NBP and Minkowski's Theorem. First we show that an approximate oracle for Minkowski's Theorem gives an approximate NBP oracle. Perhaps more surprisingly, we show that an approximate NBP oracle gives an approximate Minkowski oracle. In particular, we prove that any polynomial time algorithm that guarantees a solution of difference at most 2^(√n)/2^n would give a polynomial approximation for Minkowski as well as a polynomial factor approximation algorithm for the Shortest Vector Problem.
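
    The 1982 Karmarkar-Karp differencing heuristic mentioned above repeatedly commits the two largest remaining numbers to opposite sides of the partition, replacing them by their difference; the last surviving value is the achieved set difference. The sketch below returns only that value (not the partition itself) on a made-up instance, which is enough to illustrate the idea.

      import heapq

      def karmarkar_karp(nums):
          """Largest-differencing heuristic: the returned value is the achieved partition difference."""
          heap = [-x for x in nums]              # max-heap via negated values
          heapq.heapify(heap)
          while len(heap) > 1:
              a = -heapq.heappop(heap)           # largest remaining number
              b = -heapq.heappop(heap)           # second largest
              heapq.heappush(heap, -(a - b))     # place them on opposite sides, keep the difference
          return -heap[0]

      print(karmarkar_karp([0.71, 0.42, 0.39, 0.25, 0.12, 0.08]))   # about 0.05 for this toy instance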

  16. A hybridized method for computing high-Reynolds-number hypersonic flow about blunt bodies

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Hamilton, H. H., II

    1979-01-01

    A hybridized method for computing the flow about blunt bodies is presented. In this method the flow field is split into its viscid and inviscid parts. The forebody flow field about a parabolic body is computed. For the viscous solution, the Navier-Stokes equations are solved on orthogonal parabolic coordinates using explicit finite differencing. The inviscid flow is determined by using a Moretti type scheme in which the Euler equations are solved, using explicit finite differences, on a nonorthogonal coordinate system which uses the bow shock as an outer boundary. The two solutions are coupled along a common data line and are marched together in time until a converged solution is obtained. Computed results, when compared with experimental and analytical results, indicate the method works well over a wide range of Reynolds numbers and Mach numbers.

  17. GPS receiver CODE bias estimation: A comparison of two methods

    NASA Astrophysics Data System (ADS)

    McCaffrey, Anthony M.; Jayachandran, P. T.; Themens, D. R.; Langley, R. B.

    2017-04-01

    The Global Positioning System (GPS) is a valuable tool in the measurement and monitoring of ionospheric total electron content (TEC). To obtain accurate GPS-derived TEC, satellite and receiver hardware biases, known as differential code biases (DCBs), must be estimated and removed. The Center for Orbit Determination in Europe (CODE) provides monthly averages of receiver DCBs for a significant number of stations in the International Global Navigation Satellite Systems Service (IGS) network. A comparison of the monthly receiver DCBs provided by CODE with DCBs estimated using the minimization of standard deviations (MSD) method on both daily and monthly time intervals, is presented. Calibrated TEC obtained using CODE-derived DCBs, is accurate to within 0.74 TEC units (TECU) in differenced slant TEC (sTEC), while calibrated sTEC using MSD-derived DCBs results in an accuracy of 1.48 TECU.

  18. Salient features of dependence in daily US stock market indices

    NASA Astrophysics Data System (ADS)

    Gil-Alana, Luis A.; Cunado, Juncal; de Gracia, Fernando Perez

    2013-08-01

    This paper deals with the analysis of long-range dependence in the US stock market. We focus first on the log-values of the Dow Jones Industrial Average, Standard and Poor's 500 and Nasdaq indices, daily from February 1971 to February 2007. The volatility processes are examined based on the squared and the absolute values of the return series, and the stability of the parameters across time is also investigated in both the level and the volatility processes. A method that permits us to estimate fractional differencing parameters in the presence of structural breaks is applied. Finally, the “day of the week” effect is examined by looking at the order of integration for each day of the week, which also provides a new modeling approach to describe the dependence in this context.
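
    Fractional differencing applies the operator (1 - B)^d with binomial weights w_0 = 1 and w_k = w_{k-1} (k - 1 - d)/k, so that d = 1 recovers ordinary first differencing and 0 < d < 1 gives long-memory behaviour. The sketch below applies a truncated version of this operator to a synthetic random-walk-like series; it is a generic illustration, not the estimation-with-structural-breaks method of the paper.

      import numpy as np

      def frac_diff(series, d):
          """Apply the fractional differencing operator (1 - B)^d with truncated binomial weights."""
          x = np.asarray(series, dtype=float)
          w = [1.0]
          for k in range(1, len(x)):
              w.append(w[-1] * (k - 1 - d) / k)   # w_k = w_{k-1} * (k - 1 - d) / k
          w = np.array(w)
          return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(len(x))])

      rng = np.random.default_rng(3)
      x = np.cumsum(rng.standard_normal(500))     # a random-walk-like level series
      y = frac_diff(x, d=0.4)
      print(x.std(), y[50:].std())                # the fractionally differenced series fluctuates much less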

  19. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  20. CFD applications in hypersonic flight

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1992-01-01

    Design studies are underway for a variety of hypersonic flight vehicles. The National Aero-Space Plane will provide a reusable, single-stage-to-orbit capability for routine access to low earth orbit. Flight-capable satellites will dip into the atmosphere to maneuver to new orbits, while planetary probes will decelerate at their destination by atmospheric aerobraking. To supplement limited experimental capabilities in the hypersonic regime, CFD is being used to analyze the flow about these configurations. The governing equations include fluid dynamic as well as chemical species equations, which are solved with robust upwind differencing schemes. Examples of CFD applications to hypersonic vehicles suggest an important role this technology will play in the development of future aerospace systems. The computational resources needed to obtain solutions are large, but various strategies are being exploited to reduce the time required for complete vehicle simulations.

  1. Comparing Landsat-7 ETM+ and ASTER Imageries to Estimate Daily Evapotranspiration Within a Mediterranean Vineyard Watershed

    NASA Technical Reports Server (NTRS)

    Montes, Carlo; Jacob, Frederic

    2017-01-01

    We compared the capabilities of Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imageries for mapping daily evapotranspiration (ET) within a Mediterranean vineyard watershed. We used Landsat and ASTER data simultaneously collected on four dates in 2007 and 2008, along with the simplified surface energy balance index (S-SEBI) model. We used previously ground-validated good quality ASTER estimates as reference, and we analyzed the differences with Landsat retrievals in light of the instrumental factors and methodology. Although Landsat and ASTER retrievals of S-SEBI inputs were different, estimates of daily ET from the two imageries were similar. This is ascribed to the S-SEBI spatial differencing in temperature, and opens the path for using historical Landsat time series over vineyards.

  2. Bayesian exponential random graph modelling of interhospital patient referral networks.

    PubMed

    Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro

    2017-08-15

    Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.

  3. A fractal process of hydrogen diffusion in a-Si:H with exponential energy distribution

    NASA Astrophysics Data System (ADS)

    Hikita, Harumi; Ishikawa, Hirohisa; Morigaki, Kazuo

    2017-04-01

    Hydrogen diffusion in a-Si:H with an exponential distribution of states in energy exhibits a fractal structure. It is shown that the probability P(t) of a pausing time t has the form t^α (α: fractal dimension), and that the fractal dimension α = T_r/T_0 (T_r: hydrogen temperature, T_0: a temperature corresponding to the width of the exponential distribution of states in energy) is in agreement with the Hausdorff dimension. A fractal graph for the case α ≤ 1 is like the Cantor set, while a fractal graph for the case α > 1 is like the Koch curves. At α = ∞, hydrogen migration exhibits Brownian motion. Hydrogen diffusion in a-Si:H should thus be a fractal process.

  4. Photoluminescence study of MBE grown InGaN with intentional indium segregation

    NASA Astrophysics Data System (ADS)

    Cheung, Maurice C.; Namkoong, Gon; Chen, Fei; Furis, Madalina; Pudavar, Haridas E.; Cartwright, Alexander N.; Doolittle, W. Alan

    2005-05-01

    Proper control of MBE growth conditions has yielded an In0.13Ga0.87N thin film sample with emission consistent with In-segregation. The photoluminescence (PL) from this epilayer showed multiple emission components. Moreover, temperature and power dependent studies of the PL demonstrated that two of the components were excitonic in nature and consistent with indium phase separation. At 15 K, time resolved PL showed a non-exponential PL decay that was well fitted with the stretched exponential solution expected for disordered systems. Consistent with the assumed carrier hopping mechanism of this model, the effective lifetime and the stretched exponential parameter decrease with increasing emission energy. Finally, room temperature micro-PL using a confocal microscope showed spatial clustering of low energy emission.
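
    The stretched-exponential (Kohlrausch) decay referred to above is I(t) = I_0 exp(-(t/tau)^beta) with 0 < beta <= 1, and one common effective lifetime is the time integral of the decay, <tau> = (tau/beta) Gamma(1/beta). The fit below uses synthetic numbers chosen only for illustration; it is not the InGaN dataset or the analysis of the paper.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import gamma

      def stretched_exp(t, i0, tau, beta):
          """Kohlrausch decay I(t) = i0 * exp(-(t/tau)**beta)."""
          return i0 * np.exp(-(t / tau) ** beta)

      t = np.linspace(0.01, 20.0, 300)                                  # ns
      rng = np.random.default_rng(4)
      y = stretched_exp(t, 1.0, 2.5, 0.6) + 0.01 * rng.standard_normal(t.size)

      popt, _ = curve_fit(stretched_exp, t, y, p0=(1.0, 1.0, 0.8),
                          bounds=([0.0, 0.01, 0.1], [2.0, 100.0, 1.0]))
      i0, tau, beta = popt
      print(tau, beta, (tau / beta) * gamma(1.0 / beta))                # tau, beta, effective lifetime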

  5. Scalar-fluid interacting dark energy: Cosmological dynamics beyond the exponential potential

    NASA Astrophysics Data System (ADS)

    Dutta, Jibitesh; Khyllep, Wompherdeiki; Tamanini, Nicola

    2017-01-01

    We extend the dynamical systems analysis of scalar-fluid interacting dark energy models performed in C. G. Boehmer et al., Phys. Rev. D 91, 123002 (2015), 10.1103/PhysRevD.91.123002 by considering scalar field potentials beyond the exponential type. The properties and stability of critical points are examined using a combination of linear analysis, computational methods and advanced mathematical techniques, such as center manifold theory. We show that the interesting results obtained with an exponential potential can generally be recovered also for more complicated scalar field potentials. In particular, employing power law and hyperbolic potentials as examples, we find late time accelerated attractors, transitions from dark matter to dark energy domination with specific distinguishing features, and accelerated scaling solutions capable of solving the cosmic coincidence problem.

  6. Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay

    NASA Astrophysics Data System (ADS)

    Chunodkar, Apurva A.; Akella, Maruthi R.

    2013-12-01

    This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback, wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in the feedback is modeled as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. For linear systems with switched delay feedback, a new sufficient condition in the form of an average dwell-time result is presented using a complete-type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well-characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete-type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying delay. This extension ensures stability robustness to time-delay in the control design for all values of the delay less than the known upper bound. A model transformation is used to partition the nonlinear system into a nominal linear part that is exponentially stable, plus a bounded perturbation. We obtain sufficient conditions that ensure exponential stability inside a region-of-attraction estimate. A constructive method to evaluate the sufficient conditions is presented, together with comparisons with the corresponding constant and piecewise-constant delay cases. Numerical simulations are performed to illustrate the theoretical results of this paper.
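    A small simulation sketch of the setting, not of the paper's Lyapunov-Krasovskii construction: a scalar linear system with a feedback delay that switches between unknown constant values below a known upper bound. The gains, delay values, switching period, and Euler discretisation are illustrative assumptions.

```python
# Sketch of the setting: x'(t) = a*x(t) + b*x(t - tau(t)), with tau(t) switching
# between piecewise-constant values below a known bound. Gains, delays, and the
# Euler discretisation are illustrative assumptions.
import numpy as np

a, b = 0.5, -2.0             # open-loop unstable, stabilised by delayed feedback
dt, T = 1e-3, 20.0
taus = [0.05, 0.20]          # switched delay values, both below tau_max = 0.25
n = int(T / dt)
x = np.ones(n)               # constant unit history for t <= 0

for k in range(1, n):
    phase = int((k * dt) // 5.0) % 2            # switch the delay every 5 time units
    d = int(taus[phase] / dt)
    x_del = x[k - 1 - d] if k - 1 - d >= 0 else 1.0
    x[k] = x[k - 1] + dt * (a * x[k - 1] + b * x_del)

print("final |x| =", abs(x[-1]))   # decays toward zero for this choice of gains
```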

  7. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.

    2017-11-01

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
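    A minimal sketch of the combined early/late-time idea for a 1-D slab (half-thickness L, t_d = D·t/L², unit initial value, zero at the faces): the early-time piece keeps the leading error-function term and the late-time piece keeps the leading exponential terms. The switchover value used here (about 0.2) is illustrative, not the optimized value derived in the paper.

```python
# Sketch of combining the short-time (error-function) and long-time (exponential)
# series for a 1-D slab, switching at an illustrative t_d0 ~ 0.2.
import numpy as np

def theta_early(td):
    # leading term of the error-function (short-time) series
    return 1.0 - 2.0 * np.sqrt(td / np.pi)

def theta_late(td, nterms=2):
    # leading terms of the exponential (long-time) series
    n = np.arange(nterms)
    return np.sum(8.0 / ((2 * n + 1) ** 2 * np.pi ** 2)
                  * np.exp(-(2 * n + 1) ** 2 * np.pi ** 2 * td / 4.0))

def theta_combined(td, td0=0.2):
    return theta_early(td) if td < td0 else theta_late(td)

def theta_reference(td, nterms=200):
    return theta_late(td, nterms)     # many-term series as a reference

for td in (0.05, 0.2, 1.0):
    print(td, theta_combined(td), theta_reference(td))
```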

  8. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  9. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; ...

    2017-10-24

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  10. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
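    A Monte Carlo sketch of the process described above: a superposition of uncorrelated one-sided exponential pulses, from which the time spent above a threshold and the rate of upward level crossings are estimated. The pulse duration, intermittency parameter, amplitude distribution, and threshold choice are illustrative, not those of the paper's analysis.

```python
# Monte Carlo sketch: shot noise built from uncorrelated exponential pulses, with
# empirical excess-time statistics. All parameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
tau_d, gamma, T, dt = 1.0, 5.0, 200.0, 0.01   # pulse duration, intermittency, record length
t = np.arange(0.0, T, dt)

n_pulses = rng.poisson(gamma * T / tau_d)
arrivals = rng.uniform(0.0, T, n_pulses)
amps = rng.exponential(1.0, n_pulses)

signal = np.zeros_like(t)
for t0, a in zip(arrivals, amps):
    mask = t >= t0
    signal[mask] += a * np.exp(-(t[mask] - t0) / tau_d)

threshold = signal.mean() + 2 * signal.std()
above = signal > threshold
time_above = above.mean() * T
upcrossings = np.count_nonzero(~above[:-1] & above[1:])
print(f"time above threshold: {time_above:.1f}, up-crossing rate: {upcrossings / T:.3f}")
```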

  11. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
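    A sketch of the ParaExp splitting for a small linear system x'(t) = A x + g(t): each subinterval carries a particular solution with zero initial data plus homogeneous solutions propagated by the matrix exponential from the subinterval endpoints. Here the pieces are computed sequentially, and with a generic integrator in place of Leapfrog, purely to show that the decomposition reproduces the serial solution; the toy matrix and forcing are illustrative.

```python
# Sketch of the ParaExp decomposition: particular solutions with zero initial
# conditions per subinterval, recombined with matrix-exponential propagation.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-4.0, 0.0]])            # toy oscillator
g = lambda t: np.array([0.0, np.sin(3.0 * t)])
x0 = np.array([1.0, 0.0])
T, P = 2.0, 4                                       # total time, number of subintervals
edges = np.linspace(0.0, T, P + 1)

rhs = lambda t, x: A @ x + g(t)

# Particular solutions: zero initial condition on each subinterval (parallel in ParaExp).
v_end = []
for a, b in zip(edges[:-1], edges[1:]):
    sol = solve_ivp(rhs, (a, b), np.zeros(2), rtol=1e-10, atol=1e-12)
    v_end.append(sol.y[:, -1])

# Homogeneous propagation of the initial condition and of each particular endpoint.
x_T = expm(A * T) @ x0
for k, v in enumerate(v_end):
    x_T = x_T + expm(A * (T - edges[k + 1])) @ v

ref = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]
print("ParaExp-style recombination:", x_T, " serial reference:", ref)
```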

  12. Single-exponential activation behavior behind the super-Arrhenius relaxations in glass-forming liquids.

    PubMed

    Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg

    2010-11-17

    The reported relaxation time for several typical glass-forming liquids was analyzed by using a kinetic model for liquids which invoked a new kind of atomic cooperativity--thermodynamic cooperativity. The broadly studied 'cooperative length' was recognized as the kinetic cooperativity. Both cooperativities were conveniently quantified from the measured relaxation data. A single-exponential activation behavior was uncovered behind the super-Arrhenius relaxations for the liquids investigated. Hence the mesostructure of these liquids and the atomic mechanism of the glass transition became clearer.

  13. A mechanical model of bacteriophage DNA ejection

    NASA Astrophysics Data System (ADS)

    Arun, Rahul; Ghosal, Sandip

    2017-08-01

    Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism" - the exponential amplification of friction forces that result when a rope is wound around a cylinder as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
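    A worked example of the capstan relation, T_load = T_hold·exp(μφ): the friction amplification grows exponentially with the wrapped angle (equivalently, with the length of rope or DNA wound around the cylinder). The friction coefficient and number of wraps below are illustrative numbers, not values from the experiments.

```python
# Capstan relation T_load = T_hold * exp(mu * phi): exponential growth of friction
# amplification with wrapped angle. Parameters are illustrative.
import numpy as np

mu = 0.3                                  # assumed effective friction coefficient
turns = np.arange(0, 6)                   # number of wraps around the "capstan"
phi = 2 * np.pi * turns                   # total wrapped angle
amplification = np.exp(mu * phi)

for n, amp in zip(turns, amplification):
    print(f"{n} wraps: holding force amplified {amp:,.1f}x")
```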

  14. Atmospheric cloud physics thermal systems analysis

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Engineering analyses performed on the Atmospheric Cloud Physics (ACPL) Science Simulator expansion chamber and associated thermal control/conditioning system are reported. Analyses were made to develop a verified thermal model and to perform parametric thermal investigations to evaluate systems performance characteristics. Thermal network representations of solid components and the complete fluid conditioning system were solved simultaneously using the Systems Improved Numerical Differencing Analyzer (SINDA) computer program.
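    A minimal sketch of the kind of lumped thermal-network model that SINDA-style codes solve: two nodes coupled by a conductance, one tied to a fluid-loop boundary temperature, integrated by simple explicit differencing. The capacitances, conductances, and boundary temperature are illustrative, not the ACPL expansion-chamber values.

```python
# Minimal lumped thermal network integrated by explicit differencing; all node
# values are illustrative stand-ins, not the ACPL model parameters.
C = [500.0, 800.0]          # J/K, node thermal capacitances
G12 = 2.0                   # W/K, conductance between node 1 and node 2
G2f = 5.0                   # W/K, conductance from node 2 to the fluid loop
T_fluid = 278.0             # K, conditioning-fluid boundary temperature
T = [300.0, 300.0]          # K, initial node temperatures

dt = 1.0                    # s
for _ in range(36000):      # ten hours of settling
    q12 = G12 * (T[0] - T[1])
    q2f = G2f * (T[1] - T_fluid)
    T[0] -= dt * q12 / C[0]
    T[1] += dt * (q12 - q2f) / C[1]

print("steady node temperatures:", [round(x, 2) for x in T])
```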

  15. Interior Fluid Dynamics of Liquid-Filled Projectiles

    DTIC Science & Technology

    1989-12-01

    the Sandia code. The previous codes are primarily based on finite-difference approximations with a relatively coarse grid and were designed without...exploits Chorin's method of artificial compressibility. The steady solution at 11 × 24 × 21 grid points in the r, θ, z directions is obtained by integrating...differences in the radial and axial directions and pseudospectral differencing in the azimuthal direction. Nonuniform grids are introduced for increased
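    A brief sketch of FFT-based pseudospectral differencing in a periodic (azimuthal) direction, the kind of discretisation the excerpt mentions; the grid size and test function are illustrative.

```python
# Pseudospectral differencing of a periodic field via the FFT; the smooth test
# function and grid size are illustrative.
import numpy as np

n = 64
theta = 2 * np.pi * np.arange(n) / n
u = np.exp(np.sin(theta))                      # smooth periodic test field

k = np.fft.fftfreq(n, d=1.0 / n) * 1j          # i * integer wavenumbers
du = np.real(np.fft.ifft(k * np.fft.fft(u)))   # spectral derivative du/dtheta

exact = np.cos(theta) * u
print("max error:", np.max(np.abs(du - exact)))  # spectrally accurate
```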

  16. Domain Derivatives in Dielectric Rough Surface Scattering

    DTIC Science & Technology

    2015-01-01

    and require the gradient of the objective function with respect to the unknown model parameter vector at each stage of iteration. For large N, finite differencing becomes numerically intensive, and an efficient alternative is domain differentiation, in which the full gradient is obtained by solving a single...derivative calculation of the gradient for a locally perturbed dielectric interface. The method is non-variational, and algebraic in nature in that it

  17. Wave Current Interactions and Wave-blocking Predictions Using NHWAVE Model

    DTIC Science & Technology

    2013-03-01

    Navier-Stokes equation. In this approach, as with previous modeling techniques, there is difficulty in simulating the free surface that inhibits accurate...hydrostatic, free-surface, rotational flows in multiple dimensions. It is useful in predicting transformations of surface waves and rapidly varied...Stelling, G., and M. Zijlema, 2003: An accurate and efficient finite-differencing algorithm for non-hydrostatic free surface flow with application to

  18. Evaluation of linear spectral unmixing and deltaNBR for predicting post-fire recovery in a North American ponderosa pine forest

    Treesearch

    A. M. S. Smith; L. B. Lenilte; A. T. Hudak; P. Morgan

    2007-01-01

    The Differenced Normalized Burn Ratio (deltaNBR) is widely used to map post-fire effects in North America from multispectral satellite imagery, but has not been rigorously validated across the great diversity in vegetation types. The importance of these maps to fire rehabilitation crews highlights the need for continued assessment of alternative remote sensing...
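    A short sketch of the index itself: NBR = (NIR - SWIR)/(NIR + SWIR) computed before and after the fire, then differenced (commonly scaled by 1000). The reflectance arrays below are synthetic stand-ins for satellite bands, not values from the study.

```python
# Differenced Normalized Burn Ratio from synthetic pre- and post-fire reflectances.
import numpy as np

def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

nir_pre,  swir_pre  = np.array([0.40, 0.35]), np.array([0.15, 0.18])
nir_post, swir_post = np.array([0.20, 0.30]), np.array([0.30, 0.20])

dnbr = 1000.0 * (nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post))
print("dNBR:", dnbr)   # larger values indicate greater fire-caused change
```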

  19. Relating fire-caused change in forest structure to remotely sensed estimates of fire severity

    Treesearch

    Jamie M. Lydersen; Brandon M. Collins; Jay D. Miller; Danny L. Fry; Scott L. Stephens

    2016-01-01

    Fire severity maps are an important tool for understanding fire effects on a landscape. The relative differenced normalized burn ratio (RdNBR) is a commonly used severity index in California forests, and is typically divided into four categories: unchanged, low, moderate, and high. RdNBR is often calculated twice--from images collected the year of the fire (initial...
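    A sketch of the relative form of the index: RdNBR divides the dNBR by the square root of the absolute prefire NBR, so severity is expressed relative to prefire vegetation cover. The values here are unscaled and synthetic; operational products typically work with NBR scaled by 1000, and the four-category thresholds are not reproduced.

```python
# Relative dNBR (RdNBR) from synthetic pre- and post-fire NBR values.
import numpy as np

nbr_pre  = np.array([0.45, 0.10])      # prefire NBR (dense vs sparse vegetation)
nbr_post = np.array([0.05, -0.05])     # postfire NBR
dnbr = nbr_pre - nbr_post
rdnbr = dnbr / np.sqrt(np.abs(nbr_pre))
print("dNBR:", dnbr, " RdNBR:", rdnbr)
```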

  20. Progress in Multi-Dimensional Upwind Differencing

    DTIC Science & Technology

    1992-09-01

    In Figure 4a a shockless transonic solution is reached from initial values containing shocks and sonic points; again, the residual...
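    A minimal illustration of upwind differencing, the building block the report is concerned with in its multi-dimensional form: a first-order upwind scheme for 1-D linear advection. The grid size, CFL number, and initial pulse are illustrative.

```python
# First-order upwind differencing for u_t + a u_x = 0 with periodic boundaries;
# a one-dimensional sketch of the basic upwind idea, not the report's schemes.
import numpy as np

a, nx, cfl = 1.0, 200, 0.8
dx = 1.0 / nx
dt = cfl * dx / abs(a)
x = np.arange(nx) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)      # square pulse initial data

for _ in range(int(0.5 / dt)):
    u = u - a * dt / dx * (u - np.roll(u, 1))      # backward difference for a > 0

print("pulse advected; discrete mass preserved:", u.sum() * dx)
```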
