Sample records for time integration methods

  1. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced but still matches or outperforms the best of the conventional methods tested.
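
The exponential time differencing idea can be sketched on a scalar ODE. The following is a generic first-order ETD step (ETD1), not the authors' Kohn-Sham implementation; the toy problem and its parameters are illustrative.

```python
import numpy as np

def etd1_step(u, h, c, N):
    """One first-order exponential time differencing (ETD1) step for
    u' = c*u + N(u): the linear part is integrated exactly."""
    return np.exp(c * h) * u + (np.expm1(c * h) / c) * N(u)

# Toy problem: u' = -u + 1, i.e. c = -1, N(u) = 1 (constant "nonlinearity").
# Because N is constant here, ETD1 reproduces the exact solution
# u(t) = 1 + (u0 - 1)*exp(-t) up to round-off.
c, u0, h = -1.0, 2.0, 0.1
u = u0
for _ in range(100):
    u = etd1_step(u, h, c, lambda v: 1.0)
exact = 1.0 + (u0 - 1.0) * np.exp(-10.0)
print(abs(u - exact))  # agrees to round-off
```

Because the stiff linear part is handled by the exact exponential, the step size is limited only by the nonlinear term, which is the source of the accuracy gains reported in the abstract.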

  2. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
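
The integral-equation idea can be illustrated on a toy version of the first-passage problem: a driftless Brownian motion crossing a constant threshold b, where the Volterra kernel P(W_{t-s} > 0) = 1/2 is known in closed form. This is only a stand-in; the paper's integrate-and-fire model includes leak and drift, and its kernel is more involved.

```python
import math

# First-passage density f for driftless Brownian motion W_t to reach a
# constant threshold b, from the first-kind Volterra equation
#   P(W_t > b) = \int_0^t f(s) * P(W_{t-s} > 0) ds,
# with P(W_{t-s} > 0) = 1/2 for t > s.  Solved sequentially on a grid,
# with f represented at the step midpoints.
b, dt, T = 1.0, 0.01, 2.0
N = int(T / dt)

def g(t):  # P(W_t > b) = 1 - Phi(b / sqrt(t))
    return 0.5 * (1.0 - math.erf(b / math.sqrt(2.0 * t)))

f = []
for i in range(1, N + 1):
    acc = 0.5 * dt * sum(f)                    # already-solved contributions
    f.append((g(i * dt) - acc) / (0.5 * dt))   # solve for f at s = (i - 1/2)*dt

# compare against the known inverse-Gaussian first-passage density near s = 1
s = (100 - 0.5) * dt
exact = b / (math.sqrt(2 * math.pi) * s ** 1.5) * math.exp(-b * b / (2 * s))
approx = f[99]
print(abs(approx - exact) / exact)  # small relative error
```

The sequential solve makes the dependence of the likelihood on the threshold and noise parameters explicit, which is what enables the differentiability the abstract emphasizes.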

  3. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute solutions at off-step points.
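
A minimal h-adaptive sketch of this idea, restricted to BDF1/BDF2 on the linear test problem (so each implicit solve is algebraic): the difference between a variable-step BDF2 candidate and a backward-Euler step serves as a local error estimate for an elementary step-size controller. The controller form and constants are generic textbook choices, not the paper's full hp-algorithm up to order 5.

```python
import math

lam, tol = -1.0, 1e-6                 # y' = lam*y, exact solution exp(lam*t)
t, y, h = 0.0, 1.0, 1e-3

# bootstrap one backward-Euler (BDF1) step to build BDF2 history
y_prev, h_prev = y, h
y = y / (1.0 - h * lam)
t += h

while t < 1.0:
    h = min(h, 1.0 - t)                          # land exactly on t = 1
    rho = h / h_prev                             # step ratio for variable-step BDF2
    a = (1.0 + rho) ** 2 / (1.0 + 2.0 * rho)
    b = -rho ** 2 / (1.0 + 2.0 * rho)
    beta = (1.0 + rho) / (1.0 + 2.0 * rho)
    y_bdf2 = (a * y + b * y_prev) / (1.0 - beta * h * lam)   # order-2 candidate
    y_be = y / (1.0 - h * lam)                               # order-1 comparison
    err = abs(y_bdf2 - y_be)                     # local error estimate
    if err <= tol:                               # accept the step
        t += h
        y_prev, h_prev, y = y, h, y_bdf2
    # elementary controller; the bootstrap step's error is not controlled here
    h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0)))

print(abs(y - math.exp(-1.0)))   # well within the prescribed tolerance
```

The paper's p-adaptivity (order selection by a stability test across orders 1-5) sits on top of exactly this kind of h-loop.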

  4. An integration time adaptive control method for atmospheric composition detection of occultation

    NASA Astrophysics Data System (ADS)

    Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin

    2018-01-01

    When the sun is used as the light source for atmospheric composition detection, the sun must be imaged for accurate identification and stable tracking. Over the roughly 180 seconds of an occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and can change by a factor of up to 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. Using the distribution of gray values in the image as the reference variable, together with the concept of speed-integral PID control, the method solves the adaptive integration time control problem of high-frequency imaging, and automatic control of the integration time over a large dynamic range during the occultation can be achieved.
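
The control idea can be sketched with a toy simulation: a log-domain PI controller drives the mean gray level of a simulated sensor toward a set point while the illumination falls by roughly three orders of magnitude, as during an occultation. The saturating sensor model, the gains, and the log-domain formulation are all illustrative assumptions, not the paper's "speed integral PID" design.

```python
import math

def mean_gray(lum, t_int, full_scale=255.0):
    return min(full_scale, lum * t_int)        # saturating linear sensor model

target = 128.0                                 # desired mean gray level
kp, ki = 0.6, 0.05                             # illustrative PI gains (log domain)
log_t, integ = math.log(1e-3), 0.0             # exposure state and integral term
lum = 1.1e6                                    # initial illumination (arb. units)
for step in range(600):
    t_int = math.exp(log_t)
    g = mean_gray(lum, t_int)
    e = math.log(target / max(g, 1.0))         # log-domain gray-level error
    integ += e
    log_t += kp * e + ki * integ               # PI update of the exposure
    if step < 400:
        lum *= 0.983                           # ~1000x illumination drop over 400 frames

t_int = math.exp(log_t)
g = mean_gray(lum, t_int)
print(g)   # settles near the 128 set point once the illumination stabilizes
```

Working in the logarithm of the exposure makes the loop gain independent of the (hugely varying) scene brightness, which is the crux of handling the 1100x dynamic range.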

  5. Development and application of a local linearization algorithm for the integration of quaternion rate equations in real-time flight simulation problems

    NASA Technical Reports Server (NTRS)

    Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.

    1973-01-01

    High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
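
One way to realize the local-linearization idea: over each step the angular rate is frozen, and the quaternion kinematics q' = (1/2)Ω(ω)q is propagated by the exact matrix exponential, which has a closed form because Ω² = -|ω|²I. This is a generic sketch (scalar-first quaternion convention assumed), not necessarily the exact algorithm of the report.

```python
import math

def omega_matrix(w):
    """4x4 kinematics matrix: q' = 0.5 * Omega(w) q, scalar-first quaternion."""
    wx, wy, wz = w
    return [[0.0, -wx, -wy, -wz],
            [wx,  0.0,  wz, -wy],
            [wy, -wz,  0.0,  wx],
            [wz,  wy, -wx,  0.0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def local_linearization_step(q, w, h):
    """Exact propagation over one step assuming w is constant on the step:
    q_{n+1} = exp(0.5*h*Omega) q_n = [cos(a) I + (sin(a)/|w|) Omega] q_n,
    with a = 0.5*h*|w|  (uses Omega^2 = -|w|^2 I)."""
    n = math.sqrt(sum(c * c for c in w))
    if n == 0.0:
        return list(q)
    a = 0.5 * h * n
    Oq = mat_vec(omega_matrix(w), q)
    return [math.cos(a) * q[i] + math.sin(a) / n * Oq[i] for i in range(4)]

# High angular rate: the exact-exponential step keeps |q| = 1 to round-off,
# where a classical explicit step would need a tiny h to stay accurate.
q = [1.0, 0.0, 0.0, 0.0]
w = [10.0, -4.0, 3.0]                      # rad/s, constant over each step
for _ in range(10000):
    q = local_linearization_step(q, w, 0.01)
norm = math.sqrt(sum(c * c for c in q))
print(norm)  # stays at 1 to round-off
```

Because each step is an exact rotation, the scheme is unconditionally stable for constant rates, the property the report highlights over classical second-order integration.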

  6. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate that the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
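
The mechanism can be sketched on a scalar relaxation problem: the stiff local (diagonal) term is taken implicitly while the slowly varying source is frozen explicitly, so the update is a closed-form division with no nonlinear iteration. The test problem is an illustrative stand-in for the flow equations.

```python
import math

# Point-implicit update sketch for u' = -k*(u - s(t)):
#   (u_new - u_old)/h = -k*(u_new - s(t_old))
#   => u_new = (u_old + h*k*s(t_old)) / (1 + h*k)
k = 1.0e4                       # stiff local relaxation rate
s = lambda t: math.sin(t)       # slow transient forcing
h, t, u = 0.5, 0.0, 0.0         # h*k = 5000: far beyond any explicit stability limit
for _ in range(40):
    u = (u + h * k * s(t)) / (1.0 + h * k)
    t += h
print(u, math.sin(t - h))       # u tracks the slow quasi-steady state s(t)
```

An explicit method would need h < 2/k = 2e-4 here; the point-implicit step remains stable and accurate for the slow transient at h = 0.5.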

  7. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
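
The implicit-integrator-plus-Newton combination can be sketched on a stiff scalar ODE with the implicit midpoint rule (a member of the trapezoidal family the paper uses as its baseline); the paper itself goes to higher-order implicit Runge-Kutta stages, but the per-step structure is the same: form the stage residual, solve it with Newton.

```python
# Implicit midpoint rule with a Newton solve per step, on the stiff
# scalar model problem u' = f(u) = -50*(u - 1).
def f(u):  return -50.0 * (u - 1.0)
def df(u): return -50.0

def midpoint_step(u, h, newton_iters=8):
    x = u                                  # initial Newton guess
    for _ in range(newton_iters):
        g  = x - u - h * f(0.5 * (u + x))  # residual of the midpoint equation
        dg = 1.0 - 0.5 * h * df(0.5 * (u + x))
        x -= g / dg                        # Newton update
    return x

h, u = 0.1, 0.0                            # h*|lambda| = 5: explicit Euler diverges
for _ in range(200):
    u = midpoint_step(u, h)
print(u)   # relaxes stably to the fixed point u = 1
```

For this linear problem Newton converges in one iteration; for the force calculations in dislocation dynamics, each Newton iteration is expensive, which is why the paper also studies accelerated fixed-point alternatives.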

  8. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method to combine advantages of short and long integration times exploiting an imaging fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows the proposed method could effectively remove the motion artifacts while preserving high SNR comparable to the depth images acquired with a long integration time.

  9. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

    We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at symmetry, symplecticity and approximation of invariants of the proposed methods. This allows integration over long times with reasonable accuracy. Computational efficiency is also our aim. Therefore, we make numerical computations in order to compare the methods considered and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.
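
A Lawson scheme with norm projection can be illustrated on a single-mode toy model of the cubic NLS, i u' = lam*u + |u|²u, whose exact solution has constant modulus: the linear part is removed by an integrating factor, the remainder is stepped explicitly, and the projection restores the invariant. This toy is an assumption for illustration; the paper treats the full PDE and second-order schemes.

```python
import cmath

# Lawson exponential Euler with norm projection for i u' = lam*u + |u|^2 u.
# With w = exp(i*lam*t)*u the equation becomes w' = -i*|w|^2 w, so one
# Lawson-Euler step reads u_{n+1} = exp(-i*lam*h) * (u_n - i*h*|u_n|^2 u_n),
# followed by projection onto the invariant sphere |u| = |u_0|.
lam, h, u = 2.0, 0.01, 1.0 + 0.0j
r0 = abs(u)
for _ in range(5000):
    u = cmath.exp(-1j * lam * h) * (u - 1j * h * abs(u) ** 2 * u)  # Lawson Euler
    u *= r0 / abs(u)                                               # norm projection
exact = r0 * cmath.exp(-1j * (lam + r0 ** 2) * 5000 * h)
print(abs(abs(u) - r0), abs(u - exact))
```

Without the projection the modulus drifts at O(h²) per step; with it, the norm is exact and only a slow phase error remains, which is why projected Lawson schemes stay accurate over long times.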

  10. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.

  11. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.

  12. 40 CFR 53.32 - Test procedures for methods for SO2, CO, O3, and NO2.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... continuous period divided by the time period. Integration of the instantaneous concentration may be performed.../fall time differences between the candidate and reference methods. Details of the means of integration... instantaneous concentration over a 24-hour continuous period divided by the time period. This integration may be...

  13. 40 CFR 53.32 - Test procedures for methods for SO2, CO, O3, and NO2.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... continuous period divided by the time period. Integration of the instantaneous concentration may be performed.../fall time differences between the candidate and reference methods. Details of the means of integration... instantaneous concentration over a 24-hour continuous period divided by the time period. This integration may be...

  14. 40 CFR 53.32 - Test procedures for methods for SO2, CO, O3, and NO2.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... continuous period divided by the time period. Integration of the instantaneous concentration may be performed.../fall time differences between the candidate and reference methods. Details of the means of integration... instantaneous concentration over a 24-hour continuous period divided by the time period. This integration may be...

  15. 40 CFR 53.32 - Test procedures for methods for SO2, CO, O3, and NO2.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... continuous period divided by the time period. Integration of the instantaneous concentration may be performed.../fall time differences between the candidate and reference methods. Details of the means of integration... instantaneous concentration over a 24-hour continuous period divided by the time period. This integration may be...

  16. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.

  17. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time for the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.

  18. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of FE numerical substructure. In this way, the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influences of time delay on the displacement response become more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under the large time-step and large time delay.
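
The central difference method compared in this study can be sketched on a single-DOF system; the multi-DOF FE substructure case replaces the scalars with (sparse) matrices but uses the same recurrence. The parameters here are illustrative.

```python
import math

# Central difference method (CDM) for m*x'' + c*x' + k*x = 0:
#   x_{n+1} = 2*x_n - x_{n-1} + dt^2 * a_n,  a_n = (-c*v_n - k*x_n)/m,
# with the velocity lagged as v_n ~ (x_n - x_{n-1})/dt.
m, c, k = 1.0, 0.0, 1.0
dt = 1.0e-3
x0, v0 = 1.0, 0.0
a0 = (-c * v0 - k * x0) / m
x_prev = x0 - dt * v0 + 0.5 * dt * dt * a0   # standard CDM start-up value x_{-1}
x = x0
steps = int(round(2.0 * math.pi / dt))       # one full period of the exact cos(t)
for _ in range(steps):
    v = (x - x_prev) / dt
    a = (-c * v - k * x) / m
    x_prev, x = x, 2.0 * x - x_prev + dt * dt * a
print(x)   # returns close to 1.0 after one period (second-order accurate)
```

CDM is explicit, so with a diagonal damping matrix no system solve is needed per step, which is the source of its advantage cited in the abstract.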

  19. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.

  20. Time integration algorithms for the two-dimensional Euler equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Slack, David C.; Whitaker, D. L.; Walters, Robert W.

    1994-01-01

    Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher order accurate solutions using several mesh sizes; higher order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.

  1. A Gas Dynamics Method Based on The Spectral Deferred Corrections (SDC) Time Integration Technique and The Piecewise Parabolic Method (PPM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samet Y. Kadioglu

    2011-12-01

    We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge averaged quantities which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al in [17]. However, [17] is problematic when it is applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that the fourth order of accuracy in both space and time has been achieved when the flow is smooth. Results also demonstrate the shock capturing ability of the method.
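
The SDC half of the method can be sketched on a scalar ODE: a provisional forward-Euler sweep over spectral sub-nodes is refined by correction sweeps against a high-order quadrature of the previous iterate. This is a generic SDC sketch (Chebyshev-Lobatto nodes, explicit sweeps), not the PPM-coupled gas dynamics solver; node choice and sweep counts are assumptions.

```python
import numpy as np

def sdc_step(f, u0, t0, h, M=3, K=5):
    """One spectral deferred correction step for u' = f(t, u) on [t0, t0+h],
    using M+1 Chebyshev-Lobatto nodes and K explicit-Euler correction sweeps."""
    tau = t0 + h * (1.0 - np.cos(np.pi * np.arange(M + 1) / M)) / 2.0
    # integration matrix: S[m, j] = integral of Lagrange basis l_j over [tau_m, tau_{m+1}]
    S = np.zeros((M, M + 1))
    for j in range(M + 1):
        c = np.poly([tau[i] for i in range(M + 1) if i != j])
        c = c / np.polyval(c, tau[j])
        C = np.polyint(c)
        for m in range(M):
            S[m, j] = np.polyval(C, tau[m + 1]) - np.polyval(C, tau[m])
    u = np.full(M + 1, float(u0))
    for m in range(M):                       # provisional forward-Euler sweep
        u[m + 1] = u[m] + (tau[m + 1] - tau[m]) * f(tau[m], u[m])
    for _ in range(K):                       # correction sweeps
        F = np.array([f(tau[m], u[m]) for m in range(M + 1)])
        unew = u.copy()
        for m in range(M):
            dtm = tau[m + 1] - tau[m]
            unew[m + 1] = unew[m] + dtm * (f(tau[m], unew[m]) - F[m]) + S[m] @ F
        u = unew
    return u[-1]

u, t, h = 1.0, 0.0, 0.05
for _ in range(20):
    u = sdc_step(lambda t_, y: -y, u, t, h)
    t += h
print(abs(u - np.exp(-1.0)))   # high-order accurate despite only Euler sweeps
```

Each correction sweep raises the formal order by one (up to the quadrature order), which is how the paper reaches fourth order in time from low-order substeps.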

  2. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGES

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
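
The structure-preserving behaviour of implicit midpoint (IM) can be illustrated on a two-dimensional divergence-free field, here the pendulum phase flow (a toy stand-in for a field-line Hamiltonian, with uniform steps rather than the paper's non-uniform stepping): IM keeps the invariant nearly constant over long times where explicit Euler drifts off the invariant tori.

```python
import math

# Implicit midpoint for the divergence-free planar field
# (x', y') = (y, -sin x), with invariant H = y^2/2 - cos x.
def f(x, y):
    return y, -math.sin(x)

def midpoint_step(x, y, h, iters=10):
    xn, yn = x, y                       # fixed-point iteration for the implicit stage
    for _ in range(iters):
        fx, fy = f(0.5 * (x + xn), 0.5 * (y + yn))
        xn, yn = x + h * fx, y + h * fy
    return xn, yn

def H(x, y):
    return 0.5 * y * y - math.cos(x)

h, n = 0.05, 2000
x, y = 1.0, 0.0                         # implicit midpoint trajectory
xe, ye = 1.0, 0.0                       # explicit Euler trajectory, for contrast
for _ in range(n):
    x, y = midpoint_step(x, y, h)
    fx, fy = f(xe, ye)
    xe, ye = xe + h * fx, ye + h * fy
H0 = H(1.0, 0.0)
print(abs(H(x, y) - H0), abs(H(xe, ye) - H0))  # IM error stays small; Euler drifts
```

The bounded invariant error of IM is the discrete analogue of the KAM-tori preservation discussed in the paper; breaking the field's reversibility removes this guarantee.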

  3. A high-order boundary integral method for surface diffusions on elastically stressed axisymmetric rods.

    PubMed

    Li, Xiaofan; Nie, Qing

    2009-07-01

    Many applications in materials involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusions. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on explicit representation of the mean curvature, is used to reduce the stability constraint on the time-step. To apply this method to a periodic (in axial direction) and axi-symmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of the symmetry and the universal cone angle of the pinching is found to be consistent with the previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.

  4. Integrals for IBS and beam cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burov, A.; /Fermilab

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
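
One standard fast route to dispersion integrals of this type is the FFT-based Hilbert transform, which replaces direct quadrature of the principal-value integral with an O(N log N) spectral multiply. This is a sketch of the general idea; the report's own fast method may differ in detail.

```python
import numpy as np

# Discrete Hilbert transform of a periodic signal via the FFT:
# multiply the spectrum by -i*sgn(omega) and transform back.
def hilbert_transform(x):
    N = len(x)
    X = np.fft.fft(x)
    mult = -1j * np.sign(np.fft.fftfreq(N))  # -i*sgn(omega); the DC bin maps to 0
    return np.real(np.fft.ifft(X * mult))

# Check against the analytic pair H[cos(3t)] = sin(3t).
t = 2.0 * np.pi * np.arange(1024) / 1024.0
err = np.max(np.abs(hilbert_transform(np.cos(3 * t)) - np.sin(3 * t)))
print(err)   # near machine precision for a band-limited periodic signal
```

Performing this transform once per simulation step is far cheaper than re-evaluating the principal-value integral pointwise, which is the CPU burden the abstract refers to.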

  6. First-principles X-ray absorption dose calculation for time-dependent mass and optical density.

    PubMed

    Berejnov, Viatcheslav; Rubinstein, Boris; Melo, Lis G A; Hitchcock, Adam P

    2018-05-01

    A dose integral of time-dependent X-ray absorption under conditions of variable photon energy and changing sample mass is derived from first principles starting with the Beer-Lambert (BL) absorption model. For a given photon energy the BL dose integral D(e, t) reduces to the product of an effective time integral T(t) and a dose rate R(e). Two approximations of the time-dependent optical density, i.e. exponential A(t) = c + aexp(-bt) for first-order kinetics and hyperbolic A(t) = c + a/(b + t) for second-order kinetics, were considered for BL dose evaluation. For both models three methods of evaluating the effective time integral are considered: analytical integration, approximation by a function, and calculation of the asymptotic behaviour at large times. Data for poly(methyl methacrylate) and perfluorosulfonic acid polymers measured by scanning transmission soft X-ray microscopy were used to test the BL dose calculation. It was found that a previous method to calculate time-dependent dose underestimates the dose in mass loss situations, depending on the applied exposure time. All these methods here show that the BL dose is proportional to the exposure time D(e, t) ≃ K(e)t.
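
One plausible reading of the factorization D(e, t) = R(e)·T(t) for the first-order-kinetics model is that T(t) is the time integral of the optical density A(t) = c + a·exp(-b·t); under that assumption the closed form and its large-t linear growth can be checked numerically. Symbols follow the abstract; the numerical values and the exact definition of T are illustrative assumptions.

```python
import math

# Effective time integral for A(t) = c + a*exp(-b*t), assumed to be
#   T(t) = \int_0^t A(s) ds = c*t + (a/b)*(1 - exp(-b*t)),
# so the dose D(e,t) = R(e)*T(t) grows linearly at large t with slope c*R(e).
a, b, c = 0.8, 2.0, 0.3
t_end = 3.0

def A(s):
    return c + a * math.exp(-b * s)

T_analytic = c * t_end + (a / b) * (1.0 - math.exp(-b * t_end))

# cross-check the closed form by trapezoidal quadrature
n = 100000
ds = t_end / n
T_numeric = sum(0.5 * (A(i * ds) + A((i + 1) * ds)) * ds for i in range(n))
print(T_analytic, T_numeric)   # the two agree closely
```

The asymptotic linearity of T(t) corresponds to the abstract's observation that D(e, t) ≃ K(e)·t at large exposure times.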

  7. An inexpensive technique for the time resolved laser induced plasma spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Rizwan, E-mail: rizwan.ahmed@ncp.edu.pk; Ahmed, Nasar; Iqbal, J.

    We present an efficient and inexpensive method for calculating the time resolved emission spectrum from the time integrated spectrum by monitoring the time evolution of neutral and singly ionized species in the laser produced plasma. To validate our assertion of extracting time resolved information from the time integrated spectrum, the time evolution data of the Cu II line at 481.29 nm and the molecular bands of AlO in the wavelength region (450–550 nm) have been studied. The plasma parameters were also estimated from the time resolved and time integrated spectra. A comparison of the results clearly reveals that the time resolved information about the plasma parameters can be extracted from the spectra registered with a time integrated spectrograph. Our proposed method will make laser induced plasma spectroscopy a robust and low cost technique which is attractive for industry and environmental monitoring.

  8. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
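
The core of the integral identification idea can be sketched on a first-order model: integrating the model equation turns parameter identification into a linear least-squares problem on measured integrals, with no gradients and no initial parameter guess. This single-pass sketch is a simplification; the paper iterates the idea on a second-order respiratory mechanics model, and all signals here are simulated.

```python
import numpy as np

# Integral identification for y' = -a*y + b*u: integrating from 0 to t gives
#   y(t) - y(0) = -a * int(y) + b * int(u),
# which is linear in (a, b).
a_true, b_true = 2.0, 1.0
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
u = np.ones_like(t)                       # step input
y = np.empty_like(t)
y[0] = 0.0
for i in range(len(t) - 1):               # fine-step Euler "measurement" data
    y[i + 1] = y[i] + dt * (-a_true * y[i] + b_true * u[i])

def cumtrapz(v):                          # cumulative trapezoidal integral
    out = np.zeros_like(v)
    out[1:] = np.cumsum(0.5 * (v[1:] + v[:-1])) * dt
    return out

Phi = np.column_stack([-cumtrapz(y), cumtrapz(u)])
rhs = y - y[0]
a_hat, b_hat = np.linalg.lstsq(Phi, rhs, rcond=None)[0]
print(a_hat, b_hat)   # close to the true (2.0, 1.0)
```

Because integration smooths measurement noise instead of amplifying it (as differentiation or gradient descent would), this formulation underlies the robustness the study reports.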

  9. Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Constantinescu, Emil M.

    2016-06-23

    Here, this paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
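The stiff/nonstiff splitting can be sketched with a first-order IMEX (implicit-explicit) Euler step on a toy scalar model; this illustrates the general idea only, not the paper's additive Runge-Kutta scheme, and the fast "acoustic-like" rate and slow forcing below are assumptions for demonstration:

```python
import math

def imex_euler(y0, t_end, dt, lam_fast=1000.0):
    """y' = -lam_fast*(y - g(t)) + g'(t): stiff relaxation toward a slow
    signal g(t) = sin(t). The stiff linear term is treated implicitly,
    the slow terms explicitly, so dt is limited only by the slow scale."""
    y, t = y0, 0.0
    n = int(round(t_end / dt))
    for _ in range(n):
        g, dg = math.sin(t), math.cos(t)
        # implicit in -lam_fast*y, explicit in the remaining slow terms
        y = (y + dt * (lam_fast * g + dg)) / (1.0 + dt * lam_fast)
        t += dt
    return y

# dt = 0.1 is far above the explicit stability limit 2/lam_fast = 0.002,
# yet the IMEX step stays stable and tracks y close to sin(t)
y = imex_euler(y0=0.0, t_end=10.0, dt=0.1)
```

    The step size is set by the slow (advective-like) scale alone, which is exactly the benefit claimed for the partitioned semi-implicit scheme.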

  10. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
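For a linear time-invariant system x' = Ax, the state-transition-matrix idea reduces time stepping to one matrix multiply per step, since x(t+dt) = exp(A*dt) x(t). A minimal sketch follows, where a plain truncated-Taylor matrix exponential stands in for the variable-order machinery of the paper:

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small ||A||)."""
    Phi = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        Phi = Phi + term
    return Phi

# Undamped oscillator x'' = -omega^2 x written as a first-order system
omega, dt, steps = 2.0, 0.05, 200
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])
Phi = expm_taylor(A * dt)            # state transition matrix for one step

x = np.array([1.0, 0.0])             # x(0) = 1, x'(0) = 0
for _ in range(steps):
    x = Phi @ x                       # propagation is exact up to expm accuracy

t_end = dt * steps
exact = np.array([np.cos(omega * t_end), -omega * np.sin(omega * t_end)])
```

    For a truly linear system the step is exact for any dt, which is why the abstract reports nearly exact solutions over a wide range of step sizes when the time variation is polynomial.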

  11. Towards the Real-Time Evaluation of Collaborative Activities: Integration of an Automatic Rater of Collaboration Quality in the Classroom from the Teacher's Perspective

    ERIC Educational Resources Information Center

    Chounta, Irene-Angelica; Avouris, Nikolaos

    2016-01-01

    This paper presents the integration of a real time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real time scenario of use. We argue that automatic and semi-automatic methods which…

  12. High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system

    NASA Astrophysics Data System (ADS)

    Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng

    2017-09-01

    Several different integration times are typically set for a wide-dynamic-range linear infrared radiometry system with continuously variable integration time; traditional calibration-based non-uniformity correction (NUC) must therefore be conducted for each integration time in turn, and several calibration sources are required, which makes the calibration and NUC process time-consuming. In this paper, the difference between the NUC coefficients at different integration times is discussed, and a novel NUC method called high-efficiency NUC, which builds on traditional calibration-based non-uniformity correction, is proposed. It obtains the correction coefficients for all integration times over the whole linear dynamic range by recording only three images of a standard blackbody. Firstly, the mathematical procedure of the proposed non-uniformity correction method is validated, and then its performance is demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of the Normalized Root Mean Square (NRMS) error is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C show that this method has higher accuracy than traditional calibration-based NUC, while at other integration times and temperatures there is still a good correction effect. Moreover, the method greatly reduces the number of integration-time and temperature sampling points, offers good real-time performance, and is suitable for field measurement.
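The calibration-based NUC that the method builds on can be illustrated by the classic two-point gain/offset correction; this is a generic sketch of the baseline technique, not the proposed three-image, all-integration-times algorithm, and the simulated fixed-pattern noise model is an assumption:

```python
import numpy as np

def two_point_nuc(frame_low, frame_high, target_low, target_high):
    """Per-pixel gain/offset from two uniform blackbody frames so that each
    pixel maps frame_low -> target_low and frame_high -> target_high."""
    gain = (target_high - target_low) / (frame_high - frame_low)
    offset = target_low - gain * frame_low
    return gain, offset

rng = np.random.default_rng(1)
shape = (64, 64)
# Simulated fixed-pattern non-uniformity: per-pixel linear response a*x + b
a = 1.0 + 0.1 * rng.standard_normal(shape)
b = 20.0 * rng.standard_normal(shape)

scene_low, scene_high = 1000.0, 3000.0        # uniform blackbody radiance levels
frame_low = a * scene_low + b
frame_high = a * scene_high + b

gain, offset = two_point_nuc(frame_low, frame_high,
                             frame_low.mean(), frame_high.mean())

scene = 2000.0                                 # an unseen uniform scene
corrected = gain * (a * scene + b) + offset    # fixed pattern removed exactly
```

    For a detector with a linear response the correction is exact; the contribution of the paper is avoiding a separate pair of calibration frames for every integration time.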

  13. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.

  14. The Cagniard Method in Complex Time Revisited

    DTIC Science & Technology

    1991-04-04

    make the p-integral take the form of a forward Laplace transform, so that the cascade of the two integrals can be identified as a forward and inverse transform, thereby making the actual integration unnecessary. Typically, the method is applied to an integral that represents one body wave plus other

  15. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time over the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
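The Lagrange-multiplier result can be reproduced for the generic problem of minimizing a sum of shot-noise-like variances sum(a_i/t_i) under a fixed total time sum(t_i) = T, which gives t_i proportional to sqrt(a_i). This is a stylized sketch with illustrative weights; the paper's actual cost function for the Stokes estimator is more specific:

```python
import math

def optimal_times(a, T):
    """Minimize sum(a_i / t_i) subject to sum(t_i) = T.
    The stationarity condition -a_i/t_i**2 + mu = 0 gives
    t_i = T * sqrt(a_i) / sum(sqrt(a_j))."""
    s = sum(math.sqrt(ai) for ai in a)
    return [T * math.sqrt(ai) / s for ai in a]

def total_variance(a, t):
    return sum(ai / ti for ai, ti in zip(a, t))

# Four intensity measurements with unequal noise weights (illustrative numbers)
a = [4.0, 1.0, 1.0, 0.25]
T = 1.0
t_opt = optimal_times(a, T)
v_opt = total_variance(a, t_opt)            # equals (sum of sqrt(a_i))**2 / T
v_uniform = total_variance(a, [T / 4] * 4)  # equal-split baseline
```

    For these weights the optimal split reduces the total variance from 25 to 20.25, i.e. by 19%, without any extra measurement time; the size of the gain depends on how unequal the weights are.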

  16. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finn, John M., E-mail: finn@lanl.gov

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
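A minimal sketch of the implicit midpoint (IM) step for field-line-like tracing, solved by fixed-point iteration; the planar rotational test field B = (-y, x) is an assumption chosen so that IM's exact preservation of circles is easy to verify, and it stands in for the 3-D solenoidal fields of the paper:

```python
import math

def B(z):
    """Divergence-free planar test field: rigid rotation."""
    x, y = z
    return (-y, x)

def implicit_midpoint_step(z, h, iters=50):
    """Solve z_new = z + h * B((z + z_new)/2) by fixed-point iteration;
    the iteration contracts for small h since B is Lipschitz."""
    z_new = z
    for _ in range(iters):
        mid = ((z[0] + z_new[0]) / 2.0, (z[1] + z_new[1]) / 2.0)
        bx, by = B(mid)
        z_new = (z[0] + h * bx, z[1] + h * by)
    return z_new

z = (1.0, 0.0)
h = 0.1
for _ in range(1000):
    z = implicit_midpoint_step(z, h)
radius = math.hypot(*z)   # IM preserves circles of this linear field exactly
```

    For a linear divergence-free field the IM step is a Cayley transform and conserves the radius to round-off, a small-scale analogue of the KAM-torus preservation discussed in the abstract.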

  17. Improving the numerical integration solution of satellite orbits in the presence of solar radiation pressure using modified back differences

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.

  18. New numerical method for radiation heat transfer in nonhomogeneous participating media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, J.R.; Tan, Zhiqiang

    A new numerical method, which solves the exact integral equations of distance-angular integration form for radiation transfer, is introduced in this paper. By constructing and prestoring the numerical integral formulas of the distance integral for appropriate kernel functions, this method eliminates the time-consuming evaluations of the kernels of the space integrals in the formal computations. In addition, when the number of elements in the system is large, the resulting coefficient matrix is quite sparse. Thus, either considerable time or much storage can be saved. A weakness of the method is discussed, and some remedies are suggested. As illustrations, some one-dimensional and two-dimensional problems in both homogeneous and inhomogeneous emitting, absorbing, and linear anisotropic scattering media are studied. Some results are compared with available data. 13 refs.

  19. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  1. Mixed time integration methods for transient thermal analysis of structures, appendix 5

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    Mixed time integration methods for transient thermal analysis of structures are studied. An efficient solution procedure for predicting the thermal behavior of aerospace vehicle structures was developed. A 2D finite element computer program incorporating these methodologies is being implemented. The performance of these mixed time finite element algorithms can then be evaluated employing the proposed example problem.

  2. Finite time step and spatial grid effects in δf simulation of warm plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.

    2016-01-15

    This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.

  3. Variational symplectic algorithm for guiding center dynamics in the inner magnetosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Jinxing; Pu Zuyin; Xie Lun

    Charged particle dynamics in the magnetosphere spans multiple temporal and spatial scales; therefore, numerical accuracy over a long integration time is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008) and H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in a general magnetic field is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over a long integration time, compared with standard integrators such as the standard and adaptive fourth-order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate the particles' orbits for arbitrarily long simulation times with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method gives the accurate single-particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.
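The long-time advantage of a structure-preserving integrator can already be seen on a pendulum; the sketch below uses a symplectic leapfrog (Stormer-Verlet) scheme as a generic stand-in for the VSI, not the guiding-center integrator itself, and checks that the energy error stays bounded over many periods:

```python
import math

def energy(q, p):
    """Pendulum Hamiltonian H = p^2/2 - cos(q)."""
    return 0.5 * p * p - math.cos(q)

def leapfrog(q, p, dt, steps):
    """Stormer-Verlet (symplectic) steps for q' = p, p' = -sin(q)."""
    for _ in range(steps):
        p -= 0.5 * dt * math.sin(q)
        q += dt * p
        p -= 0.5 * dt * math.sin(q)
    return q, p

q0, p0, dt = 1.0, 0.0, 0.1
e0 = energy(q0, p0)
q, p = q0, p0
max_err = 0.0
for _ in range(1000):
    q, p = leapfrog(q, p, dt, 100)   # 100,000 steps in total
    max_err = max(max_err, abs(energy(q, p) - e0))
```

    The energy error oscillates but does not drift, which mirrors the good long-time conservation reported for the VSI, whereas non-symplectic schemes such as RK4 accumulate a secular energy error.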

  4. Optimizing some 3-stage W-methods for the time integration of PDEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)) was carried out, and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.

  5. Stabilization of time domain acoustic boundary element method for the exterior problem avoiding the nonuniqueness.

    PubMed

    Jang, Hae-Won; Ih, Jeong-Guon

    2013-03-01

    The time domain boundary element method (TBEM) for calculating the exterior sound field using the Kirchhoff integral suffers from non-uniqueness and exponential divergence. In this work, a method to stabilize the TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low-order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers exponential divergence due to the remaining unstable high-order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example of a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.

  6. A Fourier spectral-discontinuous Galerkin method for time-dependent 3-D Schrödinger-Poisson equations with discontinuous potentials

    NASA Astrophysics Data System (ADS)

    Lu, Tiao; Cai, Wei

    2008-10-01

    In this paper, we propose a high order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and superlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank-Nicolson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.

  7. Stability of mixed time integration schemes for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Lin, J. I.

    1982-01-01

    A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.

  8. Time-dependent spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cole, Justin T.; Musslimani, Ziad H.

    2017-11-01

    The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), the integrable PT-symmetric nonlocal NLS, and the viscous Burgers' equations, each of which is a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
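The fixed-point-with-renormalization idea can be sketched in its original time-independent setting: solving u'' - u + u^3 = 0 for the NLS soliton u(x) = sqrt(2) sech(x) in Fourier space, with a renormalization factor that keeps the iteration from collapsing to zero or blowing up. This is a Petviashvili-style sketch of the 2005 idea; the grid, initial guess and iteration count are illustrative assumptions:

```python
import numpy as np

# Grid and spectral operators
N, L = 256, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
denom = 1.0 + k**2                      # from (1 - d^2/dx^2) u = u^3 in Fourier space

u = np.exp(-x**2)                       # rough initial guess
for _ in range(200):
    rhs_hat = np.fft.fft(u**3)
    u_hat = np.fft.fft(u)
    # renormalization factor: equals 1 exactly at the true solution
    s = (np.sum(denom * np.abs(u_hat)**2)
         / np.sum(np.conj(u_hat) * rhs_hat)).real
    u_hat = s**1.5 * rhs_hat / denom    # exponent 3/2 for a cubic nonlinearity
    u = np.real(np.fft.ifft(u_hat))

exact = np.sqrt(2.0) / np.cosh(x)
err = np.max(np.abs(u - exact))
```

    Without the factor s the naive fixed-point map drifts to the zero solution; rescaling by a power of s pins the amplitude, which is the same role the time-dependent renormalization factor plays in the paper's evolution setting.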

  9. Indoor integrated navigation and synchronous data acquisition method for Android smartphone

    NASA Astrophysics Data System (ADS)

    Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng

    2015-08-01

    Smartphones are widely used at present. Most smartphones have cameras and various sensors, such as a gyroscope, accelerometer and magnetometer. Indoor navigation based on the smartphone is very important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer of the smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero velocity update and integrated navigation. Synchronous data acquisition from the sensors (gyroscope, accelerometer and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data acquisition method is introduced, which uses the camera class of Android to record images and timestamps from the smartphone camera. Two kinds of sensor data acquisition methods are introduced and compared. The first method records sensor data and time with the SensorManager of Android. The second method implements the open, close, data receiving and saving functions in C, and calls the sensor functions from Java through the JNI interface. A data acquisition software package was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools) and NDK (Native Development Kit). The software can record camera data, sensor data and time simultaneously. Data acquisition experiments have been done with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first method of sensor data acquisition is convenient but sometimes loses sensor data, while the second method is much better in real-time performance and loses much less data. A checkerboard image is recorded, and the corner points of the checkerboard are detected with the Harris method.
The sensor data of the gyroscope, accelerometer and magnetometer have been recorded for about 30 minutes, and the bias stability and noise characteristics of the sensors have been analyzed. Besides indoor integrated navigation, the integrated navigation and synchronous data acquisition method can be applied to outdoor navigation.

  10. Discretization analysis of bifurcation based nonlinear amplifiers

    NASA Astrophysics Data System (ADS)

    Feldkord, Sven; Reit, Marco; Mathis, Wolfgang

    2017-09-01

    Recently, for modeling biological amplification processes, nonlinear amplifiers based on the supercritical Andronov-Hopf bifurcation have been widely analyzed analytically. For technical realizations, digital systems have become the most relevant systems in signal processing applications. The underlying continuous-time systems are transferred to the discrete-time domain using numerical integration methods. Within this contribution, effects on the qualitative behavior of the Andronov-Hopf bifurcation based systems concerning numerical integration methods are analyzed. It is shown exemplarily that explicit Runge-Kutta methods transform the truncated normalform equation of the Andronov-Hopf bifurcation into the normalform equation of the Neimark-Sacker bifurcation. Dependent on the order of the integration method, higher order terms are added during this transformation. A rescaled normalform equation of the Neimark-Sacker bifurcation is introduced that allows a parametric design of a discrete-time system which corresponds to the rescaled Andronov-Hopf system. This system approximates the characteristics of the rescaled Hopf-type amplifier for a large range of parameters. The natural frequency and the peak amplitude are preserved for every set of parameters. The Neimark-Sacker bifurcation based systems avoid the large computational effort that would be caused by applying higher order integration methods to the continuous-time normalform equations.
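The effect can be reproduced with the simplest explicit scheme: applying forward Euler to the truncated Andronov-Hopf normal form z' = (mu + i)z - |z|^2 z yields a map whose orbits settle onto an invariant circle, the signature of a Neimark-Sacker bifurcation. The parameters below are illustrative; the paper treats general explicit Runge-Kutta methods:

```python
def euler_hopf_map(z, mu, h):
    """One forward-Euler step of z' = (mu + 1j)*z - |z|**2 * z."""
    return z + h * ((mu + 1j) * z - abs(z)**2 * z)

mu, h = 0.2, 0.01
z = 0.05 + 0.0j            # small perturbation of the unstable origin
for _ in range(20000):
    z = euler_hopf_map(z, mu, h)
amp = abs(z)               # settles near sqrt(mu), shifted by O(h)
```

    The limit amplitude of the map differs from the continuous-time value sqrt(mu) by a step-size-dependent term (here r^2 = mu + h/2 up to higher order), which is one concrete form of the discretization-induced higher-order terms the abstract describes.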

  11. Virtual-pulse time integral methodology: A new explicit approach for computational dynamics - Theoretical developments for general nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Different from existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives for development and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) on nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.

  12. Long-time predictions in nonlinear dynamics

    NASA Technical Reports Server (NTRS)

    Szebehely, V.

    1980-01-01

    It is known that nonintegrable dynamical systems do not allow precise predictions concerning their behavior for arbitrarily long times. The available series solutions are not uniformly convergent, according to Poincare's theorem, and numerical integrations lose their meaningfulness after arbitrarily long times. Two approaches are the use of existing global integrals and statistical methods. This paper presents a generalized method along the first approach. As examples, long-time predictions in the classical gravitational satellite and planetary problems are treated.

  13. Performance Benchmark for a Prismatic Flow Solver

    DTIC Science & Technology

    2007-03-26

    The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit method [1] is used for time integration to reduce the computational time. A one-equation turbulence model by Goldberg and… numerical flux computations. [Reference: Sharov, D. and Nakahashi, K., “Reordering of Hybrid Unstructured Grids for Lower-Upper Symmetric Gauss-Seidel Computations,” AIAA Journal, Vol. 36]

  14. A constitutive material model for nonlinear finite element structural analysis using an iterative matrix approach

    NASA Technical Reports Server (NTRS)

    Koenig, Herbert A.; Chan, Kwai S.; Cassenti, Brice N.; Weber, Richard

    1988-01-01

    A unified numerical method for the integration of stiff time-dependent constitutive equations is presented. The solution process is directly applied to a constitutive model proposed by Bodner. The theory addresses time-dependent inelastic behavior coupled with both isotropic hardening and directional hardening behaviors. Predicted stress-strain responses from this model are compared to experimental data from cyclic tests on uniaxial specimens. An algorithm is developed for the efficient integration of the Bodner flow equation. A comparison is made with the Euler integration method. An analysis of computational time is presented for the three algorithms.
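The stiffness issue that motivates such integration algorithms can be seen on a scalar model; this is a generic stiff-ODE sketch, not the Bodner flow law itself. Explicit Euler diverges once the product of the rate constant and the step size exceeds its stability limit, while implicit (backward) Euler remains stable at the same step size:

```python
def explicit_euler(lam, dt, steps, y0=1.0):
    """Forward Euler for y' = -lam*y; unstable when lam*dt > 2."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)
    return y

def implicit_euler(lam, dt, steps, y0=1.0):
    """Backward Euler: solve y_new = y + dt*(-lam*y_new) each step;
    unconditionally stable for this problem."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + dt * lam)
    return y

lam, dt, steps = 1000.0, 0.01, 100    # lam*dt = 10, far beyond the limit 2
y_exp = explicit_euler(lam, dt, steps)
y_imp = implicit_euler(lam, dt, steps)
```

    The implicit solution decays toward the true solution (which is essentially zero at this time), while the explicit iterate grows without bound; this is why stiff constitutive equations call for implicit or specially constructed integrators.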

  15. Classification of Time Series Gene Expression in Clinical Studies via Integration of Biological Network

    PubMed Central

    Qian, Liwei; Zheng, Haoran; Zhou, Hong; Qin, Ruibin; Li, Jinlong

    2013-01-01

    The increasing availability of time series expression datasets, although promising, raises a number of new computational challenges. Accordingly, the development of suitable classification methods to make reliable and sound predictions is becoming a pressing issue. We propose, here, a new method to classify time series gene expression via integration of biological networks. We evaluated our approach on 2 different datasets and showed that the use of a hidden Markov model/Gaussian mixture models hybrid explores the time-dependence of the expression data, thereby leading to better prediction results. We demonstrated that the biclustering procedure identifies function-related genes as a whole, giving rise to high accordance in prognosis prediction across independent time series datasets. In addition, we showed that integration of biological networks into our method significantly improves prediction performance. Moreover, we compared our approach with several state-of-the-art algorithms and found that our method outperformed previous approaches with regard to various criteria. Finally, our approach achieved better prediction results on early-stage data, implying the potential of our method for practical prediction. PMID:23516469

  16. Performance analysis of different tuning rules for an isothermal CSTR using integrated EPC and SPC

    NASA Astrophysics Data System (ADS)

    Roslan, A. H.; Karim, S. F. Abd; Hamzah, N.

    2018-03-01

    This paper demonstrates the integration of Engineering Process Control (EPC) and Statistical Process Control (SPC) for the control of product concentration in an isothermal CSTR. The objectives of this study are to evaluate the performance of the Ziegler-Nichols (Z-N), Direct Synthesis (DS), and Internal Model Control (IMC) tuning methods and to determine the most effective method for this process. The simulation model was obtained from past literature and reconstructed in SIMULINK (MATLAB) to evaluate the process response. Additionally, the process stability, capability, and normality were analyzed using Process Capability Sixpack reports in Minitab. Based on the results, DS displays the best response, having the smallest rise time, settling time, overshoot, undershoot, Integral Time Absolute Error (ITAE), and Integral Square Error (ISE). Statistical analysis also identifies DS as the best tuning method, as it exhibits the highest process stability and capability.
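The ITAE and ISE figures of merit used above are simple integrals of the control error. As a generic illustration (not tied to the paper's CSTR model; the response and time grid below are synthetic assumptions), these indices can be computed from a sampled step response:

```python
import numpy as np

def trapint(f, t):
    """Trapezoid-rule integral of samples f over grid t."""
    f = np.asarray(f, dtype=float)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

def error_integrals(t, y, setpoint=1.0):
    """ISE, IAE and ITAE performance indices for a step response."""
    e = setpoint - np.asarray(y, dtype=float)
    return trapint(e**2, t), trapint(np.abs(e), t), trapint(t * np.abs(e), t)

# Synthetic first-order step response y(t) = 1 - exp(-t/tau)
t = np.linspace(0.0, 10.0, 2001)
tau = 1.0
y = 1.0 - np.exp(-t / tau)
ise, iae, itae = error_integrals(t, y)   # analytically ~0.5, ~1.0, ~1.0
```

For this response the integrals have closed forms, which makes the sketch easy to check.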

  17. A hybrid method for transient wave propagation in a multilayered solid

    NASA Astrophysics Data System (ADS)

    Tian, Jiayong; Xie, Zhoumin

    2009-08-01

    We present a hybrid method for the evaluation of transient elastic-wave propagation in a multilayered solid, integrating the reverberation matrix method with the theory of generalized rays. Adopting the reverberation matrix formulation, Laplace-Fourier domain solutions of elastic waves in the multilayered solid are expanded into the sum of a series of generalized-ray group integrals. Each generalized-ray group integral containing the Kth power of the reverberation matrix R represents the set of K-times reflections and refractions of source waves arriving at receivers in the multilayered solid, which is computed by fast inverse Laplace transform (FILT) and fast Fourier transform (FFT) algorithms. However, the calculation burden and low precision of the FILT-FFT algorithm limit the application of the reverberation matrix method. In this paper, we expand each of the generalized-ray group integrals into the sum of a series of generalized-ray integrals, each of which is accurately evaluated by the Cagniard-de Hoop method from the theory of generalized rays. The numerical examples demonstrate that the proposed method makes it possible to efficiently calculate the early-time transient response in complex multilayered-solid configurations.

  18. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
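The multi-trajectory idea can be sketched with synthetic running Green-Kubo integrals in place of real pressure-tensor data. The double-exponential fitting step of the paper is omitted here, and the correlation time, noise level, and 40% spread threshold are all arbitrary illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1001)
tau = 1.0   # assumed correlation time of the synthetic ACF

# Independent "trajectories": each running Green-Kubo integral approaches a
# plateau (here equal to tau) but accumulates random-walk noise over time.
true_running = tau * (1.0 - np.exp(-t / tau))
runs = np.array([true_running + 0.05 * np.cumsum(rng.normal(0.0, 0.01, t.size))
                 for _ in range(20)])

mean_run = runs.mean(axis=0)
std_run = runs.std(axis=0, ddof=1)

# Integration cutoff t_cut: first time the trajectory-to-trajectory spread
# exceeds a chosen fraction of the plateau estimate (threshold is arbitrary).
mask = std_run > 0.4 * abs(mean_run[-1])
idx = int(np.argmax(mask)) if mask.any() else t.size - 1
t_cut, eta_est = t[idx], mean_run[idx]
```

Averaging over runs and reporting the spread gives an objective estimate, which is the core of the time decomposition approach.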

  19. Robust rotational-velocity-Verlet integration methods.

    PubMed

    Rozmanov, Dmitri; Kusalik, Peter G

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions, but it is not quaternion specific and can be easily adapted for any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrate performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
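The quaternion kinematics underlying such rotational integrators can be illustrated with a simple sketch (this is not the authors' algorithm): propagate dq/dt = ½ q⊗(0, ω) with a midpoint rule and renormalization, for a constant body-frame angular velocity:

```python
import math
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qdot(q, omega):
    # Quaternion kinematics: dq/dt = 0.5 * q (x) (0, omega)
    return 0.5 * qmul(q, np.array([0.0, *omega]))

q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, 1.0])    # constant body-frame angular velocity
dt, n = 1e-3, 2000                   # integrate to t = 2 s

for _ in range(n):
    q_mid = q + 0.5 * dt * qdot(q, omega)
    q_mid /= np.linalg.norm(q_mid)
    q = q + dt * qdot(q_mid, omega)
    q /= np.linalg.norm(q)           # re-normalize to stay on the unit sphere

angle = 2.0 * math.atan2(np.linalg.norm(q[1:]), q[0])  # total rotation angle
```

For |ω| = 1 the accumulated rotation angle after t = 2 s should be close to 2 rad.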

  20. Robust rotational-velocity-Verlet integration methods

    NASA Astrophysics Data System (ADS)

    Rozmanov, Dmitri; Kusalik, Peter G.

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions, but it is not quaternion specific and can be easily adapted for any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrate performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.

  1. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
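A toy illustration of implicit time integration with local step-size control (not the paper's multiphysics solver): implicit Euler on a stiff scalar test equation, with the step adapted by step-doubling error estimates. The test problem and tolerance are illustrative assumptions:

```python
import math

def implicit_euler_step(y, t, dt):
    # Exact linear solve for the stiff test problem
    # y' = -1000*(y - cos(t)) - sin(t), whose exact solution is y = cos(t).
    t1 = t + dt
    return (y + dt * (1000.0 * math.cos(t1) - math.sin(t1))) / (1.0 + 1000.0 * dt)

def integrate(y0, t_end, tol=1e-4):
    t, y, dt, steps = 0.0, y0, 1e-3, 0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_big = implicit_euler_step(y, t, dt)          # one full step
        y_half = implicit_euler_step(y, t, dt / 2)     # two half steps
        y_small = implicit_euler_step(y_half, t + dt / 2, dt / 2)
        err = abs(y_small - y_big) + 1e-15             # local error estimate
        if err <= tol:                                 # accept the better value
            t, y, steps = t + dt, y_small, steps + 1
        # First-order method: controller exponent 1/2 on the error ratio
        dt *= min(4.0, max(0.1, 0.9 * math.sqrt(tol / err)))
    return y, steps

y, steps = integrate(1.0, 1.0)
```

Because the stiff mode is strongly damped, local errors do not accumulate and the solution tracks cos(t) closely while the step size grows well beyond the explicit stability limit.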

  2. 3D graphics hardware accelerator programming methods for real-time visualization systems

    NASA Astrophysics Data System (ADS)

    Souetov, Andrew E.

    2001-02-01

    The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of such software compels programmers to use different types of CASE systems in the design and development process. The subject under discussion is the integration of such systems into a development process, their effective use, and the combination of these new methods with the need to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.

  3. 3D graphics hardware accelerator programming methods for real-time visualization systems

    NASA Astrophysics Data System (ADS)

    Souetov, Andrew E.

    2000-02-01

    The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of such software compels programmers to use different types of CASE systems in the design and development process. The subject under discussion is the integration of such systems into a development process, their effective use, and the combination of these new methods with the need to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.

  4. Computation of type curves for flow to partially penetrating wells in water-table aquifers

    USGS Publications Warehouse

    Moench, Allen F.

    1993-01-01

    Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
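Numerical inversion of a Laplace-domain solution can be sketched with the classic Gaver-Stehfest algorithm; this is an illustrative choice of inversion scheme, not necessarily the one used in WTAQ1:

```python
import math

def stehfest_weights(N=12):
    """Stehfest coefficients V_k for an even number of terms N."""
    n2 = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n2) + 1):
            s += (j ** n2 * math.factorial(2 * j)
                  / (math.factorial(n2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + n2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s), for smooth f."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
f1 = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

The method only needs evaluations of F(s) at real s, which is exactly the situation described above: the Laplace-space integrand is cheap, so the inversion dominates neither accuracy nor cost.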

  5. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
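The subcycling idea can be sketched on a scalar toy problem: freeze the slow force over a macro step and advance the stiff fast force with many substeps. The rates and step counts below are illustrative assumptions, not the paper's dislocation model:

```python
import math

# Multirate ("subcycling") forward Euler: the stiff fast force is advanced
# with small substeps while the slow force is frozen over each macro step.
lam_fast, lam_slow = -100.0, -1.0

def multirate_step(y, dt, n_sub):
    f_slow = lam_slow * y          # slow force: evaluated once per macro step
    h = dt / n_sub
    for _ in range(n_sub):
        y = y + h * (lam_fast * y + f_slow)
    return y

dt, n_macro = 0.025, 4             # dt*|lam_fast| = 2.5: plain Euler is unstable
y_multi = y_plain = 1.0
for _ in range(n_macro):
    y_multi = multirate_step(y_multi, dt, n_sub=100)
    y_plain = y_plain + dt * (lam_fast + lam_slow) * y_plain  # single big step

exact = math.exp((lam_fast + lam_slow) * dt * n_macro)
```

The subcycled solution stays stable and within a modest factor of the exact decay, while the single-rate explicit step at the same macro step size grows without bound.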

  6. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  7. Image sensor with high dynamic range linear output

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)

    2007-01-01

    Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, thus resulting in multiple integration times. The operation methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion of high speed and high resolution can be implemented.
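The multiple-integration-times idea can be sketched with a toy linear sensor model (the saturation level, exposure times, and combination rule below are illustrative assumptions, not taken from the patent): read each pixel at several exposure times and rescale the longest unsaturated sample back to a linear radiance:

```python
FULL_WELL = 1000.0                  # assumed saturation level (arbitrary units)
T_EXPOSURES = [1.0, 0.1, 0.01]      # integration times, longest first

def read_pixel(radiance, t_int):
    """Ideal linear pixel: signal grows with integration time, then saturates."""
    return min(radiance * t_int, FULL_WELL)

def hdr_estimate(samples, times):
    """Rescale the longest unsaturated exposure back to a linear radiance."""
    for s, t in zip(samples, times):
        if s < 0.98 * FULL_WELL:    # treat near-full-well readings as saturated
            return s / t
    return samples[-1] / times[-1]  # every exposure saturated: best effort

scene = (50.0, 5000.0, 80000.0)     # radiances spanning a wide dynamic range
estimates = [hdr_estimate([read_pixel(r, t) for t in T_EXPOSURES], T_EXPOSURES)
             for r in scene]
```

Because each estimate divides out its own integration time, the combined output stays linear over a range far wider than any single exposure covers.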

  8. Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions

    NASA Technical Reports Server (NTRS)

    Rauch, Kevin P.; Holman, Matthew

    1999-01-01

    We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ~ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison, and Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method of integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method, incorporating time regularization, force-center switching, and an improved kernel function, to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.

  9. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
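The two update rules compared in the paper are easy to state. In the sketch below the voltage is held fixed, so m_inf and tau are constants and exponential Euler is exact; the paper's finding that EE can do worse than Euler concerns time-varying voltage and measurement error. The gating parameters are illustrative:

```python
import math

# Gating-variable ODE dm/dt = (m_inf - m)/tau at a fixed holding voltage,
# so m_inf and tau are constants (values below are illustrative).
m_inf, tau, m0 = 0.8, 2.0, 0.1

def euler_step(m, dt):
    return m + dt * (m_inf - m) / tau

def exp_euler_step(m, dt):
    # Exponential Euler: exact when m_inf and tau are constant over the step
    return m_inf + (m - m_inf) * math.exp(-dt / tau)

dt, t_end = 0.5, 5.0               # deliberately large step: dt/tau = 0.25
m_e = m_ee = m0
for _ in range(int(round(t_end / dt))):
    m_e = euler_step(m_e, dt)
    m_ee = exp_euler_step(m_ee, dt)

exact = m_inf + (m0 - m_inf) * math.exp(-t_end / tau)
```

At this step size the forward Euler result carries a visible discretization error, while the exponential update matches the analytic relaxation to machine precision.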

  10. Fast and reliable symplectic integration for planetary system N-body problems

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.

    2016-06-01

    We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
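The benefit of symplectic integration referred to above can be sketched with plain leapfrog (velocity-Verlet) on a two-body Kepler orbit: the energy error stays bounded over many periods instead of drifting. The initial conditions and step size are illustrative assumptions:

```python
import math

# Leapfrog (velocity-Verlet) for a 2D Kepler orbit with GM = 1.
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

x, y, vx, vy = 1.0, 0.0, 0.0, 1.1    # mildly eccentric orbit (e ~ 0.21)
dt, n = 0.01, 20000                  # roughly 22 orbital periods
e0 = energy(x, y, vx, vy)
max_drift = 0.0
ax, ay = accel(x, y)
for _ in range(n):
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    x += dt * vx; y += dt * vy                 # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    max_drift = max(max_drift, abs(energy(x, y, vx, vy) - e0))
```

The energy error oscillates within a fixed band rather than growing secularly, which is the property that makes symplectic maps attractive for long N-body integrations.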

  11. An annular superposition integral for axisymmetric radiators.

    PubMed

    Kelly, James F; McGough, Robert J

    2007-02-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.

  12. Integrating Behavioral Health in Primary Care Using Lean Workflow Analysis: A Case Study

    PubMed Central

    van Eeghen, Constance; Littenberg, Benjamin; Holman, Melissa D.; Kessler, Rodger

    2016-01-01

    Background: Primary care offices are integrating behavioral health (BH) clinicians into their practices. Implementing such a change is complex, difficult, and time consuming. Lean workflow analysis may be an efficient, effective, and acceptable method for integration. Objective: Observe BH integration into primary care and measure its impact. Design: Prospective, mixed-methods case study in a primary care practice. Measurements: Change in treatment initiation (referrals generating BH visits within the system). Secondary measures: primary care visits resulting in BH referrals, referrals resulting in scheduled appointments, time from referral to scheduled appointment, and time from referral to first visit. Providers and staff were surveyed on the Lean method. Results: Referrals increased from 23 to 37/1000 visits (P<.001). Referrals resulted in more scheduled (60% to 74%, P<.001) and arrived visits (44% to 53%, P=.025). Time from referral to first scheduled visit decreased (hazard ratio (HR) 1.60; 95% confidence interval (CI) 1.37, 1.88; P<.001), as did time to first arrived visit (HR 1.36; 95% CI 1.14, 1.62; P=.001). Surveys and comments were positive. Conclusions: This pilot integration of BH showed significant improvements in treatment initiation and other measures. Strengths of Lean included workflow improvement, system perspective, and project success. Further evaluation is indicated. PMID:27170796

  13. On time discretizations for the simulation of the batch settling-compression process in one dimension.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Mejías, Camilo

    2016-01-01

    The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.
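A linearly implicit (Rosenbrock-type) Euler step replaces the Newton iteration of a fully implicit step with a single linearization per step. A scalar sketch on y' = -y^3, which has a closed-form solution to check against (this is a generic illustration, not the Bürger-Diehl settling model):

```python
import math

def f(y):
    return -y ** 3

def jac(y):
    return -3.0 * y ** 2

def lin_implicit_step(y, dt):
    # Solve (1 - dt*J(y)) * delta = dt * f(y), then update y += delta:
    # one linear solve per step instead of a full Newton iteration.
    return y + dt * f(y) / (1.0 - dt * jac(y))

y, dt, t_end = 1.0, 0.002, 1.0
for _ in range(int(round(t_end / dt))):
    y = lin_implicit_step(y, dt)

exact = 1.0 / math.sqrt(1.0 + 2.0 * t_end)   # closed form for y(0) = 1
```

This is the trade-off mentioned above: easier to implement than a fully (semi-)implicit scheme because no nonlinear solve is needed, at the cost of some accuracy or efficiency.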

  14. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as in the case of a bridge crane under seismic loading, multiple time scales coexist in the same problem. In that case, the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts in order to reproduce much lower-frequency phenomena and to optimize the CPU time. As a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also provides excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated in comparison to a fine-scale, fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.

  15. Multi-objective and Perishable Fuzzy Inventory Models Having Weibull Life-time With Time Dependent Demand, Demand Dependent Production and Time Varying Holding Cost: A Possibility/Necessity Approach

    NASA Astrophysics Data System (ADS)

    Pathak, Savita; Mondal, Seema Sarkar

    2010-10-01

    A multi-objective inventory model of a deteriorating item has been developed with a Weibull rate of decay, time-dependent demand, demand-dependent production, and time-varying holding cost, allowing shortages in fuzzy environments for non-integrated and integrated businesses. Here the objective is to maximize the profit from different deteriorating items under a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business has been expressed by linear membership functions. The compromised solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different crisp/fuzzy cardinal weights have been assigned. The models are illustrated with numerical examples, and the results of models with crisp and fuzzy weights are compared. The result for the model assuming an integrated business is obtained by using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory cost is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function by using a credibility measure of a fuzzy event and taking a fuzzy expectation. The results of the crisp/fuzzy integrated model are illustrated with numerical examples and the results are compared.

  16. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
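The core operation of an exponential integrator is applying a matrix exponential (or a related phi-function) to a vector. A self-contained sketch for a small linear system, using a homemade Taylor scaling-and-squaring expm; production codes such as the ones discussed above instead use Krylov or polynomial approximations for large systems:

```python
import numpy as np

def expm(A, terms=24):
    """Matrix exponential via scaling-and-squaring with a Taylor series.
    Fine for small dense matrices; not intended for large stiff systems."""
    n = A.shape[0]
    norm = np.linalg.norm(A, np.inf)
    s = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    As = A / 2.0 ** s                 # scale so the series converges fast
    X = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms + 1):
        term = term @ As / k
        X = X + term
    for _ in range(s):                # undo the scaling by repeated squaring
        X = X @ X
    return X

# One exact "exponential integrator" step for the linear system y' = A y:
# y(t) = expm(t*A) @ y0, with no stability restriction on the step t.
A = np.array([[-2.0, 1.0], [1.0, -2.0]])   # eigenvalues -1 and -3
y0 = np.array([1.0, 0.0])
y1 = expm(1.0 * A) @ y0
```

For this symmetric A the exact answer follows from the eigendecomposition, which makes the sketch easy to verify.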

  17. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

    NASA Astrophysics Data System (ADS)

    Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

    2014-07-01

    Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. 
We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
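The long-term fidelity of symplectic schemes such as the WHM integrator comes from their bounded energy error. A minimal illustration of this property (not cuSwift's implementation; a plain kick-drift-kick leapfrog on a Kepler test particle in normalized units with GM = 1, all parameter values assumed for the example):

```python
import numpy as np

def leapfrog_kepler(r, v, dt, n_steps, gm=1.0):
    """Kick-drift-kick leapfrog for a test particle about a central mass.

    Symplectic integrators like this one (and the Wisdom-Holman mapping,
    which splits the Hamiltonian into Keplerian and interaction parts)
    keep the energy error bounded over long integrations instead of
    letting it drift secularly.
    """
    def accel(r):
        return -gm * r / np.linalg.norm(r) ** 3

    for _ in range(n_steps):
        v = v + 0.5 * dt * accel(r)   # half kick
        r = r + dt * v                # drift
        v = v + 0.5 * dt * accel(r)   # half kick
    return r, v

# Circular orbit at radius 1: after many steps the energy stays near -0.5.
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
r, v = leapfrog_kepler(r, v, dt=0.01, n_steps=10_000)
energy = 0.5 * v @ v - 1.0 / np.linalg.norm(r)
```

The same structure, with the Keplerian drift solved analytically rather than by a fixed-step kick, underlies the WHM and RMVS methods mentioned above.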

  18. The Green's matrix and the boundary integral equations for analysis of time-harmonic dynamics of elastic helical springs.

    PubMed

    Sorokin, Sergey V

    2011-03-01

Helical springs serve as vibration isolators in virtually any suspension system. Various exact and approximate methods may be employed to determine the eigenfrequencies of vibrations of these structural elements and their dynamic transfer functions. The method of boundary integral equations is a meaningful alternative for obtaining exact solutions of problems of the time-harmonic dynamics of elastic springs in the framework of Bernoulli-Euler beam theory. In this paper, the derivations of the Green's matrix, of Somigliana's identities, and of the boundary integral equations are presented. The vibrational power transmission in an infinitely long spring is analyzed by means of the Green's matrix. The eigenfrequencies and the dynamic transfer functions are found by solving the boundary integral equations. In the course of analysis, the essential features and advantages of the method of boundary integral equations are highlighted. The reported analytical results may be used to study the time-harmonic motion in any wave guide governed by a system of linear differential equations in a single spatial coordinate along its axis. © 2011 Acoustical Society of America

  19. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method, the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral approach allowed the implementation of higher-order solution techniques but had difficulty with singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
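The appeal of Prony-series formulations is that the hereditary convolution integral collapses to a recursive update with one internal variable per exponential branch, so each time step costs O(1) per branch. A minimal linear, one-dimensional sketch of that recursion (moduli and relaxation times below are illustrative, not taken from the paper):

```python
import math

def prony_stress(strain, dt, e_inf, branches):
    """March viscoelastic stress with a Prony-series relaxation kernel
    using the standard recursive (internal-variable) update, avoiding
    re-evaluation of the full convolution integral at every step.

    branches: list of (E_i, tau_i) pairs; values here are illustrative.
    """
    h = [0.0] * len(branches)          # internal stress variables
    stresses = []
    prev = 0.0
    for eps in strain:
        d_eps = eps - prev
        sigma = e_inf * eps
        for i, (e_i, tau_i) in enumerate(branches):
            a = math.exp(-dt / tau_i)
            # update assuming strain varies linearly over the step
            b = e_i * tau_i / dt * (1.0 - a)
            h[i] = a * h[i] + b * d_eps
            sigma += h[i]
        stresses.append(sigma)
        prev = eps
    return stresses

# Step-strain relaxation test: stress decays from the instantaneous
# modulus (e_inf + sum of E_i = 4) toward the long-term modulus e_inf = 1.
dt, n = 0.01, 2000
strain = [1.0] * n          # unit strain applied at t = 0
sigma = prony_stress(strain, dt, e_inf=1.0, branches=[(2.0, 0.5), (1.0, 2.0)])
```

The nonlinear NDEM variant described above modifies the series coefficients with stress-dependent terms; the recursion itself is the shared machinery.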

  20. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
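The inner/outer structure described above can be sketched in a few lines. This is a first-order projective forward Euler variant, not the paper's arbitrary-order outer Runge-Kutta scheme, applied to an assumed stiff model problem: a few small damping inner steps, then one extrapolation using the last inner chord as the derivative estimate.

```python
import numpy as np

def projective_forward_euler(f, y0, t_end, dt_outer, dt_inner, n_inner):
    """Projective forward Euler: n_inner small explicit steps damp the
    stiff components, then the chord through the last two inner states
    estimates dy/dt for one large (outer) extrapolation step.
    """
    y, t = np.asarray(y0, dtype=float), 0.0
    while t < t_end - 1e-12:
        for _ in range(n_inner):            # damp stiff components
            y_prev = y.copy()
            y = y + dt_inner * f(t, y)
            t += dt_inner
        slope = (y - y_prev) / dt_inner     # chord estimate of dy/dt
        leap = dt_outer - n_inner * dt_inner
        y = y + leap * slope                # projective (outer) step
        t += leap
    return y

# Stiff relaxation: the fast mode (rate 100) collapses onto the slow
# manifold, after which the outer step tracks the slow decay (rate 1).
f = lambda t, y: np.array([-100.0 * (y[0] - y[1]), -y[1]])
y = projective_forward_euler(f, [2.0, 1.0], t_end=1.0, dt_outer=0.1,
                             dt_inner=0.009, n_inner=3)
```

Note the outer step of 0.1 is fifty times larger than the explicit stability limit (0.02) for the fast mode; the inner steps restore stability, as the abstract describes.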

  1. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  2. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE PAGES

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    2017-10-12

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.
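Stripped of the relativistic Beliaev-Budker physics, the numerical core is an adaptive-step stochastic (Euler-Maruyama) push of a drift-diffusion process. The toy below stands in for the collision operator with an Ornstein-Uhlenbeck equation dv = -v dt + sqrt(2) dW; the step-size rule and all parameters are illustrative assumptions, not the paper's criteria.

```python
import numpy as np

def ou_adaptive_em(v0, t_end, rng, tol=0.2):
    """Adaptive-step Euler-Maruyama for dv = -v dt + sqrt(2) dW.

    The step is chosen so that both the deterministic kick |v| dt and
    the stochastic kick sqrt(2 dt) stay below tol, mimicking (loosely)
    adaptive time stepping for a collisional drift-diffusion process.
    """
    v, t = v0, 0.0
    while t < t_end - 1e-9:
        dt = min(tol / max(abs(v), 1e-8),
                 (tol / np.sqrt(2.0)) ** 2,
                 t_end - t)
        v += -v * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
        t += dt
    return v

# An ensemble started far from equilibrium relaxes to the stationary
# distribution of the process, which is N(0, 1).
rng = np.random.default_rng(0)
final = np.array([ou_adaptive_em(5.0, 5.0, rng) for _ in range(1000)])
```

Checking ensemble moments against the known stationary distribution is the standard sanity test for such Monte Carlo collision operators.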

  3. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  4. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  5. Time-splitting combined with exponential wave integrator fourier pseudospectral method for Schrödinger-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Liao, Feng; Zhang, Luming; Wang, Shanshan

    2018-02-01

In this article, we formulate an efficient and accurate numerical method for approximating the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method to the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral discretization for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
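The time-splitting Fourier half of such a scheme is compact enough to sketch. Below is a standard second-order (Strang) split-step Fourier propagator for a 1D Schrödinger equation alone (hbar = m = 1), decoupled from any Boussinesq part; the harmonic potential and grid parameters are assumptions for the self-test, not taken from the paper.

```python
import numpy as np

def split_step(psi, x, dx, dt, n_steps, potential):
    """Strang-split Fourier pseudospectral propagator for
    i dpsi/dt = -0.5 psi_xx + V(x) psi: half step in V, full kinetic
    step in Fourier space, half step in V. Each factor is applied
    exactly, so the map is unitary and norm-conserving.
    """
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
    half_v = np.exp(-0.5j * dt * potential)       # half potential step
    kin = np.exp(-0.5j * dt * k ** 2)             # full kinetic step
    for _ in range(n_steps):
        psi = half_v * psi
        psi = np.fft.ifft(kin * np.fft.fft(psi))
        psi = half_v * psi
    return psi

# The harmonic-oscillator ground state is stationary: after evolving to
# t = 5 its overlap with the initial state should remain essentially 1.
x = np.linspace(-12, 12, 256, endpoint=False)
dx = x[1] - x[0]
psi0 = np.pi ** -0.25 * np.exp(-0.5 * x ** 2)
psi = split_step(psi0.astype(complex), x, dx, dt=0.01, n_steps=500,
                 potential=0.5 * x ** 2)
overlap = abs(np.sum(np.conj(psi0) * psi) * dx)
```

Exact unitarity of each split factor is what makes such schemes attractive: stability is unconditional and the L2 norm is conserved to machine precision.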

  6. Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.

    PubMed

    Morrison, Shane A; Luttbeg, Barney; Belden, Jason B

    2016-11-01

Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require far fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies and a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to the reduced frequency of extreme values, especially given uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
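The arithmetic behind the discrete-versus-integrative comparison is easy to reproduce. For a single pulse decaying with first-order rate k, an ideal integrative sampler recovers the 96-h TWA by construction, while the mean of a few point samples depends entirely on when they land. A small worked example (sampling times chosen for illustration, not the paper's design):

```python
import math

def twa_true(c0, k, hours=96.0):
    """Exact time-weighted average of C(t) = c0*exp(-k t) over [0, hours],
    i.e. (1/T) * integral of C dt = c0*(1 - exp(-k T))/(k T)."""
    return c0 * (1.0 - math.exp(-k * hours)) / (k * hours)

def twa_discrete(c0, k, sample_times):
    """Mean of discrete point samples: the conventional TWA estimate."""
    return sum(c0 * math.exp(-k * t) for t in sample_times) / len(sample_times)

# Water half-life 0.5 d = 12 h, the worst case among the scenarios above.
k = math.log(2.0) / 12.0
true = twa_true(1.0, k)                      # about 0.18 of the peak
# Two point samples at 24 h and 72 h miss the early pulse and
# underestimate the true 96-h TWA.
two_point = twa_discrete(1.0, k, [24.0, 72.0])
```

Shifting the two sampling times earlier or later swings the discrete estimate widely, which is the extreme-value behavior the simulations above quantify.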

  7. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  8. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

  9. Development of a method for personal, spatiotemporal exposure assessment.

    PubMed

    Adams, Colby; Riggs, Philip; Volckens, John

    2009-07-01

This work describes the development and evaluation of a high resolution, space- and time-referenced sampling method for personal exposure assessment to airborne particulate matter (PM). This method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e. work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. Precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e. work/school, home, transit) with a simple, temporospatially-based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.

  10. Stability and delay sensitivity of neutral fractional-delay systems.

    PubMed

    Xu, Qi; Shi, Min; Wang, Zaihua

    2016-08-01

    This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in stability test is the calculation of the number of unstable characteristic roots that is described by a definite integral over an interval from zero to a sufficient large upper limit. Algorithms for correctly estimating the upper limits of the integral are given in two concise ways, parameter dependent or independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using rough integral estimation. Meanwhile, the paper shows that for some neutral fractional-delay systems, the stability is extremely sensitive to the change of time delays. Examples are given for demonstrating the proposed method as well as the delay sensitivity.

  11. Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions

    NASA Astrophysics Data System (ADS)

    El-Kalla, I. L.; Al-Bugami, A. M.

    2010-11-01

In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE), with a generalized singular kernel, are discussed and proved in the space L2(Ω) × C(0,T). The Fredholm integral term (FIT) is considered in position while the Volterra integral term (VIT) is considered in time. Using a numerical technique, we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by using two different methods: the Toeplitz matrix method and the Product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: Carleman function, logarithmic form, Cauchy kernel, and Hilbert kernel.
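The Nyström idea underlying the second method is simple: replace the integral with a quadrature rule and collocate at the quadrature nodes, turning the integral equation directly into a linear algebraic system. The sketch below does this for a smooth kernel with a Gauss-Legendre rule (the paper's Product Nyström method instead builds special weights that absorb the singular kernels; the kernel and right-hand side here are a textbook test case, not from the paper):

```python
import numpy as np

def nystrom_solve(kernel, f, n=20):
    """Nystrom discretization of the Fredholm equation of the second kind
    u(x) = f(x) + int_0^1 K(x,t) u(t) dt, collocated at Gauss-Legendre
    nodes: solve (I - K W) u = f for the nodal values of u.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (nodes + 1.0)            # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    a = np.eye(n) - kernel(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(a, f(t))

# Degenerate-kernel test: K(x,t) = x*t with f(x) = 2x/3 has the exact
# solution u(x) = x, which the Gauss rule reproduces to machine precision.
t, u = nystrom_solve(lambda x, s: x * s, lambda x: 2.0 * x / 3.0)
```

Product integration keeps exactly this structure but computes the weights w analytically against the singular factor of the kernel, which is what makes the Carleman, Cauchy, and Hilbert cases above tractable.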

  12. Numerical Methods for 2-Dimensional Modeling

    DTIC Science & Technology

    1980-12-01

high-order finite element methods, and a multidimensional version of the method of lines, both utilizing an optimized stiff integrator for the time...integration. The finite element methods have proved disappointing, but the method of lines has provided an unexpectedly large gain in speed. Two...diffusion problems with the same number of unknowns (a 21 x 41 grid), solved by second-order finite element methods, took over seven minutes on the Cray-1

  13. An annular superposition integral for axisymmetric radiators

    PubMed Central

    Kelly, James F.; McGough, Robert J.

    2007-01-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a “smooth piston” function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity. PMID:17348500
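The baseline that the annular superposition method accelerates is direct summation of the Rayleigh-Sommerfeld integrand over concentric annuli. On the beam axis that integral has a closed form, which gives a convenient benchmark. The sketch below (normalization constants dropped; geometry and wavenumber are assumed values for the test, not from the paper) compares the brute-force annulus sum against the exact on-axis expression:

```python
import numpy as np

def onaxis_annulus_sum(z, a, k, n_annuli=4000):
    """Integral of exp(-i k R)/R over a circular piston face of radius a,
    evaluated on axis (distance z) by summing concentric annuli with a
    midpoint rule: the slowly converging baseline method.
    """
    edges = np.linspace(0.0, a, n_annuli + 1)
    r = 0.5 * (edges[:-1] + edges[1:])     # annulus mid-radii
    dr = np.diff(edges)
    big_r = np.sqrt(z ** 2 + r ** 2)       # distance to each annulus
    return np.sum(np.exp(-1j * k * big_r) / big_r * r * dr)

# On axis, substituting R^2 = z^2 + r^2 reduces the integral to
# (exp(-i k z) - exp(-i k sqrt(z^2 + a^2))) / (i k), an exact benchmark.
z, a, k = 2.0, 1.0, 10.0
numeric = 1j * k * onaxis_annulus_sum(z, a, k)
exact = np.exp(-1j * k * z) - np.exp(-1j * k * np.sqrt(z * z + a * a))
```

Off axis no such closed form exists, which is where recasting the annulus sum as a fast-converging double integral, as in the paper, pays off.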

  14. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  15. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events; (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance; (iii) rigorous protection-system modeling; (iv) intelligence for corrective-action identification, storage, and fast retrieval; and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses higher-order precision (O(h^4)) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate the use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (the Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). 
We have implemented a design appropriate for HSET-TDS and compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, an expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation per step). With the new stiffness detection method proposed herein, stiffness can be captured as it arises. An expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale, using the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events divides the whole simulation along the time axis at the simulated sequence of cascading events. Of all the strategies proposed, partitioning by cascading events is recommended, since the subtasks for each processor are totally independent and therefore minimal communication time is needed.
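The "Very Dishonest Newton" idea is independent of the power-system context: within an implicit step, evaluate and factor the Jacobian once and reuse it for every Newton iteration (and, in production codes, across steps), trading quadratic convergence for far fewer factorizations. A scalar trapezoidal-rule sketch of that pairing on an assumed stiff test problem (the thesis applies the same structure to large sparse DAE systems with SuperLU):

```python
import math

def trapezoidal_vdhn(f, dfdy, y0, t0, t_end, h, newton_tol=1e-10):
    """Trapezoidal rule with a 'very dishonest' (chord) Newton solver:
    the iteration matrix 1 - (h/2) df/dy is formed once per step and
    reused for all Newton iterations within that step.
    """
    y, t = y0, t0
    while t < t_end - 1e-12:
        f0 = f(t, y)
        jac = 1.0 - 0.5 * h * dfdy(t + h, y)   # frozen chord Jacobian
        y_new = y + h * f0                     # explicit Euler predictor
        for _ in range(50):
            resid = y_new - y - 0.5 * h * (f0 + f(t + h, y_new))
            dy = -resid / jac
            y_new += dy
            if abs(dy) < newton_tol:
                break
        y, t = y_new, t + h
    return y

# Stiff test problem with exact solution y(t) = cos(t): the fast rate
# (1000) would force h < 0.002 on an explicit method, but the A-stable
# trapezoidal rule runs comfortably at h = 0.01.
f = lambda t, y: -1000.0 * (y - math.cos(t)) - math.sin(t)
dfdy = lambda t, y: -1000.0
y = trapezoidal_vdhn(f, dfdy, y0=1.0, t0=0.0, t_end=1.0, h=0.01)
```

HH4 replaces the trapezoidal update with a two-stage fourth-order A-stable formula, but the frozen-Jacobian Newton loop is structured the same way.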

  16. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation approach, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has lagged behind other techniques because of their inefficiency, inaccuracy, and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally corrected Nyström method, which accelerates the precomputation phase and achieves high-order accuracy. The fast Fourier transform (FFT) is applied to accelerate the marching-on-in-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. 
Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundaries. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results illustrate the accuracy and stability of the proposed techniques.

  17. Time-dependent integral equations of neutron transport for calculating the kinetics of nuclear reactors by the Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidenko, V. D., E-mail: Davidenko-VD@nrcki.ru; Zinchenko, A. S., E-mail: zin-sn@mail.ru; Harchenko, I. K.

    2016-12-15

    Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.

  18. New approach to radar rainfall measurement

    NASA Technical Reports Server (NTRS)

    Atlas, David; Rosenfeld, Daniel; Wolff, David B.

    1991-01-01

    This paper integrates individual studies of the Area-Time Integrals method and climatic tuning methods. The latter is extended to a new approach which generalizes the technique so that it is no longer restricted to power law relations between effective reflectivity and rain rate.

  19. Explicit finite-difference simulation of optical integrated devices on massive parallel computers.

    PubMed

    Sterkenburgh, T; Michels, R M; Dress, P; Franke, H

    1997-02-20

An explicit method for the numerical simulation of optical integrated circuits by means of the finite-difference time-domain (FDTD) method is presented. This method, based on an explicit solution of Maxwell's equations, is well established in microwave technology. Although the simulation areas are small, we verified the behavior of three problems of interest, especially nonparaxial ones, exhibiting typical aspects of integrated optical devices. Because numerical losses are within acceptable limits, we suggest the use of the FDTD method to achieve promising quantitative simulation results.
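The explicit FDTD update referred to here is the leapfrogged Yee scheme. A bare-bones 1D version in normalized units (not the paper's 2D integrated-optics code; grid size, pulse width, and the Courant number S = 1 are assumptions for the demonstration):

```python
import numpy as np

def fdtd_1d(ez, hy, n_steps):
    """1D FDTD (Yee) leapfrog in normalized units at Courant number
    S = c*dt/dx = 1, where the 1D scheme is dispersion-free. The E and
    H fields live on staggered grids; the untouched end cells act as
    perfect electric conductors.
    """
    for _ in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]          # update H from curl E
        ez[1:-1] += hy[1:-1] - hy[:-2]       # update E from curl H
    return ez, hy

# A Gaussian Ez pulse with Hy = 0 splits into two half-amplitude pulses
# travelling one cell per step in opposite directions.
n = 400
i = np.arange(n)
ez = np.exp(-0.5 * ((i - 100) / 8.0) ** 2)
hy = np.zeros(n)
ez, hy = fdtd_1d(ez.copy(), hy, n_steps=80)
```

The explicit update costs a few additions per cell per step, which is why the paper's massively parallel mapping of the scheme is so natural: every cell updates independently from its neighbors' previous values.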

  20. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
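The essence of any IMEX splitting is that only the stiff part (here, the acoustic modes) is treated implicitly, so the time step is limited by the slow dynamics alone. A first-order toy version on a scalar model problem (this is plain IMEX Euler on an assumed linear test equation, not one of the ARK schemes or the HEVI splitting compared in the abstract):

```python
import math

def imex_euler(stiff_coef, forcing, y0, t_end, dt):
    """First-order IMEX Euler for y' = a*y + f(t): the stiff linear term
    a*y is advanced implicitly (one linear solve per step), while the
    forcing f(t) is taken explicitly at the old time level.
    """
    y, t = y0, 0.0
    n = round(t_end / dt)
    for _ in range(n):
        # y_new = y + dt*(a*y_new + f(t))  =>  (1 - dt*a) y_new = y + dt*f(t)
        y = (y + dt * forcing(t)) / (1.0 - dt * stiff_coef)
        t += dt
    return y

# Fast relaxation (rate 1000) toward cos(t): stable and accurate at
# dt = 0.01 even though fully explicit Euler would need dt < 2/1000.
y = imex_euler(-1000.0, lambda t: 1000.0 * math.cos(t),
               y0=0.0, t_end=2.0, dt=0.01)
```

The ARK methods studied in the abstract generalize this to high order by pairing an explicit Runge-Kutta tableau for the nonstiff terms with a companion diagonally implicit tableau for the stiff ones.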

  1. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Pelt, Daniël M.; Gürsoy, Doğa; Palenstijn, Willem Jan

    2016-04-28

The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which together provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.

  2. Improved algorithms and methods for room sound-field prediction by acoustical radiosity in arbitrary polyhedral rooms.

    PubMed

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-08-01

    This paper explores acoustical (or time-dependent) radiosity--a geometrical-acoustics sound-field prediction method that assumes diffuse surface reflection. The literature of acoustical radiosity is briefly reviewed and the advantages and disadvantages of the method are discussed. A discrete form of the integral equation that results from meshing the enclosure boundaries into patches is presented and used in a discrete-time algorithm. Furthermore, an averaging technique is used to reduce computational requirements. To generalize to nonrectangular rooms, a spherical-triangle method is proposed as a means of evaluating the integrals over solid angles that appear in the discrete form of the integral equation. The evaluation of form factors, which also appear in the numerical solution, is discussed for rectangular and nonrectangular rooms. This algorithm and associated methods are validated by comparison of the steady-state predictions for a spherical enclosure to analytical solutions.

  3. Improved algorithms and methods for room sound-field prediction by acoustical radiosity in arbitrary polyhedral rooms

    NASA Astrophysics Data System (ADS)

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-08-01

    This paper explores acoustical (or time-dependent) radiosity-a geometrical-acoustics sound-field prediction method that assumes diffuse surface reflection. The literature of acoustical radiosity is briefly reviewed and the advantages and disadvantages of the method are discussed. A discrete form of the integral equation that results from meshing the enclosure boundaries into patches is presented and used in a discrete-time algorithm. Furthermore, an averaging technique is used to reduce computational requirements. To generalize to nonrectangular rooms, a spherical-triangle method is proposed as a means of evaluating the integrals over solid angles that appear in the discrete form of the integral equation. The evaluation of form factors, which also appear in the numerical solution, is discussed for rectangular and nonrectangular rooms. This algorithm and associated methods are validated by comparison of the steady-state predictions for a spherical enclosure to analytical solutions.

  4. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167

  5. A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems

    NASA Astrophysics Data System (ADS)

    Liu, Zuolin; Xu, Jian

    2018-04-01

    In this paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integration by parts. In this way, a loss function in integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. The loss function can then be minimised with the traditional least-squares algorithm for linear systems, or with its iterative counterpart for nonlinear ones. The method can effectively identify parameters in linear and arbitrary nonlinear mechanical systems. Simulation results show that even with sparse data or a low sampling frequency, the method still guarantees high accuracy in identifying linear and nonlinear parameters.
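
    The advantage of an integral-form loss, that no numerical differentiation of the measured signal is needed, can be shown with a much simpler analogue than the paper's Galerkin formulation. The sketch below (a hypothetical illustration, not the authors' method) identifies the rate a in a first-order system y' = -a*y by fitting the integrated equation y(t_k) - y(t_0) = -a * Integral(y) with a closed-form least-squares solution.

```python
import math

def identify_decay_rate(ts, ys):
    """Estimate a in y' = -a*y from samples (ts, ys) via the integrated
    equation y(t_k) - y(t_0) = -a * Integral_{t_0}^{t_k} y dt, so the data
    are integrated (smoothing noise) rather than differentiated.
    Least squares over all k gives a = -sum(d_k*I_k) / sum(I_k^2)."""
    I = 0.0          # running trapezoidal integral of y
    num = den = 0.0
    for k in range(1, len(ts)):
        I += 0.5 * (ys[k] + ys[k - 1]) * (ts[k] - ts[k - 1])
        d = ys[k] - ys[0]
        num += d * I
        den += I * I
    return -num / den

# Synthetic data from y = exp(-2 t); the true rate is a = 2.
ts = [0.05 * k for k in range(41)]
ys = [math.exp(-2.0 * t) for t in ts]
a_hat = identify_decay_rate(ts, ys)
```

    The quadrature error is O(h^2), so even coarse sampling recovers the parameter accurately, mirroring the paper's observation about sparse data.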

  6. High-speed railway real-time localization auxiliary method based on deep neural network

    NASA Astrophysics Data System (ADS)

    Chen, Dongjie; Zhang, Wensheng; Yang, Yang

    2017-11-01

    A high-speed railway intelligent monitoring and management system is composed of schedule integration, geographic information, location services, and data mining technology for the integration of time and space data. Auxiliary localization is a significant submodule of the intelligent monitoring system. In practice, the usual approach is to capture image sequences of the components with a high-definition camera and apply digital image processing, target detection, tracking, and even behavior analysis methods. In this paper, we present an end-to-end character recognition method based on a deep CNN network called YOLO-toc for high-speed railway pillar plate numbers. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art real-time detection performance, achieving nearly 50 fps on a GPU (GTX960). Finally, we realize a real-time but high-accuracy pillar plate number recognition system and integrate natural scene OCR into a dedicated classification YOLO-toc model.

  7. High-resolution time series of Pseudomonas aeruginosa gene expression and rhamnolipid secretion through growth curve synchronization.

    PubMed

    van Ditmarsch, Dave; Xavier, João B

    2011-06-17

    Online spectrophotometric measurements allow monitoring dynamic biological processes with high time resolution. In contrast, numerous other methods require laborious treatment of samples and can only be carried out offline. Integrating both types of measurement would allow analyzing biological processes more comprehensively. A typical example of this problem is acquiring quantitative data on rhamnolipid secretion by the opportunistic pathogen Pseudomonas aeruginosa. P. aeruginosa cell growth can be measured by optical density (OD600) and gene expression can be measured using reporter fusions with a fluorescent protein, allowing high time resolution monitoring. However, measuring the secreted rhamnolipid biosurfactants requires laborious sample processing, which makes this an offline measurement. Here, we propose a method to integrate growth curve data with endpoint measurements of secreted metabolites that is inspired by a model of exponential cell growth. If serially diluting an inoculum gives reproducible time series shifted in time, then time series of endpoint measurements can be reconstructed using calculated time shifts between dilutions. We illustrate the method using measured rhamnolipid secretion by P. aeruginosa as endpoint measurements and we integrate these measurements with high-resolution growth curves measured by OD600 and expression of rhamnolipid synthesis genes monitored using a reporter fusion. Two-fold serial dilution allowed integrating rhamnolipid measurements at a ~0.4 h⁻¹ frequency with high-time-resolved data measured at a 6 h⁻¹ frequency. We show how this simple method can be used in combination with mutants lacking specific genes in the rhamnolipid synthesis or quorum sensing regulation to acquire rich dynamic data on P. aeruginosa virulence regulation. Additionally, the linear relation between the ratio of inocula and the time-shift between curves produces high-precision measurements of maximum specific growth rates, which were determined with a precision of ~5.4%. Growth curve synchronization allows integration of rich time-resolved data with endpoint measurements to produce time-resolved quantitative measurements. Such data can be valuable to unveil the dynamic regulation of virulence in P. aeruginosa. More generally, growth curve synchronization can be applied to many biological systems thus helping to overcome a key obstacle in dynamic regulation: the scarceness of quantitative time-resolved data.
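
    The core of the synchronization idea is the relation between dilution and time shift: under exponential growth at specific rate mu, a D-fold dilution delays the growth curve by ln(D)/mu. A minimal sketch (the rate and times below are illustrative values, not the paper's data):

```python
import math

def dilution_time_shifts(mu, dilution_factor, n_dilutions):
    """Under exponential growth at specific rate mu (1/h), each successive
    D-fold dilution delays the growth curve by ln(D)/mu hours."""
    dt = math.log(dilution_factor) / mu
    return [k * dt for k in range(n_dilutions)]

# Example: mu = 0.5/h with two-fold serial dilutions, as used in the paper.
shifts = dilution_time_shifts(0.5, 2.0, 6)

# Endpoint measurements all taken at the same clock time t_end map onto
# earlier points t_end - shift of the synchronized reference curve,
# turning one endpoint assay per dilution into a reconstructed time series.
t_end = 24.0
reconstructed_times = [t_end - s for s in shifts]
```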

  8. A method for exponential propagation of large systems of stiff nonlinear differential equations

    NASA Technical Reports Server (NTRS)

    Friesner, Richard A.; Tuckerman, Laurette S.; Dornblaser, Bright C.; Russo, Thomas V.

    1989-01-01

    A new time integrator for large, stiff systems of linear and nonlinear coupled differential equations is described. For linear systems, the method consists of forming a small (5-15-term) Krylov space using the Jacobian of the system and carrying out exact exponential propagation within this space. Nonlinear corrections are incorporated via a convolution integral formalism; the integral is evaluated via approximate Krylov methods as well. Gains in efficiency ranging from factors of 2 to 30 are demonstrated for several test problems as compared to a forward Euler scheme and to the integration package LSODE.
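
    For a scalar linear problem the Krylov machinery above reduces to exact exponential propagation, y_{k+1} = exp(h*lam)*y_k, which makes the contrast with forward Euler easy to see. A minimal sketch (a generic illustration, not the authors' Krylov implementation):

```python
import math

def propagate(lam, y0, h, n, method):
    """Advance y' = lam*y by n steps of size h."""
    y = y0
    for _ in range(n):
        if method == "euler":
            y = y * (1.0 + h * lam)        # forward Euler: stable only if |1 + h*lam| <= 1
        else:
            y = y * math.exp(h * lam)      # exact exponential propagation, any step size
    return y

lam, h, n = -50.0, 0.1, 20                 # stiff decay; h far exceeds Euler's 2/50 limit
y_exp = propagate(lam, 1.0, h, n, "exp")   # decays toward exp(-100)
y_eul = propagate(lam, 1.0, h, n, "euler") # amplification factor (1 - 5) per step: blows up
```

    For systems, the exponential of the Jacobian is approximated in a small Krylov subspace instead of computed exactly, which is where the paper's 5-15-term spaces come in.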

  9. The orbital evolution of NEA 30825 1990 TG1

    NASA Astrophysics Data System (ADS)

    Timoshkova, E. I.

    2008-02-01

    The orbital evolution of the near-Earth asteroid (NEA) 30825 1990 TG1 has been studied by numerical integration of the equations of its motion over a 100 000-year time interval, with allowance for perturbations from the eight major planets and Pluto, and the variations in its osculating orbit over this interval were determined. The numerical integrations were performed using two methods: the Bulirsch-Stoer method and the Everhart method. A comparative analysis of the two resulting orbital evolutions is presented for the time interval examined. The evolution of the asteroid's motion is qualitatively the same for both variants, but the rate of evolution of the orbital elements differs. Our research confirms the known fact that applying different integrators to the study of the long-term evolution of an NEA orbit may lead to different evolution tracks.

  10. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
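
    The double-sweep method mentioned above for the small tridiagonal systems is the classic Thomas algorithm: one forward elimination sweep followed by one back-substitution sweep, O(n) work per system. A textbook sketch (not the USGS code; the example system is illustrative):

```python
def thomas_solve(a, b, c, d):
    """Double-sweep (Thomas) algorithm for a tridiagonal system A x = d.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back-substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A diffusion-like system: -x_{i-1} + 4 x_i - x_{i+1} = 2 for five unknowns.
n = 5
x = thomas_solve([0.0] + [-1.0] * (n - 1),
                 [4.0] * n,
                 [-1.0] * (n - 1) + [0.0],
                 [2.0] * n)
```

    Implicit vertical diffusion produces exactly such systems, one per water column, which is why the semi-implicit scheme can solve them cheaply before the larger surface-elevation solve.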

  11. Sensitivity curves for searches for gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Thrane, Eric; Romano, Joseph D.

    2013-12-01

    We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.

  12. Variable Structure PID Control to Prevent Integrator Windup

    NASA Technical Reports Server (NTRS)

    Hall, C. E.; Hodel, A. S.; Hung, J. Y.

    1999-01-01

    PID controllers are frequently used to control systems requiring zero steady-state error while maintaining requirements for settling time and robustness (gain/phase margins). PID controllers suffer significant loss of performance due to short-term integrator wind-up when used in systems with actuator saturation. We examine several existing and proposed methods for the prevention of integrator wind-up in both continuous and discrete time implementations.
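
    One of the simplest anti-windup remedies is conditional integration (integrator clamping): freeze the integrator whenever the output is saturated in the direction further integration would push it. The sketch below is a generic illustration of that idea, not the variable-structure method of the paper; the plant and gains are hypothetical.

```python
def pid_step(state, error, dt, kp, ki, kd, u_min, u_max):
    """One discrete PID step with anti-windup by integrator clamping."""
    integ, prev_err = state
    deriv = (error - prev_err) / dt
    u = kp * error + ki * integ + kd * deriv
    if (u >= u_max and error > 0.0) or (u <= u_min and error < 0.0):
        pass                        # saturated: freeze the integrator (anti-windup)
    else:
        integ += error * dt         # normal integration
    u_sat = min(max(u, u_min), u_max)
    return u_sat, (integ, error)

# First-order plant x' = -x + u, driven to setpoint 1 under actuator limits [-0.5, 2].
x, state = 0.0, (0.0, 0.0)
dt = 0.01
for _ in range(2000):
    u, state = pid_step(state, 1.0 - x, dt,
                        kp=2.0, ki=1.0, kd=0.0, u_min=-0.5, u_max=2.0)
    x += dt * (-x + u)              # forward-Euler plant update
```

    Without the clamp, the integrator would keep accumulating while the actuator is pinned at its limit, producing the overshoot and slow recovery characteristic of windup.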

  13. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    NASA Technical Reports Server (NTRS)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed, CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefit from this approach through reduced development and design cycle time include: creation of analysis models for the aerodynamic discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  14. Time-symmetric integration in astrophysics

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.; Bertschinger, Edmund

    2018-04-01

    Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
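
    The trapezoidal rule discussed above is implicit, so each step requires solving for the new state; for a mildly nonlinear problem like the pendulum a few fixed-point iterations suffice. A minimal sketch of that integrator on the pendulum q'' = -sin(q) (step size and iteration count are illustrative choices; the energy-drift behavior analyzed in the paper is not asserted here):

```python
import math

def trapezoid_pendulum(q0, p0, h, n_steps, iters=8):
    """Implicit trapezoidal rule for q' = p, p' = -sin(q),
    with each implicit stage solved by fixed-point iteration."""
    q, p = q0, p0
    for _ in range(n_steps):
        qn, pn = q, p                          # initial guess: previous state
        for _ in range(iters):                 # contraction since h/2 is small
            qn = q + 0.5 * h * (p + pn)
            pn = p + 0.5 * h * (-math.sin(q) - math.sin(qn))
        q, p = qn, pn
    return q, p

def energy(q, p):
    """Pendulum energy H = p^2/2 - cos(q), conserved by the exact flow."""
    return 0.5 * p * p - math.cos(q)

q, p = trapezoid_pendulum(1.0, 0.0, 0.01, 1000)   # integrate to t = 10
drift = abs(energy(q, p) - energy(1.0, 0.0))
```

    Tracking `drift` over much longer integrations is exactly the kind of diagnostic used to compare reversible, non-symplectic integrators against symplectic ones.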

  15. Differential temperature integrating diagnostic method and apparatus

    DOEpatents

    Doss, James D.; McCabe, Charles W.

    1976-01-01

    A method and device for detecting the presence of breast cancer in women by integrating the temperature difference between the temperature of a normal breast and that of a breast having a malignant tumor. The breast-receiving cups of a brassiere are each provided with thermally conductive material next to the skin, with a thermistor attached to the thermally conductive material in each cup. The thermistors are connected to adjacent arms of a Wheatstone bridge. Unbalance currents in the bridge are integrated with respect to time by means of an electrochemical integrator. In the absence of a tumor, both breasts maintain substantially the same temperature, and the bridge remains balanced. If the tumor is present in one breast, a higher temperature in that breast unbalances the bridge and the electrochemical cells integrate the temperature difference with respect to time.

  16. [Computerized monitoring for integrated cervical screening. Rationale, methods and indicators of participation].

    PubMed

    Bucchi, L; Pierri, C; Caprara, L; Cortecchia, S; De Lillo, M; Bondi, A

    2003-02-01

    This paper presents a computerised system for the monitoring of integrated cervical screening, i.e. the integration of spontaneous Pap smear practice into organised screening. The general characteristics of the system are described, including background and rationale (integrated cervical screening in European countries, impact of integration on monitoring, decentralised organization of screening and levels of monitoring), general methods (definitions, sections, software description, and setting of application), and indicators of participation (distribution by time interval since previous Pap smear, distribution by screening sector--organised screening centres vs public and private clinical settings--, distribution by time interval between the last two Pap smears, and movement of women between the two screening sectors). Also, the paper reports the results of the application of these indicators in the general database of the Pathology Department of Imola Health District in northern Italy.

  17. Integrating Behavioral Health in Primary Care Using Lean Workflow Analysis: A Case Study.

    PubMed

    van Eeghen, Constance; Littenberg, Benjamin; Holman, Melissa D; Kessler, Rodger

    2016-01-01

    Primary care offices are integrating behavioral health (BH) clinicians into their practices. Implementing such a change is complex, difficult, and time consuming. Lean workflow analysis may be an efficient, effective, and acceptable method for use during integration. The objectives of this study were to observe BH integration into primary care and to measure its impact. This was a prospective, mixed-methods case study in a primary care practice that served 8,426 patients over a 17-month period, with 652 patients referred to BH services. Secondary measures included primary care visits resulting in BH referrals, referrals resulting in scheduled appointments, time from referral to the scheduled appointment, and time from the referral to the first visit. Providers and staff were surveyed on the Lean method. Referrals increased from 23 to 37 per 1000 visits (P < .001). Referrals resulted in more scheduled (60% to 74%; P < .001) and arrived visits (44% to 53%; P = .025). Time from referral to the first scheduled visit decreased (hazard ratio, 1.60; 95% confidence interval, 1.37-1.88) as did time to first arrived visit (hazard ratio, 1.36; 95% confidence interval, 1.14-1.62). Survey responses and comments were positive. This pilot integration of BH showed significant improvements in treatment initiation and other measures. Strengths of Lean analysis included workflow improvement, system perspective, and project success. Further evaluation is indicated. © Copyright 2016 by the American Board of Family Medicine.

  18. Integrable Time-Dependent Quantum Hamiltonians

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Nikolai A.; Yuzbashyan, Emil A.; Chernyak, Vladimir Y.; Patra, Aniket; Sun, Chen

    2018-05-01

    We formulate a set of conditions under which the nonstationary Schrödinger equation with a time-dependent Hamiltonian is exactly solvable analytically. The main requirement is the existence of a non-Abelian gauge field with zero curvature in the space of system parameters. Known solvable multistate Landau-Zener models satisfy these conditions. Our method provides a strategy to incorporate time dependence into various quantum integrable models while maintaining their integrability. We also validate some prior conjectures, including the solution of the driven generalized Tavis-Cummings model.

  19. Space-time domain solutions of the wave equation by a non-singular boundary integral method and Fourier transform.

    PubMed

    Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C

    2017-08-01

    The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
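
    The frequency-to-time assembly step described above amounts to an inverse Fourier transform of the per-frequency Helmholtz solutions. A stdlib round-trip sketch with a naive DFT/IDFT pair (a real implementation would use an FFT library; the pulse samples are illustrative):

```python
import cmath

def dft(x):
    """Naive forward DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT, reassembling the time-domain samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A short incident pulse; in the paper's scheme each frequency bin would be
# scattered through a Helmholtz boundary-integral solve before inversion.
pulse = [0.0, 1.0, 0.5, 0.0, -0.25, 0.0, 0.0, 0.0]
spectrum = dft(pulse)        # one frequency-domain coefficient per bin
recovered = idft(spectrum)   # time-domain signal recovered from the spectrum
```

    Because the time dependence comes from the transform rather than time-stepping, the response to a different incident pulse only requires reweighting the same per-frequency solutions, as the abstract notes.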

  20. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's models, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter method appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.

  1. Free nitrous acid serving as a pretreatment method for alkaline fermentation to enhance short-chain fatty acid production from waste activated sludge.

    PubMed

    Zhao, Jianwei; Wang, Dongbo; Li, Xiaoming; Yang, Qi; Chen, Hongbo; Zhong, Yu; Zeng, Guangming

    2015-07-01

    Alkaline condition (especially pH 10) has been demonstrated to be a promising method for short-chain fatty acid (SCFA) production from waste activated sludge anaerobic fermentation, because it can effectively inhibit the activities of methanogens. However, due to the limit of sludge solubilization rate, long fermentation time is required but SCFA yield is still limited. This paper reports a new pretreatment method for alkaline fermentation, i.e., using free nitrous acid (FNA) to pretreat sludge for 2 d, by which the fermentation time is remarkably shortened and meanwhile the SCFA production is significantly enhanced. Experimental results showed the highest SCFA production of 370.1 mg COD/g VSS (volatile suspended solids) was achieved at 1.54 mg FNA/L pretreatment integration with 2 d of pH 10 fermentation, which was 4.7- and 1.5-fold of that in the blank (uncontrolled) and sole pH 10 systems, respectively. The total time of this integration system was only 4 d, whereas the corresponding time was 15 d in the blank and 8 d in the sole pH 10 systems. The mechanism study showed that compared with pH 10, FNA pretreatment accelerated disruption of both extracellular polymeric substances and cell envelope. After FNA pretreatment, pH 10 treatment (1 d) caused 38.0% higher substrate solubilization than the sole FNA, which indicated that FNA integration with pH 10 could cause positive synergy on sludge solubilization. It was also observed that this integration method benefited hydrolysis and acidification processes. Therefore, more SCFA was produced, but less fermentation time was required in the integrated system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Mixed methods research in music therapy research.

    PubMed

    Bradt, Joke; Burns, Debra S; Creswell, John W

    2013-01-01

    Music therapists have an ethical and professional responsibility to provide the highest quality care possible to their patients. Much of the time, high quality care is guided by evidence-based practice standards that integrate the most current, available research in making decisions. Accordingly, music therapists need research that integrates multiple ways of knowing and forms of evidence. Mixed methods research holds great promise for facilitating such integration. At this time, there have not been any methodological articles published on mixed methods research in music therapy. The purpose of this article is to introduce mixed methods research as an approach to address research questions relevant to music therapy practice. This article describes the core characteristics of mixed methods research, considers paradigmatic issues related to this research approach, articulates major challenges in conducting mixed methods research, illustrates four basic designs, and provides criteria for evaluating the quality of mixed methods articles using examples of mixed methods research from the music therapy literature. Mixed methods research offers unique opportunities for strengthening the evidence base in music therapy. Recommendations are provided to ensure rigorous implementation of this research approach.

  3. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    NASA Astrophysics Data System (ADS)

    Liu, Shukui; Papanikolaou, Apostolos D.

    2011-03-01

    Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are presented. Comparisons have been made between results from the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL, and available experimental data, with good agreement observed in all studied cases.

  4. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress in parallel/vector computers has driven us to develop numerical integrators that exploit their computational power to the full while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta type integrators are known to be relatively inefficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while the vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) can lead to an acceleration factor of order 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) highlights Milankar's so-called "pipelined predictor corrector method", which is expected to yield an acceleration factor of 3-4. We review these directions and discuss future prospects.

  5. Viscous-inviscid interaction method including wake effects for three-dimensional wing-body configurations

    NASA Technical Reports Server (NTRS)

    Streett, C. L.

    1981-01-01

    A viscous-inviscid interaction method has been developed by using a three-dimensional integral boundary-layer method which produces results in good agreement with a finite-difference method in a fraction of the computer time. The integral method is stable and robust and incorporates a model for computation in a small region of streamwise separation. A locally two-dimensional wake model, accounting for thickness and curvature effects, is also included in the interaction procedure. The computation time needed to converge an interacted result is often only slightly greater than that required to converge an inviscid calculation. Results are shown from the interaction method, run at experimental angle of attack, Reynolds number, and Mach number, on a wing-body test case for which viscous effects are large. Agreement with experiment is good; in particular, the present wake model improves prediction of the spanwise lift distribution and lower surface cove pressure.

  6. Thermalization and light cones in a model with weak integrability breaking

    DOE PAGES

    Bertini, Bruno; Essler, Fabian H. L.; Groha, Stefan; ...

    2016-12-09

    Here, we employ equation-of-motion techniques to study the nonequilibrium dynamics in a lattice model of weakly interacting spinless fermions. Our model provides a simple setting for analyzing the effects of weak integrability-breaking perturbations on the time evolution after a quantum quench. We establish the accuracy of the method by comparing results at short and intermediate times to time-dependent density matrix renormalization group computations. For sufficiently weak integrability-breaking interactions we always observe prethermalization plateaus, where local observables relax to nonthermal values at intermediate time scales. At later times a crossover towards thermal behavior sets in. We determine the associated time scale, which depends on the initial state, the band structure of the noninteracting theory, and the strength of the integrability-breaking perturbation. Our method allows us to analyze in some detail the spreading of correlations and in particular the structure of the associated light cones in our model. We find that the interior and exterior of the light cone are separated by an intermediate region, the temporal width of which appears to scale with a universal power law t^(1/3).

  7. Self spectrum window method in Wigner-Ville distribution.

    PubMed

    Liu, Zhongguo; Liu, Changchun; Liu, Boqiang; Lv, Yangsheng; Lei, Yinsheng; Yu, Mengsun

    2005-01-01

    Wigner-Ville distribution (WVD) is an important type of time-frequency analysis in biomedical signal processing. The cross-term interference in WVD has a disadvantageous influence on its application. In this research, the Self Spectrum Window (SSW) method was put forward to suppress the cross-term interference, based on the fact that the cross-terms and auto-WVD terms in the integral kernel function are orthogonal. With the Self Spectrum Window (SSW) algorithm, a real auto-WVD function was used as a template to cross-correlate with the integral kernel function, and the Short Time Fourier Transform (STFT) spectrum of the signal was used as a window function to process the WVD in the time-frequency plane. The SSW method was confirmed by computer simulation with good analysis results. A satisfactory time-frequency distribution was obtained.
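
    For readers unfamiliar with the cross-term problem, a direct discrete pseudo-WVD (no SSW smoothing; an illustrative implementation, not the authors' code) makes it concrete:

```python
import numpy as np

def pseudo_wvd(x):
    """Discrete pseudo-Wigner-Ville distribution of an analytic signal x.

    W[n, k] is the FFT over the lag m of the instantaneous autocorrelation
    x[n+m] * conj(x[n-m]).  No smoothing is applied, so a multi-component
    signal exhibits the oscillating cross-terms that methods like SSW are
    designed to suppress.  Note the factor-of-two frequency scaling
    inherent to the WVD lag kernel: a tone at bin k0 peaks at bin 2*k0.
    """
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)          # lags that stay inside the record
        r = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real
    return W

# single-tone analytic signal: energy should concentrate at one frequency
N = 64
n = np.arange(N)
x = np.exp(2j * np.pi * 8 * n / N)
W = pseudo_wvd(x)
peak_bin = np.argmax(W[N // 2])            # expected at 2 * 8 = bin 16
```

    Adding a second tone would place interference terms midway between the two auto-terms, which is the artifact the SSW method targets.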

  8. A new integrated evaluation method of heavy metals pollution control during melting and sintering of MSWI fly ash.

    PubMed

    Li, Rundong; Li, Yanlong; Yang, Tianhua; Wang, Lei; Wang, Weiyun

    2015-05-30

    Evaluations of technologies for heavy metal control mainly examine the residual and leaching rates of a single heavy metal, so the evaluation methods developed lack coordination and uniqueness and are therefore unsuitable for evaluating hazard control effects. An overall pollution toxicity index (OPTI) was established in this paper; based on this index, an integrated evaluation method of heavy metal pollution control was established. Application of this method to the melting and sintering of fly ash revealed the following results: The integrated control efficiency of the melting process was higher in all instances than that of the sintering process. The lowest integrated control efficiency of melting was 56.2%, and the highest integrated control efficiency of sintering was 46.6%. Using the same technology, higher integrated control efficiencies were all achieved with lower temperatures and shorter times. This study demonstrated the unification and consistency of this method. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. We specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
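
    The binning-plus-factorization pipeline can be sketched on synthetic data. The sketch below uses plain non-negative matrix factorization via multiplicative updates as a stand-in for PMF (real PMF additionally weights each matrix element by its measurement uncertainty); all array names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- build a synthetic "binned chromatogram" data matrix ----------------
# rows: samples, columns: retention-time bins (mass spectra summed per bin)
n_samples, n_bins, n_sources = 40, 30, 2
profiles = rng.random((n_sources, n_bins))        # source chromatogram shapes
strengths = rng.random((n_samples, n_sources))    # source strength per sample
X = strengths @ profiles + 0.01 * rng.random((n_samples, n_bins))

# --- plain NMF via Lee-Seung multiplicative updates (PMF stand-in) ------
W = rng.random((n_samples, n_sources)) + 0.1      # factor strengths
H = rng.random((n_sources, n_bins)) + 0.1         # factor profiles
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

    The recovered rows of H play the role of the factor profiles whose time variations the paper compares against AMS factors.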

  10. Algorithms and software for nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.

    1989-01-01

    The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
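
    The explicit central-difference scheme underlying such codes, and the time-step restriction that motivates mixed-Δt integration, can be sketched for a single degree of freedom (illustrative only, not the WHAMS implementation):

```python
import numpy as np

def central_difference(m, k, u0, v0, dt, n_steps):
    """Explicit central-difference integration of m*u'' + k*u = 0.

    Stable only for dt < 2/omega with omega = sqrt(k/m): the CFL-type
    restriction that motivates mixed (subcycled) time steps, so that a
    few stiff elements do not throttle the step size of the whole mesh.
    """
    u = np.empty(n_steps + 1)
    u[0] = u0
    a0 = -k / m * u0
    u[1] = u0 + dt * v0 + 0.5 * dt**2 * a0      # special start-up step
    for i in range(1, n_steps):
        a = -k / m * u[i]
        u[i + 1] = 2.0 * u[i] - u[i - 1] + dt**2 * a
    return u

# undamped oscillator with omega = 1: exact solution u(t) = cos(t)
dt, n = 0.01, 628                               # roughly one period
u = central_difference(1.0, 1.0, 1.0, 0.0, dt, n)
```

    In a mixed-Δt code the stiff elements would be advanced with a smaller dt and subcycled against the rest of the mesh.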

  11. A 3D image sensor with adaptable charge subtraction scheme for background light suppression

    NASA Astrophysics Data System (ADS)

    Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.

    2013-02-01

    We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high resolution color image and a high quality depth map in each frame. In depth mode, the sensor requires enough integration time for accurate depth acquisition, but saturation will occur under high background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to prevent the pixel from saturating. The subtraction results are accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background-suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
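
    A toy model of the sub-integration idea (illustrative numbers, not the sensor's actual circuit behavior) shows how splitting one exposure into N readouts avoids clipping:

```python
import numpy as np

def capture_with_subtraction(photo_rate, t_int, n_sub, full_well):
    """Split one integration time into n_sub sub-integrations.

    After each sub-integration the accumulated charge is read out and
    reset before the pixel can saturate; the digital sum of the
    sub-frames reconstructs the full-integration signal.
    """
    t_sub = t_int / n_sub
    total = 0.0
    for _ in range(n_sub):
        charge = photo_rate * t_sub          # charge gathered this sub-frame
        if charge > full_well:               # would saturate -> clipped
            charge = full_well
        total += charge                      # accumulate digitally, then reset
    return total

# a bright pixel that would saturate in a single long integration
rate, t_int, well = 5000.0, 1.0, 1000.0
single = capture_with_subtraction(rate, t_int, 1, well)   # clipped at the well
split = capture_with_subtraction(rate, t_int, 8, well)    # recovers the signal
```
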

  12. A finite-volume Eulerian-Lagrangian Localized Adjoint Method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.
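
    The forward-tracking idea and its exact mass conservation can be illustrated with a minimal 1-D advection sketch (periodic domain, pure advection, no dispersion; a cartoon of the principle rather than FVELLAM itself):

```python
import numpy as np

def advect_forward_tracking(c, v, dx, dt):
    """Advect cell-averaged concentrations by tracking cell mass forward.

    Each cell's mass is moved a distance v*dt and deposited onto the
    (at most two) destination cells it overlaps.  Because mass is only
    moved, never created, the scheme conserves total mass exactly --
    the property FVELLAM obtains on its Lagrangian space-time elements.
    """
    n = len(c)
    shift = v * dt / dx                  # displacement in cell widths
    k = int(np.floor(shift))
    frac = shift - k
    out = np.zeros(n)
    for i in range(n):
        j = (i + k) % n                  # periodic domain for simplicity
        out[j] += (1.0 - frac) * c[i]
        out[(j + 1) % n] += frac * c[i]
    return out

c0 = np.zeros(100)
c0[10:20] = 1.0                          # a square pulse of solute
c1 = advect_forward_tracking(c0, v=1.0, dx=1.0, dt=2.5)
```

    Tracking mass forward also sidesteps the inflow-boundary problem of backtracking schemes, as the abstract notes.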

  13. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
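
    The integral-equation idea can be sketched as follows. A bi-exponential pulse satisfies a second-order linear ODE, and integrating that ODE twice yields a relation that is linear in three parameter combinations; sampling it at three points gives a small linear system. The sketch below follows that logic with ad hoc sampling points (the paper optimizes them):

```python
import numpy as np

def estimate_biexp(t, p):
    """Estimate the bi-exponential pulse p(t) = A*(exp(-a*t) - exp(-b*t)).

    p solves p'' + (a+b)*p' + a*b*p = 0 with p(0) = 0.  Integrating twice
    from 0 gives p(t) = c1*t - c2*I1(t) - c3*I2(t) with c1 = p'(0) = A*(b-a),
    c2 = a+b, c3 = a*b: a relation that is linear in (c1, c2, c3).
    Sampling it at three points yields a 3x3 linear system.
    """
    # first and second cumulative integrals of p (trapezoidal rule)
    I1 = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (p[1:] + p[:-1]))))
    I2 = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (I1[1:] + I1[:-1]))))
    idx = [len(t) // 4, len(t) // 2, 3 * len(t) // 4]   # three sample points
    M = np.column_stack([t[idx], -I1[idx], -I2[idx]])
    c1, c2, c3 = np.linalg.solve(M, p[idx])
    a, b = np.sort(np.roots([1.0, -c2, c3]).real)       # roots of x^2 - c2*x + c3
    A = c1 / (b - a)
    return A, a, b

t = np.linspace(0.0, 10.0, 4001)
p = 2.0 * (np.exp(-0.5 * t) - np.exp(-3.0 * t))         # A=2, a=0.5, b=3
A_est, a_est, b_est = estimate_biexp(t, p)
```

    Because only samples and running integrals are needed, such estimators are cheap enough for the real-time pulse processing the abstract targets.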

  14. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  15. Variational path integral molecular dynamics and hybrid Monte Carlo algorithms using a fourth order propagator with applications to molecular systems

    NASA Astrophysics Data System (ADS)

    Kamibayashi, Yuki; Miura, Shinichi

    2016-08-01

    In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth order approximation of a density operator. To reveal the parameter dependence of various physical quantities, we analytically solve one dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth order approximation for the oscillators. Then, we apply our methods to realistic systems like a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many body interaction.
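
    As background, the basic HMC loop (leapfrog proposal plus Metropolis correction) can be sketched for a single coordinate; this is a generic illustration, not the paper's two-level variational path integral scheme:

```python
import numpy as np

def hmc(grad_U, U, q0, n_samples, eps=0.2, n_leap=10, seed=1):
    """Bare-bones hybrid Monte Carlo for a single coordinate.

    Momentum is resampled each step, a leapfrog trajectory proposes a
    new state, and a Metropolis test on the total energy corrects the
    discretization error of the trajectory.
    """
    rng = np.random.default_rng(seed)
    q = float(q0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        p = rng.normal()
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_U(q_new)       # initial half kick
        for _ in range(n_leap):
            q_new += eps * p_new                 # drift
            p_new -= eps * grad_U(q_new)         # kick
        p_new += 0.5 * eps * grad_U(q_new)       # trim the last kick to a half
        dH = U(q_new) + 0.5 * p_new**2 - U(q) - 0.5 * p**2
        if rng.random() < np.exp(-dH):           # Metropolis accept/reject
            q = q_new
        samples[i] = q
    return samples

# harmonic potential U(q) = q^2/2: the stationary distribution is N(0, 1)
s = hmc(lambda q: q, lambda q: 0.5 * q**2, q0=0.0, n_samples=5000)
```

    The paper's two-level trick replaces the expensive part of grad_U with a cheap surrogate inside the trajectory, correcting with the full potential only at the accept/reject step.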

  16. Method and apparatus of high dynamic range image sensor with individual pixel reset

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)

    2001-01-01

    A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
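
    A toy numerical model of the idea (an illustrative array model, not the patented circuit): a late reset shortens a pixel's integration time, so bright pixels can avoid saturation while dim pixels integrate for the full frame.

```python
import numpy as np

def expose(rates, t_int, reset_times, full_well):
    """Per-pixel exposure with individual reset.

    rates: photocurrent per pixel; reset_times: instant at which each
    pixel was last reset (0 = start of frame).  Charge accumulates from
    the reset instant to the end of the frame and clips at full_well.
    """
    signal = rates * (t_int - reset_times)   # charge since each pixel's reset
    return np.minimum(signal, full_well)

rates = np.array([[100.0, 8000.0],
                  [120.0, 9000.0]])
# the bright column gets a late reset -> short integration, no saturation
reset_times = np.array([[0.0, 0.9],
                        [0.0, 0.9]])
img = expose(rates, t_int=1.0, reset_times=reset_times, full_well=1000.0)
# without individual reset, the bright column simply saturates
img_sat = expose(rates, t_int=1.0, reset_times=np.zeros((2, 2)),
                 full_well=1000.0)
```

    Scaling each pixel's value by its known integration time then recovers a wide-dynamic-range radiance estimate.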

  17. A modified precise integration method for transient dynamic analysis in structural systems with multiple damping models

    NASA Astrophysics Data System (ADS)

    Ding, Zhe; Li, Li; Hu, Yujin

    2018-01-01

    Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. These damped systems therefore often contain multiple damping models, which leads to great difficulties in analysis. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods with rather good efficiency, and that the stability condition is easy to satisfy in practice.
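
    The kernel of any precise-integration scheme is computing the state-transition matrix exp(H·Δt) to near machine precision by 2^N scaling and squaring, keeping the small increment exp(H·τ) − I separate to avoid round-off. A minimal sketch of that kernel (generic, not the authors' modified method with Gauss-Legendre quadrature):

```python
import numpy as np

def precise_exp(H, dt, N=20):
    """Precise integration of exp(H*dt) via 2^N scaling and squaring.

    The interval is split into 2^N tiny steps; exp(H*tau) - I is kept as
    a separate increment Ta so that adding it to the identity does not
    destroy its low-order digits, then doubled back up N times.
    """
    tau = dt / 2**N
    Htau = H * tau
    I = np.eye(len(H))
    # Taylor expansion of exp(H*tau) - I, truncated at fourth order
    Ta = Htau @ (I + Htau @ (I / 2.0 + Htau @ (I / 6.0 + Htau / 24.0)))
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta          # (I + Ta)^2 = I + (2*Ta + Ta@Ta)
    return I + Ta

# undamped oscillator x'' + x = 0 in first-order form
H = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = precise_exp(H, np.pi / 2)            # quarter-period rotation matrix
```

    For the pure rotation generator above, the quarter-period transition matrix is the 90-degree rotation, recovered here to near machine precision.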

  18. Integration of RNA-Seq and RPPA data for survival time prediction in cancer patients.

    PubMed

    Isik, Zerrin; Ercan, Muserref Ece

    2017-10-01

    Integration of several types of patient data in a computational framework can accelerate the identification of more reliable biomarkers, especially for prognostic purposes. This study aims to identify biomarkers that can successfully predict the potential survival time of a cancer patient by integrating the transcriptomic (RNA-Seq), proteomic (RPPA), and protein-protein interaction (PPI) data. The proposed method -RPBioNet- employs a random walk-based algorithm that works on a PPI network to identify a limited number of protein biomarkers. Later, the method uses gene expression measurements of the selected biomarkers to train a classifier for the survival time prediction of patients. RPBioNet was applied to classify kidney renal clear cell carcinoma (KIRC), glioblastoma multiforme (GBM), and lung squamous cell carcinoma (LUSC) patients based on their survival time classes (long- or short-term). The RPBioNet method correctly identified the survival time classes of patients with between 66% and 78% average accuracy for three data sets. RPBioNet operates with only 20 to 50 biomarkers and can achieve on average 6% higher accuracy compared to the closest alternative method, which uses only RNA-Seq data in the biomarker selection. Further analysis of the most predictive biomarkers highlighted genes that are common for both cancer types, as they may be driver proteins responsible for cancer progression. The novelty of this study is the integration of a PPI network with mRNA and protein expression data to identify more accurate prognostic biomarkers that can be used for clinical purposes in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.
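
    Random-walk-based biomarker selection on a PPI network typically means random walk with restart; a generic sketch on a toy adjacency matrix (not the RPBioNet code) is:

```python
import numpy as np

def random_walk_with_restart(A, seeds, restart=0.3, tol=1e-10):
    """Random walk with restart on an undirected adjacency matrix A.

    Iterates p <- (1 - r) * W @ p + r * p0 until convergence, where W is
    the column-normalized adjacency and p0 the seed distribution; the
    stationary p ranks nodes (proteins) by proximity to the seeds.
    """
    W = A / A.sum(axis=0, keepdims=True)
    p0 = np.zeros(len(A))
    p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_new = (1.0 - restart) * W @ p + restart * p0
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# a 5-node toy network: node 0 is seeded, node 4 is far from the seed
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
p = random_walk_with_restart(A, seeds=[0])
```

    Thresholding or top-k selection on the stationary scores then yields the small biomarker panel fed to the downstream classifier.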

  19. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
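
    The kind of saving available from exact integration can be seen in a toy case: the product of two Gaussian likelihood terms integrates in closed form, so a quadrature loop can be replaced by a single expression (a generic illustration of the idea, not the BISL target function):

```python
import numpy as np

def gauss(x, mu, sig):
    """Normal density N(x; mu, sig^2)."""
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

# brute-force quadrature of the product of two Gaussian likelihoods
x = np.linspace(-40.0, 40.0, 160001)
f = gauss(x, 1.0, 2.0) * gauss(x, 4.0, 3.0)
numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# closed form: the product integrates to N(mu1 - mu2; 0, s1^2 + s2^2)
s2 = 2.0**2 + 3.0**2
closed = float(np.exp(-0.5 * (1.0 - 4.0) ** 2 / s2) / np.sqrt(2.0 * np.pi * s2))
```

    Replacing the 160,000-point quadrature with the one-line closed form is the same trade the paper makes at every hypothetical source location on the search grid.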

  20. Integral Equations in Computational Electromagnetics: Formulations, Properties and Isogeometric Analysis

    NASA Astrophysics Data System (ADS)

    Lovell, Amy Elizabeth

    Computational electromagnetics (CEM) provides numerical methods to simulate electromagnetic waves interacting with their environment. Boundary integral equation (BIE) based methods, which solve Maxwell's equations in homogeneous or piecewise homogeneous media, are both efficient and accurate, especially for scattering and radiation problems. Development and analysis of electromagnetic BIEs has been a very active topic in CEM research. Indeed, there are still many open problems that need to be addressed or further studied. A short and important list includes (1) closed-form or quasi-analytical solutions to time-domain integral equations, (2) catastrophic cancellations at low frequencies, (3) ill-conditioning due to high mesh density, multi-scale discretization, and growing electrical size, and (4) lack of flexibility due to re-meshing when increasing numbers of forward numerical simulations are involved in the electromagnetic design process. This dissertation addresses several of these aspects of boundary integral equations in computational electromagnetics. The first contribution of the dissertation is to construct quasi-analytical solutions to time-dependent boundary integral equations using a direct approach. Direct inverse Fourier transform of the time-harmonic solutions is not stable due to the non-existence of the inverse Fourier transform of spherical Hankel functions. Using new addition theorems for the time-domain Green's function and dyadic Green's functions, time-domain integral equations governing transient scattering problems of spherical objects are solved directly and stably for the first time. Additionally, the direct time-dependent solutions, together with the newly proposed time-domain dyadic Green's functions, can enrich the time-domain spherical multipole theory. The second contribution is to create a novel method of moments (MoM) framework to solve electromagnetic boundary integral equations on subdivision surfaces.
The aim is to avoid the meshing and re-meshing stages to accelerate the design process when the geometry needs to be updated. Two schemes to construct basis functions on the subdivision surface have been explored. One is to use the div-conforming basis function, and the other is to create a rigorous iso-geometric approach based on the subdivision basis function with better smoothness properties. This new framework provides better accuracy, more stability and high flexibility. The third contribution is a new stable integral equation formulation that avoids catastrophic cancellations due to low-frequency breakdown or dense-mesh breakdown. Many of the conventional integral equations and their associated post-processing operations suffer from numerical catastrophic cancellations, which can lead to ill-conditioning of the linear systems or serious accuracy problems; examples include low-frequency breakdown and dense-mesh breakdown. Another instability may come from nontrivial null spaces of the integral operators involved, which might be related to spurious resonance or topology breakdown. This dissertation presents several sets of new boundary integral equations and studies their analytical properties. The first proposed formulation leads to scalar boundary integral equations in which only scalar unknowns are involved. Besides the requirements of gaining more stability and better conditioning in the resulting linear systems, multi-physics simulation is another driving force for new formulations. Formulations based on scalar and vector potentials (rather than the electromagnetic fields) have been studied for this purpose. These new contributions address different stages of boundary integral equations in an almost independent manner; e.g., the isogeometric analysis framework can be used to solve different boundary integral equations, and the time-dependent solutions to integral equations from different formulations can be achieved through the same methodology proposed.

  1. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television avoiding psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method of calculating depth from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD, and a further improvement in its precision, is proposed and verified.

  2. Event specific qualitative and quantitative polymerase chain reaction detection of genetically modified MON863 maize based on the 5'-transgene integration sequence.

    PubMed

    Yang, Litao; Xu, Songci; Pan, Aihu; Yin, Changsong; Zhang, Kewei; Wang, Zhenying; Zhou, Zhigang; Zhang, Dabing

    2005-11-30

    Because of the genetically modified organism (GMO) labeling policies issued in many countries and areas, polymerase chain reaction (PCR) methods, such as screening, gene-specific, construct-specific, and event-specific PCR detection methods, were developed for the enforcement of these policies and have become a mainstay of GMO detection. The event-specific PCR detection method is the primary trend in GMO detection because of its high specificity, which is based on the flanking sequence of the exogenous integrant. The genetically modified maize MON863 contains a Cry3Bb1 coding sequence that produces a protein with enhanced insecticidal activity against the coleopteran pest corn rootworm. In this study, the 5'-integration junction sequence between the host plant DNA and the integrated gene construct of the genetically modified maize MON863 was revealed by means of thermal asymmetric interlaced PCR, and specific PCR primers and a TaqMan probe were designed based upon the revealed 5'-integration junction sequence; conventional qualitative PCR and quantitative TaqMan real-time PCR detection methods employing these primers and probes were successfully developed. In the conventional qualitative PCR assay, the limit of detection (LOD) was 0.1% for MON863 in 100 ng of maize genomic DNA per reaction. In the quantitative TaqMan real-time PCR assay, the LOD and the limit of quantification were eight and 80 haploid genome copies, respectively. In addition, three mixed maize samples with known MON863 contents were analyzed using the established real-time PCR systems, and the results indicated that the established event-specific real-time PCR detection systems were reliable, sensitive, and accurate.

  3. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.

  4. Integrated Detection and Prediction of Influenza Activity for Real-Time Surveillance: Algorithm Design

    PubMed Central

    2017-01-01

    Background Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic “big data” from diagnostic and prediagnostic sources in health care and public health settings permits advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. Objective The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. Methods An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied on data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as syndromic data source. Results The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). 
For detection modeling, exponential regression was used based on the assumption that the beginning of a winter influenza season has an exponential growth of infected individuals. For prediction modeling, linear regression was applied on 7-day periods, one at a time, in order to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the factual peak and the predicted peak intensity was estimated to be 26% lower than the factual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Conclusions Our detection and prediction method is one of the first integrated methods specifically designed for local application on influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the method are justified. PMID:28619700
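
    The detection module's exponential-regression idea can be sketched on synthetic daily counts (the window length and growth threshold here are illustrative, not the published settings):

```python
import numpy as np

def detect_exponential_onset(counts, window=7, growth_threshold=0.05):
    """Flag season onset when daily case counts fit exponential growth.

    Fits log(counts) ~ a + b*t over a sliding window by least squares
    (the exponential-regression idea of the abstract); detection fires
    when the estimated growth rate b exceeds a threshold.
    """
    for start in range(len(counts) - window + 1):
        y = np.log(np.asarray(counts[start:start + window], dtype=float) + 1.0)
        t = np.arange(window)
        b = np.polyfit(t, y, 1)[0]          # slope = estimated growth rate
        if b > growth_threshold:
            return start                    # first day of the detected window
    return -1                               # no onset detected

flat = [5, 6, 5, 4, 6, 5, 5, 4, 6, 5, 5, 6]
growing = [5, 6, 5, 4, 6, 7, 9, 12, 16, 22, 30, 41]
onset_flat = detect_exponential_onset(flat)
onset_grow = detect_exponential_onset(growing)
```

    In the paper's pipeline this alert would then hand off to the separate prediction module for peak timing and intensity.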

  5. Detection of anatomical changes in lung cancer patients with 2D time-integrated, 2D time-resolved and 3D time-integrated portal dosimetry: a simulation study

    NASA Astrophysics Data System (ADS)

    Wolfs, Cecile J. A.; Brás, Mariana G.; Schyns, Lotte E. J. R.; Nijsten, Sebastiaan M. J. J. G.; van Elmpt, Wouter; Scheib, Stefan G.; Baltes, Christof; Podesta, Mark; Verhaegen, Frank

    2017-08-01

    The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D 95% (ΔD 95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD 95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD 95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that there is not one combination of portal dosimetry method, gamma criteria and gamma fail rate threshold that can detect all simulated anatomical changes. This work shows that it will be more beneficial to relate portal dosimetry and DVH analysis on the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate should be used to optimally detect certain types of anatomical changes.
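
    The gamma analysis at the center of this comparison combines a dose-difference and a distance-to-agreement criterion. A simplified 1-D sketch (real portal dosimetry gamma works on 2-D or 3-D dose grids, with criteria like 3%/3 mm):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, dx, dose_tol, dist_tol):
    """1-D global gamma analysis.

    gamma(i) = min over j of sqrt((dd/dose_tol)^2 + (dr/dist_tol)^2),
    with dd the dose difference and dr the spatial offset between points
    i (reference) and j (evaluated); a point passes when gamma <= 1.
    """
    n = len(ref)
    x = np.arange(n) * dx
    gammas = np.empty(n)
    for i in range(n):
        dd = (eval_ - ref[i]) / dose_tol
        dr = (x - x[i]) / dist_tol
        gammas[i] = np.sqrt(dd**2 + dr**2).min()
    return np.mean(gammas <= 1.0)

# planned profile vs. a 2 mm shifted delivery, 3%/3 mm criteria
x = np.linspace(0.0, 10.0, 101)                      # positions in cm
ref = np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)
shifted = np.exp(-0.5 * ((x - 5.2) / 1.5) ** 2)
rate = gamma_pass_rate(ref, shifted, dx=0.1, dose_tol=0.03, dist_tol=0.3)
```

    A 2 mm shift passes everywhere under a 3 mm distance-to-agreement, while a shift larger than the criteria drives the pass rate below 100%, which is the kind of fail-rate signal the paper correlates with ΔD 95%.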

  6. Detection of anatomical changes in lung cancer patients with 2D time-integrated, 2D time-resolved and 3D time-integrated portal dosimetry: a simulation study.

    PubMed

    Wolfs, Cecile J A; Brás, Mariana G; Schyns, Lotte E J R; Nijsten, Sebastiaan M J J G; van Elmpt, Wouter; Scheib, Stefan G; Baltes, Christof; Podesta, Mark; Verhaegen, Frank

    2017-07-12

    The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D95% (ΔD95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that there is not one combination of portal dosimetry method, gamma criteria and gamma fail rate threshold that can detect all simulated anatomical changes. This work shows that it will be more beneficial to relate portal dosimetry and DVH analysis at the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate should be used to optimally detect certain types of anatomical changes.

  7. Long-time uncertainty propagation using generalized polynomial chaos and flow map composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.

    2014-10-01

    We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.
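
    A toy version of the flow-map-composition idea can be sketched for a 1D ODE. The ODE, polynomial degree, and step sizes below are illustrative assumptions, and the composed long-time map is evaluated pointwise rather than expanded symbolically.

```python
import numpy as np

def short_time_flow_map(f, t_step, degree=5, domain=(-1.0, 1.0), n_sub=100):
    """Approximate the flow map x(0) -> x(t_step) of dx/dt = f(x) by a
    polynomial: integrate sampled initial conditions with RK4, then fit."""
    x0 = np.linspace(*domain, 50)
    x, h = x0.copy(), t_step / n_sub
    for _ in range(n_sub):
        k1 = f(x); k2 = f(x + 0.5*h*k1); k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
        x = x + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)
    return np.polynomial.Polynomial.fit(x0, x, degree)

f = lambda x: -x**3                      # nonlinear, contracting model ODE
phi = short_time_flow_map(f, t_step=0.1)

# long-time flow map = 50-fold composition of the short-time map (t = 5.0)
x = np.linspace(-0.8, 0.8, 9)
for _ in range(50):
    x = phi(x)
```

For this ODE the exact solution is x(t) = x0/sqrt(1 + 2 x0^2 t), so the composed low-degree map can be checked directly against it.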

  8. Integral reinforcement learning for continuous-time input-affine nonlinear systems with simultaneous invariant explorations.

    PubMed

    Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2015-05-01

    This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by the integral policy iteration (I-PI) or invariantly admissible PI (IA-PI) methods. Based on these, three online I-RL algorithms named explorized I-PI and integral Q-learning I and II are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and in relation to these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented in this paper. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.

  9. A numerical scheme to solve unstable boundary value problems

    NASA Technical Reports Server (NTRS)

    Kalnay Derivas, E.

    1975-01-01

    A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backward in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.
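
    The forward "false transient" baseline that this scheme generalizes can be sketched on a stable model problem: march an artificial-time diffusion equation to steady state. The grid size and tolerances are illustrative assumptions, and the paper's backward-in-time sweep for unstable solutions is not reproduced here.

```python
import numpy as np

def false_transient_poisson(f, n=64, tol=1e-8, max_iter=200000):
    """Solve u'' = f on (0,1) with u(0)=u(1)=0 by marching the
    artificial-time system u_t = u_xx - f to steady state (explicit Euler)."""
    h = 1.0 / (n + 1)
    dt = 0.45 * h * h                       # explicit diffusion stability limit
    u = np.zeros(n)
    fv = f(np.linspace(h, 1 - h, n))
    for _ in range(max_iter):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap[0] = u[1] - 2 * u[0]            # Dirichlet ghost values are zero
        lap[-1] = u[-2] - 2 * u[-1]
        du = dt * (lap / h**2 - fv)
        u += du
        if np.abs(du).max() < tol:          # steady state reached
            break
    return u
```

With f(x) = -pi^2 sin(pi x) the steady state is sin(pi x), which the iteration reaches to within the O(h^2) discretization error.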

  10. Family Assessment/Treatment/Evaluation Methods Integrated for Helping Teen Suicide Attempters/Families in Short Term Psychiatric Hospitalization Programs.

    ERIC Educational Resources Information Center

    Shepard, Suzanne

    The assessment process can be integrated with treatment and evaluation for helping teenage suicide attempters and families in short term psychiatric hospitalization programs. The method is an extremely efficient way for the therapist to work within a given time constraint. During family assessment sufficient information can be gathered to…

  11. Simple photometer circuits using modular electronic components

    NASA Technical Reports Server (NTRS)

    Wampler, J. E.

    1975-01-01

    Operational and peak holding amplifiers are discussed as useful circuits for bioluminescence assays. Circuit diagrams are provided. While analog methods can give a good integration on short time scales, digital methods were found best for long term integration in bioluminescence assays. Power supplies, a general photometer circuit with ratio capability, and variations in the basic photometer design are also considered.

  12. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN

    2008-07-29

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external of the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  13. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN

    2008-07-08

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external of the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  14. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A.; Crumley, Paul G.; Dombrowa, Marc B.; Douskey, Steven M.; Haring, Rudolf A.; Oakland, Steven F.; Ouellette, Michael R.; Strissel, Scott A.

    2007-12-18

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external of the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  15. 10 CFR Appendix Y to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Battery Chargers

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... power (i.e., watts) consumed as the time series integral of the power consumed over a 1-hour test period...) consumed as the time series integral of the power consumed over a 1-hour test period, divided by the period...-maintenance mode and standby mode over time periods defined in the test procedure. b. Active mode is the...
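
    The measurement this test method describes, energy as the time integral of power over a test period, is a one-liner with sampled data. The sampling interval and power trace below are illustrative assumptions, not values from the test procedure.

```python
import numpy as np

def energy_wh(power_w, dt_s):
    """Trapezoid-rule integral of sampled power (W) over time, in watt-hours."""
    p = np.asarray(power_w, dtype=float)
    joules = 0.5 * (p[1:] + p[:-1]).sum() * dt_s   # sum of trapezoid areas
    return joules / 3600.0

# a charger drawing a constant 60 W, sampled once a minute for one hour
print(energy_wh(np.full(61, 60.0), dt_s=60.0))     # 60.0 Wh
```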

  16. 10 CFR Appendix Y to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Battery Chargers

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... power (i.e., watts) consumed as the time series integral of the power consumed over a 1-hour test period...) consumed as the time series integral of the power consumed over a 1-hour test period, divided by the period...-maintenance mode and standby mode over time periods defined in the test procedure. b. Active mode is the...

  17. SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)

    EPA Science Inventory

    Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...
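
    The Crank-Nicolson time integration named here can be sketched on a scalar model problem: the scheme averages the right-hand side at the old and new time levels, which gives second-order accuracy and unconditional stability for diffusion-type terms. The decay rate and step size below are illustrative.

```python
import numpy as np

def crank_nicolson_decay(lam, dt, n_steps, u0=1.0):
    """Crank-Nicolson for du/dt = -lam*u:
    (u_new - u_old)/dt = -lam*(u_new + u_old)/2, solved for u_new."""
    factor = (1 - 0.5 * lam * dt) / (1 + 0.5 * lam * dt)
    return u0 * factor ** n_steps

# integrate du/dt = -2u to t = 1; compare with exp(-2)
approx = crank_nicolson_decay(2.0, dt=0.01, n_steps=100)
```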

  18. Method to manage integration error in the Green-Kubo method.

    PubMed

    Oliveira, Laura de Sousa; Greaney, P Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
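
    The random-walk character of the integrated noise is easy to see in a synthetic example. The AR(1) "flux" below is an illustrative stand-in for a real heat-flux series, chosen because its autocorrelation and Green-Kubo-style integral are known exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "flux": AR(1) process with autocorrelation var * a^k, where
# var = 1/(1-a^2); the exact integral (dt = 1, trapezoid) is var/(1-a) - var/2 = 50
a, n = 0.9, 200000
eps = rng.standard_normal(n)
flux = np.empty(n)
flux[0] = 0.0
for i in range(1, n):
    flux[i] = a * flux[i-1] + eps[i]

def running_gk_integral(x, max_lag):
    """Autocorrelation up to max_lag and its running integral. A Green-Kubo
    estimate reads the transport coefficient off the plateau of the latter,
    while the tail keeps accumulating random-walk noise."""
    x = x - x.mean()
    acf = np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) - k)
                    for k in range(max_lag)])
    return acf, np.cumsum(acf) - 0.5 * acf[0]   # trapezoid-like, dt = 1

acf, running = running_gk_integral(flux, 2000)
```

Near lag 100 the running integral has plateaued close to the exact value 50; far beyond that it wanders, which is precisely the truncation-versus-noise trade-off the abstract describes.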

  19. Method to manage integration error in the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Oliveira, Laura de Sousa; Greaney, P. Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.

  20. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…

  1. A transition from using multi‐step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies

    PubMed Central

    Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-01-01

    Abstract Introduction The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single‐center experience of transitioning from the use of multi‐step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. Materials and Methods The total number of ECP procedures performed 2011–2015 was derived from department records. The time taken to complete a single ECP treatment using a multi‐step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time‐driven activity‐based costing methods were applied to provide a cost comparison. Results The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi‐step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per‐session cost of performing ECP using the multi‐step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). Conclusions For hospitals considering a transition from multi‐step procedures to fully integrated methods for ECP where cost may be a barrier, time‐driven activity‐based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. PMID:28419561
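
    Time-driven activity-based costing reduces to a simple rule: cost each session as materials plus staff time multiplied by a capacity cost rate. The figures below are hypothetical, chosen only to mirror the direction of the article's result (the longer multi-step session costs more overall), not its actual cost breakdown.

```python
def per_session_cost(material_cost_eur, minutes, staff_rate_eur_per_min):
    """Time-driven activity-based costing: materials + time x capacity cost rate."""
    return material_cost_eur + minutes * staff_rate_eur_per_min

# hypothetical inputs for illustration only
multi_step = per_session_cost(900.0, 270, 2.0)    # 270-minute multi-step session
integrated = per_session_cost(1100.0, 120, 2.0)   # 120-minute integrated session
assert multi_step > integrated
```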

  2. Inviscid, nonadiabatic flow fields over blunt, sonic corner bodies for outer planet entry conditions by a method of integral relations

    NASA Technical Reports Server (NTRS)

    Gnoffo, P. A.

    1978-01-01

    The ability of a method of integral relations to calculate inviscid, zero-degree-angle-of-attack radiative heating distributions over blunt, sonic corner bodies for representative outer planet entry conditions is investigated. Comparisons have been made with a more detailed numerical method, a time asymptotic technique, using the same equilibrium chemistry and radiation transport subroutines. An effort to produce a second-order approximation (two-strip) method of integral relations code to aid in this investigation is also described, and a modified two-strip routine is presented. Results indicate that the one-strip method of integral relations cannot be used to obtain accurate estimates of the radiative heating distribution because of its inability to resolve thermal gradients near the wall. The two-strip method can sometimes be used to improve these estimates; however, the two-strip method has only a small range of conditions over which it will yield significant improvement over the one-strip method.

  3. Integrated Detection and Prediction of Influenza Activity for Real-Time Surveillance: Algorithm Design.

    PubMed

    Spreco, Armin; Eriksson, Olle; Dahlström, Örjan; Cowling, Benjamin John; Timpka, Toomas

    2017-06-15

    Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic "big data" from diagnostic and prediagnostic sources in health care and public health settings permits the advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied to data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as the syndromic data source. The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). 
For detection modeling, exponential regression was used based on the assumption that the beginning of a winter influenza season has an exponential growth of infected individuals. For prediction modeling, linear regression was applied to 7-day periods, one at a time, to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the actual peak and the predicted peak intensity was estimated to be 26% lower than the actual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Our detection and prediction method is one of the first integrated methods specifically designed for local application on influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the methods are justified. ©Armin Spreco, Olle Eriksson, Örjan Dahlström, Benjamin John Cowling, Toomas Timpka. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.06.2017.
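
    The detection module's exponential-regression idea can be sketched as a log-linear fit over a trailing window, with an alert when the fitted growth rate crosses a threshold. The window length, threshold, and case counts below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def exponential_growth_alert(counts, window=7, rate_threshold=0.05):
    """Alert on the first day a log-linear fit over a trailing window shows
    exponential growth: fit log(count + 1) ~ a + b*t, alert when b > threshold."""
    for end in range(window, len(counts) + 1):
        y = np.log(np.asarray(counts[end - window:end], dtype=float) + 1.0)
        b = np.polyfit(np.arange(window), y, 1)[0]   # fitted growth rate
        if b > rate_threshold:
            return end - 1                           # index of the alert day
    return None
```

A flat baseline of counts never triggers the alert, while an exponentially growing tail does, which is the behavior the detection module needs at season onset.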

  4. Solution of the advection-dispersion equation in two dimensions by a finite-volume Eulerian-Lagrangian localized adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1998-01-01

    We extend the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) for solution of the advection-dispersion equation to two dimensions. The method can conserve mass globally and is not limited by restrictions on the size of the grid Peclet or Courant number. Therefore, it is well suited for solution of advection-dominated ground-water solute transport problems. In test problem comparisons with standard finite differences, FVELLAM is able to attain accurate solutions on much coarser space and time grids. On fine grids, the accuracy of the two methods is comparable. A critical aspect of FVELLAM (and all other ELLAMs) is evaluation of the mass storage integral from the preceding time level. In FVELLAM this may be accomplished with either a forward or backtracking approach. The forward tracking approach conserves mass globally and is the preferred approach. The backtracking approach is less computationally intensive, but not globally mass conservative. Boundary terms are systematically represented as integrals in space and time which are evaluated by a common integration scheme in conjunction with forward tracking through time. Unlike the one-dimensional case, local mass conservation cannot be guaranteed, so slight oscillations in concentration can develop, particularly in the vicinity of inflow or outflow boundaries. Published by Elsevier Science Ltd.
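
    The backtracking idea can be illustrated with the simplest semi-Lagrangian advection step: trace each node back along the characteristic and interpolate. This is a sketch of characteristic backtracking in general, not of FVELLAM's mass-integral formulation, and, as the abstract notes for backtracking, point-value interpolation like this is not mass conservative. The grid and velocity are illustrative.

```python
import numpy as np

def semi_lagrangian_step(c, u, dx, dt):
    """One backtracking step for pure advection c_t + u*c_x = 0 on a periodic
    grid: find each node's departure point and linearly interpolate there.
    The Courant number u*dt/dx may exceed 1."""
    n = len(c)
    x = np.arange(n) * dx
    x_back = (x - u * dt) % (n * dx)              # departure points
    j = np.floor(x_back / dx).astype(int)
    w = x_back / dx - j                           # interpolation weight
    return (1 - w) * c[j % n] + w * c[(j + 1) % n]
```

With u*dt an exact multiple of dx the step reproduces a pure shift of the profile, a convenient correctness check.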

  5. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
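
    The iterative scheme in question, implicit Newmark with a fixed number of Newton iterations per step, can be sketched for a single degree of freedom. The model problem and parameter values below are illustrative, not those of the test structures.

```python
import numpy as np

def newmark_fixed_iters(m, c, fs, dfs, p, dt, n_steps, n_iter=3,
                        beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) with a fixed number of Newton
    iterations per step, for m*a + c*v + fs(u) = p(t), starting from rest.
    fs/dfs: restoring force and its tangent stiffness."""
    u = v = 0.0
    a = (p(0.0) - fs(0.0)) / m
    hist = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        u_new = u + dt*v + dt*dt*(0.5 - beta)*a      # displacement predictor
        for _ in range(n_iter):                      # fixed iteration count
            a_new = (u_new - (u + dt*v + dt*dt*(0.5 - beta)*a)) / (beta*dt*dt)
            v_new = v + dt*((1 - gamma)*a + gamma*a_new)
            r = p(t) - m*a_new - c*v_new - fs(u_new)           # residual
            k_eff = m/(beta*dt*dt) + c*gamma/(beta*dt) + dfs(u_new)
            u_new += r / k_eff                                 # Newton update
        a_new = (u_new - (u + dt*v + dt*dt*(0.5 - beta)*a)) / (beta*dt*dt)
        v = v + dt*((1 - gamma)*a + gamma*a_new)
        u, a = u_new, a_new
        hist.append(u)
    return np.array(hist)
```

For a linear spring the Newton loop converges in one iteration, and the undamped step-load response matches the exact solution u(t) = 1 - cos(t) for m = k = p = 1.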

  6. Adaptive methods for nonlinear structural dynamics and crashworthiness analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1993-01-01

    The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.
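
    Subcycling can be sketched with a two-variable stiff/nonstiff system: the fast equation takes several small explicit substeps per global step while the slow variable is held frozen, so the global step is not limited by the fast stability bound. The system, step sizes, and frozen-coupling interface treatment below are the simplest illustrative choices, not a production crashworthiness scheme.

```python
def subcycled_euler(f_fast, f_slow, y_fast, y_slow, dt, n_sub, n_steps):
    """Mixed time integration: per global step dt, the slow variable takes one
    explicit Euler step and the fast variable takes n_sub substeps of size
    dt/n_sub with the slow (coupling) value frozen."""
    for _ in range(n_steps):
        ys_frozen = y_slow
        y_slow = y_slow + dt * f_slow(y_fast, y_slow)
        h = dt / n_sub
        for _ in range(n_sub):
            y_fast = y_fast + h * f_fast(y_fast, ys_frozen)
    return y_fast, y_slow

# the fast eigenvalue (~ -50) would force dt < 0.04 monolithically; here the
# global step is 0.05 and only the fast variable is substepped
yf, ys = subcycled_euler(lambda yf, ys: -50.0 * (yf - ys),
                         lambda yf, ys: -0.5 * (ys - yf),
                         1.0, 0.0, dt=0.05, n_sub=5, n_steps=40)
```

After the fast transient the two variables relax to a common equilibrium, which the loose assertions below check.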

  7. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. 
For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
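
    The integrating-factor idea for stiff conductance dynamics can be sketched for the membrane equation dv/dt = -g(t)(v - E): multiplying by exp(∫g dt) gives an update that is exact for constant g and remains stable for arbitrarily large g·dt. The trapezoid treatment of the conductance integral is an illustrative choice, not the paper's full scheme.

```python
import numpy as np

def integrating_factor_step(v, g0, g1, E, dt):
    """One step of dv/dt = -g(t)*(v - E) using the integrating factor
    exp(int g dt), with the integral approximated by the trapezoid rule.
    Stable even in high-conductance (stiff) regimes where explicit Euler
    would require g*dt < 2."""
    G = 0.5 * (g0 + g1) * dt          # trapezoid estimate of int g dt
    return E + (v - E) * np.exp(-G)
```

For constant g the step is exact; for g·dt = 1000 the voltage simply relaxes to the reversal potential instead of blowing up as an explicit step would.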

  8. An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.

    PubMed

    Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui

    2016-01-23

    As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is easily influenced with frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, PLLs' tracking performance can be improved. However, for harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has some limitations to improve the tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time has been proposed. Through theoretical analysis, the relation between INS-aided PLL phase tracking error and carrier to noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time has been built. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and integrated GNSS/INS navigation performance. For harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50% and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods.
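
    The trade-off the paper formalizes can be illustrated with textbook error-budget formulas (thermal jitter plus one third of the third-order-loop dynamic-stress error). These standard approximations, and all numbers below, are stand-ins for the paper's own derivation, used only to show how an optimal bandwidth/integration-time pair emerges.

```python
import numpy as np

def pll_error_deg(cn0_dbhz, bn_hz, T_s, jerk_dps3=1000.0):
    """Rule-of-thumb PLL tracking error (degrees): thermal jitter plus 1/3 of
    the dynamic-stress error of a third-order loop (textbook approximations)."""
    cn0 = 10.0 ** (cn0_dbhz / 10.0)                 # dB-Hz -> ratio-Hz
    thermal = (360.0 / (2 * np.pi)) * np.sqrt(
        (bn_hz / cn0) * (1.0 + 1.0 / (2.0 * T_s * cn0)))
    dynamic = 0.4828 * jerk_dps3 / bn_hz**3         # jerk in deg/s^3
    return thermal + dynamic / 3.0

# sweep loop bandwidth and coherent integration time at a weak 28 dB-Hz signal
bands = np.arange(2.0, 30.0, 0.5)
times = [0.001, 0.005, 0.01, 0.02]
best = min((pll_error_deg(28.0, b, T), b, T) for b in bands for T in times)
```

In this simplified budget longer coherent integration always helps; in practice T is limited by data-bit length and platform dynamics, which is exactly the adaptation the paper addresses.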

  9. A new method for determining the plasma electron density using three-color interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arakawa, Hiroyuki; Kawano, Yasunori; Itami, Kiyoshi

    2012-06-15

    A new method for determining the plasma electron density using the fractional fringes on a three-color interferometer is proposed. The integrated phase shift on each interferometer is derived without using the temporal history of the fractional fringes. The dependence on the fringe resolution and the electrical noise is simulated at the wavelengths of a CO₂ laser. Short-time integrations of the fractional fringes enhance the reliability of this method.
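
    The separation of plasma phase from mechanical vibration that multi-color interferometry exploits can be sketched as a small least-squares problem. The phase model and constants below (classical-electron-radius scaling, density in 1e20 m^-2, vibration in micrometres) are standard illustrative assumptions, not the paper's fractional-fringe algorithm.

```python
import numpy as np

def density_and_vibration(phases_rad, lams_m):
    """Least-squares fit of the model phi_i = r_e*lam_i*neL + (2*pi/lam_i)*d.
    Returns [line-integrated density in 1e20 m^-2, vibration in micrometres];
    the unknowns are scaled so the system is well conditioned."""
    r_e = 2.818e-15                                    # classical electron radius (m)
    A = np.column_stack([r_e * lams_m * 1e20,          # density column
                         (2 * np.pi / lams_m) * 1e-6]) # vibration column
    sol, *_ = np.linalg.lstsq(A, phases_rad, rcond=None)
    return sol

lams = np.array([10.6e-6, 9.27e-6, 5.3e-6])   # illustrative laser wavelengths (m)
```

Because the density term scales with wavelength and the vibration term scales inversely, two or more colors make the system solvable; synthesizing phases from known values recovers them exactly.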

  10. Two-dimensional Euler and Navier-Stokes time-accurate simulations of fan rotor flows

    NASA Technical Reports Server (NTRS)

    Boretti, A. A.

    1990-01-01

    Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent, Euler and the compressible, turbulent, time-dependent, Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model with low Reynolds number and compressibility effects included. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness to the steady conditions through time-dependent boundary conditions. The integration in space is performed by using a finite volume scheme, and the integration in time is performed by using k-stage Runge-Kutta schemes, k = 2,5. The numerical integration algorithm allows the reduction of the computational cost of an unsteady simulation involving high frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time are required to advance the Euler equations in a computational grid made up of about 2000 grid cells during 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.
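
    The k-stage Runge-Kutta update mentioned here can be sketched in a few lines using the standard low-storage coefficient set 1/k, ..., 1/2, 1 common in explicit flow solvers; the scalar test problem is an illustrative assumption. For a linear operator the scheme reproduces the Taylor series of the exponential through order k.

```python
import numpy as np

def rk_kstage_step(R, u, dt, k=5):
    """One low-storage k-stage Runge-Kutta step:
    u_(j) = u_n + (dt/j) * R(u_(j-1)) for j = k, ..., 1."""
    u0 = u.copy()
    for j in range(k, 0, -1):
        u = u0 + (dt / j) * R(u)
    return u
```

Applying one 5-stage step to du/dt = -u matches exp(-dt) to fifth order, a quick consistency check.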

  11. Chloride mass-balance method for estimating ground water recharge in arid areas: examples from western Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Bazuhair, Abdulghaffar S.; Wood, Warren W.

    1996-11-01

    The chloride mass-balance method, which integrates the time and areal distribution of ground water recharge, was applied to small alluvial aquifers in the wadi systems of the Asir and Hijaz mountains in western Saudi Arabia. This application is an extension of the method shown to be suitable for estimating recharge in regional aquifers in semi-arid areas. Because the method integrates recharge in time and space it appears to be, with certain assumptions, particularly well suited for arid areas with large temporal and spatial variation in recharge. In general, recharge was found to be between 3 and 4% of precipitation — a range consistent with recharge rates found in other arid and semi-arid areas of the earth.
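    As a minimal numerical sketch of the balance (the concentrations below are hypothetical, not values from the study): at steady state the chloride flux deposited by precipitation equals the flux carried by recharge, P·Cl_p = R·Cl_gw, so the recharge fraction is just a concentration ratio.

    ```python
    def recharge_fraction(cl_precip_mg_l, cl_groundwater_mg_l):
        """Chloride mass balance: at steady state P * Cl_p = R * Cl_gw,
        so the recharge fraction R/P is the ratio of concentrations."""
        return cl_precip_mg_l / cl_groundwater_mg_l

    # Hypothetical values: 3 mg/L chloride in rainfall, 100 mg/L in ground water.
    frac = recharge_fraction(3.0, 100.0)   # 0.03, i.e. ~3% of precipitation
    ```

    The appeal of the method in arid settings is visible here: only two measurable concentrations are needed, and both integrate over time and area.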

  12. Chloride mass-balance method for estimating ground water recharge in arid areas: Examples from western Saudi Arabia

    USGS Publications Warehouse

    Bazuhair, A.S.; Wood, W.W.

    1996-01-01

    The chloride mass-balance method, which integrates the time and areal distribution of ground water recharge, was applied to small alluvial aquifers in the wadi systems of the Asir and Hijaz mountains in western Saudi Arabia. This application is an extension of the method shown to be suitable for estimating recharge in regional aquifers in semi-arid areas. Because the method integrates recharge in time and space it appears to be, with certain assumptions, particularly well suited for arid areas with large temporal and spatial variation in recharge. In general, recharge was found to be between 3 and 4% of precipitation - a range consistent with recharge rates found in other arid and semi-arid areas of the earth.

  13. An Integrated Computational Materials Engineering Method for Woven Carbon Fiber Composites Preforming Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Ren, Huaqing; Wang, Zequn

    2016-10-19

    An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. The method integrates simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time-consuming and costly physical experiments and prototypes in the development of the manufacturing process can be circumvented. The method contains three parts: the micro-scale representative volume element (RVE) simulation to characterize the material; the metamodeling algorithm to generate the constitutive equations; and the macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as guidance for the design of composite materials and their manufacturing process.

  14. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.
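    A serial toy version of the idea can be sketched as follows (the linear state-space model, noise levels, and proposal step are all hypothetical; the paper's contribution is evaluating such moves in parallel on a GPU for a Hodgkin-Huxley model): state histories are sampled with random-walk Metropolis, each path weighted by its action, i.e. its negative log posterior.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: x_{t+1} = a x_t + process noise; observations y_t = x_t + noise.
    a, sig_p, sig_m, T = 0.9, 0.1, 0.5, 50
    x_true = np.empty(T)
    x_true[0] = 1.0
    for t in range(T - 1):
        x_true[t + 1] = a * x_true[t] + sig_p * rng.standard_normal()
    y = x_true + sig_m * rng.standard_normal(T)

    def action(x):
        """Negative log posterior (up to a constant) of a full state path."""
        meas = np.sum((y - x) ** 2) / (2 * sig_m ** 2)
        dyn = np.sum((x[1:] - a * x[:-1]) ** 2) / (2 * sig_p ** 2)
        return meas + dyn

    # Random-walk Metropolis over whole paths (serial here; the paper's point
    # is that these evaluations parallelize well on a GPU).
    x, A = y.copy(), action(y)
    samples = []
    for step in range(30000):
        prop = x + 0.02 * rng.standard_normal(T)
        A_new = action(prop)
        if A_new < A or rng.random() < np.exp(A - A_new):
            x, A = prop, A_new
        if step >= 10000:
            samples.append(x.copy())
    x_mean = np.mean(samples, axis=0)   # posterior-mean path estimate
    ```

    The posterior-mean path trades off fidelity to the observations against the smoothness demanded by the dynamics, so its action is far lower than that of the raw observation sequence.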

  15. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  16. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  17. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
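    One reading of the half-step definition can be sketched on a 1D harmonic oscillator (a toy stand-in, not the paper's MD setup): velocity Verlet already produces the half-step velocities v(t - Δt/2) and v(t + Δt/2) internally, and a kinetic energy at time t can be formed by averaging their two kinetic energies.

    ```python
    import numpy as np

    # Velocity Verlet for a 1D harmonic oscillator (m = k = 1).  The half-step
    # kinetic energy at time t averages KE(v(t - dt/2)) and KE(v(t + dt/2)).
    dt, n_steps = 0.1, 1000
    x, v = 1.0, 0.0
    prev_half = None
    ke_full, ke_half = [], []
    for _ in range(n_steps):
        vh = v + 0.5 * dt * (-x)                              # v(t + dt/2)
        if prev_half is not None:
            ke_full.append(0.5 * v ** 2)                      # on-step definition
            ke_half.append(0.25 * (prev_half ** 2 + vh ** 2)) # half-step definition
        x += dt * vh
        v = vh + 0.5 * dt * (-x)                              # v(t + dt)
        prev_half = vh
    # Virial theorem for this oscillator: <KE> = E/2 = 0.25 for these
    # initial conditions; both running averages should sit near that value.
    ```

    For this simple system both definitions average to the virial value; the paper's point is that under large time steps and holonomic constraints the half-step definition remains consistent with the virial theorem where the on-step one does not.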

  18. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    PubMed

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  19. A Mixed Methods Analysis of the Effects of an Integrative Geobiological Study of Petrified Wood in Introductory College Geology Classrooms

    ERIC Educational Resources Information Center

    Clary, Renee M.; Wandersee, James H.

    2007-01-01

    Mixed methods research conducted across three semesters in introductory college geology classes (n=187, 190, 138) attempted to ascertain whether integrated study of petrified wood could serve as a portal to improved student geobiological understanding of fossilization, geologic time, and evolution. The Petrified Wood Survey[TM] was administered as…

  20. Coarse-grained representation of the quasi adiabatic propagator path integral for the treatment of non-Markovian long-time bath memory

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Fingerhut, Benjamin P.

    2017-06-01

    The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to substantial reduction of considered path segments for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of one hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath-assisted long-range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.

  1. The importance of independent chronology in integrating records of past climate change for the 60-8 ka INTIMATE time interval

    NASA Astrophysics Data System (ADS)

    Brauer, Achim; Hajdas, Irka; Blockley, Simon P. E.; Bronk Ramsey, Christopher; Christl, Marcus; Ivy-Ochs, Susan; Moseley, Gina E.; Nowaczyk, Norbert N.; Rasmussen, Sune O.; Roberts, Helen M.; Spötl, Christoph; Staff, Richard A.; Svensson, Anders

    2014-12-01

    This paper provides a brief overview of the most common dating techniques applied in palaeoclimate and palaeoenvironmental studies including four radiometric and isotopic dating methods (radiocarbon, 230Th disequilibrium, luminescence, cosmogenic nuclides) and two incremental methods based on layer counting (ice layer, varves). For each method, concise background information about the fundamental principles and methodological approaches is provided. We concentrate on the time interval of focus for the INTIMATE (Integrating Ice core, MArine and TErrestrial records) community (60-8 ka). This dating guide addresses palaeoclimatologists who aim at interpretation of their often regional and local proxy time series in a wider spatial context and, therefore, have to rely on correlation with proxy records obtained from different archives from various regions. For this reason, we especially emphasise scientific approaches for harmonising chronologies for sophisticated and robust proxy data integration. In this respect, up-to-date age modelling techniques are presented as well as tools for linking records by age equivalence including tephrochronology, cosmogenic 10Be and palaeomagnetic variations. Finally, to avoid inadequate documentation of chronologies and assure reliable correlation of proxy time series, this paper provides recommendations for minimum standards of uncertainty and age datum reporting.

  2. High efficiency integration of three-dimensional functional microdevices inside a microfluidic chip by using femtosecond laser multifoci parallel microfabrication

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Du, Wen-Qiang; Li, Jia-Wen; Hu, Yan-Lei; Yang, Liang; Zhang, Chen-Chu; Li, Guo-Qiang; Lao, Zhao-Xin; Ni, Jin-Cheng; Chu, Jia-Ru; Wu, Dong; Liu, Su-Ling; Sugioka, Koji

    2016-01-01

    High efficiency fabrication and integration of three-dimensional (3D) functional devices in Lab-on-a-chip systems are crucial for microfluidic applications. Here, a spatial light modulator (SLM)-based multifoci parallel femtosecond laser scanning technology was proposed to integrate microstructures inside a given ‘Y’-shaped microchannel. The key novelty of our approach lies in rapidly integrating 3D microdevices inside a microchip for the first time, which significantly reduces the fabrication time. The high quality integration of various 2D-3D microstructures was ensured by quantitatively optimizing the experimental conditions, including prebaking time, laser power and developing time. To verify the designable and versatile capability of this method for integrating functional 3D microdevices in a microchannel, a series of microfilters with adjustable pore sizes from 12.2 μm to 6.7 μm were fabricated to demonstrate selective filtering of polystyrene (PS) particles and cancer cells of different sizes. The filter can be cleaned by reversing the flow and reused many times. This technology will advance the fabrication of 3D integrated microfluidic and optofluidic chips.

  3. [How timely are the methods taught in psychotherapy training and practice?].

    PubMed

    Beutel, Manfred E; Michal, Matthias; Wiltink, Jörg; Subic-Wrana, Claudia

    2015-01-01

    Even though many psychotherapists consider themselves to be eclectic or integrative, training and reimbursement in the modern healthcare system are clearly oriented toward the model of distinct psychotherapy approaches. Prompted by the proposition to favor general, disorder-oriented psychotherapy, we investigate how timely the distinct methods taught in training and practice are. We reviewed the pertinent literature regarding general and specific factors, the effectiveness of integrative and eclectic treatments, orientation toward specific disorders, manualization and psychotherapeutic training. There is a lack of systematic studies on the efficacy of combining therapy methods from different approaches. The first empirical findings reveal that a superiority of combined over single treatment methods has yet to be demonstrated. The development of transnosological manuals shows the limits of disorder-specific treatment. General factors such as the therapeutic alliance or education about the model of disease and treatment rationale require specific definitions. Taking reference to a specific treatment approach provides important consistency of theory, training therapy and supervision, though this does not preclude an openness toward other therapy concepts. Current manualized examples show that methods and techniques can indeed be integrated from other approaches. Integrating different methods can also be seen as a developmental task for practitioners and researchers, one that may be mastered increasingly well with more experience.

  4. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, Lee M.; Ng, Esmond G.

    1998-01-01

    Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.

  5. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
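    Time-driven activity-based costing reduces to simple arithmetic once a cost rate per unit of staff time is fixed. A hypothetical sketch follows; the material costs and staff rate below are illustrative stand-ins, not the department's figures, while the 270- and 120-minute procedure times are those reported above.

    ```python
    def session_cost(material_eur, staff_rate_eur_per_min, minutes):
        """Per-session cost = materials + personnel time valued at a cost rate."""
        return material_eur + staff_rate_eur_per_min * minutes

    # Illustrative inputs only; only the procedure times come from the report.
    multi_step = session_cost(material_eur=900.0, staff_rate_eur_per_min=1.5,
                              minutes=270)                      # 900 + 405 = 1305.0
    integrated = session_cost(material_eur=1000.0, staff_rate_eur_per_min=1.5,
                              minutes=120)                      # 1000 + 180 = 1180.0
    saving_per_session = multi_step - integrated                # 125.0
    ```

    The structure mirrors the study's finding: even when the integrated system's materials cost more, the shorter procedure time can dominate the per-session total.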

  6. A highly accurate boundary integral equation method for surfactant-laden drops in 3D

    NASA Astrophysics Data System (ADS)

    Sorgentone, Chiara; Tornberg, Anna-Karin

    2018-05-01

    The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is specifically true at small scales, such as in applications of droplet based microfluidics, where the interface dynamics become of increased importance. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy even when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops even under deformation, specialized quadrature methods for singular and nearly singular integrals that appear in the formulation are invoked, and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.

  7. Calculating Time-Integral Quantities in Depletion Calculations

    DOE PAGES

    Isotalo, Aarno

    2016-06-02

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
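    The underlying trick can be sketched with a toy two-nuclide decay chain (the matrix, weights, and step length below are hypothetical): append a tally row to the depletion matrix so that the same matrix-exponential solve that gives end-of-step densities also yields the step integral of a weighted density sum. A plain Taylor-series matrix exponential stands in for a production solver such as CRAM.

    ```python
    import numpy as np

    def expm(M, terms=60):
        """Matrix exponential via plain Taylor series (fine for this tiny example)."""
        E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
        for k in range(1, terms):
            term = term @ M / k
            E = E + term
        return E

    # Depletion system dn/dt = A n for a toy two-nuclide decay chain.
    A = np.array([[-1.0, 0.0],
                  [ 1.0, -0.5]])
    w = np.array([0.0, 0.5])   # weights, e.g. lambda_2 to tally decays of nuclide 2
    T = 2.0

    # Augmented system d/dt [n; s] = [[A, 0], [w, 0]] [n; s], so that
    # s(T) = integral_0^T w . n(t) dt comes out of the same solve.
    M = np.zeros((3, 3))
    M[:2, :2] = A
    M[2, :2] = w
    n0 = np.array([1.0, 0.0, 0.0])
    nT = expm(M * T) @ n0
    step_integral = nT[2]      # time integral over the step, no extra solve
    ```

    For this chain the integral has the closed form 1 - 2e⁻¹ + e⁻², so the augmented solve can be checked directly; dividing by T gives the step-average of the tallied quantity.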

  8. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    DOE PAGES

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; ...

    2016-11-25

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
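    The binning step can be sketched as follows (the scan times, bin edges, and array shapes are toy assumptions for illustration): sum the mass spectra of all scans falling in each retention-time bin, then flatten the result into one row of the PMF input matrix.

    ```python
    import numpy as np

    def bin_chromatogram(scan_times, spectra, t_edges):
        """Sum mass spectra within retention-time bins.
        spectra: (n_scans, n_mz) array; returns (n_bins, n_mz)."""
        n_bins = len(t_edges) - 1
        idx = np.clip(np.digitize(scan_times, t_edges) - 1, 0, n_bins - 1)
        binned = np.zeros((n_bins, spectra.shape[1]))
        for i, b in enumerate(idx):
            binned[b] += spectra[i]
        return binned

    # Toy chromatogram: 100 scans, 5 m/z channels, 4 retention-time bins.
    rng = np.random.default_rng(1)
    times = np.linspace(0.0, 10.0, 100)
    spectra = rng.random((100, 5))
    edges = np.linspace(0.0, 10.0, 5)
    binned = bin_chromatogram(times, spectra, edges)
    pmf_row = binned.ravel()   # one sample's row of the PMF input matrix
    ```

    Stacking one such flattened row per sample yields the samples × (bins · m/z) matrix on which ordinary two-dimensional PMF effectively factorizes the three-dimensional TAG data.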

  9. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    NASA Astrophysics Data System (ADS)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-11-01

    We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. 
In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.

  10. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method of the reference intensity for an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. Then, the system increases or decreases the reference intensity following the map data for optimization with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, enabled changing the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.

  11. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site, in contrast to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, which we call the "fused" matrix ansatz, to build the stationary distribution explicitly in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.

  12. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

    We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.

  13. The Space-Time Conservation Element and Solution Element Method: A New High-Resolution and Genuinely Multidimensional Paradigm for Solving Conservation Laws. 1; The Two Dimensional Time Marching Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen

    1998-01-01

    A new high-resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods, and was built from the ground up with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.

  14. Finite and spectral cell method for wave propagation in heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Duczek, Sascha; Gabbert, Ulrich; Düster, Alexander

    2014-09-01

    In the current paper we present a fast, reliable technique for simulating wave propagation in complex structures made of heterogeneous materials. The proposed approach, the spectral cell method, is a combination of the finite cell method and the spectral element method that significantly lowers preprocessing and computational expenditure. The spectral cell method takes advantage of explicit time-integration schemes coupled with a diagonal mass matrix to reduce the time spent on solving the equation system. By employing a fictitious domain approach, this method also helps to eliminate some of the difficulties associated with mesh generation. Besides introducing a proper, specific mass lumping technique, we also study the performance of the low-order and high-order versions of this approach based on several numerical examples. Our results show that, when combined with explicit time-integration algorithms, the high-order version of the spectral cell method requires less memory storage and less CPU time than the other possible versions. Moreover, as the implementation of the proposed method in available finite element programs is straightforward, these properties turn the method into a viable tool for practical applications such as structural health monitoring [1-3], quantitative ultrasound applications [4], or the active control of vibrations and noise [5, 6].
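    The payoff of a diagonal (lumped) mass matrix in explicit time stepping can be sketched as follows; this is a generic central-difference update under row-sum lumping, not the spectral cell implementation, and all names are illustrative.

```python
import numpy as np

def lumped_explicit(M, K, u0, v0, dt, n_steps):
    """Explicit central-difference (velocity-Verlet) stepping of
    M*a + K*u = 0 with a row-sum lumped mass matrix: the diagonal
    mass turns each step into a pointwise division, so no linear
    system has to be solved."""
    m = M.sum(axis=1)                        # row-sum lumping -> diagonal mass
    u, v = u0.astype(float).copy(), v0.astype(float).copy()
    a = -(K @ u) / m                         # initial acceleration
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * a
        u = u + dt * v_half
        a = -(K @ u) / m                     # cheap: divide, don't solve
        v = v_half + 0.5 * dt * a
    return u, v

# Single oscillator sanity check: m = k = 2 gives u(t) = cos(t).
M = np.array([[2.0]])
K = np.array([[2.0]])
u, v = lumped_explicit(M, K, np.array([1.0]), np.array([0.0]), dt=0.01, n_steps=100)
```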

  15. Determining a Method of Enabling and Disabling the Integral Torque in the SDO Science and Inertial Mode Controllers

    NASA Technical Reports Server (NTRS)

    Vess, Melissa F.; Starin, Scott R.

    2007-01-01

    During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared the complexity of the control logic, the risk of not reenabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculates a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and checks whether that torque would cause actuator saturation. If so, only the PD torque is used; if not, the integral torque is added. Finally, the third scheme compares the attitude and rate errors to limits and disables the integral torque if either error is greater than its limit. Based on the trade study results, the third scheme was selected. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable it and whether or not to reset the integrator once the integral torque was reenabled. Three ways to disable the integral torque were investigated: zero the input into the integrator, which causes the integral part of the PID control torque to be held constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis considered the complexity of the control logic, the slew time plus settling time between each calibration maneuver step, and the ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input into the integrator without resetting it. Throughout the analysis, a high-fidelity simulation was used to test the various implementation methods.
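    A minimal single-axis sketch of the selected approach (the third scheme for gating, with the integrator input zeroed rather than the integrator state reset) might look like this; the gains and limits are illustrative assumptions, not the SDO flight values.

```python
def pid_with_gating(att_err, rate_err, integ, dt,
                    kp=1.0, kd=2.0, ki=0.1,
                    att_lim=0.5, rate_lim=0.2):
    """One single-axis PID torque update. When either error exceeds its
    limit, the integrator input is zeroed, so the integral torque is
    held constant; the integrator state is never reset."""
    if abs(att_err) <= att_lim and abs(rate_err) <= rate_lim:
        integ += att_err * dt          # integrate only near steady state
    torque = kp * att_err + kd * rate_err + ki * integ
    return torque, integ

# Large attitude error: the integral state is frozen, not cleared.
_, i1 = pid_with_gating(2.0, 0.0, integ=0.3, dt=0.1)
# Small errors: the integral state accumulates normally.
_, i2 = pid_with_gating(0.1, 0.0, integ=0.3, dt=0.1)
```

Freezing (rather than resetting) the state means the accumulated steady-state correction is immediately available again once the errors fall back within limits.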

  16. Comparison of a real-time PCR method with a culture method for the detection of Salmonella enterica serotype enteritidis in naturally contaminated environmental samples from integrated poultry houses.

    PubMed

    Lungu, Bwalya; Waltman, W Douglas; Berghaus, Roy D; Hofacre, Charles L

    2012-04-01

    Conventional culture methods have traditionally been considered the "gold standard" for the isolation and identification of foodborne bacterial pathogens. However, culture methods are labor-intensive and time-consuming. A Salmonella enterica serotype Enteritidis-specific real-time PCR assay that recently received interim approval by the National Poultry Improvement Plan for the detection of Salmonella Enteritidis was evaluated against a culture method that had also received interim National Poultry Improvement Plan approval for the analysis of environmental samples from integrated poultry houses. The method was validated with 422 field samples collected by either the boot sock or drag swab method. The samples were cultured by selective enrichment in tetrathionate broth followed by transfer onto a modified semisolid Rappaport-Vassiliadis medium and then plating onto brilliant green with novobiocin and xylose lysine brilliant Tergitol 4 plates. One-milliliter aliquots of the selective enrichment broths from each sample were collected for DNA extraction by the commercial PrepSEQ nucleic acid extraction assay and analysis by the Salmonella Enteritidis-specific real-time PCR assay. The real-time PCR assay detected no significant differences between the boot sock and drag swab samples. In contrast, the culture method detected a significantly higher number of positive samples from boot socks. The diagnostic sensitivity of the real-time PCR assay for the field samples was significantly higher than that of the culture method. The kappa value obtained was 0.46, indicating moderate agreement between the real-time PCR assay and the culture method. In addition, the real-time PCR method had a turnaround time of 2 days compared with 4 to 8 days for the culture method. 
The higher sensitivity as well as the reduction in time and labor makes this real-time PCR assay an excellent alternative to conventional culture methods for diagnostic purposes, surveillance, and research studies to improve food safety.

  17. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the generalized minimal residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
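    For context on the second-order baseline, a minimal BDF2 sketch on the linear test equation y' = λy (with a backward-Euler startup step; an illustration, not the paper's Navier-Stokes solver) exhibits the expected second-order error decay:

```python
import numpy as np

def bdf2_decay(lam, y0, dt, n):
    """BDF2 for y' = lam*y, first step by backward Euler.
    BDF2 update: 1.5*y_{n+1} - 2*y_n + 0.5*y_{n-1} = dt*lam*y_{n+1}."""
    y_prev, y_curr = y0, y0 / (1.0 - lam * dt)        # backward Euler startup
    for _ in range(n - 1):
        y_prev, y_curr = y_curr, (2.0 * y_curr - 0.5 * y_prev) / (1.5 - lam * dt)
    return y_curr

exact = np.exp(-1.0)
err_coarse = abs(bdf2_decay(-1.0, 1.0, 0.1, 10) - exact)
err_fine = abs(bdf2_decay(-1.0, 1.0, 0.05, 20) - exact)
# Halving dt should cut the error by roughly 2^2 = 4 (second order).
```

Higher-order implicit Runge-Kutta schemes shrink the error much faster as dt is reduced, which is the source of the efficiency gains the abstract reports, provided the nonlinear system at each step is solved cheaply enough.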

  18. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells (hexahedral, tetrahedral, etc.), these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with orthogonal-grid finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  19. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806

  20. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, creating a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
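    The weighted Beer-Lambert scaling step might be sketched as follows; the parameter names and the tissue speed-of-light value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def scale_reflectance(R0, t, mu_a, layer_fractions, c=0.214):
    """Scale a zero-absorption time-resolved reflectance curve R0(t) by a
    weighted Beer-Lambert factor exp(-<mu_a>*c*t), where <mu_a> weights
    each layer's absorption coefficient by the fraction of time the
    average photon path spends in that layer. c is an assumed speed of
    light in tissue (mm/ps); all names are illustrative."""
    mu_eff = float(np.dot(layer_fractions, mu_a))   # path-weighted absorption
    return np.asarray(R0) * np.exp(-mu_eff * c * np.asarray(t))

t = np.linspace(0.0, 100.0, 5)                      # ps
R0 = np.ones_like(t)                                # stand-in zero-absorption curve
# A single layer with fraction 1 reduces to the plain Beer-Lambert factor.
R = scale_reflectance(R0, t, mu_a=[0.01], layer_fractions=[1.0])
```

The appeal is that one stored zero-absorption curve can be rescaled for any absorption values, so the "library" of simulations is generated without rerunning the Monte Carlo step.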

  1. An integrative view of phylogenetic comparative methods: connections to population genetics, community ecology, and paleobiology.

    PubMed

    Pennell, Matthew W; Harmon, Luke J

    2013-06-01

    Recent innovations in phylogenetic comparative methods (PCMs) have spurred a renaissance of research into the causes and consequences of large-scale patterns of biodiversity. In this paper, we review these advances. We also highlight the potential of comparative methods to integrate across fields and focus on three examples where such integration might be particularly valuable: quantitative genetics, community ecology, and paleobiology. We argue that PCMs will continue to be a key set of tools in evolutionary biology, shedding new light on how evolutionary processes have shaped patterns of biodiversity through deep time. © 2013 New York Academy of Sciences.

  2. Evaluation of radiation loading on finite cylindrical shells using the fast Fourier transform: A comparison with direct numerical integration.

    PubMed

    Liu, S X; Zou, M S

    2018-03-01

    The radiation loading on a vibrating finite cylindrical shell is conventionally evaluated through the direct numerical integration (DNI) method. An alternative strategy based on the fast Fourier transform algorithm is put forward in this work, starting from the general expression of the radiation impedance. To check the feasibility and efficiency of the proposed method, a comparison with DNI is presented through numerical cases. The results obtained using the present method agree well with those calculated by DNI. More importantly, the proposed strategy significantly reduces the time cost compared with the conventional approach of straightforward numerical integration.
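    The speed-up mechanism can be illustrated generically: a convolution-type sum evaluated term by term costs O(N²), while the same result via zero-padded FFTs costs O(N log N). This sketch illustrates the principle only; it is not the paper's radiation-impedance code.

```python
import numpy as np

def convolve_direct(f, g):
    """O(N^2) direct evaluation of the linear convolution sum,
    standing in for term-by-term numerical integration."""
    out = np.zeros(len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        out[i:i + len(g)] += fi * g
    return out

def convolve_fft(f, g):
    """The same linear convolution via zero-padded FFTs, O(N log N):
    pad both sequences to the full output length so the circular
    convolution computed by the FFT equals the linear one."""
    n = len(f) + len(g) - 1
    return np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(33)
```

The two routines agree to floating-point precision; the FFT route is the one whose cost grows only as N log N.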

  3. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.

  4. NLO cross sections in 4 dimensions without DREG

    NASA Astrophysics Data System (ADS)

    Hernández-Pinto, R. J.; Driencourt-Mangin, F.; Rodrigo, G.; Sborlini, G. F. R.

    2016-10-01

    In this review, we present a new method for computing physical cross sections at NLO accuracy in QCD without using standard dimensional regularisation (DREG). The algorithm is based on the loop-tree duality theorem, which allows us to obtain loop integrals as a sum of phase-space integrals; in this way, by transforming loop integrals into phase-space integrals, we propose a method to merge virtual and real contributions in order to find observables at NLO in d = 4 space-time dimensions. In addition, the strategy described is used to compute the γ* → qq̅(g) process. A more detailed discussion of this topic can be found in Ref. [1].

  5. Identification of aerodynamic models for maneuvering aircraft

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward; Hu, C. C.

    1992-01-01

    A Fourier analysis method was developed to analyze harmonic forced-oscillation data at high angles of attack as functions of the angle of attack and its time rate of change. The resulting aerodynamic responses at different frequencies are used to build up aerodynamic models involving time integrals of the indicial type. An efficient numerical method was also developed to evaluate these time integrals for arbitrary motions based on a concept of equivalent harmonic motion. The method was verified by first using results from two-dimensional and three-dimensional linear theories. The developed models for C sub L, C sub D, and C sub M, based on high-alpha data for a 70 deg delta wing in harmonic motions, showed accurate results in reproducing hysteresis. The aerodynamic models are further verified by comparison with test data using ramp-type motions.
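    The single-frequency harmonic analysis underlying such models can be sketched as a least-squares fit for the in-phase and quadrature components of the response (function and variable names are illustrative):

```python
import numpy as np

def harmonic_fit(y, t, omega):
    """Least-squares fit y(t) ~ a*cos(omega*t) + b*sin(omega*t) + c,
    recovering the in-phase (a) and quadrature (b) components of a
    forced oscillation at a single driving frequency."""
    A = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b, c

# Synthetic forced-oscillation record with known components.
t = np.linspace(0.0, 10.0, 500)
y = 0.8 * np.cos(2.0 * t) - 0.3 * np.sin(2.0 * t) + 0.1
a, b, c = harmonic_fit(y, t, omega=2.0)
```

Repeating such fits across oscillation frequencies yields the frequency-dependent responses from which indicial-type time-integral models can be assembled.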

  6. A Time-Dependent Many-Electron Approach to Atomic and Molecular Interactions

    NASA Astrophysics Data System (ADS)

    Runge, Keith

    A new methodology is developed for the description of electronic rearrangement in atomic and molecular collisions. Using the eikonal representation of the total wavefunction, time-dependent equations are derived for the electronic densities within the time-dependent Hartree-Fock approximation. An averaged effective potential which ensures time reversal invariance is used to describe the effect of the fast electronic transitions on the slower nuclear motions. Electron translation factors (ETF) are introduced to eliminate spurious asymptotic couplings, and a local ETF is incorporated into a basis of traveling atomic orbitals. A reference density is used to describe local electronic relaxation and to account for the time propagation of fast and slow motions, and is shown to lead to an efficient integration scheme. Expressions for time-dependent electronic populations and polarization parameters are given. Electronic integrals over Gaussians including ETFs are derived to extend electronic state calculations to dynamical phenomena. Results of the method are in good agreement with experimental data for charge transfer integral cross sections over a projectile energy range of three orders of magnitude in the proton-hydrogen atom system. The more demanding calculations of integral alignment, state-to-state integral cross sections, and differential cross sections are found to agree well with experimental data, provided care is taken to include ETFs in the calculation of electronic integrals and to choose the appropriate effective potential. The method is found to be in good agreement with experimental data for the calculation of charge transfer integral cross sections and state-to-state integral cross sections in the one-electron heteronuclear helium(2+)-hydrogen atom system and in the two-electron hydrogen atom-hydrogen atom system. Time-dependent electronic populations are seen to oscillate rapidly in the midst of a collision event. In particular, multiple exchanges of the electron are seen to occur in the proton-hydrogen atom system at low collision energies. The concepts and results derived from the approach provide new insight into the dynamics of nuclear screening and electronic rearrangement in atomic collisions.

  7. Integrity in Presidential Leadership: Principles Related to Maintaining Integrity for College Presidents in the Council for Christian Colleges & Universities

    ERIC Educational Resources Information Center

    Thomason, Robert Riner, Jr.

    2013-01-01

    This qualitative study, utilizing a grounded theory methodological approach, focused on how former Christian college and university presidents maintain their integrity over the course of their lives and their time in office. Eight participants from a variety of theological backgrounds were identified by using purposeful sampling methods; the…

  8. Time for creative integration in medical sociology.

    PubMed

    Levine, S

    1995-01-01

    The burgeoning of medical sociology has sometimes been accompanied by unfortunate parochialism and the presence of opposing intellectual camps that ignore and even impugn each other's work. We have lost opportunities to achieve creative discourse and integration of different perspectives, methods, and findings. At this stage we should consider how we can foster creative integration within our field.

  9. Explaining Changing Suicide Rates in Norway 1948-2004: The Role of Social Integration

    ERIC Educational Resources Information Center

    Barstad, Anders

    2008-01-01

    Using Norway 1948-2004 as a case, I test whether changes in variables related to social integration can explain changes in suicide rates. The method is the Box-Jenkins approach to time-series analysis. Different aspects of family integration contribute significantly to the explanation of Norwegian suicide rates in this period. The estimated effect…

  10. Seakeeping with the semi-Lagrangian particle finite element method

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio

    2017-07-01

    The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.

  11. Assessing Backwards Integration as a Method of KBO Family Finding

    NASA Astrophysics Data System (ADS)

    Benfell, Nathan; Ragozzine, Darin

    2018-04-01

    The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. However, various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt objects with identical starting locations and velocity distributions, based on the Haumea family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were then used to investigate which parameters are significant enough to require inclusion in the integration. We thereby determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovering potential young families in the Kuiper Belt.

  12. Exploring inductive linearization for pharmacokinetic-pharmacodynamic systems of nonlinear ordinary differential equations.

    PubMed

    Hasegawa, Chihiro; Duffull, Stephen B

    2018-02-01

    Pharmacokinetic-pharmacodynamic systems are often expressed with nonlinear ordinary differential equations (ODEs). While there are numerous methods to solve such ODEs, these methods generally rely on time-stepping solutions (e.g. Runge-Kutta) which need to be matched to the characteristics of the problem at hand. The primary aim of this study was to explore the performance of an inductive approximation which iteratively converts nonlinear ODEs to linear time-varying systems that can then be solved algebraically or numerically. The inductive approximation is applied to three examples: a simple nonlinear pharmacokinetic model with Michaelis-Menten elimination (E1), an integrated glucose-insulin model, and an HIV viral load model with recursive feedback systems (E2 and E3, respectively). The secondary aim of this study was to explore the potential advantages of analytically solving linearized ODEs with two examples: again E3, with stiff differential equations, and a turnover model of luteinizing hormone with a surge function (E4). The inductive linearization coupled with a matrix exponential solution provided accurate predictions for all examples, with solution times comparable to the matched time-stepping solutions for the nonlinear ODEs. The time-stepping solutions, however, did not perform well for E4, particularly when the surge was approximated by a square wave. In circumstances when either a linear ODE is particularly desirable or the uncertainty in matching the integrator to the ODE system poses a potential risk, the inductive approximation method coupled with an analytical integration method would be an appropriate alternative.
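    A scalar sketch of inductive linearization for the Michaelis-Menten example (E1) might look like this; the trapezoidal treatment of the piecewise exponent and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mm_inductive(c0, vmax, km, t_grid, n_iter=8):
    """Inductive linearization sketch for dC/dt = -Vmax*C/(Km + C).
    Each iteration freezes the nonlinearity at the previous iterate,
    yielding a linear time-varying ODE dC/dt = lam(t)*C that is solved
    exactly per step with a scalar exponential (the scalar analogue of
    a matrix exponential solution)."""
    c = np.full_like(t_grid, c0, dtype=float)        # initial guess: constant
    dt = np.diff(t_grid)
    for _ in range(n_iter):
        lam = -vmax / (km + c)                       # rate frozen at previous iterate
        c_new = np.empty_like(c)
        c_new[0] = c0
        for i, h in enumerate(dt):
            # exact linear solution per step, trapezoidal average of lam
            c_new[i + 1] = c_new[i] * np.exp(0.5 * (lam[i] + lam[i + 1]) * h)
        c = c_new
    return c

# Near-linear regime (Km >> C): the implicit exact solution of
# Km*ln(C/C0) + (C - C0) = -Vmax*t gives C(1) ~ 0.99015 here.
c = mm_inductive(c0=1.0, vmax=1.0, km=100.0, t_grid=np.linspace(0.0, 1.0, 101))
```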

  13. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points and, a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  14. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, L.M.; Ng, E.G.

    1998-09-29

    Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
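    One standard building block for obtaining nonlinear measures from a chaotic time series, time-delay embedding, can be sketched as follows (a generic illustration, not the patented apparatus):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: map a scalar series into dim-dimensional
    delay vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau]), the usual
    first step before computing nonlinear measures such as correlation
    dimension or Kolmogorov entropy."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau: k * tau + n] for k in range(dim)])

# A 10-sample series embedded in 3 dimensions with lag 2
# yields 6 delay vectors.
E = delay_embed(np.arange(10), dim=3, tau=2)
```

Trends in measures computed from such embeddings over successive data windows are then compared to flag differences between similar but distinct process states.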

  15. Real-time measurements of jet aircraft engine exhaust.

    PubMed

    Rogers, Fred; Arnott, Pat; Zielinska, Barbara; Sagebiel, John; Kelly, Kerry E; Wagner, David; Lighty, JoAnn S; Sarofim, Adel F

    2005-05-01

    Particulate-phase exhaust properties from two different types of ground-based jet aircraft engines, high-thrust and turboshaft, were studied with real-time instruments on a portable pallet and additional time-integrated sampling devices. The real-time instruments successfully characterized rapidly changing particulate mass, light absorption, and polycyclic aromatic hydrocarbon (PAH) content. The integrated measurements included particulate-size distributions, PAH, and carbon concentrations for an entire test run (i.e., "run-integrated" measurements). In all cases, the particle-size distributions showed single modes peaking at 20-40 nm diameter. Measurements of exhaust from high-thrust F404 engines showed relatively low light absorption compared with exhaust from a turboshaft engine. Particulate-phase PAH measurements generally varied in phase with both net particulate mass and light-absorbing particulate concentrations. Unexplained response behavior sometimes occurred with the real-time PAH analyzer, although on average the real-time and integrated PAH methods agreed to within the same order of magnitude found in earlier investigations.

  16. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and memory usage, at no additional cost, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  17. Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Bley, Gonzalo A.; Thomas, Lawrence E.

    2017-01-01

    We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form {E[{exp}(A_T)]} , the (effective) action {A_T} being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with {1/|x|^2} potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.

  18. System and method for integrating hazard-based decision making tools and processes

    DOEpatents

    Hodgin, C Reed [Westminster, CO

    2012-03-20

    A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.

  19. Supporting BPMN choreography with system integration artefacts for enterprise process collaboration

    NASA Astrophysics Data System (ADS)

    Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2014-07-01

    Business Process Model and Notation (BPMN) choreography modelling depicts externally visible message exchanges between collaborating processes of enterprise information systems. Implementation of a choreography relies on designing system integration solutions to realise message exchanges between independently developed systems. Enterprise integration patterns (EIPs) are widely accepted artefacts for designing integration solutions. If the choreography model represents coordination requirements between processes with behaviour mismatches, the integration designer needs to analyse the routing requirements and address them by manually designing EIP message routers. As collaboration scale and complexity increase, manual design becomes inefficient. Thus, the research problem of this paper is to explore a method to automatically identify routing requirements from a BPMN choreography model and to design the corresponding routing in the integration solution. To achieve this goal, recurring behaviour mismatch scenarios are analysed as patterns, and corresponding solutions are proposed as EIP routers. Using this method, a choreography model can be analysed by computer to identify occurrences of mismatch patterns, leading to corresponding router selection. A case study demonstrates that the proposed method enables computer-assisted integration design to implement a choreography. A further experiment reveals that the method is effective in improving design quality and reducing time cost.

  20. The Crank Nicolson Time Integrator for EMPHASIS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGregor, Duncan Alisdair Odum; Love, Edward; Kramer, Richard Michael Jack

    2018-03-01

    We investigate the use of implicit time integrators for finite element time domain approximations of Maxwell's equations in vacuum. We discretize Maxwell's equations in time using Crank-Nicolson and in 3D space using compatible finite elements. We solve the system by taking a single step of Newton's method and inverting the eddy-current Schur complement, allowing for the use of standard preconditioning techniques. This approach also generalizes to more complex material models that can include the unsplit PML. We present verification results and demonstrate performance at CFL numbers up to 1000.
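    For a linear semi-discrete system y' = Ay, each Crank-Nicolson step amounts to one linear solve. The sketch below is an illustrative NumPy stand-in (a 2x2 oscillator, not the EMPHASIS finite-element system); for skew-symmetric A the update is a Cayley transform and preserves the solution norm exactly, mirroring energy conservation for Maxwell's equations in vacuum:

```python
import numpy as np

def crank_nicolson(A, y0, dt, n_steps):
    """Advance y' = A y with the Crank-Nicolson (trapezoidal) rule.

    Each step solves (I - dt/2 A) y_{n+1} = (I + dt/2 A) y_n,
    i.e. a single linear solve per time step.
    """
    I = np.eye(A.shape[0])
    lhs = I - 0.5 * dt * A          # implicit part
    rhs = I + 0.5 * dt * A          # explicit part
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        y = np.linalg.solve(lhs, rhs @ y)
    return y

# Undamped oscillator y'' = -y written as a first-order system.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = crank_nicolson(A, [1.0, 0.0], dt=0.01, n_steps=628)  # t ~ one period
```

    The scheme is unconditionally stable, which is what allows the large CFL numbers quoted in the abstract.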

  1. Higher-order time integration of Coulomb collisions in a plasma using Langevin equations

    DOE PAGES

    Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...

    2013-02-08

    The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^1/2)] in the strong convergence rate for both the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the "area-integral" terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
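    The Euler-Maruyama versus Milstein contrast can be checked on a scalar SDE with a known exact solution. The sketch below uses geometric Brownian motion (an illustrative 1-D example, not the Coulomb-collision equations of the abstract; with one-dimensional noise the Milstein correction requires no area integrals):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T, X0 = 0.5, 0.8, 1.0, 1.0       # dX = mu X dt + sigma X dW
n_paths, n_steps = 2000, 64
dt = T / n_steps

# Common Brownian increments so both schemes see identical noise.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))

X_em = np.full(n_paths, X0)
X_mil = np.full(n_paths, X0)
for k in range(n_steps):
    dw = dW[:, k]
    X_em = X_em + mu * X_em * dt + sigma * X_em * dw
    # Milstein adds the 0.5 * sigma^2 * X * (dW^2 - dt) correction term.
    X_mil = (X_mil + mu * X_mil * dt + sigma * X_mil * dw
             + 0.5 * sigma**2 * X_mil * (dw**2 - dt))

err_em = np.mean(np.abs(X_em - exact))    # strong (pathwise) errors
err_mil = np.mean(np.abs(X_mil - exact))
```

    With the same driving noise, the Milstein pathwise error is markedly smaller, reflecting its strong order 1 versus order 1/2 for Euler-Maruyama.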

  2. Numerical solution of boundary-integral equations for molecular electrostatics.

    PubMed

    Bardhan, Jaydeep P

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  3. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
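    The Schur-complement segregation at the heart of such block preconditioners can be sketched on a small dense 2x2 block system (random illustrative blocks; a production solver would approximate S and apply multilevel subsolves rather than forming and inverting it exactly):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# A 2x2 block system  [[A, B], [C, D]] [x; y] = [f; g]
A = np.eye(n) * 4 + rng.normal(size=(n, n)) * 0.1
B = rng.normal(size=(n, n)) * 0.1
C = rng.normal(size=(n, n)) * 0.1
D = np.eye(n) * 4 + rng.normal(size=(n, n)) * 0.1
f, g = rng.normal(size=n), rng.normal(size=n)

# Block elimination with the exact Schur complement S = D - C A^{-1} B.
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = D - C @ Ainv_B
y = np.linalg.solve(S, g - C @ Ainv_f)   # segregated "second field" solve
x = Ainv_f - Ainv_B @ y                  # back-substitute for the first field

# Verify against the assembled monolithic system.
K = np.block([[A, B], [C, D]])
residual = np.linalg.norm(K @ np.concatenate([x, y]) - np.concatenate([f, g]))
```

    Exact block elimination reproduces the monolithic solve; preconditioning strategies of the kind described above replace the exact inverses with cheap approximate subsolves.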

  4. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
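    The spatial-derivative step of such a Fourier collocation scheme can be sketched in one periodic dimension (an illustrative stand-in, not the paper's full Maxwell solver): transform, multiply by the wavenumber, and transform back.

```python
import numpy as np

def spectral_derivative(f, L):
    """Differentiate periodic samples f on [0, L) by Fourier collocation."""
    n = f.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers 2*pi*j/L
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

x = np.linspace(0.0, 2*np.pi, 128, endpoint=False)
df = spectral_derivative(np.sin(3*x), 2*np.pi)    # exact answer: 3*cos(3x)
```

    For band-limited fields the derivative is exact to machine precision, which is what makes collocating E and H on the same grid viable.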

  5. Symbolic programming language in molecular multicenter integral problem

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan; Bouferguene, Ahmed

    It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which the computation of three-center nuclear attraction and Coulomb integrals is the most frequently encountered. As the molecular system becomes larger, computation of these integrals becomes one of the most laborious and time-consuming steps in molecular-system calculations. Improvement of the computational methods for molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations that improve the convergence of highly oscillatory integrals. These transformations form the basis of new methods for solving various problems that were otherwise unsolvable and have many applications as well. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in turn should satisfy some limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied. Differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.

  6. Biomarkers of animal health: integrating nutritional ecology, endocrine ecophysiology, ecoimmunology, and geospatial ecology.

    PubMed

    Warne, Robin W; Proudfoot, Glenn A; Crespi, Erica J

    2015-02-01

    Diverse biomarkers including stable isotope, hormonal, and ecoimmunological assays are powerful tools to assess animal condition. However, an integrative approach is necessary to provide the context essential to understanding how biomarkers reveal animal health in varied ecological conditions. A barrier to such integration is a general lack of awareness of how shared extraction methods from across fields can provide material from the same animal tissues for diverse biomarker assays. In addition, the use of shared methods for extracting differing tissue fractions can also provide biomarkers for how animal health varies across time. Specifically, no study has explicitly illustrated the depth and breadth of spatial and temporal information that can be derived from coupled biomarker assessments on two easily collected tissues: blood and feathers or hair. This study used integrated measures of glucocorticoids, stable isotopes, and parasite loads in the feathers and blood of fall-migrating Northern saw-whet owls (Aegolius acadicus) to illustrate the wealth of knowledge about animal health and ecology across both time and space. In feathers, we assayed deuterium (δD) isotope and corticosterone (CORT) profiles, while in blood we measured CORT and blood parasite levels. We found that while earlier migrating owls had elevated CORT levels relative to later migrating birds, there was also a disassociation between plasma and feather CORT, and blood parasite loads. These results demonstrate how these tissues integrate time periods from weeks to seasons and reflect energetic demands during differing life stages. Taken together, these findings illustrate the potential for integrating diverse biomarkers to assess interactions between environmental factors and animal health across varied time periods without the necessity of continually recapturing and tracking individuals. Combining biomarkers from diverse research fields into an integrated framework holds great promise for advancing our understanding of environmental effects on animal health.

  7. A Numerical Scheme for Ordinary Differential Equations Having Time Varying and Nonlinear Coefficients Based on the State Transition Matrix

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2002-01-01

    A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
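    For a linear time-invariant system the state-transition-matrix approach is exact at any step size, since the propagator over a step dt is Φ = exp(A dt). A minimal NumPy-only sketch (the paper's method additionally handles time-varying and nonlinear coefficients via polynomial modeling, which is not shown here):

```python
import numpy as np

def expm(A, n_terms=20, n_squarings=8):
    """Matrix exponential via scaling-and-squaring with a Taylor series."""
    B = A / 2.0**n_squarings        # scale so the series converges fast
    term = np.eye(A.shape[0])
    out = term.copy()
    for k in range(1, n_terms):
        term = term @ B / k
        out += term
    for _ in range(n_squarings):    # undo the scaling by repeated squaring
        out = out @ out
    return out

# One exact step of the LTI system y' = A y, at an arbitrarily large dt:
A = np.array([[0.0, 1.0], [-4.0, 0.0]])   # y'' = -4 y
dt = 1.3                                   # large step, still exact
Phi = expm(A * dt)                         # state transition matrix
y = Phi @ np.array([1.0, 0.0])             # exact: [cos(2t), -2 sin(2t)]
```

    Because the step error vanishes for systems the polynomial model captures exactly, step size is limited by modeling fidelity rather than by stability.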

  8. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR-based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR-based transgene copy number determination. Three experimental designs and four data-quality-control-integrated statistical models are presented. For the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data, based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise, and proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, they can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
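    The first experimental design, fitting a standard curve by simple linear regression, can be sketched as follows (the Ct values are hypothetical; a slope near -3.32 corresponds to roughly 100% amplification efficiency):

```python
import numpy as np

# Standard curve: Ct is linear in log10(template amount); the slope gives
# the amplification efficiency E = 10**(-1/slope) - 1.
log_template = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # log10 serial dilutions
ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])        # hypothetical Ct values

slope, intercept = np.polyfit(log_template, ct, 1)
efficiency = 10.0**(-1.0 / slope) - 1.0

# Estimate the (log) starting amount of an unknown sample from its Ct.
ct_unknown = 22.0
log_amount = (ct_unknown - intercept) / slope
```

    The regression residuals and the confidence interval on the slope are exactly the quality-control quantities the abstract argues should accompany any copy number call.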

  9. Bayes factors for the linear ballistic accumulator model of decision-making.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2018-04-01

    Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
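    Monte-Carlo estimation of a marginal likelihood, and hence a Bayes factor, can be illustrated on a toy beta-binomial problem where the exact answer is known (a stand-in for the LBA application, which integrates over the model's parameter priors in the same brute-force way, just on a GPU):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
k, n = 7, 10              # observed successes out of n trials
n_samples = 200_000

def marginal_likelihood(prior_draws):
    """Monte-Carlo estimate of p(data) = E_prior[ p(data | theta) ]."""
    lik = comb(n, k) * prior_draws**k * (1 - prior_draws)**(n - k)
    return lik.mean()

# M1: theta ~ Uniform(0, 1);  M2: theta fixed at 0.5 (a point null).
m1 = marginal_likelihood(rng.uniform(0.0, 1.0, n_samples))
m2 = comb(n, k) * 0.5**n
bayes_factor = m1 / m2    # no parameter estimation required
```

    For the uniform prior the exact marginal likelihood is 1/(n+1), so the Monte-Carlo estimate can be checked directly; the same recipe scales to the LBA once the per-draw likelihood is evaluated in parallel.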

  10. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Flight times and deliverable masses for electric and fusion propulsion systems are difficult to approximate. Numerical integration is required for these continuous thrust systems. Many scientists are not equipped with the tools and expertise to conduct interplanetary and interstellar trajectory analysis for their concepts. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. An analytical method derived in the companion paper was also evaluated. The accuracy of this method is discussed in the paper.

  11. Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics

    NASA Astrophysics Data System (ADS)

    d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.

    2018-05-01

    Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
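    The invariants at stake, the magnetization amplitude and, for precession about a fixed field, the longitudinal component, can be monitored with a plain Runge-Kutta sketch (illustrative undamped precession; the pseudo-symplectic schemes of the paper are constructed precisely so that such drifts are of order q > p):

```python
import numpy as np

def llg_rhs(m, h):
    """Undamped Landau-Lifshitz precession: dm/dt = -m x h."""
    return -np.cross(m, h)

def rk4_step(m, h, dt):
    k1 = llg_rhs(m, h)
    k2 = llg_rhs(m + 0.5*dt*k1, h)
    k3 = llg_rhs(m + 0.5*dt*k2, h)
    k4 = llg_rhs(m + dt*k3, h)
    return m + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

h = np.array([0.0, 0.0, 1.0])                 # constant effective field
m = np.array([np.sin(0.5), 0.0, np.cos(0.5)]) # unit magnetization
dt, n_steps = 0.01, 1000
for _ in range(n_steps):
    m = rk4_step(m, h, dt)

amp_drift = abs(np.linalg.norm(m) - 1.0)      # amplitude error after t = 10
```

    Plain RK4 only preserves |m| to its formal order; tracking amp_drift (and the conserved energy) over long runs is how the improved conservation of pseudo-symplectic variants is demonstrated.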

  12. Induction of plasticity in the human motor cortex by pairing an auditory stimulus with TMS.

    PubMed

    Sowman, Paul F; Dueholm, Søren S; Rasmussen, Jesper H; Mrachacz-Kersting, Natalie

    2014-01-01

    Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditorimotor integration and the capacity for auditorimotor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is applicable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.

  13. Numerical experiment for ultrasonic-measurement-integrated simulation of three-dimensional unsteady blood flow.

    PubMed

    Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki

    2008-08-01

    Integration of ultrasonic measurement and numerical simulation is a possible way to break through the limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals, based on the optimal estimation of errors in the velocity vector determined from measured and computed Doppler velocities at feedback points, are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating the low time resolution of ultrasonic measurement were investigated in a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53%, respectively, in the feedback domain covering the aneurysm. The local maximum wall shear stress was estimated, reproducing both its position and its value to within 1% deviation. A properly designed intermittent feedback, applied only at the times when measurement data were obtained, had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.

  14. An approximate method for solution to variable moment of inertia problems

    NASA Technical Reports Server (NTRS)

    Beans, E. W.

    1981-01-01

    An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant moment of inertia problem. The integrated average of the moment of inertia is determined. The cycle time of the original system was found to match that of the equivalent constant-inertia problem when the rotating speed is at least 4 times greater than the system's minimum natural frequency.

  15. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaodong; Xia, Yidong; Luo, Hong

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index-2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they only require one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. Here, numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index-2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also require significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  16. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE PAGES

    Liu, Xiaodong; Xia, Yidong; Luo, Hong; ...

    2016-10-05

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index-2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they only require one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. Here, numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index-2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also require significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  17. Reflective and refractive objects for mixed reality.

    PubMed

    Knecht, Martin; Traxler, Christoph; Winklhofer, Christoph; Wimmer, Michael

    2013-04-01

    In this paper, we present a novel rendering method which integrates reflective or refractive objects into a differential instant radiosity (DIR) framework usable for mixed-reality (MR) applications. These kinds of objects are special from the light-interaction point of view, as they reflect and refract incident rays. They may therefore cause high-frequency lighting effects known as caustics. Using instant-radiosity (IR) methods to approximate these high-frequency lighting effects would require a large number of virtual point lights (VPLs) and is therefore not desirable due to real-time constraints. Instead, our approach combines differential instant radiosity with three other methods. One method uses impostors to handle reflections more accurately than simple cubemaps. Another method is able to calculate two refractions in real-time, and the third method uses small quads to create caustic effects. Our proposed method replaces the parts of light paths that belong to reflective or refractive objects using these three methods and thus integrates tightly into DIR. In contrast to previous methods which introduce reflective or refractive objects into MR scenarios, our method produces caustics that also emit additional indirect light. The method runs at real-time frame rates, and the results show that reflective and refractive objects with caustics improve the overall impression for MR scenarios.

  18. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.

  19. Finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems

    NASA Astrophysics Data System (ADS)

    Xie, Xue-Jun; Zhang, Xing-Hui; Zhang, Kemei

    2016-07-01

    This paper studies the finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems. Based on the stochastic Lyapunov theorem on finite-time stability, and by using the homogeneous domination method together with the adding-one-power-integrator and sign-function methods, a ? Lyapunov function is constructed and the existence and uniqueness of the solution are verified; a continuous state feedback controller is then designed to guarantee that the closed-loop system is finite-time stable in probability.

  20. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost severalfold, but also allows us to flexibly set modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersion errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical applications such as imaging algorithms and inverse problems.
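    The baseline of the scheme family discussed above is the second-order staggered (leapfrog) integrator for the 1-D acoustic system, with pressure on cell centers and velocity on faces, advanced a half step apart (illustrative periodic domain; the paper extends the staggered time update to arbitrary order):

```python
import numpy as np

# Staggered leapfrog for p_t = -c^2 v_x,  v_t = -p_x  (rho = 1, K = c^2).
n, c, L = 400, 1.0, 1.0
dx = L / n
dt = 0.5 * dx / c                        # CFL number 0.5
x = (np.arange(n) + 0.5) * dx
p = np.exp(-200.0 * (x - 0.5)**2)        # Gaussian pressure pulse
p0 = p.copy()
v = np.zeros(n + 1)                      # velocity on the staggered face grid

n_steps = int(round(L / (c * dt)))       # one full periodic transit
for _ in range(n_steps):
    v[1:-1] -= dt / dx * (p[1:] - p[:-1])     # interior faces
    v[0] -= dt / dx * (p[0] - p[-1])          # periodic wrap
    v[-1] = v[0]
    p -= c**2 * dt / dx * (v[1:] - v[:-1])
```

    After one transit time the two half-pulses wrap around and recombine into the initial profile up to (temporal and spatial) dispersion error, the quantity the higher-order staggered integrators are designed to suppress.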

  1. A user-defined data type for the storage of time series data allowing efficient similarity screening.

    PubMed

    Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor

    2012-07-16

    The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
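
    The access method is based on SAX. A minimal Python sketch of the SAX transform (z-normalisation, piecewise aggregate approximation, then mapping segment means to symbols via Gaussian breakpoints) conveys the idea; the segment count, alphabet, and the assumption that the series length divides evenly are illustrative, and this is not the PostgreSQL extension's code.

```python
import numpy as np

def sax(series, n_segments=8, breakpoints=(-0.6745, 0.0, 0.6745), alphabet="abcd"):
    """SAX word for a series whose length is divisible by n_segments."""
    s = (series - series.mean()) / series.std()       # z-normalise
    paa = s.reshape(n_segments, -1).mean(axis=1)      # piecewise aggregate
    return "".join(alphabet[np.searchsorted(breakpoints, m)] for m in paa)

word = sax(np.sin(np.linspace(0, 2 * np.pi, 64)))     # 8-symbol word
```

Similarity screening then compares such words instead of raw series, which is what makes an index over the symbolic representation effective.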

  2. Multi-symplectic integrators: numerical schemes for Hamiltonian PDEs that conserve symplecticity

    NASA Astrophysics Data System (ADS)

    Bridges, Thomas J.; Reich, Sebastian

    2001-06-01

    The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well established subject and has led to a deeper understanding of existing methods as well as to the development of new very efficient and accurate schemes, e.g., for rigid body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R2: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension are also discussed.

  3. Toward magnetic resonance-guided electroanatomical voltage mapping for catheter ablation of scar-related ventricular tachycardia: a comparison of registration methods.

    PubMed

    Tao, Qian; Milles, Julien; VAN Huls VAN Taxis, Carine; Lamb, Hildo J; Reiber, Johan H C; Zeppenfeld, Katja; VAN DER Geest, Rob J

    2012-01-01

    Integration of preprocedural delayed enhanced magnetic resonance imaging (DE-MRI) with electroanatomical voltage mapping (EAVM) may provide additional high-resolution substrate information for catheter ablation of scar-related ventricular tachycardias (VT). Accurate and fast image integration of DE-MRI with EAVM is desirable for MR-guided ablation. Twenty-six VT patients with large transmural scar underwent catheter ablation and preprocedural DE-MRI. With different registration models and EAVM input, 3 image integration methods were evaluated and compared to the commercial registration module CartoMerge. The performance was evaluated both in terms of a distance measure that describes surface matching and a correlation measure that describes actual scar correspondence. Compared to CartoMerge, the method that uses the translation-and-rotation model and high-density EAVM input resulted in a registration error of 4.32 ± 0.69 mm as compared to 4.84 ± 1.07 mm (P < 0.05); the method that uses the translation model and high-density EAVM input resulted in a registration error of 4.60 ± 0.65 mm (P = NS); and the method that uses the translation model and a single anatomical landmark input resulted in a registration error of 6.58 ± 1.63 mm (P < 0.05). No significant difference in scar correlation was observed between all 3 methods and CartoMerge (P = NS). During VT ablation procedures, accurate integration of EAVM and DE-MRI can be achieved using a translation registration model and a single anatomical landmark. This model allows for image integration in minimal mapping time and is likely to reduce fluoroscopy time and increase procedure efficacy. © 2011 Wiley Periodicals, Inc.
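
    The translation-and-rotation registration model can be illustrated with the standard least-squares rigid alignment (Kabsch/Procrustes) algorithm. The synthetic point sets below stand in for EAVM points and the MRI-derived surface; they are not the study's data or its exact pipeline.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares R, t minimising ||R @ P_i + t - Q_i||^2 (Kabsch)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_register(P, Q)                          # recovers R_true, t_true
```

A translation-only model simply drops the rotation step and uses the centroid difference, which is why it needs only a single anatomical landmark.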

  4. A Location Method Using Sensor Arrays for Continuous Gas Leakage in Integrally Stiffened Plates Based on the Acoustic Characteristics of the Stiffener

    PubMed Central

    Bian, Xu; Li, Yibo; Feng, Hao; Wang, Jiaqiang; Qi, Lei; Jin, Shijiu

    2015-01-01

    This paper proposes a continuous leakage location method based on the ultrasonic array sensor, which is specific to continuous gas leakage in a pressure container with an integral stiffener. This method collects the ultrasonic signals generated from the leakage hole through the piezoelectric ultrasonic sensor array, and analyzes the space-time correlation of every collected signal in the array. Meanwhile, it combines with the method of frequency compensation and superposition in time domain (SITD), based on the acoustic characteristics of the stiffener, to obtain a high-accuracy location result on the stiffener wall. According to the experimental results, the method successfully solves the orientation problem concerning continuous ultrasonic signals generated from leakage sources, and acquires high accuracy location information on the leakage source using a combination of multiple sets of orienting results. The mean value of location absolute error is 13.51 mm on the one-square-meter plate with an integral stiffener (4 mm width; 20 mm height; 197 mm spacing), and the maximum location absolute error is generally within a ±25 mm interval. PMID:26404316
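
    The space-time correlation step can be illustrated by the basic cross-correlation lag estimate between two array channels. The sampling rate, signal, and delay below are invented for the demonstration; the paper's method additionally applies frequency compensation and SITD to handle the stiffener acoustics.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100_000                         # assumed sampling rate, Hz
sig = rng.normal(size=4096)          # broadband "leak noise"
true_lag = 37                        # delay of channel b, in samples
a = sig
b = np.concatenate([np.zeros(true_lag), sig])[: len(sig)]  # delayed copy

# Peak of the full cross-correlation gives the relative time of arrival.
xc = np.correlate(b, a, mode="full")
lag = int(xc.argmax()) - (len(a) - 1)
delay_seconds = lag / fs
```

With delays estimated across several sensor pairs, the source direction follows from the array geometry, and multiple orientation results are combined into a location.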

  5. Cutting Force Prediction Based on Integration of Symmetric Fuzzy Number and Finite Element Method

    PubMed Central

    Wang, Zhanli; Hu, Yanjuan; Wang, Yao; Dong, Chao; Pang, Zaixiang

    2014-01-01

    In the process of turning, the cutting force exhibits uncertainty caused by the disturbance of random factors. To determine the uncertain range of the cutting force, symmetric fuzzy numbers are integrated with the finite element method (FEM) to predict it. The method uses symmetric fuzzy numbers to establish a fuzzy function between the cutting force and three factors, and obtains the uncertain interval of the cutting force by linear programming. At the same time, the change of the cutting force with time is simulated directly by a thermal-mechanical coupled FEM, together with the nonuniform stress field and temperature distribution of the workpiece, tool, and chip under thermal-mechanical coupling. The experimental results show that the method is effective for the uncertain prediction of the cutting force. PMID:24790556
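
    A hedged sketch of the interval idea: represent the empirical coefficient of a power-law cutting-force model F = C · ap^x · f^y as a symmetric interval and propagate it to an uncertain force range. The coefficients and cutting parameters are invented for illustration and are not taken from the paper, which uses fuzzy membership functions and linear programming rather than plain interval arithmetic.

```python
# Symmetric interval around a nominal coefficient (hypothetical values).
C_lo, C_hi = 1400.0, 1600.0       # interval for the force coefficient
x_exp, y_exp = 1.0, 0.75          # assumed model exponents
ap, f = 2.0, 0.3                  # depth of cut (mm), feed (mm/rev)

factor = ap**x_exp * f**y_exp     # positive, so interval bounds map monotonically
F_lo, F_hi = C_lo * factor, C_hi * factor
F_nom = 0.5 * (F_lo + F_hi)       # nominal prediction sits mid-interval
```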

  6. Symmetric and arbitrarily high-order Birkhoff-Hermite time integrators and their long-time behaviour for solving nonlinear Klein-Gordon equations

    NASA Astrophysics Data System (ADS)

    Liu, Changying; Iserles, Arieh; Wu, Xinyuan

    2018-03-01

    The Klein-Gordon equation with nonlinear potential occurs in a wide range of application areas in science and engineering. Its computation represents a major challenge. The main theme of this paper is the construction of symmetric and arbitrarily high-order time integrators for the nonlinear Klein-Gordon equation by integrating Birkhoff-Hermite interpolation polynomials. To this end, under the assumption of periodic boundary conditions, we begin with the formulation of the nonlinear Klein-Gordon equation as an abstract second-order ordinary differential equation (ODE) and its operator-variation-of-constants formula. We then derive a symmetric and arbitrarily high-order Birkhoff-Hermite time integration formula for the nonlinear abstract ODE. Accordingly, the stability, convergence and long-time behaviour are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, subject to suitable temporal and spatial smoothness. A remarkable characteristic of this new approach is that the requirement of temporal smoothness is reduced compared with the traditional numerical methods for PDEs in the literature. Numerical results demonstrate the advantage and efficiency of our time integrators in comparison with the existing numerical approaches.

  7. VizieR Online Data Catalog: ynogkm: code for calculating time-like geodesics (Yang+, 2014)

    NASA Astrophysics Data System (ADS)

    Yang, X.-L.; Wang, J.-C.

    2013-11-01

    Here we present the source file for a new public code named ynogkm, aimed at fast calculation of time-like geodesics in a Kerr-Newman spacetime. In the code the four Boyer-Lindquist coordinates and the proper time are expressed as functions of a parameter p semi-analytically, i.e., r(p), μ(p), φ(p), t(p), and σ(p), by using Weierstrass' and Jacobi's elliptic functions and integrals. All of the elliptic integrals are computed by Carlson's elliptic integral method, which guarantees the fast speed of the code. The source Fortran file ynogkm.f90 contains four modules: constants, rootfind, ellfunction, and blcoordinates. (3 data files).
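
    Carlson's method evaluates elliptic integrals through symmetric standard forms such as R_F, computed by the duplication theorem. A textbook-style sketch (not the ynogkm Fortran implementation, which adds series corrections and handles the other Carlson forms) is:

```python
import math

def carlson_rf(x, y, z, tol=1e-12):
    """Carlson's symmetric integral R_F(x, y, z) via the duplication theorem:
    R_F is invariant under (x,y,z) -> ((x+lam)/4, ...), and tends to mu^(-1/2)."""
    while True:
        lam = math.sqrt(x * y) + math.sqrt(y * z) + math.sqrt(z * x)
        x, y, z = (x + lam) / 4.0, (y + lam) / 4.0, (z + lam) / 4.0
        mu = (x + y + z) / 3.0
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < tol * mu:
            return 1.0 / math.sqrt(mu)

# Complete elliptic integral of the first kind: K(m) = R_F(0, 1 - m, 1).
k0 = carlson_rf(0.0, 1.0, 1.0)   # K(0) = pi/2
```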

  8. Geometric integration in Born-Oppenheimer molecular dynamics.

    PubMed

    Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N

    2011-12-14

    Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time reversible Verlet, (2) second order optimal symplectic, and (3) third order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that would otherwise accumulate from finite-precision arithmetic in a perfectly reversible dynamics. © 2011 American Institute of Physics.
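
    The time-reversible Verlet scheme mentioned as method (1) can be sketched on a harmonic oscillator to show the bounded long-time energy error that motivates geometric integration. The oscillator and step size are illustrative; the paper applies such schemes to the coupled nuclear-electronic extended Lagrangian.

```python
import numpy as np

def verlet(x, v, force, dt, steps):
    """Velocity Verlet: time-reversible and symplectic."""
    a = force(x)
    energy = np.empty(steps)
    for n in range(steps):
        x += dt * v + 0.5 * dt**2 * a
        a_new = force(x)
        v += 0.5 * dt * (a + a_new)
        a = a_new
        energy[n] = 0.5 * v**2 + 0.5 * x**2   # total energy, unit mass/spring
    return x, v, energy

# Harmonic oscillator with E0 = 0.5: the energy error oscillates but shows
# no secular drift, even over very many steps.
xf, vf, E = verlet(1.0, 0.0, lambda q: -q, dt=0.05, steps=100000)
drift = np.abs(E - 0.5).max()
```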

  9. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
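
    A minimal sketch of the polynomial root-finding idea (with invented parameters, not the note's general algorithm): for a current-based integrate-and-fire neuron with tau_m = 2 tau_s, the substitution x = exp(-t/tau_m) turns the threshold-crossing condition into a quadratic, so the next spike time is found exactly from its roots.

```python
import numpy as np

tau_m, tau_s = 20.0, 10.0          # membrane and synaptic time constants (ms)
V0, I0, Vth = 0.0, 2.0, 0.8        # initial voltage, current amplitude, threshold

# dV/dt = -V/tau_m + I0*exp(-t/tau_s) has the closed-form solution
# V(t) = (V0 - A)*exp(-t/tau_m) + A*exp(-t/tau_s),  A = I0*tau_m*tau_s/(tau_s - tau_m).
A = I0 * tau_m * tau_s / (tau_s - tau_m)

# With x = exp(-t/tau_m) and tau_m = 2*tau_s: exp(-t/tau_s) = x**2, so
# V = A*x**2 + (V0 - A)*x, and V = Vth is a quadratic in x.
roots = np.roots([A, V0 - A, -Vth])
# The largest admissible root x in (0, 1] is the earliest crossing time.
x = max(r.real for r in roots if abs(r.imag) < 1e-12 and 0.0 < r.real <= 1.0)
t_spike = -tau_m * np.log(x)
V_check = (V0 - A) * np.exp(-t_spike / tau_m) + A * np.exp(-t_spike / tau_s)
```

With several synaptic time constants in rational ratios, the same substitution yields a higher-degree polynomial, which is the case the note addresses.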

  10. Thermal form-factor approach to dynamical correlation functions of integrable lattice models

    NASA Astrophysics Data System (ADS)

    Göhmann, Frank; Karbach, Michael; Klümper, Andreas; Kozlowski, Karol K.; Suzuki, Junji

    2017-11-01

    We propose a method for calculating dynamical correlation functions at finite temperature in integrable lattice models of Yang-Baxter type. The method is based on an expansion of the correlation functions as a series over matrix elements of a time-dependent quantum transfer matrix rather than the Hamiltonian. In the infinite Trotter-number limit the matrix elements become time independent and turn into the thermal form factors studied previously in the context of static correlation functions. We make this explicit with the example of the XXZ model. We show how the form factors can be summed utilizing certain auxiliary functions solving finite sets of nonlinear integral equations. The case of the XX model is worked out in more detail leading to a novel form-factor series representation of the dynamical transverse two-point function.

  11. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models to medical research. Hungarian mortality rates were analysed by autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. The ARIMA approach is demonstrated by two examples: analysis of the mortality rates of ischemic heart disease and of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: estimation based on the standard normal distribution, estimation based on White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we may conclude that the intervals obtained with the continuous-time estimation model were much smaller than those of the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons, demonstrating the seasonal occurrence of childhood leukaemia in Hungary.
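
    As an illustration of the autoregressive building block of ARIMA models (on simulated data, not the Hungarian mortality series), an AR(1) parameter and its normal-approximation confidence interval can be estimated by least squares:

```python
import numpy as np

rng = np.random.default_rng(42)
phi_true, n = 0.7, 5000

# Simulate y_t = phi * y_{t-1} + noise.
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Least-squares estimate of phi and its asymptotic standard error,
# se = sqrt((1 - phi^2) / n), giving a standard-normal 95% interval.
phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
se = np.sqrt((1.0 - phi_hat**2) / n)
ci = (phi_hat - 1.96 * se, phi_hat + 1.96 * se)
```

The continuous-time and White's-theory intervals the paper compares replace this normal approximation with different distributional assumptions.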

  12. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    1999-01-01

    This paper reviews three recent works on numerical methods to integrate ordinary differential equations (ODE), specially designed for parallel, vector, and/or multi-processor-unit (PU) computers. The first is the Picard-Chebyshev method (Fukushima, 1997a). It obtains a global solution of the ODE in the form of a Chebyshev polynomial of large (> 1000) degree by applying the Picard iteration repeatedly. The iteration converges for smooth problems and/or perturbed dynamics. The method runs around 100-1000 times faster in the vector mode than in the scalar mode of a certain computer with vector processors (Fukushima, 1997b). The second is a parallelization of a symplectic integrator (Saha et al., 1997). It regards the implicit midpoint rules covering thousands of timesteps as large-scale nonlinear equations and solves them by fixed-point iteration. The method is applicable to Hamiltonian systems and is expected to lead to an acceleration factor of around 50 on parallel computers with more than 1000 PUs. The last is a parallelization of the extrapolation method (Ito and Fukushima, 1997). It performs trial integrations in parallel, and the trial integrations are further accelerated by balancing the computational load among PUs by the technique of folding. The method is all-purpose and achieves an acceleration factor of around 3.5 by using several PUs. Finally, we give a perspective on the parallelization of some implicit integrators which require multiple corrections in solving implicit formulas, such as the implicit Hermite integrators (Makino and Aarseth, 1992), (Hut et al., 1995) or the implicit symmetric multistep methods (Fukushima, 1998), (Fukushima, 1999).
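
    The Picard iteration at the heart of the Picard-Chebyshev method can be sketched on y' = y, y(0) = 1, with trapezoidal quadrature standing in for the Chebyshev machinery. Each sweep updates the whole solution at once, which is what makes the method vectorize and parallelize well; grid size and iteration count here are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
y = np.ones_like(t)                       # initial guess y_0(t) = 1

for _ in range(30):                       # Picard sweeps: y_{k+1} = 1 + int_0^t y_k ds
    f = y                                 # f(y) = y for this test problem
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * dt * (f[1:] + f[:-1])))
    )
    y = 1.0 + integral                    # entire trajectory updated in one vector op

err = abs(y[-1] - np.e)                   # converges to exp(t) up to quadrature error
```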

  13. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1996-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
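
    A hedged software sketch of the signal chain the patent describes in analog hardware: integrate a broadband acceleration signal to velocity, then flag when its magnitude exceeds a selected trip point. The synthetic signal, sample rate, and trip level are invented for the demonstration.

```python
import numpy as np

fs = 1000.0                                  # assumed sample rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
# 50 Hz vibration whose amplitude jumps 5x at t = 1 s (a developing fault).
accel = np.sin(2 * np.pi * 50 * t) * (1 + 4 * (t > 1.0))

# Trapezoidal integration: acceleration -> velocity.
vel = np.concatenate(([0.0], np.cumsum(0.5 / fs * (accel[1:] + accel[:-1]))))

trip_point = 0.01                            # velocity trip level, chosen for the demo
alarm = np.abs(vel) > trip_point
first_alarm_time = t[np.argmax(alarm)]       # first sample above the trip point
```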

  14. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1998-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.

  15. Method for Fabricating and Packaging an M.Times.N Phased-Array Antenna

    NASA Technical Reports Server (NTRS)

    Xu, Xiaochuan (Inventor); Chen, Yihong (Inventor); Chen, Ray T. (Inventor); Subbaraman, Harish (Inventor)

    2017-01-01

    A method for fabricating an M.times.N, P-bit phased-array antenna on a flexible substrate is disclosed. The method comprises ink-jet printing and hardening alignment marks, antenna elements, transmission lines, switches, an RF coupler, and multilayer interconnections onto the flexible substrate. The substrate of the M.times.N, P-bit phased-array antenna may comprise an integrated control circuit of printed electronic components such as photovoltaic cells, batteries, resistors, capacitors, etc. Other embodiments are described and claimed.

  16. Integration of laser trapping for continuous and selective monitoring of photothermal response of a single microparticle.

    PubMed

    Vasudevan, Srivathsan; Chen, George C K; Ahluwalia, Balpreet Singh

    2008-12-01

    Photothermal response (PTR) is an established pump and probe technique for real-time sensing of biological assays. Continuous and selective PTR monitoring is difficult owing to the Brownian motion changing the relative position of the target with respect to the beams. Integration of laser trapping with PTR is proposed as a solution. The proposed method is verified on red polystyrene microparticles. PTR is continuously monitored for 30 min. Results show that the mean relaxation time variation of the acquired signals is less than 5%. The proposed method is then applied to human red blood cells for continuous and selective PTR.

  17. Real-time, interactive animation of deformable two- and three-dimensional objects

    DOEpatents

    Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.

    2003-06-03

    A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
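
    A minimal sketch of the implicit (backward) Euler idea behind such mass-spring updates: each step solves a small linear system, which keeps large time steps stable where explicit integration would blow up. The single damped spring below is illustrative; the patented method handles full 2-D/3-D meshes and adds the momentum-preserving correction.

```python
import numpy as np

k, m, c, h = 100.0, 1.0, 0.5, 0.1        # stiffness, mass, damping, large step
x, v = 1.0, 0.0

# Backward Euler: x' = x + h v',  m v' = m v + h (-k x' - c v').
# Rearranged as a 2x2 linear system in the unknowns (x', v').
A = np.array([[1.0, -h],
              [h * k / m, 1.0 + h * c / m]])
for _ in range(100):
    x, v = np.linalg.solve(A, np.array([x, v]))
```

With h = 0.1 and natural frequency 10 rad/s, explicit Euler would diverge; the implicit update instead damps the motion, trading accuracy for unconditional stability.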

  18. Collaboration processes and perceived effectiveness of integrated care projects in primary care: a longitudinal mixed-methods study.

    PubMed

    Valentijn, Pim P; Ruwaard, Dirk; Vrijhoef, Hubertus J M; de Bont, Antoinette; Arends, Rosa Y; Bruijnzeels, Marc A

    2015-10-09

    Collaborative partnerships are considered an essential strategy for integrating local disjointed health and social services. Currently, little evidence is available on how integrated care arrangements between professionals and organisations are achieved through the evolution of collaboration processes over time. The first aim was to develop a typology of integrated care projects (ICPs) based on the final degree of integration as perceived by multiple stakeholders. The second aim was to study how the types of integration differ in changes of collaboration processes over time and in final perceived effectiveness. A longitudinal mixed-methods study design based on two data sources (surveys and interviews) was used to identify the perceived degree of integration and patterns in collaboration among 42 ICPs in primary care in The Netherlands. We used cluster analysis to identify distinct subgroups of ICPs based on the final perceived degree of integration from a professional, organisational and system perspective. With the use of ANOVAs, the subgroups were contrasted based on: 1) changes in collaboration processes over time (shared ambition, interests and mutual gains, relationship dynamics, organisational dynamics and process management) and 2) final perceived effectiveness (i.e. rated success) at the professional, organisational and system levels. The ICPs were classified into three subgroups: 'United Integration Perspectives (UIP)', 'Disunited Integration Perspectives (DIP)' and 'Professional-oriented Integration Perspectives (PIP)'. ICPs within the UIP subgroup made the strongest increase in trust-based (mutual gains and relationship dynamics) as well as control-based (organisational dynamics and process management) collaboration processes and had the highest overall effectiveness rates. ICPs within the DIP subgroup, on the other hand, decreased on collaboration processes and had the lowest overall effectiveness rates. ICPs within the PIP subgroup increased in control-based collaboration processes (organisational dynamics and process management) and had the highest effectiveness rates at the professional level. The differences across the three subgroups in terms of the development of collaboration processes and the final perceived effectiveness provide evidence that united stakeholder perspectives are achieved through a constructive collaboration process over time. Disunited perspectives at the professional, organisational and system levels can be aligned by both trust-based and control-based collaboration processes.

  19. High throughput integrated thermal characterization with non-contact optical calorimetry

    NASA Astrophysics Data System (ADS)

    Hou, Sichao; Huo, Ruiqing; Su, Ming

    2017-10-01

    Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments limited by low throughput, where only one sample is examined at a time. This work reports an infrared-based optical calorimetry, with its theoretical foundation, which provides an integrated solution to characterize the thermal properties of materials with high throughput. By taking time-domain temperature information of spatially distributed samples, this method allows a single device (an infrared camera) to determine the thermal properties of both phase-change systems (melting temperature and latent heat of fusion) and non-phase-change systems (thermal conductivity and heat capacity). It further allows these thermal properties of multiple samples to be determined rapidly, remotely, and simultaneously. In this proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolution of the advanced infrared camera, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.

  20. Using MathCad to Evaluate Exact Integral Formulations of Spacecraft Orbital Heats for Primitive Surfaces at Any Orientation

    NASA Technical Reports Server (NTRS)

    Pinckney, John

    2010-01-01

    With the advent of high-speed computing, Monte Carlo ray-tracing techniques have become the preferred method for evaluating spacecraft orbital heats. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized tools that suffer from some inaccuracy, long calculation times and high purchase cost. A general orbital heating integral is presented here that is accurate, fast and runs in MathCad, a generally available engineering mathematics program. The integral is easy to read, understand and alter, and can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.
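
    A toy version of such a direct-heating integral, under assumed geometry rather than the paper's MathCad worksheet: the orbit-average direct solar flux on a flat plate whose normal rotates in the orbit plane, ignoring eclipse. The analytic average of S·max(0, cos θ) over one revolution is S/π, which the numerical quadrature should reproduce.

```python
import numpy as np

S = 1367.0                                        # W/m^2, solar constant
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
q = S * np.maximum(0.0, np.cos(theta))            # flux is zero when facing away
q_avg = q.mean()                                  # periodic mean = orbit average
# Expected: S / pi (about 435 W/m^2 for this geometry).
```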

  1. An Integrated Analysis-Test Approach

    NASA Technical Reports Server (NTRS)

    Kaufman, Daniel

    2003-01-01

    This viewgraph presentation provides an overview of a project to develop a computer program which integrates data analysis and test procedures. The software application aims to propose a new perspective to traditional mechanical analysis and test procedures and to integrate pre-test and test analysis calculation methods. The program also should also be able to be used in portable devices and allows for the 'quasi-real time' analysis of data sent by electronic means. Test methods reviewed during this presentation include: shaker swept sine and random tests, shaker shock mode tests, shaker base driven model survey tests and acoustic tests.

  2. INTEGRAL/SPI data segmentation to retrieve source intensity variations

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P. R.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-07-01

    Context. The INTEGRAL/SPI, X/γ-ray spectrometer (20 keV-8 MeV) is an instrument for which recovering source intensity variations is not straightforward and can constitute a difficulty for data analysis. In most cases, determining the source intensity changes between exposures is largely based on a priori information. Aims: We propose techniques that help to overcome the difficulty related to source intensity variations, which make this step more rational. In addition, the constructed "synthetic" light curves should permit us to obtain a sky model that describes the data better and optimizes the source signal-to-noise ratios. Methods: For this purpose, the time intensity variation of each source was modeled as a combination of piecewise segments of time during which a given source exhibits a constant intensity. To optimize the signal-to-noise ratios, the number of segments was minimized. We present a first method that takes advantage of previous time series that can be obtained from another instrument on-board the INTEGRAL observatory. A data segmentation algorithm was then used to synthesize the time series into segments. The second method no longer needs external light curves, but solely SPI raw data. For this, we developed a specific algorithm that involves the SPI transfer function. Results: The time segmentation algorithms that were developed solve a difficulty inherent to the SPI instrument, which is the intensity variations of sources between exposures, and it allows us to obtain more information about the sources' behavior. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain, and Switzerland), Czech Republic and Poland with participation of Russia and the USA.
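
    The segmentation idea, modelling each source light curve as piecewise-constant segments while penalising the number of segments, can be sketched with a generic optimal-partitioning change-point algorithm. This is an illustrative stand-in, not the SPI-specific algorithm involving the instrument transfer function.

```python
import numpy as np

def segment(y, penalty):
    """Optimal piecewise-constant partition of y: minimise within-segment
    squared error plus a fixed penalty per segment (dynamic programming)."""
    n = len(y)
    cs = np.concatenate(([0.0], np.cumsum(y)))
    cs2 = np.concatenate(([0.0], np.cumsum(y**2)))

    def cost(i, j):                      # SSE of y[i:j] around its own mean
        s, s2, m = cs[j] - cs[i], cs2[j] - cs2[i], j - i
        return s2 - s * s / m

    F = [0.0] + [np.inf] * n             # F[j]: best objective for y[:j]
    last = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = F[i] + cost(i, j) + penalty
            if c < F[j]:
                F[j], last[j] = c, i
    bounds, j = [], n                    # backtrack the segment boundaries
    while j > 0:
        bounds.append(j)
        j = last[j]
    return sorted(bounds)                # right edges of the segments

# A synthetic "light curve" with one intensity change at exposure 50.
y = np.concatenate([np.full(50, 1.0), np.full(50, 5.0)])
edges = segment(y, penalty=1.0)          # recovers [50, 100]
```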

  3. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time

    PubMed Central

    Lu, Yuhua; Liu, Qian

    2018-01-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870

  4. Recording of absorption spectra by a three-beam integral technique with a tunable laser and external cavity

    NASA Astrophysics Data System (ADS)

    Korolenko, P. V.; Nikolaev, I. V.; Ochkin, V. N.; Tskhai, S. N.

    2014-04-01

    An integral method is considered for recording absorption using three laser beams transmitted through and reflected from an external cavity containing the absorbing medium (R-ICOS). The method is an elaboration of the known single-beam ICOS method and suppresses the influence of radiation phase fluctuations in the resonator on the recording of weak absorption spectra. Above all, this reduces high-frequency instabilities and makes it possible to record spectra over short time intervals. In this method, the resonator mirrors may have moderate reflection coefficients. The capabilities of the method are demonstrated on weak absorption spectra of atmospheric methane and natural gas in a spectral range around 1650 nm. With mirrors having reflection coefficients of 0.8-0.99, a spectrum can be recorded in 320 μs with accuracy sufficient for detecting the background concentration of methane in the atmosphere. For an acquisition time of 20 s, absorption coefficients of ~2×10-8 cm-1 can be measured, which corresponds to a molecule concentration 40 times lower than the background value.

  5. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time.

    PubMed

    Xu, Lang; Lu, Yuhua; Liu, Qian

    2018-02-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method accounts for the mechanical properties of soft tissue, such as viscoelasticity, nonlinearity and incompressibility, while its speed, stability and accuracy meet the requirements of a surgery simulator. Modifying the traditional mass spring damper (MSD) equation introduces nonlinearity and viscoelasticity into the calculation of the elastic force. The elastic force is then used in the constraint projection step to reduce the constraint potential naturally. Node positions are updated under the combined spring force and constraint conservative force through Newton's second law. We compare conventional MSD and position-based dynamics against our new integrated method. Our approach enables stable, fast and large-step simulation while freely controlling visual effects through nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation is simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.
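
    The abstract does not give the exact modification of the MSD equation. A minimal sketch of the kind of nonlinear, viscoelastic spring force it describes, with a hypothetical cubic stiffening term and illustrative parameters k, c, alpha, might look like:

```python
# Hedged sketch: one common way to add nonlinearity (cubic stiffening) and
# viscoelasticity (rate-dependent damping) to a mass-spring-damper force.
# The paper's actual modification is not specified in the abstract; all
# parameter names and values here are illustrative.

def msd_force(stretch, stretch_rate, k=100.0, c=0.5, alpha=10.0):
    """Force along one spring.

    stretch      : current length minus rest length
    stretch_rate : time derivative of the stretch
    k, alpha     : linear and cubic stiffness (nonlinear elasticity)
    c            : damping coefficient (viscoelasticity)
    """
    elastic = -k * stretch - alpha * k * stretch ** 3  # stiffens at large strain
    viscous = -c * stretch_rate                        # dissipative term
    return elastic + viscous

# A stretched, stationary spring pulls back toward its rest length:
f = msd_force(0.1, 0.0)
```

    In a position-based dynamics loop, a force of this form would feed the constraint projection step described above.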

  6. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities in chaotic dynamical systems. The key idea is to recast the time-averaged integration term as a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. An LU factorization technique for calculating the Lagrange multipliers improves both convergence and computational expense. Numerical experiments on a set of problems selected from the literature illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed with the direct differentiation sensitivity analysis method.
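
    The key idea above, recasting a time average as a differential equation, can be sketched on a toy (deliberately non-chaotic) problem: augment the state with a variable s satisfying s' = J(x), so that s(T)/T is the time average of J. Here J(x) = x and dx/dt = -x, which has a closed-form average for checking; the RK4 stepper and parameters are illustrative.

```python
import math

# Sketch of the "time average as an ODE" idea: integrate (x, s) jointly,
# where s accumulates the integral of J(x) = x along the trajectory.
# Toy linear system dx/dt = -x, not a chaotic benchmark.

def rhs(state):
    x, s = state
    return (-x, x)                  # s' = J(x) = x

def rk4_step(state, dt):
    def add(u, v, a):
        return tuple(ui + a * vi for ui, vi in zip(u, v))
    k1 = rhs(state)
    k2 = rhs(add(state, k1, dt / 2))
    k3 = rhs(add(state, k2, dt / 2))
    k4 = rhs(add(state, k3, dt))
    return tuple(u + dt / 6 * (a + 2 * b + 2 * c + d)
                 for u, a, b, c, d in zip(state, k1, k2, k3, k4))

T, n = 5.0, 5000
dt = T / n
state = (1.0, 0.0)                  # x(0) = 1, s(0) = 0
for _ in range(n):
    state = rk4_step(state, dt)

time_average = state[1] / T         # numerical average of x(t) over [0, T]
exact = (1.0 - math.exp(-T)) / T    # closed form for this toy problem
```

    Once the average is itself a state variable, standard sensitivity machinery (direct differentiation or shadowing) can be applied to the augmented system.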

  7. Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Ellison, Charles Leland

    Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and by opportunities in high performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators, known as variational integration, to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate: the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is the demonstration that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics.
The guiding center integrator assumes coordinates in which one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are demonstrated numerically. All algorithms have been implemented as part of a modern, parallel, ODE-solving library, suitable for use in high-performance simulations.
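
    A minimal illustration of why geometric (variational/symplectic) integration matters: the textbook Störmer-Verlet scheme, a classic variational integrator (not one of the dissertation's degenerate integrators), keeps the energy of a harmonic oscillator bounded over long runs, while explicit Euler drifts without bound.

```python
import math

# Stoermer-Verlet (kick-drift-kick) vs. explicit Euler on a unit harmonic
# oscillator. Textbook example; parameters are illustrative.

def verlet(q, p, dt, steps):
    for _ in range(steps):
        p -= 0.5 * dt * q          # half kick (force = -q)
        q += dt * p                # drift
        p -= 0.5 * dt * q          # half kick
    return q, p

def euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

q0, p0, dt, steps = 1.0, 0.0, 0.05, 20000   # roughly 160 oscillation periods
qv, pv = verlet(q0, p0, dt, steps)
qe, pe = euler(q0, p0, dt, steps)

verlet_drift = abs(energy(qv, pv) - energy(q0, p0))   # stays small
euler_drift = abs(energy(qe, pe) - energy(q0, p0))    # grows exponentially
```

    The same conservation-law retention, generalized to degenerate non-canonical Lagrangians, is what the dissertation's integrators provide for guiding center motion.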

  8. Remote Sensing: A valuable tool in the Forest Service decision making process. [in Utah

    NASA Technical Reports Server (NTRS)

    Stanton, F. L.

    1975-01-01

    Forest Service studies for integrating remotely sensed data into existing information systems highlight a need to: (1) re-examine present methods of collecting and organizing data, (2) develop an integrated information system for rapidly processing and interpreting data, (3) apply existing technological tools in new ways, and (4) provide accurate and timely information for making sound management decisions. The Forest Service developed an integrated information system using remote sensors, microdensitometers, computer hardware and software, and interactive accessories. These efforts substantially reduced the time required to collect and process data.

  9. Spin coherent-state path integrals and the instanton calculus

    NASA Astrophysics Data System (ADS)

    Garg, Anupam; Kochetov, Evgueny; Park, Kee-Su; Stone, Michael

    2003-01-01

    We use an instanton approximation to the continuous-time spin coherent-state path integral to obtain the tunnel splitting of classically degenerate ground states. We show that provided the fluctuation determinant is carefully evaluated, the path integral expression is accurate to order O(1/j). We apply the method to the LMG model and to the molecular magnet Fe8 in a transverse field.

  10. High-sensitivity Leak-testing Method with High-Resolution Integration Technique

    NASA Astrophysics Data System (ADS)

    Fujiyoshi, Motohiro; Nonomura, Yutaka; Senda, Hidemi

    A high-resolution leak-testing method named the HR (High-Resolution) Integration Technique has been developed for MEMS (Micro Electro Mechanical Systems) sensors, such as a vibrating angular-rate sensor housed in a vacuum package. The procedure for obtaining high leak-rate resolution is as follows. A package filled with helium gas is kept in a small accumulation chamber to accumulate the helium gas leaking from the package. After accumulation, the accumulated helium gas is introduced into a mass spectrometer in a short period of time, and the flux of the helium gas is measured by the mass spectrometer as a transient phenomenon. The leak rate of the package is calculated from the detected transient waveform and the accumulation time of the helium gas in the accumulation chamber. Because the density of the helium gas in the vacuum chamber increases and the accumulated helium gas is measured in a very short period of time, the peak of the transient waveform becomes high and the signal-to-noise ratio is much improved. The detectable leak-rate resolution of the technique reaches 1×10⁻¹⁵ Pa·m³/s, which is 10³ times better than that of the conventional helium vacuum integration method. The accuracy of the measuring system was verified with a standard helium gas leak source: the experimental results agreed with theoretical calculations based on the leak rate of the source to within 2%.
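
    The bookkeeping behind the technique can be sketched as follows: helium leaking at rate Q [Pa·m³/s] for an accumulation time t builds up a gas quantity Q·t, which is then released into the spectrometer as a short transient; integrating that transient and dividing by the accumulation time recovers Q. The numbers below are synthetic, not from the paper.

```python
# Hedged sketch of the leak-rate arithmetic (synthetic signal, illustrative
# sampling; not the paper's calibration procedure).

def leak_rate(transient_flux, dt, t_accumulate):
    """transient_flux: sampled spectrometer flux [Pa*m^3/s], dt: sample spacing [s],
    t_accumulate: helium accumulation time [s]."""
    released = sum(transient_flux) * dt      # total released gas quantity [Pa*m^3]
    return released / t_accumulate           # average leak rate [Pa*m^3/s]

# Synthetic transient: 1e-10 Pa*m^3 released after accumulating for 1e5 s
signal = [2e-11] * 5                         # five samples at dt = 1 s
q = leak_rate(signal, 1.0, 1e5)              # ~1e-15 Pa*m^3/s
```

    Long accumulation plus fast release is what concentrates the signal into a tall, easily detected peak.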

  11. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
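
    The contrast between the two schemes can be sketched for a single leaky integrate-and-fire neuron with no input between events: a time-driven loop advances the membrane potential in fixed steps, while an event-driven update jumps straight to the next event using the closed-form decay V(t) = V0·exp(-t/tau). Parameters are illustrative, not those of the simulator described above.

```python
import math

# Time-driven vs. event-driven update of passive membrane decay
# dV/dt = -V/tau. Toy single-neuron sketch; tau and dt are illustrative.

tau, dt = 20.0, 0.1                      # ms time constant, ms step

def time_driven_decay(v0, t_end):
    v = v0
    for _ in range(int(round(t_end / dt))):
        v += dt * (-v / tau)             # one Euler step per fixed time slot
    return v

def event_driven_decay(v0, t_end):
    return v0 * math.exp(-t_end / tau)   # one jump; exact for this model

v_td = time_driven_decay(1.0, 50.0)      # 500 small steps
v_ed = event_driven_decay(1.0, 50.0)     # a single evaluation
```

    The event-driven update does in one evaluation what the time-driven loop does in hundreds of steps, which is why it pays off in low-activity parts of a network, while dense, high-activity layers amortize better over fixed time steps (especially on a GPU).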

  12. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase

    PubMed Central

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-01-01

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility is common, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In this method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. The challenging issue, however, is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. The calibrated vision measurements are then integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533

  13. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase.

    PubMed

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-09-10

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility is common, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In this method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. The challenging issue, however, is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. The calibrated vision measurements are then integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.
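
    The consistency check underlying classical RAIM can be sketched with an overdetermined least-squares problem: when the measurements agree, the post-fit residual is small; a single faulty measurement makes it jump. The geometry and noise levels below are synthetic, not an aviation-grade model.

```python
import numpy as np

# Residual-based consistency test, the core idea of snapshot RAIM.
# Synthetic linearized geometry: 8 pseudorange rows, 4 unknowns
# (3 position components + receiver clock). Illustrative only.

rng = np.random.default_rng(0)
G = rng.standard_normal((8, 4))          # linearized observation matrix
x_true = np.array([10.0, -3.0, 5.0, 1.0])

def rss(y):
    """Residual sum of squares after a least-squares fit."""
    x_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
    r = y - G @ x_hat
    return float(r @ r)

y_ok = G @ x_true + 0.01 * rng.standard_normal(8)   # consistent measurements
y_bad = y_ok.copy()
y_bad[3] += 5.0                                     # inject one faulty range

consistent = rss(y_ok)
faulty = rss(y_bad)                                 # much larger residual
```

    Adding calibrated vision landmarks as pseudo-satellites, as above, simply adds rows to G, increasing the redundancy this test relies on when few satellites are visible.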

  14. Numerical integration of KPZ equation with restrictions

    NASA Astrophysics Data System (ADS)

    Torres, M. F.; Buceta, R. C.

    2018-03-01

    In this paper, we introduce a novel integration method for the Kardar–Parisi–Zhang (KPZ) equation. It is known that if, during the discrete integration of the KPZ equation, the nearest-neighbor height difference exceeds a critical value, instabilities appear and the integration diverges. One way to avoid these instabilities is to replace the KPZ nonlinear term by a function of the same term that depends on a single adjustable parameter able to control pillars or grooves growing on the interface. Here, we propose a different integration method, which consists of directly limiting the value taken by the KPZ nonlinearity, thereby imposing a restriction rule applied in each integration time step, as if it were the growth rule of a restricted discrete model, e.g. restricted solid-on-solid (RSOS). Taking the discrete KPZ equation with restrictions to its dimensionless version, the integration depends on three parameters: the coupling constant g, the inverse of the time step k, and the restriction constant ε, which is chosen to eliminate divergences while keeping all the properties of the continuous KPZ equation. We study in detail the conditions in parameter space that avoid divergences in the 1-dimensional integration and reproduce the scaling properties of the continuous KPZ equation with a particular parameter set. We apply the tested methodology to the d-dimensional case (d = 3, 4) with the purpose of obtaining the growth exponent β, establishing the conditions on the coupling constant g under which we recover values reached by other authors, particularly for the RSOS model. This method allows us to infer that d = 4 is not the critical dimension of the KPZ universality class, where the strong-coupling phase disappears.
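
    The restriction rule can be sketched in one dimension: take an explicit Euler step of the discretized KPZ equation, but clip the nonlinear term |∇h|² at a cutoff so that pillars or grooves cannot trigger the divergence. The parameter values below are illustrative, not the paper's calibrated (g, k, ε) set.

```python
import math
import random

# Explicit Euler integration of discretized 1D KPZ with a clipped
# nonlinearity (the "restriction rule" idea). Illustrative parameters.

random.seed(1)
N, dt = 64, 0.01                     # sites (periodic), time step
nu, lam, D, cutoff = 1.0, 3.0, 1.0, 10.0

h = [0.0] * N                        # flat initial interface

def step(h):
    new = h[:]
    for i in range(N):
        hp, hm = h[(i + 1) % N], h[(i - 1) % N]
        lap = hp - 2 * h[i] + hm                 # discrete Laplacian
        grad2 = ((hp - hm) / 2.0) ** 2           # KPZ nonlinearity |grad h|^2
        grad2 = min(grad2, cutoff)               # restriction rule: clip it
        noise = random.gauss(0.0, math.sqrt(2 * D * dt))
        new[i] = h[i] + dt * (nu * lap + 0.5 * lam * grad2) + noise
    return new

for _ in range(2000):
    h = step(h)

mean = sum(h) / N
width = math.sqrt(sum((x - mean) ** 2 for x in h) / N)   # interface width
```

    Without the `min(..., cutoff)` line, a rare large height difference can feed back through the quadratic term and blow up the integration; with it, the heights stay finite while the interface still roughens.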

  15. Simulation electromagnetic scattering on bodies through integral equation and neural networks methods

    NASA Astrophysics Data System (ADS)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

    The paper deals with electromagnetic scattering from a perfectly conducting diffractive body of complex shape. The scattering calculation is carried out with the integral equation method: a Fredholm equation of the second kind is used for calculating the electric current density, and in solving it by the method of moments the kernel singularity is treated properly, with piecewise-constant functions chosen as basis functions. Within the Kirchhoff integral approach, the scattered electromagnetic field can then be obtained from the computed electric currents. The observation angles lie in the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network in which all neurons have a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, the optimized dimensions of the diffractive body, and the basic steps of a calculation technique for diffractive bodies based on the combination of integral equation and neural network methods.

  16. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, the need for spatial data is becoming so crucial for various organizations that many of them have begun to produce such data themselves. In some circumstances, obtaining integrated real-time data requires a sustainable mechanism for real-time integration; a case in point is disaster management, which requires obtaining real-time data from various sources of information. One problematic challenge in this situation is the high degree of heterogeneity between the data of different organizations. To solve this issue, we introduce an ontology-based method that provides sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and the ontology of each database is created. In the second step, the relative ontology is inserted into the database, and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step is a platform based on service-oriented architecture that allows integration of the data using ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy while leaving the data unchanged, thus preserving the legacy applications built on it.

  17. Pseudo-time algorithms for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1986-01-01

    A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that for a simple heat equation this is just a renormalization of the time. For a convection-diffusion equation the renormalization depends only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm. This permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.
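
    The underlying idea, that time accuracy is irrelevant when only the steady state is wanted, so the "time" variable is just an iteration parameter that can be rescaled, can be illustrated on the simple heat equation mentioned above. The marching scheme and grid below are a minimal sketch, not the paper's Navier-Stokes algorithm.

```python
# Pseudo-time marching of the 1D heat equation u_t = u_xx with fixed ends
# u(0)=0, u(1)=1 until it reaches its steady (linear) profile u(x) = x.
# Grid size and pseudo-time step are illustrative.

N = 21
u = [0.0] * N
u[-1] = 1.0                         # boundary values
dx = 1.0 / (N - 1)
dtau = 0.4 * dx * dx                # pseudo-time step (explicit limit is 0.5*dx^2)

for _ in range(5000):               # march in pseudo-time; accuracy in tau is irrelevant
    u = [u[0]] + [u[i] + dtau / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                  for i in range(1, N - 1)] + [u[-1]]

# Steady state with these boundary values is the straight line u(x) = x:
error = max(abs(u[i] - i * dx) for i in range(N))
```

    Because only the converged state matters, dtau can be chosen per equation (or, as in the paper, based on the inviscid model alone) purely to accelerate convergence.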

  18. Unmanned aircraft system sense and avoid integrity and continuity

    NASA Astrophysics Data System (ADS)

    Jamoom, Michael B.

    This thesis describes new methods to guarantee safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is a derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of "well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation where the estimation model is refined to address realistic intruder relative linear accelerations, which goes beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement to the wrong intruder, causing large errors in the estimated intruder trajectories.
The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.

  19. Deep Coupled Integration of CSAC and GNSS for Robust PNT.

    PubMed

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-09-11

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physically obstructed environments, such as natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) is gradually maturing and its performance constantly improving. A deep coupled integration of CSAC and GNSS is explored in this work to enhance PNT robustness. "Clock coasting" of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, the errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for the CSAC-aided GNSS, while a Kalman filter is used for the GNSS-corrected CSAC. Simulations of the model are conducted and field tests carried out. The integration improves dilution of precision and is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.
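
    The GNSS-corrected-CSAC side of the fusion can be sketched with a scalar Kalman filter: the CSAC clock bias drifts slowly (small process noise), while GNSS time fixes are unbiased but noisy (large measurement noise), and the filter blends the two. All noise levels below are illustrative, not the paper's.

```python
import random

# Toy scalar Kalman filter: track a slowly drifting clock bias (CSAC)
# using noisy time measurements (GNSS). Illustrative noise variances.

random.seed(2)
q_drift, r_gnss = 1e-4, 1.0          # process / measurement noise variances
bias_true, bias_est, P = 0.0, 0.0, 1.0
errors = []

for _ in range(500):
    bias_true += random.gauss(0.0, q_drift ** 0.5)   # CSAC bias random walk
    P += q_drift                                     # predict: uncertainty grows
    z = bias_true + random.gauss(0.0, r_gnss ** 0.5) # noisy GNSS time fix
    K = P / (P + r_gnss)                             # Kalman gain
    bias_est += K * (z - bias_est)                   # update the bias estimate
    P *= (1.0 - K)
    errors.append(abs(bias_est - bias_true))

steady_gain = K
final_P = P
```

    Because the drift variance is tiny relative to the measurement noise, the gain settles near zero: the filter trusts the CSAC's short-term stability and uses GNSS only to rein in the long-term drift, which is exactly the "clock coasting plus correction" division of labor described above.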

  20. Deep Coupled Integration of CSAC and GNSS for Robust PNT

    PubMed Central

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-01-01

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physically obstructed environments, such as natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) is gradually maturing and its performance constantly improving. A deep coupled integration of CSAC and GNSS is explored in this work to enhance PNT robustness. “Clock coasting” of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, the errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for the CSAC-aided GNSS, while a Kalman filter is used for the GNSS-corrected CSAC. Simulations of the model are conducted and field tests carried out. The integration improves dilution of precision and is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT. PMID:26378542

  1. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that restricts the maximum time step which can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time for a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes evident, especially for uniform meshes, compared with what has typically been considered when studying this type of problem. © 2014. The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
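
    The accuracy side of the comparison can be reduced to a scalar model problem u' = -u (standing in for a single mode of the discretized advection operator): at the same time step, fourth-order Runge-Kutta is far more accurate than second-order Adams-Bashforth. The step size and horizon below are illustrative.

```python
import math

# Second-order Adams-Bashforth vs. classical RK4 on u' = -u, u(0) = 1,
# integrated to t = 1 with the same step size. Toy model problem.

def ab2(u0, dt, n):
    f = lambda u: -u
    u_prev, u = u0, u0 * math.exp(-dt)   # bootstrap the second point exactly
    for _ in range(n - 1):
        u_prev, u = u, u + dt * (1.5 * f(u) - 0.5 * f(u_prev))
    return u

def rk4(u0, dt, n):
    f = lambda u: -u
    u = u0
    for _ in range(n):
        k1 = f(u); k2 = f(u + dt / 2 * k1)
        k3 = f(u + dt / 2 * k2); k4 = f(u + dt * k3)
        u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

dt, n = 0.01, 100                        # 100 steps to t = 1
exact = math.exp(-1.0)
err_ab2 = abs(ab2(1.0, dt, n) - exact)   # O(dt^2) global error
err_rk4 = abs(rk4(1.0, dt, n) - exact)   # O(dt^4) global error
```

    The trade-off studied in the paper is that RK4 costs four right-hand-side evaluations per step and has its own CFL limit, so the cheapest route to a prescribed accuracy depends jointly on the scheme and the spatial discretisation.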

  2. Method and Apparatus for Monitoring the Integrity of a Geomembrane Liner using time Domain Reflectometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, John L.

    1998-11-09

    Leaks are detected in a multi-layered geomembrane liner by a two-dimensional time domain reflectometry (TDR) technique. The TDR geomembrane liner is constructed with an electrically conductive detection layer positioned between two electrically non-conductive dielectric layers, which are each positioned between the detection layer and an electrically conductive reference layer. The integrity of the TDR geomembrane liner is determined by generating electrical pulses within the detection layer and measuring the time delay for any reflected electrical energy caused by absorption of moisture by a dielectric layer.

  3. Method and apparatus for monitoring the integrity of a geomembrane liner using time domain reflectometry

    DOEpatents

    Morrison, John L [Idaho Falls, ID

    2001-04-24

    Leaks are detected in a multi-layered geomembrane liner by a two-dimensional time domain reflectometry (TDR) technique. The TDR geomembrane liner is constructed with an electrically conductive detection layer positioned between two electrically non-conductive dielectric layers, which are each positioned between the detection layer and an electrically conductive reference layer. The integrity of the TDR geomembrane liner is determined by generating electrical pulses within the detection layer and measuring the time delay for any reflected electrical energy caused by absorption of moisture by a dielectric layer.

  4. Method and apparatus for converting static in-ground vehicle scales into weigh-in-motion systems

    DOEpatents

    Muhs, Jeffrey D.; Scudiere, Matthew B.; Jordan, John K.

    2002-01-01

    An apparatus and method for converting in-ground static weighing scales for vehicles to weigh-in-motion systems. The apparatus upon conversion includes the existing in-ground static scale, peripheral switches and an electronic module for automatic computation of the weight. By monitoring the velocity, tire position, axle spacing, and real time output from existing static scales as a vehicle drives over the scales, the system determines when an axle of a vehicle is on the scale at a given time, monitors the combined weight output from any given axle combination on the scale(s) at any given time, and from these measurements automatically computes the weight of each individual axle and gross vehicle weight by an integration, integration approximation, and/or signal averaging technique.
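
    The signal-averaging idea can be sketched as follows: while an axle combination sits fully on the static scale, the noisy scale output is averaged to estimate that combination's weight, and individual axle weights follow by differencing successive combinations. The weights, noise level, and sample counts below are synthetic, not from the patent.

```python
import random

# Hedged sketch of weigh-in-motion by signal averaging over a static scale.
# Synthetic readings; real systems also use the velocity/axle-spacing logic
# described above to decide which samples belong to which axle combination.

random.seed(3)

def average_reading(true_weight, n=200, noise=50.0):
    """Average n noisy scale samples taken while the load is fully on the scale."""
    return sum(true_weight + random.gauss(0.0, noise) for _ in range(n)) / n

axle1 = average_reading(5000.0)             # axle 1 alone on the scale
axle12 = average_reading(5000.0 + 8000.0)   # axles 1 and 2 both on the scale
axle2 = axle12 - axle1                      # individual axle by differencing
gross = axle12                              # gross weight of this 2-axle vehicle
```

    Averaging n samples shrinks the reading noise by roughly the square root of n, which is what makes per-axle weights recoverable from a moving vehicle on a static scale.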

  5. Grid Research | Grid Modernization | NREL

    Science.gov Websites

    Grid Research. NREL addresses the challenges of today's electric grid through research in integrated devices and systems (developing and evaluating grid technologies) and in controls (developing methods for real-time operations and control of power systems at any scale).

  6. Testing Backwards Integration As A Method Of Age-Determination for KBO Families

    NASA Astrophysics Data System (ADS)

    Benfell, Nathan; Ragozzine, Darin

    2017-10-01

    The age of young asteroid collisional families is often determined by backwards n-body integration of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual asteroids over time. Since these limitations are less important for objects in the Kuiper belt, Marcus et al. (2011) suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. However, some minor effects may be important to include in the integration to ensure a faithful reproduction of the actual solar system. We have created simulated families of Kuiper Belt objects through a forwards integration of various objects with identical starting locations and velocity distributions, based on the Haumea family. After carrying this integration forwards through ~4 Gyr, backwards integrations are used (1) to investigate which factors are significant enough to require inclusion in the integration (e.g., terrestrial planets, KBO self-gravity, a putative Planet 9, etc.), (2) to test orbital element clustering statistics and identify methods for assessing false alarm probabilities, and (3) to compare the age estimates with the known age of the simulated family to explore the viability of backwards integration for precise age estimates.

  7. Magnus integrators on multicore CPUs and GPUs

    NASA Astrophysics Data System (ADS)

    Auer, N.; Einkemmer, L.; Kandolf, P.; Ostermann, A.

    2018-07-01

    In the present paper we consider numerical methods to solve the discrete Schrödinger equation with a time dependent Hamiltonian (motivated by problems encountered in the study of spin systems). We will consider both short-range interactions, which lead to evolution equations involving sparse matrices, and long-range interactions, which lead to dense matrices. Both of these settings show very different computational characteristics. We use Magnus integrators for time integration and employ a framework based on Leja interpolation to compute the resulting action of the matrix exponential. We consider both traditional Magnus integrators (which are extensively used for these types of problems in the literature) as well as the recently developed commutator-free Magnus integrators and implement them on modern CPU and GPU (graphics processing unit) based systems. We find that GPUs can yield a significant speed-up (up to a factor of 10 in the dense case) for these types of problems. In the sparse case GPUs are only advantageous for large problem sizes and the achieved speed-ups are more modest. In most cases the commutator-free variant is superior but especially on the GPU this advantage is rather small. In fact, none of the advantage of commutator-free methods on GPUs (and on multi-core CPUs) is due to the elimination of commutators. This has important consequences for the design of more efficient numerical methods.
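
    The simplest commutator-free Magnus integrator is the exponential midpoint rule, psi(t+dt) = exp(-i·dt·H(t+dt/2))·psi(t). The sketch below applies it to a toy 2x2 driven-spin Hamiltonian (not the paper's spin-system benchmarks) and takes the matrix exponential of the small Hermitian matrix by eigendecomposition, whereas the paper uses Leja interpolation to handle large sparse and dense systems.

```python
import numpy as np

# Exponential midpoint rule (a first commutator-free Magnus integrator) for
# the Schroedinger equation d(psi)/dt = -i H(t) psi with a toy 2x2 Hamiltonian.

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return sz + 0.5 * np.cos(t) * sx             # time-dependent Hamiltonian

def expm_antiherm(A):
    """exp(A) for anti-Hermitian A = -i*dt*H, via eigendecomposition of 1j*A."""
    w, V = np.linalg.eigh(1j * A)                # 1j*A is Hermitian
    return (V * np.exp(-1j * w)) @ V.conj().T

psi = np.array([1.0, 0.0], dtype=complex)
dt = 0.01
for n in range(1000):
    t_mid = (n + 0.5) * dt                       # evaluate H at the midpoint
    psi = expm_antiherm(-1j * dt * H(t_mid)) @ psi

norm = float(np.linalg.norm(psi))                # unitary steps preserve the norm
```

    Each step is exactly unitary, so the wave-function norm is conserved to rounding error regardless of step count, one of the structural properties that makes Magnus-type integrators attractive for spin-system dynamics.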

  8. Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume

    2013-01-01

    Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
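
    The FAST idea can be sketched in a few lines: treat lagged states from a moving window along a single trajectory as surrogate ensemble members and form a covariance from their anomalies. The AR(1) model below is an illustrative stand-in for an ocean model, not part of the paper.

```python
# FAST-style covariance from a moving window along one model trajectory.
import numpy as np

rng = np.random.default_rng(0)
nsteps, nstate = 500, 4
traj = np.zeros((nsteps, nstate))
for k in range(1, nsteps):                  # single toy-model integration
    traj[k] = 0.95 * traj[k - 1] + rng.normal(size=nstate)

window = 30                                 # moving-window "ensemble" size
members = traj[-window:]                    # lagged states act as members
anoms = members - members.mean(axis=0)
B = anoms.T @ anoms / (window - 1)          # flow-dependent covariance estimate
```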

  9. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell.
As the order of accuracy increases, the partitioning in 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly difficult. Also, the number of interior facets required to subdivide non-planar faces, together with the additional quadrature points needed for each facet, greatly increases the computational cost.

  10. Integral imaging with multiple image planes using a uniaxial crystal plate.

    PubMed

    Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho

    2003-08-11

    Integral imaging has been attracting much attention recently for its advantages such as full parallax, continuous view-points, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value due to the degradation of the image resolution. In this paper, we propose a method to provide observers with enhanced perception of depth without severe resolution degradation by the use of the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and a dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.

  11. An integral equation formulation for rigid bodies in Stokes flow in three dimensions

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan

    2017-03-01

    We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O(n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.

  12. Chaos synchronization of uncertain chaotic systems using composite nonlinear feedback based integral sliding mode control.

    PubMed

    Mobayen, Saleh

    2018-06-01

    This paper proposes a combination of composite nonlinear feedback and integral sliding mode techniques for fast and accurate chaos synchronization of uncertain chaotic systems with Lipschitz nonlinear functions, time-varying delays and disturbances. The composite nonlinear feedback method allows accurate following of the master chaotic system, and the integral sliding mode control provides an invariance property that rejects the perturbations and preserves the stability of the closed-loop system. Based on the Lyapunov-Krasovskii stability theory and linear matrix inequalities, a novel sufficient condition is offered for the chaos synchronization of uncertain chaotic systems. This method not only guarantees robustness against perturbations and time-delays, but also eliminates the reaching phase and avoids the chattering problem. Simulation results demonstrate that the suggested procedure leads to good control performance.
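
    A minimal sketch of the integral sliding mode ingredient for a scalar synchronization error e' = a·e + d(t) + u with an unknown bounded disturbance d. The surface s = e + λ∫e provides the integral action, and tanh replaces sign() to soften chattering. All gains and values are illustrative assumptions, not the paper's design.

```python
# Scalar integral sliding mode sketch: drive e to zero despite disturbance d.
import numpy as np

a, lam, k = 1.0, 2.0, 5.0            # plant gain, surface slope, switching gain
dt, T = 1e-3, 5.0
e, ie = 1.0, 0.0                     # tracking error and its running integral
for step in range(int(T / dt)):
    t = step * dt
    d = 0.5 * np.sin(3 * t)          # bounded matched disturbance
    s = e + lam * ie                 # integral sliding surface
    u = -a * e - lam * e - k * np.tanh(s / 0.01)   # smoothed switching law
    e += dt * (a * e + d + u)        # forward-Euler closed-loop update
    ie += dt * e
final_error = abs(e)
```

    On the surface s = 0 the error obeys a stable first-order dynamic, so e converges despite d; the boundary-layer width 0.01 trades chattering against steady-state accuracy.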

  13. Fast Time-Dependent Density Functional Theory Calculations of the X-ray Absorption Spectroscopy of Large Systems.

    PubMed

    Besley, Nicholas A

    2016-10-11

    The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals that involve the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the absorbing core orbitals excitation space and exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60, and C70.

  14. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; et al.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii_lc_extract) and from calculating the pixel illuminated fraction (ii_light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii_light produces meaningful results, although the overall variance of the lightcurves is not preserved.

  15. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.
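
    The IIF building block can be sketched in one space dimension: diffusion is applied exactly through its Fourier-space propagator, and the local stiff reaction is solved implicitly point-by-point. The reaction term, grid, and step size below are toy choices for illustration, not the paper's sparse-grid implementation.

```python
# First-order implicit integration factor (IIF1) step for u_t = D u_xx + f(u)
# on a periodic interval: exact diffusion in Fourier space, implicit reaction.
import numpy as np

N, L, D, dt = 64, 2 * np.pi, 0.1, 0.1
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # integer wavenumbers on [0, 2*pi)
Ef = np.exp(-D * k**2 * dt)                  # exact diffusion propagator

f = lambda u: u - u**3                       # bistable reaction (assumed example)
fp = lambda u: 1 - 3 * u**2

u = 0.1 * np.cos(x)
for _ in range(200):
    v = np.real(np.fft.ifft(Ef * np.fft.fft(u)))   # diffusion handled exactly
    w = v.copy()
    for _ in range(20):                      # Newton solve of w = v + dt*f(w)
        w -= (w - v - dt * f(w)) / (1 - dt * fp(w))
    u = w
```

    Because the reaction is local, the implicit solve decouples into independent scalar equations, which is what keeps IIF cheap even when the reaction is stiff; the solution here saturates toward the stable states near ±1.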

  16. Propagators for the Time-Dependent Kohn-Sham Equations: Multistep, Runge-Kutta, Exponential Runge-Kutta, and Commutator Free Magnus Methods.

    PubMed

    Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto

    2018-05-09

    We examine various integration schemes for the time-dependent Kohn-Sham equations. Unlike the time-dependent Schrödinger equation, this set of equations is nonlinear, due to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and the commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost versus accuracy. The clear winner, in terms of robustness, simplicity, and efficiency, is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
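
    A hedged sketch of a fourth-order commutator-free Magnus (CFM4) step for i ψ' = H(t) ψ: two matrix exponentials of weighted combinations of the Hamiltonian at the two Gauss points, with no commutators; one common ordering of the two factors is shown. The 2x2 H(t) is a toy example, not a Kohn-Sham Hamiltonian, and the nonlinearity of TDKS is not represented.

```python
# Commutator-free 4th-order Magnus step built from two exponentials.
import numpy as np
from scipy.linalg import expm

def H(t):
    # toy Hermitian time-dependent matrix (assumption for illustration)
    return np.array([[0.0, np.exp(-t)], [np.exp(-t), 1.0]])

def cfm4_step(psi, t, h):
    c1, c2 = 0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6    # Gauss nodes
    a1, a2 = 0.25 - np.sqrt(3) / 6, 0.25 + np.sqrt(3) / 6  # CFM4 weights
    H1, H2 = H(t + c1 * h), H(t + c2 * h)
    psi = expm(-1j * h * (a2 * H1 + a1 * H2)) @ psi
    return expm(-1j * h * (a1 * H1 + a2 * H2)) @ psi

psi = np.array([1.0 + 0j, 0.0 + 0j])
t, h = 0.0, 0.05
for _ in range(40):
    psi = cfm4_step(psi, t, h)
    t += h
norm = np.linalg.norm(psi)   # each factor is unitary, so the norm is preserved
```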

  17. An Improved Manual Method for NOx Emission Measurement.

    ERIC Educational Resources Information Center

    Dee, L. A.; And Others

    The current manual NOx sampling and analysis method was evaluated. Improved time-integrated sampling and rapid analysis methods were developed. In the new method, the sample gas is drawn through a heated bed of uniquely active, crystalline PbO2, where NOx is quantitatively absorbed. Nitrate ion is later extracted with water and the…

  18. Integrated identification and control for nanosatellites reclaiming failed satellite

    NASA Astrophysics Data System (ADS)

    Han, Nan; Luo, Jianjun; Ma, Weihua; Yuan, Jianping

    2018-05-01

    Using nanosatellites to reclaim a failed satellite requires the nanosatellites to attach to its surface and take over its attitude control function. This is challenging, since parameters including the inertia matrix of the combined spacecraft and the relative attitude information of the attached nanosatellites with respect to the given body-fixed frame of the failed satellite are all unknown after the attachment. Besides, if the total control capacity needs to be increased during the reclaiming process by new nanosatellites, real-time parameter updating will be necessary. For these reasons, an integrated identification and control method is proposed in this paper, which enables real-time parameter identification and attitude takeover control to be conducted concurrently. Identification of the inertia matrix of the combined spacecraft and of the relative attitude information of the attached nanosatellites are both considered. To guarantee sufficient excitation for the identification of the inertia matrix, a modified identification equation is established by filtering out sample points leading to ill-conditioned identification, and the identification performance of the inertia matrix is improved. Based on the real-time estimated inertia matrix, an attitude takeover controller is designed, and the stability of the controller is analysed using the Lyapunov method. The commanded control torques are allocated to each nanosatellite while satisfying the control saturation constraint, using the Quadratic Programming (QP) method. Numerical simulations are carried out to demonstrate the feasibility and effectiveness of the proposed integrated identification and control method.

  19. A Study into Advanced Guidance Laws Using Computational Methods

    DTIC Science & Technology

    2011-12-01

    0; for ii = 2:index integral = integral+(t(ii)-t(ii-1))*u2(ii-1); end J = 20*min(range)^2+integral/1000; 73 outtxt = [’Time (s...0.67*LN; % nose CP XCPW = LN+XW+0.7*CRW-0.2*CTW; % wing CP AN = 0.67*LN*DIAM...0.5*(LENGTH-LN)))/(AN+AB); % body CP %--- Area computations ------------------------------------- SW = 0.5*HW*(CTW+CRW)+CRW*WXT; % wing

  20. Long-term dynamic modeling of tethered spacecraft using nodal position finite element method and symplectic integration

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhu, Z. H.

    2015-12-01

    Dynamic modeling of tethered spacecraft with consideration of the elasticity of the tether is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach with the nodal position finite element method (NPFEM) and the implicit, symplectic, two-stage, fourth-order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates numerical error accumulation by using the position instead of the displacement of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed by an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with commonly used time integrators such as the classical 4th order Runge-Kutta schemes and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting process of tethered spacecraft over a long period.
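
    The symplectic-integration point can be illustrated with the implicit midpoint rule, the one-stage member of the same Gauss-Legendre family (the paper uses the two-stage, fourth-order member): on a linear oscillator its energy stays bounded over very long runs, whereas explicit schemes drift. The oscillator and step size are toy assumptions; for a linear problem the implicit stage can be solved in closed form.

```python
# Implicit midpoint (1-stage Gauss-Legendre) on q' = p, p' = -w2*q,
# with the implicit stage eliminated analytically for this linear system.
w2, dt, n = 4.0, 0.05, 20000         # omega^2, time step, number of steps

q, p = 1.0, 0.0
for _ in range(n):
    denom = 1 + w2 * dt**2 / 4       # from solving the midpoint stage exactly
    q_new = ((1 - w2 * dt**2 / 4) * q + dt * p) / denom
    p_new = (-w2 * dt * q + (1 - w2 * dt**2 / 4) * p) / denom
    q, p = q_new, p_new

energy = 0.5 * p**2 + 0.5 * w2 * q**2
drift = abs(energy - 0.5 * w2)       # initial energy is 0.5*w2*q0^2 = 2.0
```

    For linear systems the midpoint rule conserves the quadratic energy exactly, so the drift here is pure roundoff even after 20,000 steps; this mirrors the long-term energy behavior claimed for the Gauss-Legendre integrator above.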

  1. The Application of Sensors on Guardrails for the Purpose of Real Time Impact Detection

    DTIC Science & Technology

    2012-03-01

    collection methods ; however, there are major differences in the measures of performance for policy goals and objectives (U.S. DOT, 2002). The goal here is...seriousness of this issue has motivated the US Department of Transportation and Transportation Research Board to develop and deploy new methods and... methods to integrate new sensing capabilities into existing Intelligent Transportation Systems in a time efficient and cost effective manner. In

  2. Adolescent Internet Use, Social Integration, and Depressive Symptoms: Analysis from a Longitudinal Cohort Survey.

    PubMed

    Strong, Carol; Lee, Chih-Ting; Chao, Lo-Hsin; Lin, Chung-Ying; Tsai, Meng-Che

    2018-05-01

    To examine the association between adolescent leisure-time Internet use and social integration in the school context and how this association affects later depressive symptoms among adolescents in Taiwan, using a large nationwide cohort study and the latent growth model (LGM) method. Data of 3795 students followed from the year 2001 to 2006 in the Taiwan Education Panel Survey were analyzed. Leisure-time Internet use was defined by the hours per week spent on (1) online chatting and (2) online games. School social integration and depressive symptoms were self-reported. We first used an unconditional LGM to estimate the baseline (intercept) and growth (slope) of Internet use. Next, another LGM conditioned with school social integration and depression was conducted. Approximately 10% of the participants reported engaging in online chatting and/or gaming for more than 20 hours per week. Internet use for online chatting showed an increase over time. School social integration was associated with the baseline amount (coefficient = -0.62, p < 0.001) but not the growth of leisure-time Internet use. The trend of Internet use was positively related to depressive symptoms (coefficient = 0.31, p < 0.05) at Wave 4. School social integration was initially associated with decreased leisure-time Internet use among adolescents. The growth of Internet use with time was not explainable by school social integration but had adverse impacts on depression. Reinforcing adolescents' bonding to school may prevent initial leisure-time Internet use. When advising on adolescent Internet use, health care providers should consider their patients' social networks and mental well-being.

  3. Elevated temperature crack growth

    NASA Technical Reports Server (NTRS)

    Kim, K. S.; Vanstone, R. H.

    1992-01-01

    The purpose of this program was to extend the work performed in the base program (CR 182247) into the regime of time-dependent crack growth under isothermal and thermal mechanical fatigue (TMF) loading, where creep deformation also influences the crack growth behavior. The investigation was performed in a two-year, six-task, combined experimental and analytical program. The path-independent integrals for application to time-dependent crack growth were critically reviewed. The crack growth was simulated using a finite element method. The path-independent integrals were computed from the results of finite-element analyses. The ability of these integrals to correlate experimental crack growth data was evaluated under various loading and temperature conditions. The results indicate that some of these integrals are viable parameters for crack growth prediction at elevated temperatures.

  4. Robust H∞ cost guaranteed integral sliding mode control for the synchronization problem of nonlinear tele-operation system with variable time-delay.

    PubMed

    Al-Wais, Saba; Khoo, Suiyang; Lee, Tae Hee; Shanmugam, Lakshmanan; Nahavandi, Saeid

    2018-01-01

    This paper is devoted to the synchronization problem of tele-operation systems with time-varying delay, disturbances, and uncertainty. Delay-dependent sufficient conditions for the existence of integral sliding surfaces are given in the form of Linear Matrix Inequalities (LMIs). This guarantees the global stability of the tele-operation system with known upper bounds of the time-varying delays. Unlike previous work, in this paper the controller gains are designed rather than chosen, which increases the degrees of freedom of the design. Moreover, the Wirtinger-based integral inequality and reciprocally convex combination techniques used in the constructed Lyapunov-Krasovskii Functional (LKF) give a less conservative stability condition for the system. Furthermore, to free the analysis from any assumptions regarding the dynamics of the environment and human operator forces, the H ∞ design method is used to incorporate the dynamics of these forces and ensure the stability of the system against these admissible forces in the H ∞ sense. This design scheme combines the strong robustness of sliding mode control with the H ∞ design method for tele-operation systems that are coupled through state feedback controllers and have inherent variable time-delays in their communication channels. Simulation examples are given to show the effectiveness of the proposed method.

  5. Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration

    NASA Astrophysics Data System (ADS)

    Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel

    2017-11-01

    In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
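
    The multirate idea can be sketched with first-order Euler substeps standing in for the Adams-Bashforth pairs of the talk: the fast component takes m substeps per slow macro-step, with the slow state frozen over the macro-step. The coupled toy system, step sizes, and substep ratio are all illustrative assumptions.

```python
# Multirate time stepping sketch: fast variable substeps inside each slow step.
import numpy as np

def f_slow(y1, y2):
    return -y1 + y2                    # slow dynamics (rate ~ 1)

def f_fast(y1, y2):
    return -50.0 * (y2 - np.cos(y1))   # fast dynamics (rate ~ 50)

H, m, T = 0.01, 10, 2.0                # macro step, substep ratio, horizon
y1, y2 = 1.0, 0.0
for _ in range(int(T / H)):
    y1_frozen = y1
    for _ in range(m):                 # fast substeps of size h = H/m
        y2 += (H / m) * f_fast(y1_frozen, y2)
    y1 += H * f_slow(y1, y2)           # one slow step per macro-step

# after transients the fast variable relaxes onto its slow manifold y2 ~ cos(y1)
residual = abs(y2 - np.cos(y1))
```

    Only the fast variable pays for the small step, which is the source of the efficiency gain; higher-order multirate Adams-Bashforth schemes refine this same structure with interpolated coupling terms.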

  6. Observer design for compensation of network-induced delays in integrated communication and control systems

    NASA Technical Reports Server (NTRS)

    Luck, R.; Ray, A.

    1988-01-01

    A method for compensating the effects of network-induced delays in integrated communication and control systems (ICCS) is proposed, and a finite-dimensional time-invariant ICCS model is developed. The problem of analyzing systems with time-varying and stochastic delays is circumvented by the application of a deterministic observer. For the case of controller-to-actuator delays, the observer design must rely on an extended model which represents the delays as additional states.
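
    A sketch of the extended-model idea for a fixed controller-to-actuator delay of d samples: the plant state is augmented with the d in-flight control values (the delays as additional states), and a standard observer is run on the augmented model. The double-integrator plant, the delay length, and the Riccati-based gain choice are illustrative assumptions, not the ICCS design of the paper.

```python
# Observer on a delay-augmented model: z = [x; u_{k-d}; ...; u_{k-1}].
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy discrete double integrator
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
n, d = 2, 3                              # state size, network delay in samples

Aa = np.zeros((n + d, n + d))
Aa[:n, :n] = A
Aa[:n, n:n + 1] = B                      # oldest buffered input drives the plant
for i in range(d - 1):
    Aa[n + i, n + i + 1] = 1.0           # shift the delay buffer each step
Ba = np.zeros((n + d, 1))
Ba[-1, 0] = 1.0                          # new command enters the buffer's tail
Ca = np.hstack([C, np.zeros((1, d))])

# steady-state Kalman-style observer gain from a discrete Riccati equation
P = solve_discrete_are(Aa.T, Ca.T, np.eye(n + d), np.eye(1))
Lg = Aa @ P @ Ca.T @ np.linalg.inv(Ca @ P @ Ca.T + np.eye(1))

z = np.zeros(n + d); z[0] = 1.0          # true augmented state
zh = np.zeros(n + d)                     # observer estimate
for k in range(500):
    u = np.array([np.sin(0.05 * k)])
    y = Ca @ z
    zh = Aa @ zh + Ba @ u + Lg @ (y - Ca @ zh)
    z = Aa @ z + Ba @ u
err = np.linalg.norm(z - zh)
rho = max(abs(np.linalg.eigvals(Aa - Lg @ Ca)))   # error-dynamics spectral radius
```

    Because the augmented model is time-invariant, a deterministic observer suffices even though the underlying network delay would otherwise make the analysis time-varying.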

  7. A Hybrid Numerical Analysis Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Staroselsky, Alexander

    2001-01-01

    A new hybrid surface-integral-finite-element numerical scheme has been developed to model a three-dimensional crack propagating through a thin, multi-layered coating. The finite element method was used to model the physical state of the coating (far field), and the surface integral method was used to model the fatigue crack growth. The two formulations are coupled through the need to satisfy boundary conditions on the crack surface and the external boundary. The coupling is sufficiently weak that the surface integral mesh of the crack surface and the finite element mesh of the uncracked volume can be set up independently. Thus when modeling crack growth, the finite element mesh can remain fixed for the duration of the simulation as the crack mesh is advanced. This method was implemented to evaluate the feasibility of fabricating a structural health monitoring system for real-time detection of surface cracks propagating in engine components. In this work, the authors formulate the hybrid surface-integral-finite-element method and discuss the mechanical issues of implementing a structural health monitoring system in an aircraft engine environment.

  8. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long time of simulation. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by the numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step size in numerical integrations.

  9. Realization of Real-Time Clinical Data Integration Using Advanced Database Technology

    PubMed Central

    Yoo, Sooyoung; Kim, Boyoung; Park, Heekyong; Choi, Jinwook; Chun, Jonghoon

    2003-01-01

    As information & communication technologies have advanced, interest in mobile health care systems has grown. In order to obtain information seamlessly from distributed and fragmented clinical data from heterogeneous institutions, we need solutions that integrate data. In this article, we introduce a method for information integration based on real-time message communication using trigger and advanced database technologies. Messages were devised to conform to HL7, a standard for electronic data exchange in healthcare environments. The HL7 based system provides us with an integrated environment in which we are able to manage the complexities of medical data. We developed this message communication interface to generate and parse HL7 messages automatically from the database point of view. We discuss how easily real time data exchange is performed in the clinical information system, given the requirement for minimum loading of the database system. PMID:14728271
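
    A toy illustration of HL7 v2-style pipe-delimited messaging of the kind the interface above generates and parses: compose a minimal ADT message and read it back field by field. The segment contents are invented placeholders, and a production interface would rely on a full HL7 library rather than string splitting.

```python
# Minimal HL7 v2-style message build/parse using the standard | delimiter.
def build_msg(patient_id, name):
    segments = [
        "MSH|^~\\&|SENDER|HOSP|RECEIVER|HOSP|202301011200||ADT^A01|0001|P|2.3",
        f"PID|1||{patient_id}||{name}",
    ]
    return "\r".join(segments)            # HL7 v2 separates segments with CR

def parse_msg(msg):
    fields = {}
    for seg in msg.split("\r"):
        parts = seg.split("|")
        fields[parts[0]] = parts[1:]      # segment name -> list of fields
    return fields

msg = build_msg("12345", "DOE^JOHN")
parsed = parse_msg(msg)
patient_field = parsed["PID"][4]          # 5th PID field carries the name
```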

  10. Analytic double product integrals for all-frequency relighting.

    PubMed

    Wang, Rui; Pan, Minghao; Chen, Weifeng; Ren, Zhong; Zhou, Kun; Hua, Wei; Bao, Hujun

    2013-07-01

    This paper presents a new technique for real-time relighting of static scenes with all-frequency shadows from complex lighting and highly specular reflections from spatially varying BRDFs. The key idea is to depict the boundaries of visible regions using piecewise linear functions, and convert the shading computation into double product integrals—the integral of the product of lighting and BRDF on visible regions. By representing lighting and BRDF with spherical Gaussians and approximating their product using Legendre polynomials locally in visible regions, we show that such double product integrals can be evaluated in an analytic form. Given the precomputed visibility, our technique computes the visibility boundaries on the fly at each shading point, and performs the analytic integral to evaluate the shading color. The result is a real-time all-frequency relighting technique for static scenes with dynamic, spatially varying BRDFs, which can generate more accurate shadows than the state-of-the-art real-time PRT methods.

  11. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain an accurate position estimate of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is the approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
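
    The band-limited Fourier-linear-combiner idea can be sketched as an LMS filter adapting sine/cosine weights at a grid of candidate frequencies spanning the assumed motion band. The frequency grid, adaptation gain, and "tremor" signal below are illustrative assumptions, not the tuned values from the paper.

```python
# BMFLC-style adaptive tracker: LMS-adapted weights on a band of frequencies.
import numpy as np

fs, T = 200.0, 10.0                      # sample rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
signal = 0.8 * np.sin(2 * np.pi * 9.5 * t + 0.4)   # tremor-like motion

freqs = np.arange(8.0, 11.0, 0.5)        # assumed frequency band of the motion
w = np.zeros(2 * len(freqs))             # one sine and one cosine weight per freq
mu = 0.01                                # LMS adaptation gain (assumed)
est = np.zeros_like(signal)
for k, s in enumerate(signal):
    x = np.concatenate([np.sin(2 * np.pi * freqs * t[k]),
                        np.cos(2 * np.pi * freqs * t[k])])
    est[k] = w @ x
    w += 2 * mu * (s - est[k]) * x       # LMS weight update

rmse_tail = np.sqrt(np.mean((signal[-200:] - est[-200:]) ** 2))
```

    After the weights converge, the combiner reproduces the in-band motion without integrating noisy sensor data, which is why drift never accumulates.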

  12. A Kirchhoff approach to seismic modeling and prestack depth migration

    NASA Astrophysics Data System (ADS)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration that can handle lateral velocity variation and turning waves. With a little extra computational cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, unlike other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
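
    The damped least-squares smoothing step can be sketched in one dimension: minimize ||v - v_obs||² + ε||D₂v||², where D₂ is a second-difference roughness operator, by solving the normal equations. The blocky 1D profile and damping weight are illustrative stand-ins for a 2D migration velocity model.

```python
# Damped least-squares smoothing of a blocky 1D velocity profile.
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
v_obs = 2.0 + (x > 0.5) * 1.0            # blocky velocity with a jump (km/s)

D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]    # second-difference roughness operator

eps = 10.0                               # damping weight (assumed value)
# normal equations of min ||v - v_obs||^2 + eps*||D2 v||^2
v_smooth = np.linalg.solve(np.eye(n) + eps * D2.T @ D2, v_obs)

max_step = np.abs(np.diff(v_smooth)).max()   # the jump is now spread out
```

    Larger ε spreads the discontinuity over more samples; the trade-off is fidelity to the original model versus the smoothness the travel-time solver needs.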

  13. A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates

    NASA Astrophysics Data System (ADS)

    Läuter, Matthias; Giraldo, Francis X.; Handorf, Dörthe; Dethloff, Klaus

    2008-12-01

    A global model of the atmosphere is presented, governed by the shallow water equations and discretized by a Runge-Kutta discontinuous Galerkin method on an unstructured triangular grid. The shallow water equations on the sphere, a two-dimensional surface in R3, are locally represented in terms of spherical triangular coordinates, the appropriate local coordinate mappings on triangles. On every triangular grid element, this leads to a two-dimensional representation of tangential momentum and therefore only two discrete momentum equations. The discontinuous Galerkin method consists of an integral formulation which requires both area (element) and line (element face) integrals. Here, we use a Rusanov numerical flux to resolve the discontinuous fluxes at the element faces. A strong stability-preserving third-order Runge-Kutta method is applied for the time discretization. The polynomial space of order k on each curved triangle of the grid is characterized by a Lagrange basis and requires high-order quadrature rules for the integration over elements and element faces. For the presented method no mass matrix inversion is necessary, except in a preprocessing step. The atmospheric model has been validated using standard tests from Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow water equations in spherical geometry, J. Comput. Phys. 102 (1992) 211-224], unsteady analytical solutions of the nonlinear shallow water equations, and a barotropic instability caused by an initial perturbation of a jet stream. A convergence rate of O(Δx) was observed in the model experiments. Furthermore, a numerical experiment is presented for which the third-order time-integration method limits the model error; thus, the time step Δt is restricted by both the CFL condition and accuracy demands.
Conservation of mass was shown up to machine precision and energy conservation converges for both increasing grid resolution and increasing polynomial order k.
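
The strong stability-preserving third-order Runge-Kutta step has a standard Shu-Osher form, sketched here for a generic semi-discrete operator L (a textbook scheme consistent with the one named above, not the authors' code):

```python
def ssp_rk3_step(L, u, dt):
    """One step of the strong-stability-preserving third-order
    Runge-Kutta scheme (Shu-Osher form): two forward-Euler stages
    combined by convex averaging, which preserves stability bounds
    of the underlying Euler step."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))
```

Applied to the scalar test problem u' = -u, ten steps of size 0.1 reproduce exp(-1) to roughly third-order accuracy.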

  14. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary-order time derivatives. We illustrate the approach by restricting attention to the perturbations due to the zonal harmonics J2 through J6. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path-approximation method for solving nonlinear ordinary differential equations. MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; and 3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators.
In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
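
The Picard iteration underlying MCPI, and the effect of a warm start, can be sketched with a simplified fixed-point iteration. Cumulative trapezoids on a uniform grid stand in for the Chebyshev machinery here, and `x_init` plays the role of the warm start; this is not MCPI itself.

```python
import numpy as np

def picard_iterate(f, t, x0, n_iter=20, x_init=None):
    """Fixed-point Picard iteration x_{k+1}(t) = x0 + int_0^t f(s, x_k(s)) ds,
    with the integral evaluated by cumulative trapezoids on the nodes `t`.
    A warm start `x_init` (e.g. from an approximate analytic solution)
    reduces the number of iterations needed to converge."""
    x = np.full_like(t, x0) if x_init is None else x_init.copy()
    for _ in range(n_iter):
        fx = f(t, x)
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (fx[1:] + fx[:-1]) * np.diff(t))))
        x = x0 + integral
    return x
```

For x' = x with x(0) = 1 on [0, 1], the iteration converges to the discrete solution, whose endpoint agrees with e up to the trapezoid truncation error.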

  15. System and method for measuring fluorescence of a sample

    DOEpatents

    Riot, Vincent J

    2015-03-24

    The present disclosure provides a system and a method for measuring the fluorescence of a sample. The sample may be a polymerase chain reaction (PCR) array, a loop-mediated isothermal amplification array, etc. LEDs are used to excite the sample, and a photodiode is used to collect the sample's fluorescence. An electronic offset signal is used to reduce the effects of background fluorescence and of noise from the measurement system. An integrator integrates the difference between the output of the photodiode and the electronic offset signal over a given period of time. The resulting integral is then converted into the digital domain for further processing and storage.
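
The integration step can be emulated digitally as follows. This is a minimal sketch; the sample values, offset, and time step are illustrative and not taken from the patent.

```python
def integrate_with_offset(samples, offset, dt):
    """Discrete emulation of the analog integrator described above:
    accumulate (photodiode output - electronic offset) over the
    measurement window using the trapezoidal rule."""
    total = 0.0
    prev = samples[0] - offset
    for s in samples[1:]:
        cur = s - offset
        total += 0.5 * (prev + cur) * dt
        prev = cur
    return total
```

With a constant photodiode reading, the result is simply (reading - offset) times the window length, which makes the offset subtraction easy to verify.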

  16. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    NASA Astrophysics Data System (ADS)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, relative positioning requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, making simultaneous, real-time analysis of GPS data from hundreds or thousands of ground stations particularly difficult. In recent years, several single-receiver approaches, which overcome the reference-station problem of the relative positioning approach, have been successfully developed and applied to real-time GPS seismology. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions.
This approach does not suffer from a convergence process, but the single integration from delta positions to displacements is accompanied by a drift due to potential uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach overcomes the convergence problem of precise point positioning (PPP) and also avoids the integration and de-trending process of the variometric approach. TPP is demonstrated to achieve displacement accuracy of a few centimeters even over a twenty-minute interval with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that coseismic displacement accuracy of a few centimeters is achievable. Keywords: high-rate GPS; real-time GPS seismology; single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion
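
The variometric idea, integrating epoch-to-epoch position changes and then removing the drift left by uncompensated errors, can be sketched as follows. The linear de-trend over a quiescent calibration window is an illustrative refinement, not the exact scheme of the cited papers.

```python
import numpy as np

def variometric_displacement(deltas, calib_len):
    """Integrate epoch-to-epoch position changes (the variometric idea)
    and remove the linear drift caused by small uncompensated biases,
    estimated from a pre-event window of `calib_len` epochs that is
    assumed quiescent (illustrative refinement)."""
    disp = np.cumsum(deltas)                 # single integration of deltas
    t = np.arange(len(disp), dtype=float)
    # Fit a line to the calibration window and remove its extrapolation.
    slope, intercept = np.polyfit(t[:calib_len], disp[:calib_len], 1)
    return disp - (slope * t + intercept)
```

If each epoch's delta carries a constant bias, the raw integration drifts linearly, while the de-trended series recovers the true step displacement.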

  17. Complex quantum Hamilton-Jacobi equation with Bohmian trajectories: Application to the photodissociation dynamics of NOCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    2014-03-14

    The complex quantum Hamilton-Jacobi equation-Bohmian trajectories (CQHJE-BT) method is introduced as a synthetic trajectory method for integrating the complex quantum Hamilton-Jacobi equation for the complex action function by propagating an ensemble of real-valued correlated Bohmian trajectories. Substituting the wave function expressed in exponential form in terms of the complex action into the time-dependent Schrödinger equation yields the complex quantum Hamilton-Jacobi equation. We transform this equation into the arbitrary Lagrangian-Eulerian version with the grid velocity matching the flow velocity of the probability fluid. The resulting equation describing the rate of change in the complex action transported along Bohmian trajectories is simultaneously integrated with the guidance equation for Bohmian trajectories, and the time-dependent wave function is readily synthesized. The spatial derivatives of the complex action required for the integration scheme are obtained by solving one moving least squares matrix equation. In addition, the method is applied to the photodissociation of NOCl. The photodissociation dynamics of NOCl can be accurately described by propagating a small ensemble of trajectories. This study demonstrates that the CQHJE-BT method combines the considerable advantages of both the real and the complex quantum trajectory methods previously developed for wave packet dynamics.

  18. Detecting submerged objects: the application of side scan sonar to forensic contexts.

    PubMed

    Schultz, John J; Healy, Carrie A; Parker, Kenneth; Lowers, Bim

    2013-09-10

    Forensic personnel must deal with numerous challenges when searching for submerged objects. While traditional water search methods have generally involved dive teams, remotely operated vehicles (ROVs), and water scent dogs for cases involving submerged objects and bodies, law enforcement is increasingly integrating multiple methods that include geophysical technologies. There are numerous advantages to integrating geophysical technologies, such as side scan sonar and ground penetrating radar (GPR), with more traditional search methods. Overall, these methods decrease the time spent searching while increasing the area searched. However, as with other search methods, each has advantages and disadvantages. For example, in instances of excessive aquatic vegetation or irregular bottom terrain, it may not be possible to discern a submerged body with side scan sonar. As a result, forensic personnel will have the highest rate of success in searches for submerged objects when they integrate multiple search methods, including deploying multiple geophysical technologies. The goal of this paper is to discuss the methodology of the various search methods employed for submerged objects and how they can be integrated as part of a comprehensive protocol for water searches, depending upon the type of underwater terrain. In addition, two successful case studies involving the search for and recovery of a submerged human body using side scan sonar are presented to illustrate the successful application of integrating a geophysical technology with divers when searching for a submerged object. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Incorporating DNA Sequencing into Current Prenatal Screening Practice for Down's Syndrome

    PubMed Central

    Wald, Nicholas J.; Bestwick, Jonathan P.

    2013-01-01

    Background Prenatal screening for Down's syndrome is performed using biochemical and ultrasound markers measured in early pregnancy, such as the Integrated test using first and second trimester markers. Recently, DNA sequencing methods applied to free DNA in maternal plasma have been introduced, yielding a high screening performance. These methods are expensive and have a test failure rate. We determined the screening performance of merging the Integrated test with the newer DNA techniques in a protocol that substantially reduces the cost compared with universal DNA testing and still achieves high screening performance with no test failures. Methods Published data were used to model the screening performance of a protocol in which all women receive the first stage of the Integrated test at about 11 weeks of pregnancy. On the basis of this result, higher-risk women have reflex DNA testing, while lower-risk women, as well as those with a failed DNA test, complete the Integrated test at about 15 weeks. Results The overall detection rate was 95% with a 0.1% false-positive rate if 20% of women were selected to receive DNA testing. If all women had DNA testing, the detection rate would be 3 to 4 percentage points higher, with a false-positive rate 30 times greater if women with failed tests were treated as positive and offered a diagnostic amniocentesis, or 3 times greater if they had a second trimester screening test (Quadruple test) and were treated as positive only if this were positive. The cost per woman screened would be about one-fifth that of universal DNA testing, if the DNA test were 20 times the cost of the Integrated test. Conclusion The proposed screening protocol achieves a high screening performance without programme test failures and at a substantially lower cost than offering all women DNA testing. PMID:23527014

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance with respect to both the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
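
The recasting step, turning a time-averaged quantity into an extra ODE state, can be sketched as follows. The shadowing and sensitivity machinery itself is omitted; the function names are illustrative.

```python
import numpy as np

def augment(rhs, g):
    """Recast the time average J = (1/T) int g(x) dt as an extra ODE
    state: append a' = g(x) to x' = rhs(t, x), so that J = a(T)/T
    after integration (the recasting step only; no sensitivities)."""
    def f(t, y):
        x = y[:-1]
        return np.concatenate([rhs(t, x), [g(x)]])
    return f

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integrator for the augmented system."""
    y, t = np.asarray(y0, float), t0
    h = (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```

For x' = -x with x(0) = 1 and g(x) = x^2, the appended state reproduces the integral (1 - exp(-2T))/2 to RK4 accuracy.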

  1. High-efficiency power transfer for silicon-based photonic devices

    NASA Astrophysics Data System (ADS)

    Son, Gyeongho; Yu, Kyoungsik

    2018-02-01

    We demonstrate an efficient coupling of guided light of 1550 nm from a standard single-mode optical fiber to a silicon waveguide using the finite-difference time-domain method and propose a fabrication method of tapered optical fibers for efficient power transfer to silicon-based photonic integrated circuits. Adiabatically-varying fiber core diameters with a small tapering angle can be obtained using the tube etching method with hydrofluoric acid and standard single-mode fibers covered by plastic jackets. The optical power transmission of the fundamental HE11 and TE-like modes between the fiber tapers and the inversely-tapered silicon waveguides was calculated with the finite-difference time-domain method to be more than 99% at a wavelength of 1550 nm. The proposed method for adiabatic fiber tapering can be applied in quantum optics, silicon-based photonic integrated circuits, and nanophotonics. Furthermore, efficient coupling within the telecommunication C-band is a promising approach for quantum networks in the future.

  2. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or nonlinear ordinary differential equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path-approximation approach and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup over traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration; the output is a time history of the system states over the interval of interest.
    Examples from the field of astrodynamics are presented to compare the output of the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of outperforming the state of practice in terms of both computational cost and accuracy.
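
The matrix-inversion-free coefficient computation via discrete inner products can be sketched using the discrete orthogonality of Chebyshev polynomials at Chebyshev-Gauss nodes. This is a generic identity, not the MCPI library's code.

```python
import numpy as np

def cheb_coeffs(f, n):
    """Chebyshev coefficients of f on [-1, 1] by discrete inner products
    at the Chebyshev-Gauss nodes x_j = cos(pi*(j+1/2)/n) -- no matrix
    inversion is needed, as noted in the abstract above."""
    j = np.arange(n)
    theta = np.pi * (j + 0.5) / n
    x = np.cos(theta)
    fx = f(x)
    k = np.arange(n)[:, None]
    c = (2.0 / n) * (np.cos(k * theta) @ fx)  # discrete inner products
    c[0] *= 0.5                               # T_0 has a different norm
    return c

def cheb_eval(c, x):
    """Evaluate sum_k c_k T_k(x) via T_k(x) = cos(k*arccos(x))."""
    k = np.arange(len(c))[:, None]
    return (c[:, None] * np.cos(k * np.arccos(x))).sum(axis=0)
```

For f(x) = x^2 = (T_0 + T_2)/2, the computed coefficients are c_0 = c_2 = 1/2 with all others at rounding level.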

  3. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODEs) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated step-size-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
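
A first-order exponential time differencing step illustrates why exponential-fitted algorithms suit kinetics: the stiff linear part is advanced exactly, so large steps remain stable where explicit methods fail. This is a generic sketch, not one of the codes named above.

```python
import math

def exp_euler_step(u, lam, g, h):
    """One exponential-integrator step for u' = lam*u + g(u): the stiff
    linear part is treated exactly via exp(lam*h); only the remainder
    g is frozen over the step (first-order exponential time
    differencing). Note: explicit Euler with lam*h = -100 would have
    amplification |1 + lam*h| = 99 and blow up."""
    phi1 = (math.exp(lam * h) - 1.0) / (lam * h)
    return math.exp(lam * h) * u + h * phi1 * g(u)
```

For pure exponential decay the step is exact, and for a stiff relaxation problem the iteration converges to the correct steady state even with steps far beyond the explicit stability limit.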

  4. Activity ranking of synthetic analogs targeting vascular endothelial growth factor receptor 2 by an integrated cell membrane chromatography system.

    PubMed

    Wang, Dongyao; Lv, Diya; Chen, Xiaofei; Liu, Yue; Ding, Xuan; Jia, Dan; Chen, Langdong; Zhu, Zhenyu; Cao, Yan; Chai, Yifeng

    2015-12-01

    Evaluating the biological activities of small molecules represents an important part of the drug discovery process. Cell membrane chromatography (CMC) is a well-developed biological chromatographic technique. In this study, we combined SMMC-7721/CMC and HepG2/CMC with high-performance liquid chromatography and time-of-flight mass spectrometry to establish an integrated screening platform. These systems were subsequently validated and used to evaluate the activity of quinazoline compounds designed and synthesized to target vascular endothelial growth factor receptor 2. The inhibitory activities of these compounds towards this receptor were also tested using a classical caliper mobility shift assay. The results revealed a significant correlation between the two methods (R(2) = 0.9565 or 0.9420) for evaluating the activities of these compounds. Compared with traditional methods of evaluating the activities of analogous compounds, this integrated cell membrane chromatography screening system took less time and was more cost-effective, indicating that it could be used as a practical method in drug discovery. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Increased of the capacity integral bridge with reinforced concrete beams for single span

    NASA Astrophysics Data System (ADS)

    Setiati, N. Retno

    2017-11-01

    The Sinapeul Bridge, built in 2012 in Sumedang, is a fully integral bridge. This prototype integral bridge, with reinforced concrete girders and a single 20 m span, has lost capacity over the years since construction. Strain occurring in the abutment was monitored in 2014. The recorded data show that the maximum strain at the abutment, at the location where the girder is integrated, is 10.59 x 10-6 (a tensile stress of 0.25 MPa), smaller than the limit of 150 x 10-6 (a tensile stress of 3 MPa) for the occurrence of cracks in concrete; the abutment of the integral system is thus still intact. The deflection of the bridge during the load test was 1.31 mm, but the bridge's deflection has since exceeded the permissible value (a deflection of 40 mm occurred), and the slab has also been damaged. One cause of the slab damage is the load factor. An effort to increase the capacity of the integral bridge through retrofitting is therefore required. Retrofitting also aims to restore the capacity the bridge structure has lost to deterioration, and can be done by shortening the span or by using fibre-reinforced polymer (FRP). The analysis shows that retrofitting with FRP is simpler and more effective: it increases the shear and bending-moment capacity of the existing bridge by 41%, and it does not change the integral system of the Sinapeul Bridge into a conventional bridge.

  6. Real-time simulations for automated rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Cuseo, John A.

    1991-01-01

    Although the individual technologies for automated rendezvous and capture (AR&C) exist, they have not yet been integrated to produce a working system in the United States. Thus, real-time integrated systems simulations are critical to the development and pre-flight demonstration of an AR&C capability. Real-time simulations require a level of development more typical of a flight system compared to purely analytical methods, thus providing confidence in derived design concepts. This presentation will describe Martin Marietta's Space Operations Simulation (SOS) Laboratory, a state-of-the-art real-time simulation facility for AR&C, along with an implementation for the Satellite Servicer System (SSS) Program.

  7. Forecasting daily meteorological time series using ARIMA and regression models

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving-average methods, the autoregressive integrated moving-average with external regressors in the form of Fourier terms, and the time series regression methodology including trend and seasonality components, with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
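
The deterministic core of the "ARIMA with Fourier-term regressors" approach, a trend plus seasonal harmonics fitted by least squares, can be sketched as follows. The ARIMA error model is omitted, and the period and harmonic count are illustrative.

```python
import numpy as np

def seasonal_design(t, period, n_harm=2):
    """Design matrix with intercept, linear trend, and Fourier seasonal terms."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t), t]
    for k in range(1, n_harm + 1):
        cols += [np.sin(2 * np.pi * k * t / period),
                 np.cos(2 * np.pi * k * t / period)]
    return np.column_stack(cols)

def fit_seasonal_regression(y, period, n_harm=2):
    """Least-squares fit of trend + Fourier seasonality (the regression
    part of the forecasting methods above; no ARIMA error model)."""
    X = seasonal_design(np.arange(len(y)), period, n_harm)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def predict_seasonal(beta, t, period, n_harm=2):
    """Forecast by evaluating the fitted regression at future times t."""
    return seasonal_design(t, period, n_harm) @ beta
```

On synthetic data generated exactly from this model (annual sinusoid plus linear trend), the out-of-sample forecast reproduces the truth to numerical precision.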

  8. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov Websites

    Newsletter excerpts: distribution feeder models for use in hardware-in-the-loop (HIL) experiments, using a method in which a full feeder is modeled; a paper proposing an additional control loop to improve frequency support while ensuring stable operation; and a paper whose title ends in "and Frequency Deviation," which also proposes an additional control loop, this time to smooth the wind power output.

  9. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate, and yields better reconstruction quality, than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection areas into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computing the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising image quality.
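
For contrast with the area integral model, the line integral model's ray-pixel intersection lengths (the basic Siddon-style computation referred to above) can be sketched as:

```python
import numpy as np

def ray_pixel_lengths(p0, p1, nx, ny, h=1.0):
    """Intersection lengths of the segment p0 -> p1 with an nx-by-ny
    pixel grid whose lower-left corner is at the origin (a basic
    Siddon-style line integral model, not the paper's area model)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    # Parametric values where the ray crosses the x- and y-grid planes.
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0:
            planes = np.arange(n + 1) * h
            a = (planes - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0) & (a < 1)])
    alphas = np.unique(alphas)
    seg_len = np.diff(alphas) * np.hypot(*d)
    # Segment midpoints identify which pixel each piece lies in.
    mids = p0 + np.outer((alphas[:-1] + alphas[1:]) / 2, d)
    out = {}
    for length, (x, y) in zip(seg_len, mids):
        i, j = int(x // h), int(y // h)
        if 0 <= i < nx and 0 <= j < ny and length > 0:
            out[(i, j)] = out.get((i, j), 0.0) + length
    return out
```

A horizontal ray across three unit pixels contributes length 1 to each, and a diagonal ray through a 2x2 grid contributes sqrt(2) to each pixel it crosses.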

  10. Application of the Green's function method for 2- and 3-dimensional steady transonic flows

    NASA Technical Reports Server (NTRS)

    Tseng, K.

    1984-01-01

    A time-domain Green's function method for the nonlinear, time-dependent, three-dimensional aerodynamic potential equation is presented. Green's theorem is used to transform the partial differential equation into an integro-differential-delay equation. Finite-element and finite-difference methods are employed for the spatial and time discretizations to approximate the integral equation by a system of differential-delay equations, and the solution is obtained by solving this nonlinear simultaneous system of equations in time. This paper discusses the application of the method to the Transonic Small Disturbance Equation, and numerical results for lifting and nonlifting airfoils and wings in steady flows are presented.

  11. Downdating a time-varying square root information filter

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.

    1990-01-01

    A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.
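
The companion operation mentioned in the last sentence, updating an SRIF array with new measurements by QR re-triangularization, can be sketched as follows. Downdating replaces the orthogonal rotations with hyperbolic ones and is not shown here.

```python
import numpy as np

def srif_update(R, z, A, y):
    """Add measurements y = A x + noise to an SRIF information array
    (R, z), where R x ~ z: stack the prior array on top of the new
    rows and re-triangularize with a QR factorization (the standard
    square-root measurement update)."""
    n = R.shape[0]
    stacked = np.vstack([np.hstack([R, z[:, None]]),
                         np.hstack([A, y[:, None]])])
    _, r = np.linalg.qr(stacked)
    return r[:n, :n], r[:n, n]   # updated (R, z)
```

Solving R_new x = z_new after the update yields the combined least-squares estimate; with a weak prior and exact measurements it recovers the true state to within the prior's small bias.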

  12. Integral equation methods for vesicle electrohydrodynamics in three dimensions

    NASA Astrophysics Data System (ADS)

    Veerapaneni, Shravan

    2016-12-01

    In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.
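
The semi-implicit idea, treating the stiff linear operator implicitly and the remaining terms explicitly, can be sketched with a first-order IMEX Euler step. The paper's scheme differs in detail; this only illustrates the stiffness-taming mechanism.

```python
import numpy as np

def imex_euler(L, N, u0, h, steps):
    """Semi-implicit (IMEX) Euler for u' = L u + N(u): the stiff linear
    operator L is treated implicitly, N explicitly, giving
    u_{n+1} = (I - h L)^{-1} (u_n + h N(u_n))."""
    n = len(u0)
    M = np.linalg.inv(np.eye(n) - h * L)  # (I - hL)^{-1}, reused each step
    u = np.asarray(u0, float)
    for _ in range(steps):
        u = M @ (u + h * N(u))
    return u
```

For u' = -1000 u + 1000, explicit Euler with h = 0.01 has amplification |1 - 10| = 9 and diverges, while the IMEX step contracts toward the steady state u = 1.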

  13. METHOD OF MEASURING THE INTEGRATED ENERGY OUTPUT OF A NEUTRONIC CHAIN REACTOR

    DOEpatents

    Sturm, W.J.

    1958-12-01

    A method is presented for measuring the integrated energy output of a reactor consisting of the steps of successively irradiating calibrated thin foils of an element, such as gold, which is rendered radioactive by exposure to neutron flux, for periods of time not greater than one-fifth the mean life of the induced radioactivity, and producing an indication of the radioactivity induced in each foil, each foil being introduced into the reactor immediately upon removal of its predecessor.

  14. Simulation verification techniques study: Simulation performance validation techniques document. [for the space shuttle system

    NASA Technical Reports Server (NTRS)

    Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.

    1975-01-01

    Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.

  15. Integral method for the calculation of Hawking radiation in dispersive media. II. Asymmetric asymptotics.

    PubMed

    Robertson, Scott

    2014-11-01

    Analog gravity experiments make feasible the realization of black hole space-times in a laboratory setting and the observational verification of Hawking radiation. Since such analog systems are typically dominated by dispersion, efficient techniques for calculating the predicted Hawking spectrum in the presence of strong dispersion are required. In the preceding paper, an integral method in Fourier space is proposed for stationary 1+1-dimensional backgrounds which are asymptotically symmetric. Here, this method is generalized to backgrounds which are different in the asymptotic regions to the left and right of the scattering region.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argo, P.E.; DeLapp, D.; Sutherland, C.D.

    TRACKER is an extension of a three-dimensional Hamiltonian raytrace code developed some thirty years ago by R. Michael Jones. Subsequent modifications to this code, which is commonly called the "Jones Code," were documented by Jones and Stephensen (1975). TRACKER incorporates an interactive user's interface, modern differential equation integrators, graphical outputs, homing algorithms, and the Ionospheric Conductivity and Electron Density (ICED) ionosphere. TRACKER predicts the three-dimensional paths of radio waves through model ionospheres by numerically integrating Hamilton's equations, which are a differential expression of Fermat's principle of least time. By using continuous models, the Hamiltonian method avoids false caustics and discontinuous raypath properties often encountered in other raytracing methods. In addition to computing the raypath, TRACKER also calculates the group path (or pulse travel time), the phase path, the geometrical (or "real") pathlength, and the Doppler shift (if the time variation of the ionosphere is explicitly included). Computational speed can be traded for accuracy by specifying the maximum allowable integration error per step. Only geometrical optics are included in the main raytrace code; no partial reflections or diffraction effects are taken into account. In addition, TRACKER does not lend itself to statistical descriptions of propagation; it requires a deterministic model of the ionosphere.
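    A minimal Hamiltonian ray trace in the same spirit can be written in a few lines. The linearly stratified index profile below is an illustrative assumption, not the ICED ionosphere, and the RK4 stepper stands in for TRACKER's modern integrators.

```python
import numpy as np

# With H = (|k|^2 - n^2(r))/2, Hamilton's equations are
#   dr/dt = k,   dk/dt = grad(n^2)/2.
# Toy stratified medium: n^2 = 1 + ALPHA*z, so grad(n^2) = (0, ALPHA).
ALPHA = 0.2

def rhs(y):
    k = y[2:]
    grad_n2 = np.array([0.0, ALPHA])
    return np.concatenate([k, 0.5 * grad_n2])

def rk4_step(y, h):                        # classic 4th-order Runge-Kutta
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.0, 0.0, 1.0, 0.0])         # start at origin, horizontal ray
for _ in range(200):
    y = rk4_step(y, 0.01)                  # trace to t = 2
# for this profile the ray bends upward: z(t) = ALPHA*t**2/4 exactly
```

    Because the index model is continuous, the computed path bends smoothly; this is the property that lets Hamiltonian ray tracing avoid the false caustics mentioned above.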

  17. Solving modal equations of motion with initial conditions using MSC/NASTRAN DMAP. Part 1: Implementing exact mode superposition

    NASA Technical Reports Server (NTRS)

    Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.

    1993-01-01

    Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.
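    The trade-off above can be seen on a single modal equation. This is a scalar analogue, not the MSC/NASTRAN DMAP implementation: one undamped mode is advanced both by the exact free-vibration propagator (the idea behind exact mode superposition) and by the Newmark-Beta average-acceleration rule.

```python
import numpy as np

# x'' + w^2 x = 0, x(0) = 1, v(0) = 0, integrated over one full period.
w, dt, nstep = 2.0 * np.pi, 0.01, 100
c, s = np.cos(w * dt), np.sin(w * dt)
exact = np.array([[c, s / w], [-w * s, c]])   # exact state propagator

x_e = np.array([1.0, 0.0])                    # (displacement, velocity)
x_n, v_n = 1.0, 0.0
beta, gamma = 0.25, 0.5                       # average acceleration
for _ in range(nstep):
    x_e = exact @ x_e                         # exact step: no error
    a_n = -w**2 * x_n                         # Newmark-Beta step:
    x_new = (x_n + dt * v_n + dt**2 * (0.5 - beta) * a_n) \
            / (1.0 + beta * dt**2 * w**2)
    a_new = -w**2 * x_new
    v_n = v_n + dt * ((1.0 - gamma) * a_n + gamma * a_new)
    x_n = x_new
```

    The exact propagator returns to the initial state to machine precision regardless of step size, while Newmark-Beta accumulates period error, which is exactly why small steps (and long run times) are needed to keep its results acceptable.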

  18. Impact of telemonitoring approaches on integrated HIV and TB diagnosis and treatment interventions in sub-Saharan Africa: a scoping review.

    PubMed

    Yah, Clarence S; Tambo, Ernest; Khayeka-Wandabwa, Christopher; Ngogang, Jeanne Y

    2017-01-01

    Background: This paper explores telemonitoring/mHealth approaches as a promising real-time and contextual strategy for improving the quality, access, uptake, retention, adherence and coverage of HIV and TB interventions for epidemic prevention and control in sub-Saharan Africa. Methods: A scoping review was conducted across recognized journal indexing platforms, including Medline, Embase, Global Health, PubMed, MeSH, PsycInfo, Scopus and Google Scholar, to identify articles pertaining to telemonitoring as a proxy method for reinforcing the sustainability of HIV/TB prevention and treatment interventions in sub-Saharan Africa. Full papers were assessed, and those providing evidence on telemonitoring/mHealth diagnosis and treatment approaches and strategies in HIV and TB prevention and control were synthesized and analyzed. Results: We found the telemonitoring/mHealth approach to be an efficient and sustainable proxy in HIV and TB risk-reduction strategies for early diagnosis and prompt, high-quality clinical outcomes. It can significantly reduce health system and patient costs, long waiting times in clinics, hospital visits, travel, and time off work. Improved, integrated HIV and TB telemonitoring systems hold great promise for health systems strengthening, including patient-centered early diagnosis and care delivery, uptake of and retention on medications and services, and improved patient survival and quality of life. Conclusion: The acceptability, access and uptake of telemonitoring/mHealth (electronic phone text/video/materials messaging) are crucial for monitoring and improving uptake, retention, adherence and coverage in both local and national integrated HIV and TB programs and interventions.
    Moreover, telemonitoring is crucial to patient-provider-health professional partnership, real-time quality care and service delivery, antiretroviral and anti-tuberculosis drug susceptibility monitoring and prescription choice, and reinforcing a cost-effective integrated HIV and TB therapy model and survival rate.

  19. CB4-03: An Eye on the Future: A Review of Data Virtualization Techniques to Improve Research Analytics

    PubMed Central

    Richter, Jack; McFarland, Lela; Bredfeldt, Christine

    2012-01-01

    Background/Aims Integrating data across systems can be a daunting process. The traditional method of moving data to a common location, mapping fields with different formats and meanings, and performing data cleaning activities to ensure valid and reliable integration across systems can be both expensive and extremely time consuming. As the scope of needed research data increases, the traditional methodology may not be sustainable. Data Virtualization provides an alternative to traditional methods that may reduce the effort required to integrate data across disparate systems. Objective Our goal was to survey new methods in data integration, cloud computing, enterprise data management and virtual data management for opportunities to increase the efficiency of producing VDW and similar data sets. Methods Kaiser Permanente Information Technology (KPIT), in collaboration with the Mid-Atlantic Permanente Research Institute (MAPRI) reviewed methodologies in the burgeoning field of Data Virtualization. We identified potential strengths and weaknesses of new approaches to data integration. For each method, we evaluated its potential application for producing effective research data sets. Results Data Virtualization provides opportunities to reduce the amount of data movement required to integrate data sources on different platforms in order to produce research data sets. Additionally, Data Virtualization also includes methods for managing “fuzzy” matching used to match fields known to have poor reliability such as names, addresses and social security numbers. These methods could improve the efficiency of integrating state and federal data such as patient race, death, and tumors with internal electronic health record data. Discussion The emerging field of Data Virtualization has considerable potential for increasing the efficiency of producing research data sets. 
An important next step will be to develop a proof-of-concept project that will help us understand the benefits and drawbacks of these techniques.

  20. Automatic integration of data from dissimilar sensors

    NASA Astrophysics Data System (ADS)

    Citrin, W. I.; Proue, R. W.; Thomas, J. W.

    The present investigation is concerned with the automatic integration of radar and electronic support measures (ESM) sensor data, and with the development of a method for the automatic integration of identification friend or foe (IFF) and radar sensor data. On the basis of the two projects considered, significant advances have been made in the area of sensor data integration. It is pointed out that the log-likelihood approach to sensor data correlation is appropriate for both similar and dissimilar sensor data. Attention is given to the real-time integration of radar and ESM sensor data, and to a radar/ESM correlation simulation program.

  1. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-07-25

    This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all four methods generate satisfactory results. One method may preserve one or two of the required statistical characteristics better than the other methods, but not preserve the remaining characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
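    A plain AR(1) generator, simpler than any of the paper's four algorithms, illustrates the basic requirement: the synthetic error series must reproduce target statistics (here standard deviation and lag-1 autocorrelation) estimated from historical forecasts. The MW-scale numbers are assumptions.

```python
import numpy as np

# Draw a forecast-error series e[t] = rho*e[t-1] + innovation, with the
# innovation variance chosen so the marginal standard deviation equals
# the target std regardless of rho.
def ar1_errors(n, std, rho, rng):
    e = np.empty(n)
    e[0] = rng.normal(0.0, std)
    innov_std = std * np.sqrt(1.0 - rho**2)   # keeps marginal std fixed
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, innov_std)
    return e

rng = np.random.default_rng(0)
err = ar1_errors(50000, std=100.0, rho=0.8, rng=rng)   # e.g. MW-scale errors
rho_hat = np.corrcoef(err[:-1], err[1:])[0, 1]         # should be near 0.8
```

    A truncated-normal, Markov, or ARMA generator would replace the one-line recursion, but the validation step, comparing sample statistics of the generated series against the historical targets, is the same.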

  2. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS) represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
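    The non-intrusive construction can be sketched on a toy model. Here the "model" is y = exp(x) with a single standard-normal input (an assumption for illustration, not a HydroGeoSphere run); the chaos coefficients are computed by Gauss-Hermite quadrature, after which mean and variance fall out of the coefficients with no further model evaluations.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

# Project y = f(x), x ~ N(0,1), onto probabilists' Hermite polynomials:
#   c_k = E[f(x) He_k(x)] / k!,  mean = c_0,  var = sum_k>0 k! c_k^2.
deg, nq = 6, 20
x, w = H.hermegauss(nq)                  # nodes/weights for exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)             # normalise to the N(0,1) density
y = np.exp(x)                            # "model" evaluated at the nodes
c = np.array([np.sum(w * y * H.hermeval(x, np.eye(deg + 1)[k]))
              / math.factorial(k) for k in range(deg + 1)])
mean = c[0]                              # exact value is e^0.5
var = sum(math.factorial(k) * c[k]**2 for k in range(1, deg + 1))
```

    For exp(x) the exact statistics are known (mean e^0.5, variance e^2 - e), so the truncation error of the degree-6 expansion can be checked directly; a real model with many uncertain parameters uses a multivariate basis but the same projection.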

  3. Integration of scheduling and discrete event simulation systems to improve production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and with the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. The approach is illustrated through examples of practical implementation using the KbRS scheduling system and the Enterprise Dynamics simulation system.

  4. The DTIC Review. Volume 2, Number 3: Optical and Infrared Detection and Countermeasures

    DTIC Science & Technology

    1996-10-01

    are different from those encountered in designing wavelets for other applications. For use in time-frequency analysis of signals, for example, it ... view within the field of regard, and for high-fidelity simulation of optical blurring and temporal effects such as jitter. The real-time CLDWSG method ... integration methods or, for near spatially invariant FOV regions, by convolution methods or by way of the convolution theorem using OTF frequency-domain

  5. Second-order variational equations for N-body simulations

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2016-07-01

    First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
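    The first-order variational idea, propagating a tangent perturbation alongside the trajectory to measure divergence, can be shown on the simplest possible system. This uses the r = 4 logistic map rather than an N-body problem (an illustrative substitution, not from the paper): the tangent vector evolves as delta -> f'(x)*delta, so the MLE is the mean log stretching rate.

```python
import math

# For f(x) = 4x(1-x) the maximal Lyapunov exponent is known to be ln 2,
# so the running average of log|f'(x)| along the orbit should converge
# to about 0.693.
x, acc, n = 0.2, 0.0, 100000
for _ in range(n):
    acc += math.log(abs(4.0 * (1.0 - 2.0 * x)))    # log|f'(x)|
    x = 4.0 * x * (1.0 - x)
mle = acc / n
```

    In continuous time the same computation integrates the variational equations next to the orbit; the second-order extension in the paper additionally carries second derivatives of the flow, enabling Newton-type optimization.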

  6. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one has the luxury of taking a large temporal integration step at the expense of higher memory requirements and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. Its memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables over two consecutive time steps had fallen below 10(exp -5).

  7. CHRONOS: a time-varying method for microRNA-mediated subpathway enrichment analysis.

    PubMed

    Vrahatis, Aristidis G; Dimitrakopoulou, Konstantina; Balomenos, Panos; Tsakalidis, Athanasios K; Bezerianos, Anastasios

    2016-03-15

    In the era of network medicine and the rapid growth of paired time series mRNA/microRNA expression experiments, there is an urgent need for pathway enrichment analysis methods able to capture the time- and condition-specific 'active parts' of the biological circuitry as well as the microRNA impact. Current methods ignore the multiple dynamical 'themes'-in the form of enriched biologically relevant microRNA-mediated subpathways-that determine the functionality of signaling networks across time. To address these challenges, we developed time-vaRying enriCHment integrOmics Subpathway aNalysis tOol (CHRONOS) by integrating time series mRNA/microRNA expression data with KEGG pathway maps and microRNA-target interactions. Specifically, microRNA-mediated subpathway topologies are extracted and evaluated based on the temporal transition and the fold change activity of the linked genes/microRNAs. Further, we provide measures that capture the structural and functional features of subpathways in relation to the complete organism pathway atlas. Our application to synthetic and real data shows that CHRONOS outperforms current subpathway-based methods in unraveling the inherent dynamic properties of pathways. Availability: CHRONOS is freely available at http://biosignal.med.upatras.gr/chronos/. Contact: tassos.bezerianos@nus.edu.sg. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Variational methods for direct/inverse problems of atmospheric dynamics and chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena

    2013-04-01

    We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided across the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new, enhanced set of cost-effective algorithms. A matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved.
    For the convection-diffusion equations for all state functions in the integrated models we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of chemical substance concentrations and possess the energy and mass balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integrating projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330. Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, 326-341. Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
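    The integrating-factor idea behind the chemistry solver can be shown on a single production-destruction model equation. This is a stand-in for one chemical species with production P and destruction rate D held constant over the step (an assumption for illustration, not the full multi-species scheme).

```python
import math

# For y' = P - D*y, multiplying by exp(D*t) and integrating over one
# step gives the closed-form update
#   y_{n+1} = y_n * exp(-D*dt) + (P/D) * (1 - exp(-D*dt)),
# which is exact for constant P, D: no Jacobian is built or inverted,
# and the update preserves positivity.
def integrating_factor_step(y, P, D, dt):
    e = math.exp(-D * dt)
    return y * e + (P / D) * (1.0 - e)

y, P, D, dt = 0.0, 2.0, 50.0, 0.1        # stiff: D*dt = 5
for _ in range(10):
    y = integrating_factor_step(y, P, D, dt)
exact = (P / D) * (1.0 - math.exp(-D * 1.0))   # analytic solution at t = 1
```

    An explicit scheme at D*dt = 5 would be unstable and an implicit one would need a linear solve per step; the integrating-factor update is both stable and direct, which is the advantage claimed above.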

  9. The Performance Analysis of a Real-Time Integrated INS/GPS Vehicle Navigation System with Abnormal GPS Measurement Elimination

    PubMed Central

    Chiang, Kai-Wei; Duong, Thanh Trung; Liao, Jhen-Kai

    2013-01-01

    The integration of an Inertial Navigation System (INS) and the Global Positioning System (GPS) is common in mobile mapping and navigation applications to seamlessly determine the position, velocity, and orientation of the mobile platform. In most INS/GPS integrated architectures, the GPS is considered to be an accurate reference with which to correct for the systematic errors of the inertial sensors, which are composed of biases, scale factors and drift. However, the GPS receiver may produce abnormal pseudo-range errors mainly caused by ionospheric delay, tropospheric delay and the multipath effect. These errors degrade the overall position accuracy of an integrated system that uses conventional INS/GPS integration strategies such as loosely coupled (LC) and tightly coupled (TC) schemes. Conventional tightly coupled INS/GPS integration schemes apply the Klobuchar model and the Hopfield model to reduce pseudo-range delays caused by ionospheric delay and tropospheric delay, respectively, but do not address the multipath problem. However, the multipath effect (from reflected GPS signals) affects the position error far more significantly in a consumer-grade GPS receiver than in an expensive, geodetic-grade GPS receiver. To avoid this problem, a new integrated INS/GPS architecture is proposed. The proposed method is described and applied in a real-time integrated system with two integration strategies, namely, loosely coupled and tightly coupled schemes, respectively. To verify the effectiveness of the proposed method, field tests with various scenarios are conducted and the results are compared with a reliable reference system. PMID:23955434
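    The idea behind eliminating abnormal GPS measurements can be sketched with a gated Kalman update. This is a generic chi-square innovation test, not the paper's architecture; the two-state position model and noise levels are assumptions.

```python
import numpy as np

# A chi-square test on the Kalman innovation rejects fixes that are
# statistically inconsistent with the predicted covariance, as
# multipath-corrupted pseudo-ranges tend to be.
def gated_update(x, P, z, H, R, gate=9.21):       # 99% gate for 2 dof
    y = z - H @ x                                 # innovation
    S = H @ P @ H.T + R                           # innovation covariance
    if y @ np.linalg.solve(S, y) > gate:
        return x, P, False                        # abnormal: reject fix
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P, True

x, P = np.zeros(2), np.eye(2)
H, R = np.eye(2), 0.1 * np.eye(2)
x1, P1, ok1 = gated_update(x, P, np.array([0.1, 0.0]), H, R)    # plausible
x2, P2, ok2 = gated_update(x, P, np.array([50.0, 50.0]), H, R)  # multipath-like
```

    In a loosely coupled filter z is the GPS position fix; in a tightly coupled one the same gate is applied per pseudo-range, which is where the proposed scheme differs from the Klobuchar/Hopfield corrections that address only atmospheric delays.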

  10. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time.
    The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables, sets up initial conditions, and performs the integration; (2) a routine that calculates the system's reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
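    The recurrence-based differentiation arithmetic can be shown on a scalar problem. This uses y' = y^2, not SNAP's force models: the product rule turns the ODE into a coefficient recurrence, so one step is a high-order series summation rather than repeated stage evaluations.

```python
# For y' = y^2, equating Taylor coefficients gives
#   c[k+1] = (sum_{j=0..k} c[j]*c[k-j]) / (k+1),
# i.e. the Cauchy-product recurrence that "differentiation arithmetic"
# generalises to quotients and other elementary operations.
def taylor_step(y0, dt, order=15):
    c = [y0]
    for k in range(order):
        c.append(sum(c[j] * c[k - j] for j in range(k + 1)) / (k + 1))
    return sum(ck * dt**k for k, ck in enumerate(c))

y = taylor_step(0.5, 0.1)        # exact solution is 1/(2 - t)
```

    Because each additional order costs only one more recurrence pass, very high orders (and hence very large, error-controlled steps) are cheap, which is the source of the speedups reported above.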

  11. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.

    1988-01-01

    The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Furthermore, an integral boundary-layer method is believed to be more desirable than a finite-difference approach: while the two achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.
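    Thwaites' method is a textbook example of the integral boundary-layer approach favoured above (shown here for a zero-pressure-gradient flat plate, not the Eppler and Somers code; the air properties and edge velocity are assumed).

```python
import numpy as np

# Thwaites: theta^2 = 0.45*(nu/U_e^6) * integral(U_e^5 dx). For constant
# edge velocity this collapses to theta = sqrt(0.45*nu*x/U), which sits
# within about 1% of the exact Blasius result theta = 0.664*sqrt(nu*x/U),
# at a tiny fraction of the cost of a finite-difference solution.
nu, U = 1.5e-5, 10.0                  # air viscosity (m^2/s), edge velocity
x = np.linspace(1e-3, 1.0, 1000)      # streamwise stations (m)
theta_thwaites = np.sqrt(0.45 * nu * x / U)
theta_blasius = 0.664 * np.sqrt(nu * x / U)
```

    The whole boundary layer is reduced to one ODE for the momentum thickness, which is why integral methods run so much faster than field methods of comparable engineering accuracy.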

  12. Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs

    NASA Astrophysics Data System (ADS)

    Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane

    2016-12-01

    The Regge-Wheeler-Zerilli (RWZ) wave equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in the time domain. It consists of a finite difference scheme in which analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method, at second order, to generic geodesic orbits of EMRI (Extreme Mass Ratio Inspiral) sources. An EMRI is a Compact Star (CS) captured by a Super-Massive Black Hole (SMBH). These systems are considered the best probes for testing gravitation in the strong-field regime. The gravitational wave-forms and the radiated energy and angular momentum at infinity are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).
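    The underlying time-domain finite-difference machinery can be illustrated on the plain 1D wave equation with no potential or source, so none of the RWZ jump-condition treatment is needed; this toy is an illustration, not the paper's scheme.

```python
import numpy as np

# Leapfrog update psi[n+1,i] = psi[n,i+1] + psi[n,i-1] - psi[n-1,i].
# With the "magic" step dt = dx this reproduces d'Alembert's solution
# exactly on a periodic grid, so the pulse splits into two halves that
# translate without numerical dispersion.
n, steps = 200, 60
i = np.arange(n)
f = np.exp(-0.02 * (i - n // 2) ** 2)          # initial pulse, psi_t = 0
prev = f.copy()
curr = 0.5 * (np.roll(f, 1) + np.roll(f, -1))  # exact first step
for _ in range(steps - 1):
    prev, curr = curr, np.roll(curr, 1) + np.roll(curr, -1) - prev
dalembert = 0.5 * (np.roll(f, steps) + np.roll(f, -steps))
```

    With a potential term or a distributional source on the worldline of the compact star, the stencil must be modified across the discontinuity, which is precisely what the jump-condition expressions in the paper supply analytically.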

  13. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time-dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields, including astrophysics and inertial confinement fusion. The associated initial boundary value problems often exhibit a wide range of scales in space and time and are extremely challenging to solve. To simulate these systems efficiently and accurately, we describe our research on combining techniques that will also find broader use in the long-term time integration of nonlinear multi-physics systems: implicit time integration for efficient long-term integration of stiff multi-physics systems; local control-theory-based step-size control to minimize the required global number of time steps while controlling accuracy; dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs; Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution; and optimal multilevel preconditioner components that provide level-independent solver convergence.

  14. A Novel Multilayered RFID Tagged Cargo Integrity Assurance Scheme

    PubMed Central

    Yang, Ming Hour; Luo, Jia Ning; Lu, Shao Yong

    2015-01-01

    To minimize cargo theft during transport, mobile radio frequency identification (RFID) grouping proof methods are generally employed to ensure the integrity of entire cargo loads. However, conventional grouping proofs cannot simultaneously generate grouping proofs for a specific group of RFID tags. The most serious problem of these methods is that nonexistent tags are included in the grouping proofs because of the considerable amount of time it takes to scan a high number of tags. Thus, applying grouping proof methods in the current logistics industry is difficult. To solve this problem, this paper proposes a method for generating multilayered offline grouping proofs. The proposed method provides tag anonymity; moreover, resolving disputes between recipients and transporters over the integrity of cargo deliveries can be expedited by generating grouping proofs and automatically authenticating the consistency between the receipt proof and pick proof. The proposed method can also protect against replay attacks, multi-session attacks, and concurrency attacks. Finally, experimental results verify that, compared with other methods for generating grouping proofs, the proposed method can efficiently generate offline grouping proofs involving several parties in a supply chain using mobile RFID. PMID:26512673

  15. Integrated sensor biopsy device for real time tissue metabolism analysis

    NASA Astrophysics Data System (ADS)

    Delgado Alonso, Jesus; Lieberman, Robert A.; DiCarmine, Paul M.; Berry, David; Guzman, Narciso; Marpu, Sreekar B.

    2018-02-01

    Current methods for guiding cancer biopsies rely almost exclusively on images derived from X-ray, ultrasound, or magnetic resonance, which essentially characterize suspected lesions based only on tissue density. This paper presents a sensor-integrated biopsy device for in situ tissue analysis that will enable biopsy teams to measure local tissue chemistry in real time during biopsy procedures, adding a valuable new set of parameters to augment and extend conventional image guidance. A first demonstrator integrating three chemical and biochemical sensors was tested in a mouse strain that serves as a spontaneous breast cancer model. In all cases, the multisensor probe was able to discriminate between healthy tissue, the edge of the tumor, and full insertion inside the cancerous tissue, recording real-time information about tissue metabolism.

  16. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, Peter G., E-mail: maginot1@llnl.gov; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim E., E-mail: morel@tamu.edu

    This work presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  17. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

    This paper presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  18. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE PAGES

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

    2016-09-29

    This paper presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  19. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with leapfrog to electromagnetic wave problems in the time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
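    The leapfrog update used as the propagator above can be sketched for a scalar oscillator. This is a generic illustration, not the paper's code (which applies leapfrog to the semi-discrete Maxwell system from the Finite Integration Technique); the function name is hypothetical.

```python
import math

def leapfrog(omega, u0, v0, h, n_steps):
    """Leapfrog (Stoermer-Verlet) for u'' = -omega^2 u.
    The velocity is staggered by half a time step, the usual
    layout in time-domain electromagnetic codes."""
    u = u0
    # half-step kick to stagger the velocity
    v = v0 - 0.5 * h * omega**2 * u0
    for _ in range(n_steps):
        u = u + h * v             # drift: advance the field
        v = v - h * omega**2 * u  # kick: advance its time derivative
    return u

# One full period of a unit oscillator returns close to the start.
u_end = leapfrog(omega=1.0, u0=1.0, v0=0.0, h=2 * math.pi / 1000, n_steps=1000)
```

The scheme is explicit and second-order accurate, but only conditionally stable (here, for omega*h < 2), which is what motivates combining it with ParaExp's subinterval parallelism.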

  20. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    NASA Astrophysics Data System (ADS)

    Song, Jong-Won; Hirao, Kimihiko

    2015-07-01

    We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the use of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that adopting a Gaussian HF exchange operator considerably decreases the calculation time for periodic systems while improving the reproducibility of semiconductor bandgaps. Here we present a distance-based screening scheme tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme reduces the time cost of the HF exchange integration by efficiently decreasing the number of integrals, specifically in the near-field region, without substantially changing the total energy. In our assessment on periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme has 1.56 times the time cost of a pure functional, whereas the previous Gau-PBE was 1.84 times and HSE06 was 3.34 times.

  1. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko, E-mail: hirao@riken.jp

    2015-07-14

    We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the use of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that adopting a Gaussian HF exchange operator considerably decreases the calculation time for periodic systems while improving the reproducibility of semiconductor bandgaps. Here we present a distance-based screening scheme tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme reduces the time cost of the HF exchange integration by efficiently decreasing the number of integrals, specifically in the near-field region, without substantially changing the total energy. In our assessment on periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme has 1.56 times the time cost of a pure functional, whereas the previous Gau-PBE was 1.84 times and HSE06 was 3.34 times.

  2. TOT measurement implemented in FPGA TDC

    NASA Astrophysics Data System (ADS)

    Fan, Huan-Huan; Cao, Ping; Liu, Shu-Bin; An, Qi

    2015-11-01

    Time measurement plays a crucial role in particle identification in high energy physics experiments. With increasingly demanding physics goals and the development of electronics, modern time measurement systems must deliver excellent resolution as well as a high level of integration. Based on Field Programmable Gate Arrays (FPGAs), FPGA time-to-digital converters (TDCs) have become one of the most mature and prominent time measurement methods in recent years. To correct the time-walk effect caused by leading-edge timing, a time-over-threshold (TOT) measurement should be added to the FPGA TDC. TOT can be obtained by measuring the interval between the signal's leading and trailing edges. Unfortunately, a traditional TDC can recognize only one kind of signal edge, either the leading or the trailing. Generally, to measure the interval, two TDC channels must be used at the same time, one for the leading edge and one for the trailing edge. However, this method unavoidably increases the amount of FPGA resources used and reduces the TDC's integration density. This paper presents a method of TOT measurement implemented in a Xilinx Virtex-5 FPGA, in which TOT measurement is achieved using only one TDC input channel while keeping both resource consumption and time resolution under control. Testing shows that this TDC can achieve a resolution better than 15 ps for leading-edge measurement and 37 ps for TOT measurement. Furthermore, the TDC measurement dead time is about two clock cycles, which makes it suitable for applications with higher physics event rates. Supported by National Natural Science Foundation of China (11079003, 10979003)

  3. A new aerodynamic integral equation based on an acoustic formula in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1984-01-01

    An aerodynamic integral equation for bodies moving at transonic and supersonic speeds is presented. Based on a time-dependent acoustic formula for calculating the noise emanating from the outer portion of a propeller blade travelling at high speed (the Ffowcs Williams-Hawkings formulation), the loading term and a conventional thickness source term are retained. Two surface and three line integrals are employed to solve an equation for the loading noise. The near-field term is regularized using the collapsing-sphere approach to obtain semiconvergence on the blade surface. A singular integral equation is thereby derived for the unknown surface pressure, and it is amenable to numerical solution using Galerkin or collocation methods. The technique is useful for studying nonuniform inflow to the propeller.

  4. Development and Application of Compatible Discretizations of Maxwell's Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D; Koning, J; Rieben, R

    We present the development and application of compatible finite element discretizations of electromagnetics problems derived from the time dependent, full wave Maxwell equations. We review the H(curl)-conforming finite element method, using the concepts and notations of differential forms as a theoretical framework. We chose this approach because it can handle complex geometries, it is free of spurious modes, it is numerically stable without the need for filtering or artificial diffusion, it correctly models the discontinuity of fields across material boundaries, and it can be very high order. Higher-order H(curl) and H(div) conforming basis functions are not unique, and we have designed an extensible C++ framework that supports a variety of specific instantiations of these, such as standard interpolatory bases, spectral bases, hierarchical bases, and semi-orthogonal bases. Virtually any electromagnetics problem that can be cast in the language of differential forms can be solved using our framework. For time dependent problems a method-of-lines scheme is used, where the Galerkin method reduces the PDE to a semi-discrete system of ODEs, which are then integrated in time using finite difference methods. For time integration of wave equations we employ the unconditionally stable implicit Newmark-Beta method, as well as the high order energy conserving explicit Maxwell symplectic method; for diffusion equations, we employ a generalized Crank-Nicolson method. We conclude with computational examples from resonant cavity problems, time-dependent wave propagation problems, and transient eddy current problems, all obtained using the authors' massively parallel computational electromagnetics code EMSolve.

  5. A parallel time integrator for noisy nonlinear oscillatory systems

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Sarkar, Abhijit

    2018-06-01

    In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the initial conditions required to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).

  6. On the Analysis Methods for the Time Domain and Frequency Domain Response of Buried Objects*

    NASA Astrophysics Data System (ADS)

    Poljak, Dragan; Šesnić, Silvestar; Cvetković, Mario

    2014-05-01

    There has been a continuous interest in the analysis of ground-penetrating radar systems and related applications in civil engineering [1]. Consequently, deeper insight into scattering phenomena occurring in a lossy half-space, as well as the development of sophisticated numerical methods based on the Finite Difference Time Domain (FDTD) method, the Finite Element Method (FEM), the Boundary Element Method (BEM), the Method of Moments (MoM), and various hybrid methods, is required, e.g. [2], [3]. The present paper deals with techniques for time domain and frequency domain analysis, respectively, of buried conducting and dielectric objects. Time domain analysis is related to the assessment of the transient response of a horizontal straight thin wire buried in a lossy half-space using a rigorous antenna theory (AT) approach. The AT approach is based on the space-time integral equation of the Pocklington type (the time domain electric field integral equation for thin wires). The influence of the earth-air interface is taken into account via the simplified reflection coefficient arising from the Modified Image Theory (MIT). The obtained results for the transient current induced along the electrode due to the transmitted plane wave excitation are compared to numerical results calculated via an approximate transmission line (TL) approach and via the AT approach based on the space-frequency variant of the Pocklington integro-differential equation, respectively. It is worth noting that the space-frequency Pocklington equation is solved numerically via the Galerkin-Bubnov variant of the Indirect Boundary Element Method (GB-IBEM), and the corresponding transient response is obtained with the aid of the inverse fast Fourier transform (IFFT). The results calculated by means of the different approaches agree satisfactorily.
    Frequency domain analysis is related to the assessment of the frequency domain response of a dielectric sphere using a full wave model based on a set of coupled electric field integral equations for surfaces. The numerical solution is carried out by means of an improved variant of the Method of Moments (MoM), providing a numerically stable and efficient procedure for the extraction of singularities arising in the integral expressions. The proposed analysis method is compared to results obtained using commercial software packages, and a satisfactory agreement has been achieved. Both approaches discussed throughout this work and demonstrated on canonical geometries could also be useful for benchmarking purposes. References: [1] L. Pajewski et al., Applications of Ground Penetrating Radar in Civil Engineering - COST Action TU1208, 2013. [2] U. Oguz, L. Gurel, Frequency Responses of Ground-Penetrating Radars Operating Over Highly Lossy Grounds, IEEE Trans. Geosci. and Remote Sensing, Vol. 40, No. 6, 2002. [3] D. Poljak, Advanced Modeling in Computational Electromagnetic Compatibility, John Wiley and Sons, New York, 2007. *This work benefited from networking activities carried out within the EU-funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar."

  7. Maintenance of host DNA integrity in field-preserved mosquito (Diptera: Culicidae) blood meals for identification by DNA barcoding.

    PubMed

    Reeves, Lawrence E; Holderman, Chris J; Gillett-Kaufman, Jennifer L; Kawahara, Akito Y; Kaufman, Phillip E

    2016-09-15

    Determination of the interactions between hematophagous arthropods and their hosts is a necessary component to understanding the transmission dynamics of arthropod-vectored pathogens. Current molecular methods to identify hosts of blood-fed arthropods require the preservation of host DNA to serve as an amplification template. During transportation to the laboratory and storage prior to molecular analysis, genetic samples need to be protected from nucleases, and the degradation effects of hydrolysis, oxidation and radiation. Preservation of host DNA contained in field-collected blood-fed specimens has an additional caveat: suspension of the degradative effects of arthropod digestion on host DNA. Unless effective preservation methods are implemented promptly after blood-fed specimens are collected, host DNA will continue to degrade. Preservation methods vary in their efficacy, and need to be selected based on the logistical constraints of the research program. We compared four preservation methods (cold storage at -20 °C, desiccation, ethanol storage of intact mosquito specimens and crushed specimens on filter paper) for field storage of host DNA from blood-fed mosquitoes across a range of storage and post-feeding time periods. The efficacy of these techniques in maintaining host DNA integrity was evaluated using a polymerase chain reaction (PCR) to detect the presence of a sufficient concentration of intact host DNA templates for blood meal analysis. We applied a logistic regression model to assess the effects of preservation method, storage time and post-feeding time on the binomial response variable, amplification success. Preservation method, storage time and post-feeding time all significantly impacted PCR amplification success. Filter papers and, to a lesser extent, 95 % ethanol, were the most effective methods for the maintenance of host DNA templates. Amplification success of host DNA preserved in cold storage at -20 °C and desiccation was poor. 
Our data suggest that, of the methods tested, host DNA template integrity was most stable when blood meals were preserved using filter papers. Filter paper preservation is effective over short- and long-term storage, while ethanol preservation is only suitable for short-term storage. Cold storage at -20 °C, and desiccation of blood meal specimens, even for short time periods, should be avoided.

  8. Free-Lagrange methods for compressible hydrodynamics in two space dimensions

    NASA Astrophysics Data System (ADS)

    Crowley, W. E.

    1985-03-01

    Since 1970 a research and development program in Free-Lagrange methods has been active at Livermore. The initial steps were taken with incompressible flows for simplicity. Since then the effort has been concentrated on compressible flows with shocks in two space dimensions and time. In general, the line integral method has been used to evaluate derivatives and the artificial viscosity method has been used to deal with shocks. Basically, two Free-Lagrange formulations for compressible flows in two space dimensions and time have been tested, and both will be described. In method one, all prognostic quantities were node centered and staggered in time, while the artificial viscosity was zone centered. One mesh reconnection philosophy was that the mesh should be optimized so that nearest neighbors were connected together; another was that vertex angles should tend toward equality. In method one, all mesh elements were triangles. In method two, both quadrilateral and triangular mesh elements are permitted. The mesh variables are staggered in space and time as suggested originally by Richtmyer and von Neumann. The mesh reconnection strategy is entirely different in method two: in contrast to the global nearest-neighbor strategy, a more local strategy reconnects in order to keep the integration time step above a user-chosen threshold. An additional strategy reconnects in the vicinity of large relative fluid motions. Mesh reconnection consists of two parts: (1) the tools that permit nodes to be merged, quads to be split into triangles, and so on; and (2) the strategy that dictates how and when to use the tools. Both tools and strategies change with time in a continuing effort to expand the capabilities of the method. New ideas are continually being tried and evaluated.

  9. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pairwise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better way to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
    In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it produces computationally more efficient results equivalent to those of the more costly radial distribution function method.
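    The fluctuation route to a Kirkwood-Buff integral described above can be sketched for a single species: a minimal illustration assuming snapshot particle counts in one fixed open subvolume. The function name is hypothetical, and the actual method additionally scales over a range of subvolume sizes and treats multi-component mixtures.

```python
def kb_integral_from_counts(counts, v):
    """Finite-subvolume estimate of the single-species Kirkwood-Buff
    integral from particle-number fluctuations in an open subvolume
    of volume v:

        G = v * (<N^2> - <N>^2) / <N>^2  -  v / <N>

    counts: number of particles observed in the subvolume in each
    simulation snapshot. For an ideal gas (Poisson counts, variance
    equal to mean) the two terms cancel and G -> 0."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return v * var / mean**2 - v / mean
```

This replaces the running integral over the radial distribution function with a simple counting statistic, which is the source of the method's efficiency noted in the abstract.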

  10. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids), whose gravity fields are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast, high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix-equivalence strategy increases efficiency and reduces memory consumption by calculating and storing the identical elements of each kernel matrix only once, while the adaptive discretization strategy improves accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimizations, and high accuracy is maintained no matter how close the computation points are to the source region. In addition, the algorithm reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
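    The tensor-product 3D Gauss-Legendre quadrature that tesseroid codes build on can be sketched with a hard-coded 3-point rule; this generic version works in Cartesian coordinates with hypothetical names, and the paper's adaptive discretization and kernel-equivalence strategies are not shown.

```python
# 3-point Gauss-Legendre nodes and weights on [-1, 1]
GL3 = [(-(0.6 ** 0.5), 5 / 9), (0.0, 8 / 9), (0.6 ** 0.5, 5 / 9)]

def glq3_3d(f, bounds):
    """Tensor-product 3-point Gauss-Legendre quadrature of f(x, y, z)
    over the box [(x0,x1), (y0,y1), (z0,z1)]; exact for polynomial
    integrands of degree <= 5 in each variable. Tesseroid codes apply
    the same rule in (r, theta, phi) with the gravity kernel as f."""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    total = 0.0
    for xi, wx in GL3:
        x = 0.5 * (x1 - x0) * xi + 0.5 * (x1 + x0)  # map node to [x0, x1]
        for yi, wy in GL3:
            y = 0.5 * (y1 - y0) * yi + 0.5 * (y1 + y0)
            for zi, wz in GL3:
                z = 0.5 * (z1 - z0) * zi + 0.5 * (z1 + z0)
                total += wx * wy * wz * f(x, y, z)
    # Jacobian of the affine map in each dimension is (b - a)/2
    return total * 0.125 * (x1 - x0) * (y1 - y0) * (z1 - z0)
```

The accuracy loss near the source region noted in the abstract arises because the fixed-order rule cannot resolve the near-singular kernel, which is what the adaptive subdivision of tesseroids remedies.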

  11. Emergence of Integrated Urology-Radiation Oncology Practices in the State of Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jhaveri, Pavan M.; Sun Zhuyi; Ballas, Leslie

    2012-09-01

    Purpose: Integrated urology-radiation oncology (RO) practices have been advocated as a means to improve community-based prostate cancer care by joining urologic and radiation care in a single-practice environment. However, little is known regarding the scope and actual physical integration of such practices. We sought to characterize the emergence of such practices in Texas, their extent of physical integration, and their potential effect on patient travel times for radiation therapy. Methods and Materials: A telephone survey identified integrated urology-RO practices, defined as practices owned by urologists that offer RO services. Geographic information software was used to determine the proximity of integrated urology-RO clinic sites with respect to the state's population. We calculated patient travel time and distance from each integrated urology-RO clinic offering urologic services to the RO treatment facility owned by the integrated practice and to the nearest nonintegrated (independent) RO facility. We compared these times and distances using the Wilcoxon-Mann-Whitney test. Results: Of 229 urology practices identified, 12 (5%) offered integrated RO services, and 182 (28%) of 640 Texas urologists worked in such practices. Approximately 53% of the state population resides within 10 miles of an integrated urology-RO clinic site. Patients with a diagnosis of prostate cancer at an integrated urology-RO clinic site travel a mean of 19.7 miles (26.1 min) from the clinic to reach the RO facility owned by the integrated urology-RO practice vs 5.9 miles (9.2 min) to reach the nearest nonintegrated RO facility (P<.001). Conclusions: Integrated urology-RO practices are common in Texas and are generally clustered in urban areas. In most integrated practices, the urology clinics and the integrated RO facilities are not at the same location, and driving times and distances from the clinic to the integrated RO facility exceed those from the clinic to the nearest nonintegrated RO facility.

  12. Combined Use of Integral Experiments and Covariance Data

    NASA Astrophysics Data System (ADS)

    Palmiotti, G.; Salvatores, M.; Aliberti, G.; Herman, M.; Hoblit, S. D.; McKnight, R. D.; Obložinský, P.; Talou, P.; Hale, G. M.; Hiruta, H.; Kawano, T.; Mattoon, C. M.; Nobre, G. P. A.; Palumbo, A.; Pigni, M.; Rising, M. E.; Yang, W.-S.; Kahler, A. C.

    2014-04-01

    In the framework of a US-DOE-sponsored project, ANL, BNL, INL and LANL have performed a joint multidisciplinary research activity to explore the combined use of integral experiments and covariance data, with the twin objectives of giving quantitative indications of possible improvements to the ENDF evaluated data files and of reducing crucial reactor design parameter uncertainties. Methods that have been developed over the last four decades for the purposes indicated above have been improved by new developments that also benefited from continuous exchanges with international groups working in similar areas. The major new developments that allowed significant progress are to be found in several specific domains: a) new science-based covariance data; b) integral experiment covariance data assessment and improved experiment analysis, e.g., of sample irradiation experiments; c) sensitivity analysis, where several improvements were necessary despite the generally good understanding of these techniques, e.g., to account for fission spectrum sensitivity; d) a critical approach to the analysis of statistical adjustment performance, both a priori and a posteriori; e) generalization of the assimilation method, now applied for the first time not only to multigroup cross section data but also to nuclear model parameters (the "consistent" method). This article describes the major results obtained in each of these areas; a large-scale nuclear data adjustment, based on the use of approximately one hundred high-accuracy integral experiments, is reported along with a significant example of the application of the new "consistent" method of data assimilation.

  13. The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Cui, S. T.; Cummings, P. T.; Cochran, H. D.

    This short commentary presents the results of long molecular dynamics calculations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides for the required simulation time in viscosity calculations. Also, the computational time required for viscosity calculations of these systems by the Green-Kubo method is compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero strain rate viscosity.
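
    As a rough illustration of the Green-Kubo route to viscosity, the sketch below integrates a stress autocorrelation function over time; the exponential correlation function and all parameter values are illustrative assumptions, not data from the simulations described above.

```python
# Sketch of Green-Kubo integration: the viscosity is proportional to the
# time integral of the stress autocorrelation function (SACF). Here the
# SACF is a synthetic exponential decay C(t) = C0 * exp(-t / tau); C0,
# tau and dt are illustrative placeholders, not values from the paper.
import math

def green_kubo_integral(acf, dt):
    """Trapezoidal time integral of a sampled autocorrelation function."""
    total = 0.0
    for i in range(len(acf) - 1):
        total += 0.5 * (acf[i] + acf[i + 1]) * dt
    return total

C0, tau, dt = 2.0, 1.5, 0.01          # arbitrary illustrative parameters
acf = [C0 * math.exp(-i * dt / tau) for i in range(5000)]
integral = green_kubo_integral(acf, dt)
# For an exponential SACF the exact integral is C0 * tau = 3.0
print(round(integral, 3))
```

    In a real calculation the SACF would come from the simulated stress tensor, and the long-time tail of the integral, not shown here, is what makes the required run length comparable to the rotational and diffusional relaxation times mentioned above.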

  14. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    NASA Astrophysics Data System (ADS)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

    In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within a pitch to obtain N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus difference (SMD) blur metric is applied to these slice images to extract the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
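
    The sum-modulus-difference style of blur metric mentioned above can be sketched as follows; the exact form of the metric and the toy images are assumptions for illustration, not the authors' implementation.

```python
def smd(image):
    """Sum-modulus-difference focus metric: larger values indicate a
    sharper (better-focused) slice. `image` is a 2-D list of grey levels."""
    rows, cols = len(image), len(image[0])
    total = 0
    for y in range(rows):
        for x in range(cols):
            if x + 1 < cols:                       # horizontal differences
                total += abs(image[y][x] - image[y][x + 1])
            if y + 1 < rows:                       # vertical differences
                total += abs(image[y][x] - image[y + 1][x])
    return total

sharp   = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]     # high-contrast edges
blurred = [[96, 128, 96], [128, 96, 128], [96, 128, 96]]
print(smd(sharp) > smd(blurred))   # the sharper slice scores higher
```

    Scanning such a metric across the reconstructed slice images and taking the maximum per region is one common way to turn a focus measure into a depth estimate.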

  15. Optical spatial integration methods for ambiguity function generation

    NASA Technical Reports Server (NTRS)

    Tamura, P. N.; Rebholz, J. J.; Daehlin, O. T.; Lee, T. C.

    1981-01-01

    A coherent optical spatial integration approach to ambiguity function generation is described. It uses one-dimensional acousto-optic Bragg cells as input transducers in conjunction with a space-variant linear phase shifter, a passive optical element, to generate the two-dimensional ambiguity function in one exposure. Results of a real-time implementation of this system are shown.

  16. Measurement of barrier tissue integrity with an organic electrochemical transistor.

    PubMed

    Jimison, Leslie H; Tria, Scherrine A; Khodagholy, Dion; Gurfinkel, Moshe; Lanzarini, Erica; Hama, Adel; Malliaras, George G; Owens, Róisín M

    2012-11-20

    The integration of an organic electrochemical transistor with human barrier tissue cells provides a novel method for assessing toxicology of compounds in vitro. Minute variations in paracellular ionic flux induced by toxic compounds are measured in real time, with unprecedented temporal resolution and extreme sensitivity. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on persistent storage of an integrated model. The integrated model refers to a set of MDDT modeling graphics systems that can describe aerospace general embedded software from multiple angles, at multiple levels and across multiple stages. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-user collaboration, separation of roles, and even real-time remote synchronized modeling.

  18. Determining integral density distribution in the mach reflection of shock waves

    NASA Astrophysics Data System (ADS)

    Shevchenko, A. M.; Golubev, M. P.; Pavlov, A. A.; Pavlov, Al. A.; Khotyanovsky, D. V.; Shmakov, A. S.

    2017-05-01

    We present a method for determining the field of integral density, together with results, for the flow structure corresponding to the Mach interaction of shock waves at Mach number M = 3. The optical diagnostics of the flow were performed using an interference technique based on self-adjusting Zernike filters (SA-AVT method). Numerical simulations were carried out using the CFS3D program package for solving the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density along the path of probing radiation in one direction of 3D flow transillumination in the region of Mach interaction of shock waves were obtained for the first time.

  19. Detection of monohydroxylated polycyclic aromatic hydrocarbons in urine and particulate matter using LC separations coupled with integrated SPE and fluorescence detection or coupled with high-resolution time-of-flight mass spectrometry.

    PubMed

    Lintelmann, Jutta; Wu, Xiao; Kuhn, Evelyn; Ritter, Sebastian; Schmidt, Claudia; Zimmermann, Ralf

    2018-05-01

    A high-performance liquid chromatographic (HPLC) method with integrated solid-phase extraction for the determination of 1-hydroxypyrene and 1-, 2-, 3-, 4- and 9-hydroxyphenanthrene in urine was developed and validated. After enzymatic treatment and centrifugation of 500 μL urine, 100 μL of the sample was directly injected into the HPLC system. Integrated solid-phase extraction was performed on a selective, copper phthalocyanine modified packing material. Subsequent chromatographic separation was achieved on a pentafluorophenyl core-shell column using a methanol gradient. For quantification, time-programmed fluorescence detection was used. Matrix-dependent recoveries were between 94.8 and 102.4%, repeatability and reproducibility ranged from 2.2 to 17.9% and detection limits lay between 2.6 and 13.6 ng/L urine. A set of 16 samples from normally exposed adults was analyzed using this HPLC-fluorescence detection method. Results were comparable with those reported in other studies. The chromatographic separation of the method was transferred to an ultra-high-performance liquid chromatography pentafluorophenyl core-shell column and coupled to a high-resolution time-of-flight mass spectrometer (HR-TOF-MS). The resulting method was used to demonstrate the applicability of LC-HR-TOF-MS for simultaneous target and suspect screening of monohydroxylated polycyclic aromatic hydrocarbons in extracts of urine and particulate matter. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, and first-order and second-order analytical theories, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
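
    The additive Holt-Winters prediction component named above can be sketched as follows; the smoothing constants, initialisation scheme and synthetic series are illustrative assumptions, not the hybrid propagator itself.

```python
# Minimal additive Holt-Winters sketch (level + trend + seasonal terms).
# Smoothing constants and the synthetic series are illustrative.
def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=1):
    """Fit on series y with season length m; return forecasts for `horizon`
    steps past the end of the data."""
    # Initialise from the first two seasons: level at the season midpoint,
    # trend from the change between season means, detrended seasonal indices.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - (level + (i - (m - 1) / 2) * trend) for i in range(m)]
    level += trend * (m - 1) / 2          # shift level to time m - 1
    for t in range(m, len(y)):
        s = season[t % m]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - new_level) + (1 - gamma) * s
        level = new_level
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(horizon)]

# A purely deterministic trend + seasonal series is reproduced exactly.
m = 4
series = [10.0 + 0.5 * t + [3.0, -1.0, -4.0, 2.0][t % m] for t in range(40)]
forecast = holt_winters_additive(series, m, horizon=4)
expected = [10.0 + 0.5 * t + [3.0, -1.0, -4.0, 2.0][t % m] for t in range(40, 44)]
print(all(abs(f - e) < 1e-6 for f, e in zip(forecast, expected)))
```

    In the hybrid scheme, the "series" being forecast would be the residual between the analytical propagation and observed dynamics, so the Holt-Winters component only has to model what the analytical theory misses.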

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.; Harrison, D. E. Jr.

    A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed that evaluates the interaction forces only once per time step. The algorithm is tested on some model problems that have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar⁺ bombardment of Cu and Si show good accuracy and improved speed compared with the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).

  2. Concept for an off-line gain stabilisation method.

    PubMed

    Pommé, S; Sibbens, G

    2004-01-01

    Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
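
    The 'exponentially moving average' stage of the proposed stabilisation can be sketched as below; the per-event indicator values are synthetic (a step change at event 500), and the Stieltjes-integral indicator itself is not reproduced.

```python
# Sketch of the 'exponentially moving average' stage of off-line gain
# stabilisation: a per-event gain-shift indicator is smoothed so that a
# drift shows up as a function of time (here, event index).
def ema(values, alpha=0.02):
    """Exponentially moving average of a per-event indicator sequence."""
    out, acc = [], values[0]
    for v in values:
        acc = alpha * v + (1 - alpha) * acc
        out.append(acc)
    return out

indicator = [0.0] * 500 + [1.0] * 500     # synthetic gain step of one unit
smoothed = ema(indicator)
print(round(smoothed[499], 3), round(smoothed[999], 3))   # → 0.0 1.0
```

    In the list-mode setting, each event carries an energy and a time stamp, so the smoothed indicator can be mapped back onto the time axis and used to relocate the data point by point, as the abstract describes.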

  3. High sensitivity leak detection method and apparatus

    DOEpatents

    Myneni, Ganapatic R.

    1994-01-01

    An improved leak detection method is provided that utilizes the cyclic adsorption and desorption of accumulated helium on a non-porous metallic surface. The method provides reliable leak detection at superfluid helium temperatures. The zero drift that is associated with residual gas analyzers in common leak detectors is virtually eliminated by utilizing a time integration technique. The sensitivity of the apparatus of this disclosure is capable of detecting leaks as small as 1×10⁻¹⁸ atm cc sec⁻¹.

  4. High sensitivity leak detection method and apparatus

    DOEpatents

    Myneni, G.R.

    1994-09-06

    An improved leak detection method is provided that utilizes the cyclic adsorption and desorption of accumulated helium on a non-porous metallic surface. The method provides reliable leak detection at superfluid helium temperatures. The zero drift that is associated with residual gas analyzers in common leak detectors is virtually eliminated by utilizing a time integration technique. The sensitivity of the apparatus of this disclosure is capable of detecting leaks as small as 1×10⁻¹⁸ atm cc sec⁻¹. 2 figs.

  5. New disinfection and sterilization methods.

    PubMed Central

    Rutala, W. A.; Weber, D. J.

    2001-01-01

    New disinfection methods include a persistent antimicrobial coating that can be applied to inanimate and animate objects (Surfacine), a high-level disinfectant with reduced exposure time (ortho-phthalaldehyde), and an antimicrobial agent that can be applied to animate and inanimate objects (superoxidized water). New sterilization methods include a chemical sterilization process for endoscopes that integrates cleaning (Endoclens), a rapid (4-hour) readout biological indicator for ethylene oxide sterilization (Attest), and a hydrogen peroxide plasma sterilizer that has a shorter cycle time and improved efficacy (Sterrad 50). PMID:11294738

  6. Understanding principles of integration and segregation using whole-brain computational connectomics: implications for neuropsychiatric disorders

    PubMed Central

    Lord, Louis-David; Stevner, Angus B.; Kringelbach, Morten L.

    2017-01-01

    To survive in an ever-changing environment, the brain must seamlessly integrate a rich stream of incoming information into coherent internal representations that can then be used to efficiently plan for action. The brain must, however, balance its ability to integrate information from various sources with a complementary capacity to segregate information into modules which perform specialized computations in local circuits. Importantly, evidence suggests that imbalances in the brain's ability to bind together and/or segregate information over both space and time are a common feature of several neuropsychiatric disorders. Until recently, however, most studies have attempted strictly to characterize the principles of integration and segregation in static (i.e. time-invariant) representations of human brain networks, hence disregarding the complex spatio-temporal nature of these processes. In the present Review, we describe how the emerging discipline of whole-brain computational connectomics may be used to study the causal mechanisms of the integration and segregation of information on behaviourally relevant timescales. We emphasize how novel methods from network science and whole-brain computational modelling can expand beyond traditional neuroimaging paradigms and help to uncover the neurobiological determinants of the abnormal integration and segregation of information in neuropsychiatric disorders. This article is part of the themed issue ‘Mathematical methods in medicine: neuroscience, cardiology and pathology’. PMID:28507228

  7. Comment on "Symplectic integration of magnetic systems" by Stephen D. Webb [J. Comput. Phys. 270 (2014) 570-576]

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangxi; Jia, Yuesong; Sun, Qizhi

    2015-02-01

    Webb [1] proposed a method for obtaining symplectic integrators of magnetic systems by Taylor expanding the discrete Euler-Lagrange equations (DEL), which result from the variational symplectic method upon taking the variation of the discrete action [2], and approximating the results to order O(h²), where h is the time step. In that paper, Webb regarded the integrators obtained by this method as symplectic; in particular, he treated the Boris integrator (BI) as symplectic. However, we have questions about Webb's results. Theoretically, the transformation of phase-space coordinates between two adjacent points induced by a symplectic algorithm should conserve a symplectic 2-form [2-5]. As proved in Refs. [2,3], the transformations induced by the standard symplectic integrator derived from the Hamiltonian and by the variational symplectic integrator (VSI) [2,6] derived from the Lagrangian conserve symplectic 2-forms. But the O(h²) approximation of the VSI obtained in that paper can hardly conserve a symplectic 2-form, contrary to the claim of [1]. In the next section, we will use the BI as an example to support our point and will prove that the BI is not a symplectic integrator but one that conserves the discrete phase-space volume.

  8. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    NASA Astrophysics Data System (ADS)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided of both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons, as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
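
    Why adaptive time stepping pays off in this setting can be sketched with a 1-D toy slowing-down problem; the collision frequency, parameter values and Euler-Maruyama discretisation below are illustrative assumptions, not the Beliaev-Budker operator of the paper.

```python
# Toy illustration of adaptive time stepping for Monte Carlo collisions:
# the slowing-down rate nu(v) of a fast test particle grows as it
# decelerates, so a fixed step tuned to the initial speed becomes too
# coarse later. 1-D Euler-Maruyama sketch with a made-up nu(v).
import math, random

def slow_down(v0, t_end, eps=0.05, dt_max=0.1, diffusion=1e-3, seed=1):
    rng = random.Random(seed)
    v, t, steps = v0, 0.0, 0
    while t < t_end and v > 0.1:
        nu = 1.0 / v**3                        # illustrative collision frequency
        dt = min(dt_max, eps / nu, t_end - t)  # resolve the local collision time
        dW = rng.gauss(0.0, math.sqrt(dt))
        v += -nu * v * dt + math.sqrt(2.0 * diffusion) * dW
        t += dt
        steps += 1
    return v, steps

v_final, steps = slow_down(v0=3.0, t_end=20.0)
print(v_final < 3.0)    # the particle has slowed down
```

    Choosing dt proportional to the local collision time keeps the per-step change in v small everywhere, which is the essence of the adaptive schemes the paper advocates; taking dt_max throughout would be either wasteful at high speed or inaccurate near thermalisation.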

  9. Reducing the risk of rear-end collisions with infrastructure-to-vehicle (I2V) integration of variable speed limit control and adaptive cruise control system.

    PubMed

    Li, Ye; Wang, Hao; Wang, Wei; Liu, Shanwen; Xiang, Yun

    2016-08-17

    Adaptive cruise control (ACC) has been investigated recently to explore ways to increase traffic capacity, stabilize traffic flow, and improve traffic safety. However, researchers have seldom studied the integration of ACC with roadside control methods such as the variable speed limit (VSL) to improve safety. The primary objective of this study was to develop an infrastructure-to-vehicle (I2V) integrated system that incorporates both ACC and VSL to reduce rear-end collision risks on freeways. The intelligent driver model was first modified to simulate ACC behavior, and then the VSL strategy used in this article was introduced. Next, the I2V system was proposed to integrate the 2 advanced techniques, ACC and VSL. Four scenarios of no control, VSL only, ACC only, and the I2V system were tested in simulation experiments. Time exposed time to collision (TET) and time integrated time to collision (TIT), 2 surrogate safety measures derived from time to collision (TTC), were used to evaluate safety issues associated with rear-end collisions. The total travel times of each scenario were also compared. The simulation results indicated that both the VSL-only and ACC-only methods had a positive impact on reducing the TET and TIT values (reductions of 53.0 and 58.6% for VSL and 59.0 and 65.3% for ACC, respectively). The I2V system combined the advantages of both ACC and VSL to achieve the greatest safety benefits (reductions of 71.5 and 77.3%, respectively). Sensitivity analysis of the TTC threshold also showed that the I2V system obtained the largest safety benefits for all TTC threshold values. The impact of different market penetration rates of ACC vehicles in the I2V system indicated that safety benefits increase with an increase in ACC proportions. Compared to the VSL-only and ACC-only scenarios, this integrated I2V system is more effective in reducing rear-end collision risks. The findings of this study provide useful information for traffic agencies seeking to implement novel techniques to improve safety on freeways.
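
    The two surrogate safety measures can be sketched directly from their usual definitions: TET counts the time spent below a time-to-collision threshold, and TIT accumulates how far below the threshold the TTC falls. The TTC profile and threshold value below are illustrative assumptions.

```python
def tet_tit(ttc_series, dt, ttc_threshold=2.0):
    """Time Exposed TTC (TET) and Time Integrated TTC (TIT) for one
    sampled TTC profile; dt is the sampling interval in seconds."""
    tet = tit = 0.0
    for ttc in ttc_series:
        if 0.0 < ttc <= ttc_threshold:
            tet += dt                            # time spent in conflict
            tit += (ttc_threshold - ttc) * dt    # severity-weighted exposure
    return tet, tit

dt = 0.5                                 # sampling interval [s]
ttc = [5.0, 4.0, 1.5, 1.0, 1.5, 4.0]     # synthetic TTC profile [s]
tet, tit = tet_tit(ttc, dt)
print(tet, tit)                          # → 1.5 1.0
```

    Summing these quantities over all simulated vehicles gives the scenario-level TET and TIT values whose percentage reductions are compared in the abstract.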

  10. Finite-time output feedback stabilization of high-order uncertain nonlinear systems

    NASA Astrophysics Data System (ADS)

    Jiang, Meng-Meng; Xie, Xue-Jun; Zhang, Kemei

    2018-06-01

    This paper studies the problem of finite-time output feedback stabilization for a class of high-order nonlinear systems with unknown output function and control coefficients. Under the weaker assumption that the output function is only continuous, by using the homogeneous domination method together with the adding-a-power-integrator method and introducing a new analysis method, the maximal open sector Ω of the output function is given. As long as the output function belongs to any closed sector included in Ω, an output feedback controller can be developed to guarantee global finite-time stability of the closed-loop system.

  11. A comparative study of shallow groundwater level simulation with three time series models in a coastal aquifer of South China

    NASA Astrophysics Data System (ADS)

    Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.

    2017-05-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer in South China. Monthly groundwater table depth data collected over a long time series from 2000 to 2011 are simulated and compared across the three time series models. The error criteria are estimated using the coefficient of determination (R²), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error (RMSE). The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparisons show that the HW model is more accurate in predicting groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can be used in turn to facilitate the development and implementation of more effective and sustainable groundwater management strategies.
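
    The three error criteria can be sketched in a few lines; the observation and simulation values below are synthetic, not data from the study.

```python
# The three error criteria used to compare the HW, ITS and SARIMA models:
# coefficient of determination (R²), Nash-Sutcliffe efficiency (E) and
# root-mean-squared error (RMSE).
import math

def error_criteria(obs, sim):
    n = len(obs)
    mean_obs = sum(obs) / n
    mean_sim = sum(sim) / n
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    cov = sum((o - mean_obs) * (s - mean_sim) for o, s in zip(obs, sim))
    var_sim = sum((s - mean_sim) ** 2 for s in sim)
    r2 = (cov ** 2) / (ss_tot * var_sim)     # squared Pearson correlation
    e = 1.0 - ss_res / ss_tot                # Nash-Sutcliffe efficiency
    rmse = math.sqrt(ss_res / n)
    return r2, e, rmse

obs = [3.1, 3.4, 3.0, 2.8, 3.3]              # synthetic water-table depths [m]
sim = [3.0, 3.5, 3.1, 2.9, 3.2]
r2, e, rmse = error_criteria(obs, sim)
print(round(r2, 3), round(e, 3), round(rmse, 3))   # → 0.795 0.781 0.1
```

    Note that E penalizes bias as well as scatter, so a model can have a high R² but a low E; reporting both, as the study does, guards against that ambiguity.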

  12. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an integrated technology that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation technique and a powerful tool for studying condensed matter systems. This technique not only yields the trajectories of atoms but also reveals the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure and the motion of particles, relate them to the macroscopic properties of the material, and study the relationship between interactions and macroscopic properties more conveniently. The Monte Carlo Simulation, similar to molecular dynamics, is a tool for studying the nature of micro-molecules and particles. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet method, the Leap-frog method and the Velocity Verlet method. The method and principle of the Monte Carlo Simulation are also introduced. Finally, the similarities and differences between the Monte Carlo Simulation and molecular dynamics simulation are discussed.
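
    The Velocity Verlet method summarized above can be sketched on a unit harmonic oscillator; the oscillator, step size and step count are illustrative choices. Verlet-family integrators are favoured in molecular dynamics because their energy error stays bounded over long runs.

```python
def velocity_verlet(x, v, accel, dt, n_steps):
    """Velocity Verlet: positions and velocities updated with one force
    evaluation per step (a_new is reused at the next step)."""
    a = accel(x)
    for _ in range(n_steps):
        x += v * dt + 0.5 * a * dt * dt      # position update
        a_new = accel(x)                     # force at the new position
        v += 0.5 * (a + a_new) * dt          # velocity from averaged force
        a = a_new
    return x, v

# Unit harmonic oscillator a(x) = -x, initial energy 0.5.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, n_steps=10000)
energy = 0.5 * v * v + 0.5 * x * x
print(abs(energy - 0.5) < 1e-4)   # energy stays near the initial value
```

    The Leap-frog method is algebraically equivalent but stores velocities at half steps; in a real MD code `accel` would evaluate the interatomic forces.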

  13. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation such as wind and solar power make power system dispatch difficult. In this paper, a multi-time scale robust scheduling method is proposed to solve this problem. It reduces the impact of clean energy prediction bias on the power grid by using multiple time scales (day-ahead, intraday, real-time) and coordinating the dispatch of various power sources such as hydropower, thermal power, wind power and gas power. The method adopts robust scheduling to ensure the robustness of the scheduling scheme. By costing wind curtailment and load shedding, it transforms robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.

  14. Real-time Feynman path integral with Picard–Lefschetz theory and its applications to quantum tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanizaki, Yuya, E-mail: yuya.tanizaki@riken.jp; Theoretical Research Division, Nishina Center, RIKEN, Wako 351-0198; Koike, Takayuki, E-mail: tkoike@ms.u-tokyo.ac.jp

    Picard–Lefschetz theory is applied to path integrals of quantum mechanics, in order to compute real-time dynamics directly. After discussing basic properties of real-time path integrals on Lefschetz thimbles, we demonstrate its computational method in a concrete way by solving three simple examples of quantum mechanics. It is applied to quantum mechanics of a double-well potential, and quantum tunneling is discussed. We identify all of the complex saddle points of the classical action, and their properties are discussed in detail. However, a significant theoretical difficulty turns out to appear in rewriting the original path integral into a sum of path integrals on Lefschetz thimbles. We discuss the generality of that problem and mention its importance. Real-time tunneling processes are shown to be described by those complex saddle points, and thus semi-classical description of real-time quantum tunneling becomes possible on solid ground if we could solve that problem. - Highlights: • Real-time path integral is studied based on Picard–Lefschetz theory. • Lucid demonstration is given through simple examples of quantum mechanics. • This technique is applied to quantum mechanics of the double-well potential. • Difficulty for practical applications is revealed, and we discuss its generality. • Quantum tunneling is shown to be closely related to complex classical solutions.

  15. Communication and control in an integrated manufacturing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Throne, Robert D.; Muthuswamy, Yogesh K.

    1987-01-01

    Typically, components in a manufacturing system are all centrally controlled. Due to possible communication bottlenecking, unreliability, and inflexibility caused by using a centralized controller, a new concept of system integration called an Integrated Multi-Robot System (IMRS) was developed. The IMRS can be viewed as a distributed real time system. Some of the current research issues being examined to extend the framework of the IMRS to meet its performance goals are presented. These issues include the use of communication coprocessors to enhance performance, the distribution of tasks and the methods of providing fault tolerance in the IMRS. An application example of real time collision detection, as it relates to the IMRS concept, is also presented and discussed.

  16. How to Integrate HIV and Sexual and Reproductive Health Services in Namibia, the Epako Clinic Case Study

    PubMed Central

    Forster, Norbert; Campuzano, Pedro; Kambapani, Rejoice; Brahmbhatt, Heena; Hidinua, Grace; Turay, Mohamed; Ikandi, Simon Kimathi; Kabongo, Leonard; Zariro, Farai

    2017-01-01

    Introduction: During the past two decades, HIV and Sexual and Reproductive Health (SRH) services in Namibia have been provided in silos, with high fragmentation. As a consequence, the quality and efficiency of services in Primary Health Care have been compromised. Methods: We conducted operational research (an observational pre-post study) in a public health facility in Namibia. A health facility assessment was conducted before and after the integration of health services. A person-centred integrated model was implemented to integrate all health services provided at the health facility in addition to HIV and SRH services. Comprehensive services are provided by each health worker to the same patients over time (longitudinality), on a daily basis (accessibility) and with a good external referral system (coordination). Time-flow and productivity analyses were performed. Results: Integrated services improved accessibility, stigma and the quality of antenatal care services by improving provider-patient communication, reducing the time that patients stay in the clinic by 16% and reducing waiting times by 14%. In addition, nurse productivity improved by 85% and the expected time in the health facility was reduced by 24% without compromising the uptake of TB, HIV, outpatient, antenatal care or first-visit family planning services. Given the success on many indicators resulting from integration of services, the goal of this paper is to describe "how" health services were integrated, the "process" followed, and some "results" from the integrated clinic. Conclusions: Our study shows that HIV and SRH services can be effectively integrated by following the person-centred integrated model. Based on the Namibian experience of "how" to integrate health services and the "process" to achieve it, other African countries can replicate the model to move away from the silo approach and contribute to the achievement of Universal Health Coverage. PMID:28970759

  17. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
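
    The first-kind Volterra equations at the heart of the method can be illustrated with a simple midpoint-rule solver on a textbook test case; this is a sketch of the general discretisation idea, not the authors' regularised procedure, and the kernel and right-hand side (K ≡ 1, g(t) = t, exact solution f ≡ 1) are chosen only so the answer is known.

```python
# Midpoint-rule discretisation of a Volterra integral equation of the
# first kind: integral_0^t K(t, s) f(s) ds = g(t). Solving row by row
# gives f at the midpoints s_j = (j + 1/2) h.
def volterra_first_kind(K, g, t_end, n):
    h = t_end / n
    f = []
    for i in range(1, n + 1):
        t = i * h
        # Contribution of the already-computed midpoint values.
        acc = sum(h * K(t, (j + 0.5) * h) * f[j] for j in range(i - 1))
        s_mid = (i - 0.5) * h
        f.append((g(t) - acc) / (h * K(t, s_mid)))
    return f

f = volterra_first_kind(lambda t, s: 1.0, lambda t: t, t_end=1.0, n=50)
print(max(abs(fi - 1.0) for fi in f) < 1e-9)
```

    First-kind equations are ill-posed in general, which is why the paper pairs the quadrature with a regularisation step; the benign kernel here avoids that issue.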

  18. Gamma ray spectroscopy employing divalent europium-doped alkaline earth halides and digital readout for accurate histogramming

    DOEpatents

    Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B; Sturm, Benjamin W

    2014-11-11

    A scintillator radiation detector system according to one embodiment includes a scintillator; and a processing device for processing pulse traces corresponding to light pulses from the scintillator, wherein pulse digitization is used to improve energy resolution of the system. A scintillator radiation detector system according to another embodiment includes a processing device for fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times and performing a direct integration of fit parameters. A method according to yet another embodiment includes processing pulse traces corresponding to light pulses from a scintillator, wherein pulse digitization is used to improve energy resolution of the system. A method in a further embodiment includes fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times; and performing a direct integration of fit parameters. Additional systems and methods are also presented.

  19. Off-Policy Integral Reinforcement Learning Method to Solve Nonlinear Continuous-Time Multiplayer Nonzero-Sum Games.

    PubMed

    Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai

    2017-03-01

    This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control, and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to perform policy evaluation and policy improvement within the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. A gradient descent algorithm updates the critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of the Nash equilibrium are proved. A simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
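For intuition, off-policy IRL can be viewed as a data-driven way of performing the classical model-based policy iteration without knowing the drift dynamics. The sketch below shows that underlying iteration (Kleinman's algorithm) for the single-player linear-quadratic special case; the system matrices are illustrative, not from the paper.

```python
import numpy as np

def lyap_solve(Ac, Qc):
    """Solve Ac.T @ P + P @ Ac = -Qc via Kronecker vectorization."""
    n = Ac.shape[0]
    I = np.eye(n)
    M = np.kron(I, Ac.T) + np.kron(Ac.T, I)
    P = np.linalg.solve(M, -Qc.reshape(n * n, order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)              # symmetrize against roundoff

# Hypothetical single-player system (open-loop stable, so K = 0 is admissible).
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.zeros((1, 2))
for _ in range(20):                     # policy iteration (Kleinman)
    Ac = A - B @ K                      # closed loop under current policy
    P = lyap_solve(Ac, Q + K.T @ R @ K) # policy evaluation
    K = np.linalg.solve(R, B.T @ P)     # policy improvement

# At convergence, P satisfies the algebraic Riccati equation.
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

The IRL method replaces the Lyapunov policy-evaluation step with integrals of measured trajectory data, which is what removes the need for A.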

  20. SENS-5D trajectory and wind-sensitivity calculations for unguided rockets

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Huang, L. C. P.; Cook, R. A.

    1975-01-01

    A computational procedure is described which numerically integrates the equations of motion of an unguided rocket. Three translational and two angular (roll discarded) degrees of freedom are integrated through final burnout; from then through impact, only the three translational motions are considered. Input to the routine consists of the initial time, altitude and velocity, vehicle characteristics, and other defined options. The input format has a wide range of flexibility for special calculations. Output is geared mainly to the wind-weighting procedure, and includes a summary of the trajectory at burnout, apogee and impact, a summary of spent-stage trajectories, detailed position and vehicle data, unit-wind effects for head, tail and cross winds, Coriolis deflections, the range derivative, and the sensitivity curves (the so-called F(Z) and DF(Z) curves). The numerical integration procedure is a fourth-order, modified Adams-Bashforth predictor-corrector method. It is supplemented by a fourth-order Runge-Kutta method to start the integration at t = 0 and whenever error criteria demand a change in step size.
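The combination described above, a fourth-order Adams predictor-corrector started by Runge-Kutta steps, can be sketched as follows; the scalar test equation y' = -y is illustrative only and has nothing to do with the SENS-5D equations of motion.

```python
import math

def rk4_step(f, t, y, h):
    """Classical fourth-order Runge-Kutta step (used as the starter)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, t0, y0, h, n_steps):
    """Fourth-order Adams-Bashforth predictor / Adams-Moulton corrector,
    started with three Runge-Kutta steps, as described for t = 0 above."""
    ts, ys = [t0], [y0]
    for _ in range(3):                      # RK4 starter
        ys.append(rk4_step(f, ts[-1], ys[-1], h))
        ts.append(ts[-1] + h)
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for n in range(3, n_steps):
        # predictor (AB4)
        yp = ys[n] + h / 24 * (55 * fs[n] - 59 * fs[n - 1]
                               + 37 * fs[n - 2] - 9 * fs[n - 3])
        t1 = ts[n] + h
        # corrector (AM4, one evaluation-correction pass)
        yc = ys[n] + h / 24 * (9 * f(t1, yp) + 19 * fs[n]
                               - 5 * fs[n - 1] + fs[n - 2])
        ts.append(t1); ys.append(yc); fs.append(f(t1, yc))
    return ts, ys

# Check on y' = -y, y(0) = 1: compare with exp(-1) at t = 1.
ts, ys = abm4(lambda t, y: -y, 0.0, 1.0, h=0.01, n_steps=100)
err = abs(ys[-1] - math.exp(-1.0))
```

The same restarting logic applies whenever the step size changes: the multistep history is discarded and rebuilt with Runge-Kutta steps.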

  1. Quadrature imposition of compatibility conditions in Chebyshev methods

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Streett, C. L.

    1990-01-01

    Often, in solving an elliptic equation with Neumann boundary conditions, a compatibility condition has to be imposed for well-posedness. This condition involves integrals of the forcing function. When pseudospectral Chebyshev methods are used to discretize the partial differential equation, these integrals have to be approximated by an appropriate quadrature formula. The Gauss-Chebyshev formula (or any variant of it, like the Gauss-Lobatto) cannot be used here, since the integrals under consideration do not include the weight function. A natural candidate for approximating the integrals is the Clenshaw-Curtis formula; however, it is shown that this is the wrong choice, and it may lead to divergence if time-dependent methods are used to march the solution to steady state. The correct quadrature formula is developed for these problems. This formula takes into account the degree of the polynomials involved. It is shown that this formula leads to a well-conditioned Chebyshev approximation to the differential equations and that the compatibility condition is automatically satisfied.
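For reference, the standard Clenshaw-Curtis nodes and weights (the natural candidate the paper argues against for these compatibility integrals) can be generated as in Trefethen's well-known clencurt routine. This is the generic formula, not the corrected quadrature developed in the paper.

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes and weights on [-1, 1] (n + 1 points),
    following Trefethen's clencurt.m."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n * n - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k * k - 1)
        v -= np.cos(n * theta[1:n]) / (n * n - 1)
    else:
        w[0] = w[n] = 1.0 / (n * n)
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k * k - 1)
    w[1:n] = 2.0 * v / n
    return x, w

# Spot check: integrate cos(x) on [-1, 1]; the exact value is 2*sin(1).
x, w = clencurt(16)
approx = np.dot(w, np.cos(x))
```

Note the weights contain no Chebyshev weight function, which is exactly why this rule, rather than Gauss-Chebyshev, comes up when approximating the unweighted compatibility integrals.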

  2. Very high frame rate volumetric integration of depth images on mobile devices.

    PubMed

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, however, remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates of up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.

  3. Provably secure time distribution for the electric grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith IV, Amos M; Evans, Philip G; Williams, Brian P

    We demonstrate a quantum time distribution (QTD) method that combines the precision of optical timing techniques with the integrity of quantum key distribution (QKD). Critical infrastructure is dependent on microprocessor- and programmable-logic-based monitoring and control systems. The distribution of timing information across the electric grid is accomplished by GPS signals, which are known to be vulnerable to spoofing. We demonstrate a method for synchronizing remote clocks based on the arrival time of photons in a modified QKD system. This has the advantage that the signal can be verified by examining the quantum states of the photons, similar to QKD.

  4. Geographic integration of hepatitis C virus: A global threat

    PubMed Central

    Daw, Mohamed A; El-Bouzedi, Abdallah A; Ahmed, Mohamed O; Dau, Aghnyia A; Agnan, Mohamed M; Drah, Aisha M

    2016-01-01

    AIM To assess hepatitis C virus (HCV) geographic integration, evaluate the spatial and temporal evolution of HCV worldwide and propose how to diminish its burden. METHODS A literature search of published articles was performed using PubMed, MEDLINE and other related databases up to December 2015. A critical data assessment and analysis regarding the epidemiological integration of HCV was carried out using the meta-analysis method. RESULTS The data indicated that HCV has been integrated immensely over time and through various geographical regions worldwide. The history of HCV goes back to 1535 but between 1935 and 1965 it exhibited a rapid, exponential spread. This integration is clearly seen in the geo-epidemiology and phylogeography of HCV. HCV integration can be mirrored either as intra-continental or trans-continental. Migration, drug trafficking and HCV co-infection, together with other potential risk factors, have acted as a vehicle for this integration. Evidence shows that the geographic integration of HCV has been important in the global and regional distribution of HCV. CONCLUSION HCV geographic integration is clearly evident and this should be reflected in the prevention and treatment of this ongoing pandemic. PMID:27878104

  5. Elimination of secular terms from the differential equations for the elements of perturbed two-body motion

    NASA Technical Reports Server (NTRS)

    Bond, Victor R.; Fraietta, Michael F.

    1991-01-01

    In 1961, Sperling linearized and regularized the differential equations of motion of the two-body problem by changing the independent variable from time to fictitious time by Sundman's transformation (r = dt/ds) and by embedding the two-body energy integral and the Laplace vector. In 1968, Burdet developed a perturbation theory which was uniformly valid for all types of orbits using a variation of parameters approach on the elements which appeared in Sperling's equations for the two-body solution. In 1973, Bond and Hanssen improved Burdet's set of differential equations by embedding the total energy (which is a constant when the potential function is not explicitly dependent upon time). The Jacobian constant was used as an element to replace the total energy in a reformulation of the differential equations of motion. In the process, another element which is proportional to a component of the angular momentum was introduced. Recently, trajectories computed during numerical studies of atmospheric entry from circular orbits and of low thrust beginning in near-circular orbits exhibited numerical instability when solved by the method of Bond and Gottlieb (1989) for long time intervals. It was found that this instability was due to secular terms which appear on the right-hand sides of the differential equations of some of the elements. In this paper, this instability is removed by the introduction of another vector integral called the delta integral (which replaces the Laplace vector) and another scalar integral which removes the secular terms. The introduction of these integrals requires a new derivation of the differential equations for most of the elements. For this rederivation, the Lagrange method of variation of parameters is used, making the development more concise. Numerical examples of this improvement are presented.

  6. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for the minimization of flow-matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods, which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  7. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10⁻³ seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10⁻⁵ seconds or 10⁻⁹ seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
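Two of the scheme's ingredients, Runge-Kutta stepping plus linear interpolation of the threshold-crossing time, can be sketched for a single leaky integrate-and-fire neuron with constant drive. Parameters are illustrative, and the postspike recalibration of Hansel et al. and Shelley is omitted for brevity.

```python
import math

# Leaky integrate-and-fire sketch: tau dV/dt = -(V - V_rest) + R*I,
# with threshold V_th and reset V_reset.  Parameters are illustrative.
tau, v_rest, v_th, v_reset, ri = 0.02, 0.0, 1.0, 0.0, 2.0

def dvdt(v):
    return (-(v - v_rest) + ri) / tau

def run(dt, t_end):
    """Heun (second-order Runge-Kutta) stepping with linear interpolation
    of the threshold crossing inside the step."""
    v, t, spikes = v_reset, 0.0, []
    while t < t_end:
        k1 = dvdt(v)
        k2 = dvdt(v + dt * k1)
        v_new = v + dt / 2 * (k1 + k2)
        if v_new >= v_th:
            # linear interpolant for the spike time within the step
            t_spike = t + dt * (v_th - v) / (v_new - v)
            spikes.append(t_spike)
            v_new = v_reset                 # reset after the spike
        v, t = v_new, t + dt
    return spikes

spikes = run(dt=1e-4, t_end=0.1)
# Analytic first spike time for this constant drive: tau * ln(RI / (RI - V_th))
t_exact = tau * math.log(ri / (ri - v_th))
```

Without the interpolation step, the spike time would only be known to within one full time-step, which is exactly the O(Δt) error the modified schemes were designed to avoid.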

  8. Fast integral methods for integrated optical systems simulations: a review

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.

    2015-09-01

    Boundary integral equation methods (BIM), or simply integral methods (IM), in the context of optical design and simulation are rigorous electromagnetic methods solving the Helmholtz or Maxwell equations on the boundary (the surface or interface of the structures between two materials) for scattering and/or diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods were among the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity, mainly owing to the good convergence of the integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods of D. Maystre at the Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods, such as differential and modal methods, suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization from the beginning to the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at the Karl Weierstrass Institute in Berlin under a contract with the Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: profiles with edges, overhanging non-functional profiles, very deep ones, very large ones compared to the wavelength, or simple smooth profiles. This integral method with either trigonometric or spline collocation, an iterative solver with O(N²) complexity, named IESMP, was significantly improved by an efficient mesh refinement, matrix preconditioning, the Ewald summation method, and an exponentially convergent quadrature in 2006 by G. Schmidt and A. Rathsfeld from the Weierstrass Institute (WIAS) Berlin. The so-called modified integral method (MIM) is a modification of the IEM of D. Maystre and was introduced by L. Goray in 1995. It was improved for weak convergence problems in 2001, and for a long time it was the only commercially available integral method, known as PCGRATE. All integral methods referenced so far treat in-plane diffraction only; no conical diffraction was possible. The first integral method for gratings in conical mounting was developed, and proven under very weak conditions, by G. Schmidt (WIAS) in 2010. It works for separated interfaces and for inclusions, as well as for interpenetrating interfaces and for a large number of thin and thick layers, in the same stable way. This very fast method has since been implemented for parallel processing under Unix and Windows operating systems. This work gives an overview of the most important BIMs for grating diffraction. It starts by presenting the historical evolution of the methods, highlights their advantages and differences, and gives insight into new approaches and their achievements. It addresses future open challenges at the end.

  9. Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests

    NASA Astrophysics Data System (ADS)

    Toth, G.; Keppens, R.; Botchev, M. A.

    1998-04-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
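The stability advantage claimed above is easiest to see on the scalar stiff test equation y' = λy, a generic illustration rather than one of the paper's hydrodynamical tests: forward Euler diverges once the step exceeds the explicit stability limit, while backward Euler remains stable for any step size.

```python
# Stiff test problem y' = lam * y with lam = -50: explicit Euler is
# unstable for dt > 2/50, while implicit (backward) Euler is stable
# for any dt.
lam, dt, n = -50.0, 0.1, 60

y_exp, y_imp = 1.0, 1.0
for _ in range(n):
    y_exp = y_exp + dt * lam * y_exp        # forward Euler
    y_imp = y_imp / (1.0 - dt * lam)        # backward Euler

# forward Euler amplifies by |1 + dt*lam| = 4 each step;
# backward Euler damps by 1/(1 - dt*lam) = 1/6 each step.
```

The price of the implicit update is, of course, a (generally nonlinear) solve per step, which is the trade-off the Versatile Advection Code tests quantify.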

  10. Identification and quantification of genetically modified Moonshade carnation lines using conventional and TaqMan real-time polymerase chain reaction methods.

    PubMed

    Li, Peng; Jia, Junwei; Bai, Lan; Pan, Aihu; Tang, Xueming

    2013-07-01

    Genetically modified carnation (Dianthus caryophyllus L.) Moonshade was approved for planting and commercialization in several countries beginning in 2004. Developing methods for analyzing Moonshade is necessary for implementing genetically modified organism labeling regulations. In this study, the 5'-transgene integration sequence was isolated using thermal asymmetric interlaced (TAIL)-PCR. Based upon the 5'-transgene integration sequence, conventional and TaqMan real-time PCR assays were established. The relative limit of detection for the conventional PCR assay was 0.05 % for Moonshade using 100 ng total carnation genomic DNA, corresponding to approximately 79 copies of the carnation haploid genome, and the limits of detection and quantification of the TaqMan real-time PCR assay were estimated to be 51 and 254 copies of haploid carnation genomic DNA, respectively. These results are useful for identifying and quantifying Moonshade and its derivatives.

  11. Real Time Monitoring System of Pollution Waste on Musi River Using Support Vector Machine (SVM) Method

    NASA Astrophysics Data System (ADS)

    Fachrurrozi, Muhammad; Saparudin; Erwin

    2017-04-01

    The real-time monitoring and early detection system that measures the quality of waste in the Musi River, Palembang, Indonesia, determines air and water pollution levels. It was designed to create an integrated monitoring system and to provide real-time information that can be read directly. It measures acidity and water turbidity caused by industrial waste, and shows and provides conditional data integrated in one system. The system consists of inputting and processing the data, and giving output based on the processed data. Turbidity, substance and pH sensors are used as detectors that produce an analog direct-current (DC) voltage. The early detection system works by determining threshold values for the ammonia, acidity and turbidity levels of water in the Musi River. The results are then classified into pollution-level groups by the Support Vector Machine classification method.

  12. Harmonic-phase path-integral approximation of thermal quantum correlation functions

    NASA Astrophysics Data System (ADS)

    Robertson, Christopher; Habershon, Scott

    2018-03-01

    We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.

  13. A Computer Program for the Computation of Running Gear Temperatures Using Green's Function

    NASA Technical Reports Server (NTRS)

    Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.

    1996-01-01

    A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time-dependent coefficients that can be solved for by using Runge-Kutta methods. The time integration is extremely efficient. Therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.

  14. High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2015-03-01

    In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.

  15. Fluid-structure interaction of turbulent boundary layer over a compliant surface

    NASA Astrophysics Data System (ADS)

    Anantharamu, Sreevatsa; Mahesh, Krishnan

    2016-11-01

    Turbulent flows induce unsteady loads on surfaces in contact with them, which affect material stresses, surface vibrations and far-field acoustics. We are developing a numerical methodology to study the coupled interaction of a turbulent boundary layer with the underlying surface. The surface is modeled as a linear elastic solid, while the fluid follows the spatially filtered incompressible Navier-Stokes equations. An incompressible Large Eddy Simulation finite volume flow approach based on the algorithm of Mahesh et al. is used in the fluid domain. The discrete kinetic energy conserving property of the method ensures robustness at high Reynolds number. The linear elastic model in the solid domain is integrated in space using the finite element method and in time using the Newmark time integration method. The fluid and solid domain solvers are coupled using both weak and strong coupling methods. Details of the algorithm, validation, and relevant results will be presented. This work is supported by NSWCCD, ONR.
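A minimal sketch of Newmark time integration, reduced to an undamped single-degree-of-freedom oscillator rather than the coupled FSI solver itself. With the average-acceleration parameters β = 1/4, γ = 1/2 the scheme is unconditionally stable and, for this linear undamped case, conserves the total energy.

```python
import math

def newmark_sdof(m, k, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark time integration of m*u'' + k*u = 0 (no damping, no load).
    beta = 1/4, gamma = 1/2 is the unconditionally stable
    average-acceleration variant."""
    u, v = u0, v0
    a = -k * u / m                          # initial acceleration
    k_eff = k + m / (beta * dt * dt)
    for _ in range(n_steps):
        rhs = m * (u / (beta * dt * dt) + v / (beta * dt)
                   + (0.5 / beta - 1.0) * a)
        u_new = rhs / k_eff                 # implicit displacement update
        a_new = ((u_new - u) / (beta * dt * dt)
                 - v / (beta * dt) - (0.5 / beta - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
    return u, v

# One period of a unit oscillator (omega = 1): u should return to ~1.
m, k = 1.0, 1.0
u, v = newmark_sdof(m, k, u0=1.0, v0=0.0, dt=2 * math.pi / 1000, n_steps=1000)
energy = 0.5 * m * v * v + 0.5 * k * u * u   # conserved by this variant
```

In the coupled setting the same update runs on the assembled finite element mass and stiffness matrices, with the fluid load entering the right-hand side at each step.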

  16. GEMPIC: geometric electromagnetic particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Kraus, Michael; Kormann, Katharina; Morrison, Philip J.; Sonnendrücker, Eric

    2017-08-01

    We present a novel framework for finite element particle-in-cell methods based on the discretization of the underlying Hamiltonian structure of the Vlasov-Maxwell system. We derive a semi-discrete Poisson bracket, which retains the defining properties of a bracket, anti-symmetry and the Jacobi identity, as well as conservation of its Casimir invariants, implying that the semi-discrete system is still a Hamiltonian system. In order to obtain a fully discrete Poisson integrator, the semi-discrete bracket is used in conjunction with Hamiltonian splitting methods for integration in time. Techniques from finite element exterior calculus ensure conservation of the divergence of the magnetic field and Gauss' law as well as stability of the field solver. The resulting methods are gauge invariant, feature exact charge conservation and show excellent long-time energy and momentum behaviour. Due to the generality of our framework, these conservation properties are guaranteed independently of a particular choice of the finite element basis, as long as the corresponding finite element spaces satisfy certain compatibility conditions.
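The role of Hamiltonian splitting can be illustrated on the simplest possible Hamiltonian system, a unit harmonic oscillator with H = T(p) + V(q); this is a toy sketch unrelated to the Vlasov-Maxwell discretization itself. Each sub-flow is solved exactly, and the Strang composition is symplectic, so the energy error stays bounded over very long times instead of drifting.

```python
# Hamiltonian splitting sketch: H = T(p) + V(q) for a unit harmonic
# oscillator.  Each sub-flow is integrated exactly; the Strang
# (kick-drift-kick) composition is symplectic.
def strang_step(q, p, dt):
    p -= 0.5 * dt * q          # half kick:  exact flow of V(q) = q^2/2
    q += dt * p                # full drift: exact flow of T(p) = p^2/2
    p -= 0.5 * dt * q          # half kick
    return q, p

q, p = 1.0, 0.0
dt = 0.05
e0 = 0.5 * (q * q + p * p)
max_drift = 0.0
for _ in range(200000):        # roughly 1600 oscillation periods
    q, p = strang_step(q, p, dt)
    e = 0.5 * (q * q + p * p)
    max_drift = max(max_drift, abs(e - e0))
```

A non-symplectic scheme of the same order would show secular energy growth over this many periods; the bounded `max_drift` here is the discrete analogue of the long-time energy behaviour cited above.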

  17. Inverse identification of unknown finite-duration air pollutant release from a point source in urban environment

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.

    2018-05-01

    In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating only as many backward adjoint equations as there are available measurement stations. This resulted in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
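The correlation-maximization step can be illustrated on synthetic one-dimensional data: slide a simulated concentration template over the observed series and pick the start time with the highest Pearson correlation. Signal shapes and numbers below are hypothetical, not from the ADREA-HF runs.

```python
import numpy as np

# Synthetic "simulated" concentration template and a noisy "observed"
# series that is the same template delayed by an unknown start time.
rng = np.random.default_rng(0)
t = np.arange(200)
template = np.exp(-0.5 * ((t - 30) / 8.0) ** 2)   # simulated puff signal

true_start = 57
observed = np.roll(template, true_start) + 0.001 * rng.standard_normal(t.size)

def best_start(observed, template, max_shift=120):
    """Return the shift maximizing the Pearson correlation between the
    observed series and the shifted template."""
    scores = [np.corrcoef(observed, np.roll(template, s))[0, 1]
              for s in range(max_shift)]
    return int(np.argmax(scores))

estimated = best_start(observed, template)
```

In the actual method the template comes from the adjoint/forward model and the search runs jointly over candidate locations and release parameters, but the selection criterion is this same correlation maximum.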

  18. Slave finite elements: The temporal element approach to nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Gellin, S.

    1984-01-01

    A formulation method for finite elements in space and time incorporating nonlinear geometric and material behavior is presented. The method uses interpolation polynomials for approximating the behavior of various quantities over the element domain, and only explicit integration over space and time. While applications are general, the plate and shell elements that are currently being programmed are appropriate to model turbine blades, vanes, and combustor liners.

  19. Module-based construction of plasmids for chromosomal integration of the fission yeast Schizosaccharomyces pombe

    PubMed Central

    Kakui, Yasutaka; Sunaga, Tomonari; Arai, Kunio; Dodgson, James; Ji, Liang; Csikász-Nagy, Attila; Carazo-Salas, Rafael; Sato, Masamitsu

    2015-01-01

    Integration of an external gene into a fission yeast chromosome is useful to investigate the effect of the gene product. An easy way to knock-in a gene construct is use of an integration plasmid, which can be targeted and inserted to a chromosome through homologous recombination. Despite the advantage of integration, construction of integration plasmids is energy- and time-consuming, because there is no systematic library of integration plasmids with various promoters, fluorescent protein tags, terminators and selection markers; therefore, researchers are often forced to make appropriate ones through multiple rounds of cloning procedures. Here, we establish materials and methods to easily construct integration plasmids. We introduce a convenient cloning system based on Golden Gate DNA shuffling, which enables the connection of multiple DNA fragments at once: any kind of promoters and terminators, the gene of interest, in combination with any fluorescent protein tag genes and any selection markers. Each of those DNA fragments, called a ‘module’, can be tandemly ligated in the order we desire in a single reaction, which yields a circular plasmid in a one-step manner. The resulting plasmids can be integrated through standard methods for transformation. Thus, these materials and methods help easy construction of knock-in strains, and this will further increase the value of fission yeast as a model organism. PMID:26108218

  20. Characterization of HBV integration patterns and timing in liver cancer and HBV-infected livers.

    PubMed

    Furuta, Mayuko; Tanaka, Hiroko; Shiraishi, Yuichi; Unida, Takuro; Imamura, Michio; Fujimoto, Akihiro; Fujita, Masahi; Sasaki-Oku, Aya; Maejima, Kazuhiro; Nakano, Kaoru; Kawakami, Yoshiiku; Arihiro, Koji; Aikata, Hiroshi; Ueno, Masaki; Hayami, Shinya; Ariizumi, Shun-Ichi; Yamamoto, Masakazu; Gotoh, Kunihito; Ohdan, Hideki; Yamaue, Hiroki; Miyano, Satoru; Chayama, Kazuaki; Nakagawa, Hidewaki

    2018-05-18

    Integration of hepatitis B virus (HBV) into the human genome can cause genetic instability, leading to selective advantages for HBV-induced liver cancer. Despite the large number of studies of HBV integration in liver cancer, little is known about the mechanism of initial HBV integration events, owing to the limitations of materials and detection methods. We conducted an HBV sequence capture, followed by ultra-deep sequencing, to screen for HBV integrations in 111 liver samples from human-hepatocyte chimeric mice with HBV infection and human clinical samples, including 42 paired samples from non-tumorous and tumorous liver tissues. The HBV infection model using chimeric mice verified the efficiency of our HBV-capture analysis and demonstrated that HBV integration could occur 23 to 49 days after HBV infection, via microhomology-mediated end joining and predominantly in mitochondrial DNA. Overall, HBV integration sites in clinical samples were significantly enriched in regions annotated as exhibiting open chromatin, a high level of gene expression, and early replication timing in liver cells. These data indicate that HBV integration in liver tissue was biased according to chromatin accessibility, with additional selection pressures in the gene promoters of tumor samples. Moreover, an integrative analysis using paired non-tumorous and tumorous samples and HBV-related transcriptional changes revealed the involvement of TERT and MLL4 in clonal selection. We also found frequent HBV integrations in FN1, specific to non-tumorous liver, together with an HBV-FN1 fusion transcript. This extensive survey of HBV integrations facilitates and improves the understanding of the timing and biology of HBV integration during infection and HBV-related hepatocarcinogenesis.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, S.

    This report describes the use of several subroutines from the CORLIB core mathematical subroutine library for the solution of a model fluid flow problem. The model consists of the Euler partial differential equations. The equations are spatially discretized using the method of pseudo-characteristics. The resulting system of ordinary differential equations is then integrated using the method of lines. The stiff ordinary differential equation solver LSODE (2) from CORLIB is used to perform the time integration. The non-stiff solver ODE (4) is used to perform a related integration. The linear equation solver subroutines DECOMP and SOLVE are used to solve linear systems whose solutions are required in the calculation of the time derivatives. The monotone cubic spline interpolation subroutines PCHIM and PCHFE are used to approximate water properties. The report describes the use of each of these subroutines in detail. It illustrates the manner in which modules from a standard mathematical software library such as CORLIB can be used as building blocks in the solution of complex problems of practical interest. 9 refs., 2 figs., 4 tabs.

  2. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    NASA Astrophysics Data System (ADS)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize high-speed and precise control of a maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover and the corner coil, are established. To obtain a complete electromagnetic model, the coil is divided into two segments: the straight coil segment and the corner coil segment. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration over the two segments can be carried out according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to handle the corner segment. As validated by simulation and experiment, the proposed model has high accuracy and can easily be put to practical use.
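
    The compound (composite) Simpson rule invoked above is standard quadrature; a minimal sketch follows, where the integrand and interval are stand-ins for illustration, not the paper's actual Lorentz-force expression:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite (compound) Simpson's rule on [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # Interior nodes alternate weights 4, 2, 4, 2, ...
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Illustrative check: integrate sin(x) over [0, pi]; the exact value is 2.
approx = composite_simpson(math.sin, 0.0, math.pi, 200)
```

The rule is fourth-order accurate, which is why a modest number of subintervals suffices for smooth integrands like the corner-segment force terms.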

  3. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
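
    The Metropolis-plus-importance-sampling machinery described above can be sketched generically. This toy estimates a one-dimensional Gaussian-weighted integral rather than the paper's 20-dimensional MP3 integrands; the weight, integrand, and step size are illustrative assumptions:

```python
import math
import random

def metropolis_samples(logw, x0, step, n, burn=1000, seed=0):
    """Draw n samples from the density proportional to exp(logw(x))
    using random-walk Metropolis, discarding an initial burn-in."""
    rng = random.Random(seed)
    x, lw = x0, logw(x0)
    out = []
    for i in range(n + burn):
        xp = x + rng.uniform(-step, step)
        lwp = logw(xp)
        if math.log(rng.random()) < lwp - lw:  # accept/reject
            x, lw = xp, lwp
        if i >= burn:
            out.append(x)
    return out

# Importance-sampling estimate of I = integral of x^2 e^{-x^2} dx over the real
# line. The weight w(x) = e^{-x^2} has normalization sqrt(pi), so
# I ~= mean(x^2 over Metropolis samples) * sqrt(pi); exact value sqrt(pi)/2.
xs = metropolis_samples(lambda x: -x * x, 0.0, 1.0, 200_000)
estimate = sum(x * x for x in xs) / len(xs) * math.sqrt(math.pi)
```

The same pattern, with the Green's-function traces as the integrand and a suitable weight, is what makes the diagram sums amenable to direct Monte Carlo evaluation.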

  4. Optical device terahertz integration in a two-dimensional-three-dimensional heterostructure.

    PubMed

    Feng, Zhifang; Lin, Jie; Feng, Shuai

    2018-01-10

    The transmission properties of an off-planar integrated circuit including two wavelength division demultiplexers are designed, simulated, and analyzed in detail by the finite-difference time-domain method. The results show that wavelength selection for different ports (0.404[c/a] at the B2 port, 0.389[c/a] at the B3 port, and 0.394[c/a] at the B4 port) can be realized by adjusting the parameters. Notably, off-planar integration between two complex devices is also realized. These simulation results hold valuable promise for all-optical integrated circuits, especially for compact integration.

  5. Response of MDOF strongly nonlinear systems to fractional Gaussian noises.

    PubMed

    Deng, Mao-Lin; Zhu, Wei-Qiu

    2016-08-01

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  6. Response of MDOF strongly nonlinear systems to fractional Gaussian noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Mao-Lin; Zhu, Wei-Qiu, E-mail: wqzhu@zju.edu.cn

    2016-08-15

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  7. Realization of a multipath ultrasonic gas flowmeter based on transit-time technique.

    PubMed

    Chen, Qiang; Li, Weihua; Wu, Jiangtao

    2014-01-01

    A microcomputer-based ultrasonic gas flowmeter using the transit-time method is presented. The modules of the flowmeter are designed systematically, including the acoustic path arrangement, the ultrasound emission and reception module, the transit-time measurement module, and the software. Four 200 kHz transducers forming two acoustic paths are used to send and receive ultrasound simultaneously. Synchronizing the transducers eliminates the influence of the inherent switching time in a single-chord flowmeter. The distribution of the acoustic paths on the mechanical apparatus follows the tailored integration scheme, which can reduce the inherent error by 2-3% compared with the Gaussian integration commonly used in ultrasonic flowmeters. This work also develops timing modules to determine the flight time of the acoustic signal. The timing mechanism differs from the traditional method: the timing circuit adopts the high-performance TDC-GP2 chip, with a typical resolution of 50 ps. LabVIEW software is used to receive data from the circuit and calculate the gas flow value. Finally, the two-path flowmeter has been calibrated and validated on the air-flow test facilities of the Shaanxi Institute of Measurement & Testing. Copyright © 2013 Elsevier B.V. All rights reserved.
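
    The transit-time principle the flowmeter relies on reduces to a one-line formula per chord; a sketch with illustrative numbers (the path length, angle, and sound speed below are assumptions, not the instrument's actual geometry):

```python
import math

def path_velocity(t_down, t_up, path_len, angle_deg):
    """Average flow velocity along one acoustic chord from the downstream
    (with-flow) and upstream (against-flow) transit times, via the standard
    transit-time relation v = L / (2 cos(theta)) * (1/t_down - 1/t_up),
    which cancels the (unknown) speed of sound."""
    return path_len / (2.0 * math.cos(math.radians(angle_deg))) \
        * (1.0 / t_down - 1.0 / t_up)

# Synthetic check: sound speed c = 343 m/s, flow v = 10 m/s,
# chord length L = 0.2 m at 45 degrees to the pipe axis.
L, theta, c, v = 0.2, 45.0, 343.0, 10.0
axial = v * math.cos(math.radians(theta))   # flow component along the chord
t_dn = L / (c + axial)                       # downstream transit time
t_up = L / (c - axial)                       # upstream transit time
v_est = path_velocity(t_dn, t_up, L, theta)  # recovers v
```

In a multipath meter, the per-chord velocities are then combined with quadrature weights (tailored or Gaussian, as compared in the abstract) to estimate the mean pipe velocity.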

  8. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

    This paper discusses the design of an optimal preview controller for a linear continuous-time stochastic control system in a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, because the state equation of the stochastic control system cannot be differentiated due to Brownian motion, an integrator is introduced, and the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. This transforms the tracking problem of optimal preview control for the linear stochastic system into the optimal output tracking problem for the augmented error system. Using dynamic programming from stochastic control theory, the optimal controller with previewable signals for the augmented error system, which is equivalent to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

  9. Real-time electron density measurements from Cotton-Mouton effect in JET machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brombin, M.; Electrical Engineering Department, Padova University, via Gradenigo 6-A, 35131 Padova; Boboc, A.

    Real-time density profile measurements are essential for advanced fusion tokamak operation and interferometry is a proven method for this task. Nevertheless, as a consequence of edge localized modes, pellet injections, fast density increases, or disruptions, the interferometer is subject to fringe jumps, which produce loss of the signal, preventing reliable use of the measured density in a real-time feedback controller. An alternative method to measure the density is polarimetry based on the Cotton-Mouton effect, which is proportional to the line-integrated electron density. A new analysis approach has been implemented and tested to verify the reliability of the Cotton-Mouton measurements for a wide range of plasma parameters and to compare the density evaluated from polarimetry with that from interferometry. The density measurements based on polarimetry are going to be integrated in the real-time control system of JET, since the difference with interferometry is within one fringe for more than 90% of the cases.

  10. Problem based learning: the effect of real time data on the website to student independence

    NASA Astrophysics Data System (ADS)

    Setyowidodo, I.; Pramesti, Y. S.; Handayani, A. D.

    2018-05-01

    Although science learning has developed as an integrative science rather than a disciplinary education, national character development has not yet succeeded in forming more creative and independent Indonesians. Problem Based Learning based on real-time data on a website is a learning method that focuses on developing higher-order thinking skills in problem-oriented situations by integrating technology into learning. The essence of this study is the presentation of authentic problems through real-time data on a website. The purpose of this research is to develop student independence through Problem Based Learning based on real-time data on a website. This is development research, implemented using a purposive sampling technique. The study finds an increase in student self-reliance, with 47% of students in the very high category and 53% in the high category. This learning method can therefore be considered effective in improving students' learning independence in problem-oriented situations.

  11. Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions

    NASA Technical Reports Server (NTRS)

    Pilon, Anthony R.; Lyrintzis, Anastasios S.

    1997-01-01

    The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that they may be used in any aeroacoustics problem.

  12. Direct power control of DFIG wind turbine systems based on an intelligent proportional-integral sliding mode control.

    PubMed

    Li, Shanzhi; Wang, Haoping; Tian, Yang; Aitouch, Abdel; Klein, John

    2016-09-01

    This paper presents an intelligent proportional-integral sliding mode control (iPISMC) for direct power control of a variable-speed constant-frequency wind turbine system. This approach deals with optimal power production (in the maximum power point tracking sense) under several disturbance factors, such as turbulent wind. The controller is made of two sub-components: (i) an intelligent proportional-integral module for online disturbance compensation and (ii) a sliding mode module for circumventing disturbance estimation errors. The iPISMC method has been tested on the FAST/Simulink platform with a 5 MW wind turbine model. The obtained results demonstrate that the proposed iPISMC method outperforms both classical PI and intelligent proportional-integral (iPI) control in terms of active power and response time. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
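
    As a generic illustration of the sliding-mode idea (not the authors' iPISMC wind-turbine controller), a first-order plant with a bounded unknown disturbance can be made to track a reference using a boundary-layer switching term; the plant, gains, and disturbance below are all illustrative assumptions:

```python
import math

def simulate_smc(T=5.0, dt=1e-3, K=2.0, lam=5.0):
    """Minimal sliding-mode tracking sketch: plant x' = u + d(t) with a bounded
    unknown disturbance d. The control u = r' - lam*e - K*sat(e/phi) drives the
    tracking error e = x - r toward a small neighborhood of zero despite d,
    provided K exceeds the disturbance bound. Returns |e(T)|."""
    phi = 0.05  # boundary layer softening the switch (reduces chattering)
    r = lambda t: math.sin(t)              # reference trajectory
    rdot = lambda t: math.cos(t)
    d = lambda t: 0.8 * math.sin(5.0 * t)  # unknown disturbance, |d| <= 0.8 < K
    x, t = 0.0, 0.0
    while t < T:
        e = x - r(t)
        sat = max(-1.0, min(1.0, e / phi))  # saturated switching function
        u = rdot(t) - lam * e - K * sat
        x += dt * (u + d(t))                # forward-Euler plant update
        t += dt
    return abs(x - r(T))

err = simulate_smc()
```

The residual error is bounded by roughly phi*d_max/(lam*phi + K), which is the price paid for replacing the discontinuous sign function with a boundary layer; the iPI module in the paper plays the complementary role of estimating and cancelling the disturbance online.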

  13. Parareal algorithms with local time-integrators for time fractional differential equations

    NASA Astrophysics Data System (ADS)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

    It is challenging to design parareal algorithms for time-fractional differential equations because of the history effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time across processes. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy for the coarse grid correction in which the auxiliary variables and the solution variable are corrected separately, in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
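
    For reference, the classical (non-fractional) parareal iteration that the paper builds on can be sketched for a scalar ODE, using forward Euler as both the coarse and fine propagators; the test problem and step counts are illustrative choices:

```python
import math

def parareal(f, y0, t0, t1, n_slices, fine_steps, coarse_steps=1, iters=5):
    """Classical parareal for y' = f(t, y) (scalar), with forward Euler as both
    the coarse propagator G (few steps per slice) and the fine propagator F
    (many steps). This is the standard algorithm, not the paper's
    fractional-operator variant."""
    def euler(y, ta, tb, m):
        h = (tb - ta) / m
        t = ta
        for _ in range(m):
            y = y + h * f(t, y)
            t += h
        return y

    ts = [t0 + (t1 - t0) * i / n_slices for i in range(n_slices + 1)]
    U = [y0]                      # initial guess: coarse sweep alone
    for i in range(n_slices):
        U.append(euler(U[-1], ts[i], ts[i + 1], coarse_steps))
    for _ in range(iters):
        # Fine solves on each slice are independent (parallel in practice).
        F = [euler(U[i], ts[i], ts[i + 1], fine_steps) for i in range(n_slices)]
        G_old = [euler(U[i], ts[i], ts[i + 1], coarse_steps) for i in range(n_slices)]
        newU = [y0]               # serial coarse correction sweep
        for i in range(n_slices):
            g_new = euler(newU[-1], ts[i], ts[i + 1], coarse_steps)
            newU.append(g_new + F[i] - G_old[i])
        U = newU
    return U

# Decay problem y' = -y, y(0) = 1; the endpoint should approach exp(-1).
U = parareal(lambda t, y: -y, 1.0, 0.0, 1.0, n_slices=10, fine_steps=100)
```

The history dependence of a fractional operator breaks the slice independence of the fine solves, which is exactly the load-balancing problem the paper's localized auxiliary-variable formulation addresses.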

  14. On-line estimation and compensation of measurement delay in GPS/SINS integration

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Wang, Wei

    2008-10-01

    The chief aim of this paper is to propose a simple on-line estimation and compensation method for GPS/SINS measurement delay. The causes of time delay in GPS/SINS integration are analyzed. New Kalman filter state equations, augmented by the measurement delay, and modified measurement equations are derived. Several simulations based on an open-loop Kalman filter show that, with the proposed method, the estimation and compensation error of the measurement delay is below 0.1 s.

  15. An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Saumil S.; Fischer, Paul F.; Min, Misun

    In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.

  16. Bayesian correlated clustering to integrate multiple datasets

    PubMed Central

    Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.

    2012-01-01

    Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558

  17. Integral equation approach to time-dependent kinematic dynamos in finite domains

    NASA Astrophysics Data System (ADS)

    Xu, Mingtian; Stefani, Frank; Gerbeth, Gunter

    2004-11-01

    The homogeneous dynamo effect is at the root of cosmic magnetic field generation. With only a very few exceptions, the numerical treatment of homogeneous dynamos is carried out in the framework of the differential equation approach. The present paper tries to facilitate the use of integral equations in dynamo research. Apart from the pedagogical value to illustrate dynamo action within the well-known picture of the Biot-Savart law, the integral equation approach has a number of practical advantages. The first advantage is its proven numerical robustness and stability. The second and perhaps most important advantage is its applicability to dynamos in arbitrary geometries. The third advantage is its intimate connection to inverse problems relevant not only for dynamos but also for technical applications of magnetohydrodynamics. The paper provides the first general formulation and application of the integral equation approach to time-dependent kinematic dynamos, with stationary dynamo sources, in finite domains. The time dependence is restricted to the magnetic field, whereas the velocity or corresponding mean-field sources of dynamo action are supposed to be stationary. For the spherically symmetric α2 dynamo model it is shown how the general formulation is reduced to a coupled system of two radial integral equations for the defining scalars of the poloidal and toroidal field components. The integral equation formulation for spherical dynamos with general stationary velocity fields is also derived. Two numerical examples—the α2 dynamo model with radially varying α and the Bullard-Gellman model—illustrate the equivalence of the approach with the usual differential equation method. The main advantage of the method is exemplified by the treatment of an α2 dynamo in rectangular domains.

  18. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    NASA Technical Reports Server (NTRS)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.

  19. Optical isolation based on space-time engineered asymmetric photonic band gaps

    NASA Astrophysics Data System (ADS)

    Chamanara, Nima; Taravati, Sajjad; Deck-Léger, Zoé-Lise; Caloz, Christophe

    2017-10-01

    Nonreciprocal electromagnetic devices play a crucial role in modern microwave and optical technologies. Conventional methods for realizing such systems are incompatible with integrated circuits. With recent advances in integrated photonics, the need for efficient on-chip magnetless nonreciprocal devices has become more pressing than ever. This paper leverages space-time engineered asymmetric photonic band gaps to generate optical isolation. It shows that a properly designed space-time modulated slab is highly reflective/transparent for opposite directions of propagation. The corresponding design is magnetless, accommodates low modulation frequencies, and can achieve very high isolation levels. An experimental proof of concept at microwave frequencies is provided.

  20. Range of sound levels in the outdoor environment

    Treesearch

    Lewis S. Goodfriend

    1977-01-01

    Current methods of measuring and rating noise in a metropolitan area are examined, including real-time spectrum analysis and sound-level integration, producing a single-number value representing the noise impact for each hour or each day. Methods of noise rating for metropolitan areas are reviewed, and the various measures from multidimensional rating methods such as...

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Chakraborty, Sudipta; Lauss, Georg

    This paper presents a concise description of state-of-the-art real-time simulation-based testing methods and demonstrates how they can be used independently and/or in combination as an integrated development and validation approach for smart grid DERs and systems. A three-part case study demonstrating the application of this integrated approach at the different stages of development and validation of a system-integrated smart photovoltaic (PV) inverter is also presented. Laboratory testing results and perspectives from two international research laboratories are included in the case study.

  2. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
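
    The super time-stepping idea can be sketched with the first-order Runge-Kutta-Legendre scheme (RKL1), the simpler relative of the RKL2 method discussed above, whose stages follow the Legendre-polynomial recurrence; the 1D heat-equation setup and stage count here are illustrative:

```python
import math

def rkl1_step(u, dt, lap, s):
    """One s-stage RKL1 super time-step for u' = lap(u). The Legendre-polynomial
    stage recursion is stable for time-steps up to (s^2 + s)/2 times the
    explicit Euler limit, at first-order temporal accuracy."""
    mu_t1 = 2.0 / (s * s + s)
    y_prev2 = list(u)
    y_prev1 = [ui + mu_t1 * dt * li for ui, li in zip(u, lap(u))]
    for j in range(2, s + 1):
        mu = (2.0 * j - 1.0) / j          # Legendre recurrence coefficients
        nu = (1.0 - j) / j
        lp = lap(y_prev1)
        y = [mu * a + nu * b + mu * mu_t1 * dt * l
             for a, b, l in zip(y_prev1, y_prev2, lp)]
        y_prev2, y_prev1 = y_prev1, y
    return y_prev1

# Heat equation u_t = u_xx on [0, pi] with u = 0 at both ends;
# u(x, 0) = sin(x) decays as e^{-t} sin(x).
n = 64
dx = math.pi / n

def lap(u):
    out = [0.0] * (n + 1)
    for i in range(1, n):
        out[i] = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
    return out

dt_expl = 0.5 * dx * dx   # explicit Euler stability limit
s = 8                     # stable up to (s^2 + s)/2 = 36 times dt_expl
dt = 20.0 * dt_expl       # far beyond the explicit limit, still stable
u = [math.sin(i * dx) for i in range(n + 1)]
t = 0.0
while t < 1.0 - 1e-12:
    step = min(dt, 1.0 - t)
    u = rkl1_step(u, step, lap, s)
    t += step
err = max(abs(ui - math.exp(-1.0) * math.sin(i * dx)) for i, ui in enumerate(u))
```

The trade-off the paper quantifies at scale is visible even here: each super time-step costs s Laplacian evaluations but advances the solution roughly (s^2 + s)/2 explicit steps, with no linear solves or global communication beyond nearest neighbors.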

  3. A Galerkin Boundary Element Method for two-dimensional nonlinear magnetostatics

    NASA Astrophysics Data System (ADS)

    Brovont, Aaron D.

    The Boundary Element Method (BEM) is a numerical technique for solving partial differential equations that is used broadly among the engineering disciplines. The main advantage of this method is that one needs only to mesh the boundary of a solution domain. A key drawback is the myriad of integrals that must be evaluated to populate the full system matrix. To this day these integrals have been evaluated using numerical quadrature. In this research, a Galerkin formulation of the BEM is derived and implemented to solve two-dimensional magnetostatic problems with a focus on accurate, rapid computation. To this end, exact, closed-form solutions have been derived for all the integrals comprising the system matrix as well as those required to compute fields in post-processing; the need for numerical integration has been eliminated. It is shown that calculation of the system matrix elements using analytical solutions is 15-20 times faster than with numerical integration of similar accuracy. Furthermore, through the example analysis of a c-core inductor, it is demonstrated that the present BEM formulation is a competitive alternative to the Finite Element Method (FEM) for linear magnetostatic analysis. Finally, the BEM formulation is extended to analyze nonlinear magnetostatic problems via the Dual Reciprocity Method (DRBEM). It is shown that a coarse, meshless analysis using the DRBEM is able to achieve RMS error of 3-6% compared to a commercial FEM package in lightly saturated conditions.

  4. A third-order implicit discontinuous Galerkin method based on a Hermite WENO reconstruction for time-accurate solution of the compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Liu, Xiaodong; Luo, Hong

    2015-06-01

    Here, a space and time third-order discontinuous Galerkin method based on a Hermite weighted essentially non-oscillatory reconstruction is presented for the unsteady compressible Euler and Navier–Stokes equations. At each time step, a lower-upper symmetric Gauss–Seidel preconditioned generalized minimal residual solver is used to solve the systems of linear equations arising from an explicit first stage, single diagonal coefficient, diagonally implicit Runge–Kutta time integration scheme. The performance of the developed method is assessed through a variety of unsteady flow problems. Numerical results indicate that this method is able to deliver the designed third-order accuracy of convergence in both space and time, while requiring remarkably less storage than standard third-order discontinuous Galerkin methods, and less computing time than lower-order discontinuous Galerkin methods to achieve the same level of temporal accuracy for computing unsteady flow problems.

  5. A new method for calculating differential distributions directly in Mellin space

    NASA Astrophysics Data System (ADS)

    Mitov, Alexander

    2006-12-01

    We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, like Laporta. General features of the method are: (1) faster reduction, (2) smaller number of master integrals compared to the usual z-space approach and (3) the master integrals satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables like the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.

  6. Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance

    NASA Technical Reports Server (NTRS)

    Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.

    2016-01-01

    Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and determining the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine if a planned method is adequate to meet system requirements and manage contamination risk.

  7. 49 CFR Appendix E to Part 192 - Guidance on Determining High Consequence Areas and on Carrying out Requirements in the Integrity...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... addressing time dependent and independent threats for a transmission pipeline operating below 30% SMYS not in... pipeline system are covered for purposes of the integrity management program requirements, an operator must... system, or an operator may apply one method to individual portions of the pipeline system. (Refer to...

  8. An improved semi-implicit method for structural dynamics analysis

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1982-01-01

    A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
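    Park's algorithm itself hinges on the diagonal augmentation described above; as a generic stand-in illustrating semi-implicit time stepping (not the paper's scheme), the sketch below applies symplectic (semi-implicit) Euler to an undamped single-degree-of-freedom oscillator. Updating the position with the freshly updated velocity keeps the discrete energy bounded, unlike fully explicit Euler.

```python
def semi_implicit_euler(m, k, x0, v0, dt, steps):
    """Symplectic (semi-implicit) Euler for m*x'' + k*x = 0."""
    x, v = x0, v0
    for _ in range(steps):
        v = v - dt * (k / m) * x   # velocity update uses current position
        x = x + dt * v             # position update uses the NEW velocity
    return x, v

# Roughly one period of a unit oscillator (omega = 1): energy stays
# near 0.5 instead of drifting as it would with explicit Euler.
x, v = semi_implicit_euler(m=1.0, k=1.0, x0=1.0, v0=0.0, dt=0.001, steps=6283)
energy = 0.5 * v ** 2 + 0.5 * x ** 2
```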

  9. Pulling It Together: Using Integrative Assignments as Empirical Direct Measures of Student Learning for Learning Community Program Assessment

    ERIC Educational Resources Information Center

    Huerta, Juan Carlos; Sperry, Rita

    2013-01-01

    This article outlines a systematic and manageable method for learning community program assessment based on collecting empirical direct measures of student learning. Developed at Texas A&M University--Corpus Christi where all full-time, first-year students are in learning communities, the approach ties integrative assignment design to a rubric…

  10. The Toda lattice as a forced integrable system

    NASA Technical Reports Server (NTRS)

    Hansen, P. J.; Kaup, D. J.

    1985-01-01

    The analytic properties of the Jost functions for the inverse scattering transform associated with the forced Toda lattice are shown to determine the time evolution of this particular boundary value problem. It is suggested that inverse scattering methods may be used generally to analyze forced integrable systems. Thus an extension of the applicability of the inverse scattering transform is indicated.

  11. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

    Dynamic optimisation problems with characteristic times arise widely in many areas and are an active frontier of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. A formula for the state at the terminal time of each subdomain is derived as a linear combination of the state at the LG points in the subdomain, avoiding a complicated nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail with methods reported in the literature. The results show the effectiveness of the proposed method.
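    The terminal-state formula expresses the state at the end of a subdomain as a linear combination of values at the LG points. A minimal numerical sketch (with an illustrative integrand, not one of the paper's benchmark problems) using NumPy's Legendre-Gauss nodes:

```python
import numpy as np

# Legendre-Gauss nodes and quadrature weights on [-1, 1]
n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)

# For x'(t) = f(t), the terminal state is
#   x(1) = x(-1) + sum_i w_i * f(tau_i),
# a linear combination of derivative values at the LG points, with no
# explicit nonlinear integral -- the idea behind the terminal-state formula.
f = lambda t: np.cos(t)          # known dynamics; exact solution is sin(t)
x_minus1 = np.sin(-1.0)          # initial state
x_plus1 = x_minus1 + weights @ f(nodes)
```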

  12. Integrated Design of a Telerobotic Workstation

    NASA Technical Reports Server (NTRS)

    Rochlis, Jennifer L.; Clarke, John-Paul

    2001-01-01

    The experiments described in this paper are part of a larger joint MIT/NASA research effort that focuses on the development of a methodology for designing and evaluating integrated interfaces for highly dexterous and multi-functional telerobots. Specifically, a telerobotic workstation is being designed for an Extravehicular Activity (EVA) anthropomorphic space station telerobot. Previous researchers have designed telerobotic workstations based upon performance of discrete subsets of tasks (for example, peg-in-hole, tracking, etc.) without regard for the transitions that operators go through between tasks performed sequentially in the context of larger integrated tasks. The exploratory research experiments presented here took an integrated approach and assessed how subjects operating a full-immersion telerobot perform during the transitions between sub-tasks of two common EVA tasks. Preliminary results show that subjects spend up to 30% of total task time gaining and maintaining Situation Awareness (SA) of the task space and environment during transitions. Although task performance improves over the two trial days, the percentage of time spent on SA remains the same. This method identifies areas where workstation displays and feedback mechanisms are most needed to increase operator performance and decrease operator workload, areas that previous research methods have not been able to address.

  13. Numerical integration and optimization of motions for multibody dynamic systems

    NASA Astrophysics Data System (ADS)

    Aguilar Mayans, Joan

    This thesis considers the optimization and simulation of motions involving rigid body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space to which we map existing real world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid body dynamics. We analyze the accuracy and stability of the different integrators in the three-dimensional space and the rotation space SO(3). Integrators with an accuracy higher than first order perform more efficiently than integrators with first order accuracy, even in the presence of contacts. The third part uses Finite-time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that timescale separation causes for alternative solution methods.
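    The second part's finding that higher-order integrators are more efficient than first-order ones can be illustrated on a scalar test equation (a generic sketch, unrelated to the thesis's rigid-body code): at the same step count, explicit midpoint (second order) beats forward Euler (first order) by orders of magnitude.

```python
import numpy as np

def integrate(f, y0, t1, steps, method):
    """Fixed-step integration of y' = f(y) from t = 0 to t1."""
    y, h = y0, t1 / steps
    for _ in range(steps):
        if method == "euler":                      # first-order accurate
            y = y + h * f(y)
        else:                                      # explicit midpoint, second order
            y = y + h * f(y + 0.5 * h * f(y))
    return y

f = lambda y: -y                                   # exact solution y(t) = exp(-t)
exact = np.exp(-1.0)
err_euler = abs(integrate(f, 1.0, 1.0, 100, "euler") - exact)
err_mid = abs(integrate(f, 1.0, 1.0, 100, "midpoint") - exact)
```

    With 100 steps the midpoint error is smaller by roughly the ratio of the step size, mirroring the thesis's observation that higher-order schemes pay off even at modest cost per step.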

  14. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables to the 4D location of each computational particle. Trivial interpolation algorithms usually implicitly treat the integral value as a point value valid at the grid centre. If the integral were then reconstructed from the interpolated point values, it would in general not be reproduced correctly. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; the method is also not very accurate as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, with linear interpolation applied between them in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions.
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
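    The idea of inserting supporting subgrid points so that linear interpolation reproduces each interval's integral can be sketched as follows. This is a simplified stand-in for the FLEXPART algorithm: the edge values f_edge are prescribed freely here (the paper derives them from additional conditions), and the monotonicity filter is omitted.

```python
import numpy as np

def conservative_midpoint_remap(P, dt, f_edge):
    """Given per-interval integrals P[i] over intervals of length dt and
    prescribed continuous edge values f_edge (length len(P)+1), insert one
    midpoint node per interval so that piecewise-linear interpolation
    reproduces each P[i]. The area of interval i as two trapezoids is
    dt/4 * (f_edge[i] + 2*f_mid[i] + f_edge[i+1]); solve that for f_mid."""
    P = np.asarray(P, dtype=float)
    f_edge = np.asarray(f_edge, dtype=float)
    return (4.0 * P / dt - f_edge[:-1] - f_edge[1:]) / 2.0

P = np.array([2.0, 0.5, 1.0])            # integrated precipitation per interval
dt = 1.0
f_edge = np.array([1.0, 1.0, 0.0, 0.5])  # chosen edge values (continuity)
f_mid = conservative_midpoint_remap(P, dt, f_edge)

# integral of each piecewise-linear interval (two trapezoids per interval)
areas = dt / 4.0 * (f_edge[:-1] + 2.0 * f_mid + f_edge[1:])
```

    Each interval integral is recovered exactly; with poorly chosen edge values f_mid can go negative, which is where the paper's non-negativity conditions and filter come in.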

  15. Machine remaining useful life prediction: An integrated adaptive neuro-fuzzy and high-order particle filtering approach

    NASA Astrophysics Data System (ADS)

    Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.

    2012-04-01

    Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated into a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated using real-world data from a seeded fault test on a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor in which the fault growth model is a first-order model trained via ANFIS.
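    A bootstrap particle filter with p-step-ahead propagation can be sketched in a few lines. The linear-growth state model below is a hypothetical stand-in for the ANFIS-learned fault-progression model, and all noise levels and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_predict(z_obs, n_particles=2000, p_ahead=5):
    """Bootstrap particle filter for a scalar fault indicator with a
    linear growth model (stand-in for the learned model), followed by
    p-step-ahead propagation of the particle cloud."""
    x = rng.normal(0.0, 0.1, n_particles)                   # initial particles
    for z in z_obs:
        x = x + 0.1 + rng.normal(0.0, 0.05, n_particles)    # propagate
        w = np.exp(-0.5 * ((z - x) / 0.1) ** 2)             # measurement likelihood
        w /= w.sum()
        x = rng.choice(x, n_particles, p=w)                 # resample
    for _ in range(p_ahead):                                # p-step-ahead prediction
        x = x + 0.1 + rng.normal(0.0, 0.05, n_particles)
    return x

obs = 0.1 * np.arange(1, 11)          # synthetic fault-indicator measurements
pred = particle_filter_predict(obs)   # particle cloud approximating the pdf
```

    The spread of the predicted particle cloud is what the RUL pdf is built from once a failure threshold is fixed.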

  16. Investigation for connecting waveguide in off-planar integrated circuits.

    PubMed

    Lin, Jie; Feng, Zhifang

    2017-09-01

    The transmission properties of a vertical waveguide connecting different devices in off-planar integrated circuits are investigated and analyzed in detail by the finite-difference time-domain method. The results show that both the guide bandwidth and the transmission efficiency can be adjusted effectively by shifting the vertical waveguide continuously. Surprisingly, a wide guide band (0.385[c/a]∼0.407[c/a]) and good transmission (-6 dB) are observed simultaneously in several directions when the vertical waveguide is located at a specific position. These results are very important for all-optical integrated circuits, especially in compact integration.

  17. Master equations and the theory of stochastic path integrals

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a ‘generating functional’, which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a ‘forward’ and a ‘backward’ path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. 
Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
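    As a concrete companion to the formalism, the forward master equation of a simple birth-death process can be sampled exactly with Gillespie's algorithm (a standard illustration, not taken from the review); its stationary distribution is Poisson with mean k_birth/k_death.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_birth_death(k_birth, k_death, t_end):
    """Exact stochastic simulation of the master equation for the
    birth-death process 0 -> A (rate k_birth), A -> 0 (rate k_death * n)."""
    t, n = 0.0, 0
    while t < t_end:
        a_birth, a_death = k_birth, k_death * n
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)      # waiting time to next event
        if rng.random() < a_birth / a_total:     # choose which reaction fires
            n += 1
        else:
            n -= 1
    return n

# sample the stationary distribution; mean should approach k_birth/k_death = 10
samples = [gillespie_birth_death(10.0, 1.0, 30.0) for _ in range(200)]
mean_n = np.mean(samples)
```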

  18. Master equations and the theory of stochastic path integrals.

    PubMed

    Weber, Markus F; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. 
Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.

  19. 45 CFR 225.2 - State plan requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: (1) Such methods of recruitment and selection as will offer opportunity for full-time or part-time... personnel of which subprofessional staff are an integral part; (3) A career service plan permitting persons... provide for: (1) A position in which rests responsibility for the development, organization, and...

  20. Satellite image fusion based on principal component analysis and high-pass filtering.

    PubMed

    Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E

    2010-06-01

    This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
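    A minimal NumPy sketch of the HPF injection step (not the paper's integrated PCA+HPF pipeline; the box filter and random test images are illustrative):

```python
import numpy as np

def hpf_pansharpen(ms_band, pan, kernel_size=3):
    """High-pass-filtering fusion sketch: add the high-frequency part of
    the HR pan image to the (already upsampled) MS band."""
    k = kernel_size
    pad = k // 2
    padded = np.pad(pan, pad, mode="edge")
    # box-filter low-pass of the pan image (sum of k*k shifted windows)
    low = sum(padded[i:i + pan.shape[0], j:j + pan.shape[1]]
              for i in range(k) for j in range(k)) / (k * k)
    return ms_band + (pan - low)      # inject only the high-pass detail

rng = np.random.default_rng(2)
pan = rng.random((16, 16))            # synthetic HR pan image
ms = np.full((16, 16), 0.5)           # flat stand-in for an upsampled MS band
fused = hpf_pansharpen(ms, pan)
```

    Because only zero-mean high-frequency detail is injected, the band's overall radiometry (its spectral content) is largely preserved, which is the family's advantage noted above.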

  1. Creep and shrinkage effects on integral abutment bridges

    NASA Astrophysics Data System (ADS)

    Munuswamy, Sivakumar

    Integral abutment bridges provide bridge engineers an economical design alternative to traditional bridges with expansion joints, owing to the benefits arising from the elimination of expensive joint installation and reduced maintenance costs. The superstructure of an integral abutment bridge is cast integrally with the abutments. Time-dependent effects of creep, shrinkage of concrete, relaxation of prestressing steel, temperature gradients, restraints provided by the abutment foundation and backfill, and the statical indeterminacy of the structure introduce time-dependent variations in the redundant forces. An analytical model and numerical procedure to predict the instantaneous linear behavior and the nonlinear time-dependent long-term behavior of the continuous composite superstructure are developed, in which the redundant forces in the integral abutment bridge are derived considering the time-dependent effects. The redistribution of moments due to time-dependent effects has been considered in the analysis. The analysis includes nonlinearity due to cracking of the concrete, as well as the time-dependent deformations. American Concrete Institute (ACI) and American Association of State Highway and Transportation Officials (AASHTO) models for creep and shrinkage are used to model the time-dependent material behavior. The variations in material properties across the cross-section corresponding to the constituent materials are incorporated, and the age-adjusted effective modulus method with a relaxation procedure is followed to include the creep behavior of concrete. The partial restraint provided by the abutment-pile-soil system is modeled using discrete spring stiffnesses for the translational and rotational degrees of freedom. Numerical simulation of the behavior is carried out on continuous composite integral abutment bridges, and the deformations and stresses due to time-dependent effects under typical sustained loads are computed. 
The results from the analytical model are compared with the published laboratory experimental and field data. The behavior of the laterally loaded piles supporting the integral abutments is evaluated and presented in terms of the lateral deflection, bending moment, shear force and stress along the pile depth.
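    The age-adjusted effective modulus method mentioned above reduces the concrete modulus to account for creep under gradually varying stress. A one-line sketch (with an assumed aging coefficient χ; the dissertation's actual coefficients and the ACI/AASHTO creep functions are not reproduced):

```python
def age_adjusted_modulus(E, phi, chi=0.8):
    """Age-adjusted effective modulus E_adj = E / (1 + chi * phi), where
    phi is the creep coefficient and chi the aging coefficient."""
    return E / (1.0 + chi * phi)

# 30 GPa concrete with creep coefficient 2.0 behaves under sustained,
# gradually varying stress like a much softer elastic material
E_adj = age_adjusted_modulus(30.0e3, 2.0)   # MPa
```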

  2. Optimization of processing parameters of UAV integral structural components based on yield response

    NASA Astrophysics Data System (ADS)

    Chen, Yunsheng

    2018-05-01

    In order to improve the overall strength of unmanned aerial vehicles (UAVs), the machining parameters of UAV structural components must be optimized, since machining is affected by the initial residual stress in the workpiece. Because machining errors occur easily, an optimization model for the machining parameters of UAV integral structural components based on yield response is proposed. The finite element method is used to simulate the machining of UAV integral structural components. A prediction model for workpiece surface machining error is established, and the influence of the tool path on the residual stress of the UAV integral structure is studied according to the stress state of the component. The yield response of the time-varying stiffness is analyzed, together with the stress evolution mechanism of the UAV integral structure. The simulation results show that the method optimizes the machining parameters of UAV integral structural components and improves milling precision. Machining error is reduced, and deformation prediction and error compensation for UAV integral structural parts are realized, thus improving machining quality.

  3. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DOE PAGES

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe; ...

    2017-04-07

    In this study, the increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed “unsteady-state flux balance analysis” (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic metabolic flux states for red blood cells, platelets, and Saccharomyces cerevisiae. Notably, only uFBA predicts that stored red blood cells metabolize TCA intermediates to regenerate important cofactors, such as ATP, NADH, and NADPH. These pathway usage predictions were subsequently validated through 13C isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.
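    The contrast between FBA's steady-state constraint S·v = 0 and a uFBA-style relaxation S·v = dx/dt can be sketched with a three-reaction toy network (illustrative, not one of the paper's metabolic reconstructions; requires SciPy):

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions) for a toy network:
#   R1: -> A,   R2: A -> B,   R3: B -> (biomass)
S = np.array([[1.0, -1.0, 0.0],    # metabolite A
              [0.0, 1.0, -1.0]])   # metabolite B
bounds = [(0, 10), (0, None), (0, None)]   # uptake R1 capped at 10
c = [0, 0, -1]                             # maximize v3 (linprog minimizes)

# classic FBA: steady state, S v = 0
fba = linprog(c, A_eq=S, b_eq=[0.0, 0.0], bounds=bounds)

# uFBA-style: metabolomics says A accumulates at 2 units/h, so S v = dx/dt
ufba = linprog(c, A_eq=S, b_eq=[2.0, 0.0], bounds=bounds)
```

    Diverting flux into the measured accumulation of A lowers the achievable biomass flux from 10 to 8, the kind of shift in predicted flux states the abstract describes.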

  4. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe

    In this study, the increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed “unsteady-state flux balance analysis” (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic metabolic flux states for red blood cells, platelets, and Saccharomyces cerevisiae. Notably, only uFBA predicts that stored red blood cells metabolize TCA intermediates to regenerate important cofactors, such as ATP, NADH, and NADPH. These pathway usage predictions were subsequently validated through 13C isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.

  5. Integrated micro-optofluidic platform for real-time detection of airborne microorganisms

    NASA Astrophysics Data System (ADS)

    Choi, Jeongan; Kang, Miran; Jung, Jae Hee

    2015-11-01

    We demonstrate an integrated micro-optofluidic platform for real-time, continuous detection and quantification of airborne microorganisms. Measurements of the fluorescence and light scattering from single particles in a microfluidic channel are used to determine the total particle number concentration and the microorganism number concentration in real-time. The system performance is examined by evaluating standard particle measurements with various sample flow rates and the ratios of fluorescent to non-fluorescent particles. To apply this method to real-time detection of airborne microorganisms, airborne Escherichia coli, Bacillus subtilis, and Staphylococcus epidermidis cells were introduced into the micro-optofluidic platform via bioaerosol generation, and a liquid-type particle collection setup was used. We demonstrate successful discrimination of SYTO82-dyed fluorescent bacterial cells from other residue particles in a continuous and real-time manner. In comparison with traditional microscopy cell counting and colony culture methods, this micro-optofluidic platform is not only more accurate in terms of the detection efficiency for airborne microorganisms but it also provides additional information on the total particle number concentration.

  6. Using ontology databases for scalable query answering, inconsistency detection, and data integration

    PubMed Central

    Dou, Dejing

    2011-01-01

    An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases. PMID:22163378
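    The trigger-based forward computation of the subsumption hierarchy can be sketched with SQLite (a toy stand-in; the paper's systems and schemas differ, and the class and instance names here are invented):

```python
import sqlite3

# Toy "ontology database": instances asserted into a class are propagated
# to superclasses by a trigger at load time, so subsumption queries need
# no view unfolding at query time.
db = sqlite3.connect(":memory:")
db.execute("PRAGMA recursive_triggers = ON")   # let propagation chain upward
db.executescript("""
CREATE TABLE subclass_of (sub TEXT, sup TEXT);
CREATE TABLE instance_of (ind TEXT, cls TEXT);
INSERT INTO subclass_of VALUES ('Neuron', 'Cell'), ('Cell', 'Thing');
CREATE TRIGGER propagate AFTER INSERT ON instance_of
BEGIN
  INSERT INTO instance_of
    SELECT NEW.ind, sup FROM subclass_of WHERE sub = NEW.cls;
END;
""")
db.execute("INSERT INTO instance_of VALUES ('n1', 'Neuron')")
rows = sorted(r[0] for r in db.execute(
    "SELECT cls FROM instance_of WHERE ind = 'n1'"))
```

    A single load-time insert materializes membership in the whole superclass chain, trading extra storage for fast query answering, the trade-off the benchmarks above measure.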

  7. Integrated micro-optofluidic platform for real-time detection of airborne microorganisms

    PubMed Central

    Choi, Jeongan; Kang, Miran; Jung, Jae Hee

    2015-01-01

    We demonstrate an integrated micro-optofluidic platform for real-time, continuous detection and quantification of airborne microorganisms. Measurements of the fluorescence and light scattering from single particles in a microfluidic channel are used to determine the total particle number concentration and the microorganism number concentration in real-time. The system performance is examined by evaluating standard particle measurements with various sample flow rates and the ratios of fluorescent to non-fluorescent particles. To apply this method to real-time detection of airborne microorganisms, airborne Escherichia coli, Bacillus subtilis, and Staphylococcus epidermidis cells were introduced into the micro-optofluidic platform via bioaerosol generation, and a liquid-type particle collection setup was used. We demonstrate successful discrimination of SYTO82-dyed fluorescent bacterial cells from other residue particles in a continuous and real-time manner. In comparison with traditional microscopy cell counting and colony culture methods, this micro-optofluidic platform is not only more accurate in terms of the detection efficiency for airborne microorganisms but it also provides additional information on the total particle number concentration. PMID:26522006

  8. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

    In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a systemcentric level formulated in a hybrid framework. This utilizes architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision-making can benefit from not only known time-series relationships among measured signals but also from known event sequence relationships among generated events. This available knowledge at both time series and discrete event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing and results are discussed.

  9. Hybrid Wavelet De-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series

    NASA Astrophysics Data System (ADS)

    WANG, D.; Wang, Y.; Zeng, X.

    2017-12-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, Wavelet De-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs; error back-propagation (BP), multilayer perceptron (MLP), and radial basis function (RBF)), and RSPA alone. Nine error metrics are used to evaluate model performance. The results show that WD-RSPA is accurate, feasible, and effective, and in particular is the best among the generic methods compared in this paper, even when extreme events are included within a time series.
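
    The WD step above is described only at a high level. As an illustration only, the sketch below applies a one-level Haar decomposition with soft thresholding of the detail coefficients; the threshold value and test signal are hypothetical, not from the paper.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet decomposition, soft-threshold the detail
    coefficients, then reconstruct. Illustrative stand-in for the WD step."""
    n = len(x) - len(x) % 2                              # truncate to even length
    pairs = x[:n].reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)    # low-pass coefficients
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)    # high-pass (noise-dominated)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)  # soft threshold
    rec = np.empty(n)
    rec[0::2] = (approx + detail) / np.sqrt(2)           # inverse Haar transform
    rec[1::2] = (approx - detail) / np.sqrt(2)
    return rec

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 512)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = haar_denoise(noisy, threshold=0.3)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

In practice a multi-level transform with a data-driven threshold would be used before handing the reconstructed series to the forecasting stage.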

  10. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average (ARIMA) time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the accurate model order at the identification stage and in finding the right parameter estimates. This paper presents the development of a Genetic Algorithm heuristic for solving the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed those of the single traditional Box-Jenkins model.
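
    The abstract does not give the GA encoding. A minimal sketch, assuming a chromosome that marks which AR lags enter the model and AIC as the fitness; this simplifies full ARIMA identification to subset-AR selection, and uses mutation-only selection for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) series: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
n, max_lag = 400, 6
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

def aic(mask):
    """Fit a subset-AR model by least squares for the lags marked in mask;
    return the AIC (fitness to be minimized)."""
    lags = [l + 1 for l in range(max_lag) if mask[l]]
    y = x[max_lag:]
    if lags:
        X = np.column_stack([x[max_lag - l:n - l] for l in lags])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
    else:
        resid = y
    return len(y) * np.log(np.mean(resid ** 2)) + 2 * len(lags)

# Minimal genetic algorithm over lag-inclusion bitmasks
pop = rng.integers(0, 2, size=(20, max_lag))
for _ in range(30):
    fitness = np.array([aic(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]        # truncation selection (elitist)
    kids = parents[rng.integers(0, 10, 10)].copy()
    kids[rng.random(kids.shape) < 0.15] ^= 1       # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmin([aic(ind) for ind in pop])]
print(best)
```

With the simulated AR(2) data, the AIC optimum should mark lags 1 and 2 as included.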

  11. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    The development of 3D boundary element modeling of dynamic, partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain, together with a time-stepping scheme for numerical inversion of the Laplace transform, is used to solve the boundary value problem. A modified stepping scheme with a varied integration step is applied for calculating the quadrature coefficients, using the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic console was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with those obtained by the modified scheme shows that computational efficiency improves when the combined formulas are used.

  12. Nondestructive evaluation of composite materials by pulsed time domain methods in imbedded optical fibers

    NASA Technical Reports Server (NTRS)

    Claus, R. O.; Bennett, K. D.; Jackson, B. S.

    1986-01-01

    The application of fiber-optical time domain reflectometry (OTDR) to nondestructive quantitative measurements of distributed internal strain in graphite-epoxy composites, using optical fiber waveguides imbedded between plies, is discussed. The basic OTDR measurement system is described, together with the methods used to imbed optical fibers within composites. Measurement results, system limitations, and the effect of the imbedded fiber on the integrity of the host composite material are considered.

  13. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
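
    The strategy described (k perturbed runs of an expensive routine, a fitted polynomial, then probability integration on the explicit form) can be sketched as follows. Here plain Monte Carlo sampling of the cheap surrogate stands in for the FPI step, and the "expensive" response function and failure threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x1, x2):
    """Stand-in for a costly structural-response routine Y = g(X1, X2)."""
    return x1 ** 2 + 0.5 * x2 + 0.1 * x1 * x2

# Step 1: k perturbed runs around the mean point of the random variables
k = 25
X = rng.normal(0.0, 1.0, size=(k, 2))
Y = expensive_model(X[:, 0], X[:, 1])

# Step 2: fit an explicit quadratic response surface by least squares
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

coef, *_ = np.linalg.lstsq(features(X), Y, rcond=None)

# Step 3: probability estimate on the cheap explicit surrogate
# (Monte Carlo here, standing in for the fast probability integration step)
Xmc = rng.normal(0.0, 1.0, size=(200_000, 2))
Ymc = features(Xmc) @ coef
p_fail = np.mean(Ymc > 4.0)
print(p_fail)
```

Because the surrogate is explicit, each probability evaluation costs a matrix product rather than a run of the original routine, which is the point of the response-surface strategy.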

  14. Mixed-Methods Research in the Discipline of Nursing.

    PubMed

    Beck, Cheryl Tatano; Harrison, Lisa

    2016-01-01

    In this review article, we examined the prevalence and characteristics of 294 mixed-methods studies in the discipline of nursing. Creswell and Plano Clark's typology was most frequently used along with concurrent timing. Bivariate statistics was most often the highest level of statistics reported in the results. As for qualitative data analysis, content analysis was most frequently used. The majority of nurse researchers did not specifically address the purpose, paradigm, typology, priority, timing, interaction, or integration of their mixed-methods studies. Strategies are suggested for improving the design, conduct, and reporting of mixed-methods studies in the discipline of nursing.

  15. Stabilization of computational procedures for constrained dynamical systems

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.

    1988-01-01

    A new stabilization method of treating constraints in multibody dynamical systems is presented. By tailoring a penalty form of the constraint equations, the method achieves stabilization without artificial damping and yields a companion matrix differential equation for the constraint forces, which are then obtained by integrating this companion equation in time. A principal feature of the method is that the error committed in each constraint condition decays with the characteristic time scale associated with its constraint force. Numerical experiments indicate that the method yields a marked improvement over existing techniques.
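
    The authors' penalty formulation is not reproduced in the abstract. The sketch below instead uses the closely related classical Baumgarte stabilization on a Cartesian pendulum to illustrate the key property: constraint errors decaying with a prescribed time scale. Gains and initial conditions are illustrative.

```python
import numpy as np

g, m, L = 9.81, 1.0, 1.0
alpha, beta = 5.0, 5.0          # stabilization gains (Baumgarte-style)

def accel(state):
    x, y, vx, vy = state
    phi = 0.5 * (x * x + y * y - L * L)          # constraint violation
    dphi = x * vx + y * vy
    fx, fy = 0.0, -m * g                         # external force (gravity)
    # Choose the constraint multiplier so the constraint error obeys
    # phi'' + 2*alpha*phi' + beta^2*phi = 0 (exponential decay)
    lam = (vx * vx + vy * vy + (x * fx + y * fy) / m
           + 2 * alpha * dphi + beta ** 2 * phi) * m / (x * x + y * y)
    return np.array([vx, vy, (fx - lam * x) / m, (fy - lam * y) / m])

# RK4 integration from a slightly infeasible initial state (rod too long)
state = np.array([1.01, 0.0, 0.0, 0.0])
dt = 1e-3
for _ in range(5000):
    k1 = accel(state)
    k2 = accel(state + 0.5 * dt * k1)
    k3 = accel(state + 0.5 * dt * k2)
    k4 = accel(state + dt * k3)
    state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

err = abs(np.hypot(state[0], state[1]) - L)
print(err)
```

After five seconds the initial 1% violation of the rod-length constraint has decayed to numerical noise instead of drifting, which is the behavior the stabilized formulations aim for.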

  16. Canonical Drude Weight for Non-integrable Quantum Spin Chains

    NASA Astrophysics Data System (ADS)

    Mastropietro, Vieri; Porta, Marcello

    2018-03-01

    The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of the Drude weight is directly related to the Kubo formula for conductivity. However, the difficulty in evaluating such an expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in past years several universality results have been proven for this quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of the Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.

  17. Photonic crystal ring resonator based optical filters for photonic integrated circuits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, S., E-mail: mail2robinson@gmail.com

    In this paper, two-dimensional (2D) Photonic Crystal Ring Resonator (PCRR) based optical filters, namely an Add Drop Filter, a Bandpass Filter, and a Bandstop Filter, are designed for Photonic Integrated Circuits (PICs). The normalized output responses of the filters are obtained using the 2D Finite Difference Time Domain (FDTD) method, and the band diagrams of the periodic and non-periodic structures are obtained by the Plane Wave Expansion (PWE) method. The size of the device is reduced from a scale of a few tens of millimeters to the order of micrometers. The overall size of the filters is around 11.4 μm × 11.4 μm, which is highly suitable for photonic integrated circuits.

  18. On Fitting a Multivariate Two-Part Latent Growth Model

    PubMed Central

    Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.

    2017-01-01

    A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054

  19. Object-oriented integrated approach for the design of scalable ECG systems.

    PubMed

    Boskovic, Dusanka; Besic, Ingmar; Avdagic, Zikrija

    2009-01-01

    The paper presents the implementation of Object-Oriented (OO) integrated approaches to the design of scalable Electro-Cardio-Graph (ECG) Systems. The purpose of this methodology is to preserve real-world structure and relations with the aim to minimize the information loss during the process of modeling, especially for Real-Time (RT) systems. We report on a case study of the design that uses the integration of OO and RT methods and the Unified Modeling Language (UML) standard notation. OO methods identify objects in the real-world domain and use them as fundamental building blocks for the software system. The gained experience based on the strongly defined semantics of the object model is discussed and related problems are analyzed.

  20. Direction and Integration of Experimental Ground Test Capabilities and Computational Methods

    NASA Technical Reports Server (NTRS)

    Dunn, Steven C.

    2016-01-01

    This paper groups and summarizes the salient points and findings from two AIAA conference panels targeted at defining the direction, with associated key issues and recommendations, for the integration of experimental ground testing and computational methods. Each panel session utilized rapporteurs to capture comments from both the panel members and the audience. Additionally, a virtual panel of several experts was consulted between the two sessions and their comments were also captured. The information is organized into three time-based groupings, as well as by subject area. These panel sessions were designed to provide guidance to both researchers/developers and experimental/computational service providers in defining the future of ground testing, which will be inextricably integrated with the advancement of computational tools.

  1. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE PAGES

    Steyer, Andrew J.; Van Vleck, Erik S.

    2018-04-13

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and to establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed from the theoretical results.
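
    A toy version of the stiffness-based switching idea, assuming a scalar test equation y' = lambda(t) y and plain explicit/implicit Euler steps rather than the Runge-Kutta pairs of the paper; the indicator simply compares |lambda*dt| against the explicit stability bound.

```python
import numpy as np

def lam(t):
    # Time-dependent coefficient: mildly stiff early, very stiff later
    return -1.0 if t < 1.0 else -500.0

def switching_step(t, y, dt):
    """One step that picks explicit or implicit Euler from a stiffness
    indicator: explicit Euler is stable for this problem iff |lam*dt| <= 2."""
    if abs(lam(t)) * dt > 2.0:
        return y / (1.0 - dt * lam(t + dt)), 'implicit'   # backward Euler
    return y * (1.0 + dt * lam(t)), 'explicit'            # forward Euler

t, y, dt = 0.0, 1.0, 0.01
used = set()
while t < 2.0:
    y, scheme = switching_step(t, y, dt)
    used.add(scheme)
    t += dt

print(sorted(used), abs(y))
```

The run uses the cheap explicit step while the problem is non-stiff, switches to the implicit step when lambda jumps to -500, and the numerical solution decays stably instead of blowing up.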

  3. Integrated sample-to-detection chip for nucleic acid test assays.

    PubMed

    Prakash, R; Pabbaraju, K; Wong, S; Tellier, R; Kaler, K V I S

    2016-06-01

    Nucleic acid based diagnostic techniques are routinely used for the detection of infectious agents. Most of these assays rely on nucleic acid extraction platforms for the extraction and purification of nucleic acids and a separate real-time PCR platform for quantitative nucleic acid amplification tests (NATs). Several microfluidic lab on chip (LOC) technologies have been developed, where mechanical and chemical methods are used for the extraction and purification of nucleic acids. Microfluidic technologies have also been effectively utilized for chip based real-time PCR assays. However, there are few examples of microfluidic systems which have successfully integrated these two key processes. In this study, we have implemented an electro-actuation based LOC micro-device that leverages multi-frequency actuation of sample and reagent droplets for chip based nucleic acid extraction and real-time, reverse transcription (RT) PCR (qRT-PCR) amplification from clinical samples. Our prototype micro-device combines chemical lysis with electric field assisted isolation of nucleic acid in a four channel parallel processing scheme. Furthermore, a four channel parallel qRT-PCR amplification and detection assay is integrated to deliver the sample-to-detection NAT chip. The NAT chip combines dielectrophoresis and electrostatic/electrowetting actuation methods with resistive micro-heaters and temperature sensors to perform chip based integrated NATs. The two chip modules have been validated using different panels of clinical samples and their performance compared with standard platforms. This study has established that our integrated NAT chip system has a sensitivity and specificity comparable to those of the standard platforms while providing up to a 10-fold reduction in sample/reagent volumes.

  4. A General and Efficient Method for Incorporating Precise Spike Times in Globally Time-Driven Simulations

    PubMed Central

    Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus

    2010-01-01

    Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
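
    The retrospective threshold-crossing detection can be sketched for a leaky integrate-and-fire neuron. Parameter values are illustrative; the sub-step spike time is recovered by linear interpolation inside the step where the crossing is found, which is the simple backward-looking operation the abstract contrasts with forward spike prediction.

```python
import numpy as np

# Leaky integrate-and-fire parameters (illustrative values)
tau, v_th, v_reset = 10.0, 1.0, 0.0    # membrane time constant (ms), threshold, reset
dt, i_ext = 0.1, 0.15                  # time step (ms), constant input current

v, t = 0.0, 0.0
spike_times = []
for _ in range(5000):                  # simulate 500 ms
    v_prev = v
    v = v + dt * (-v / tau + i_ext)    # forward Euler update
    t += dt
    if v >= v_th:
        # Retrospective detection: the crossing happened inside the last
        # step; interpolate linearly to recover a sub-step spike time.
        frac = (v_th - v_prev) / (v - v_prev)
        spike_times.append(t - dt + frac * dt)
        v = v_reset

print(len(spike_times), spike_times[1] - spike_times[0])
```

For this constant-input model the exact inter-spike interval is tau*ln(tau*i_ext/(tau*i_ext - v_th)) = 10*ln(3) ms, so the interpolated spike times can be checked against a closed-form reference.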

  5. Lagrangian velocity and acceleration correlations of large inertial particles in a closed turbulent flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machicoane, Nathanaël; Volk, Romain

    We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. Knowing how size and density affect these time scales is crucial for understanding particle dynamics and may permit stochastic process modeling using two-time models (for instance, Sawford's). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain an unbiased Lagrangian integral time. This is particularly useful for accessing the scale separation for such particles and comparing it to the case of fluid particles in a similar configuration.
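
    Computing a Lagrangian integral time from a velocity correlation function can be sketched on synthetic data. Here an Ornstein-Uhlenbeck process with known correlation time stands in for a measured particle velocity; the integral time recovered from the autocorrelation should match the prescribed correlation time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Lagrangian velocity: an Ornstein-Uhlenbeck process with
# correlation time tau (a common two-time-model building block)
tau, sigma, dt, n = 1.0, 1.0, 0.01, 200_000
xi = sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(n)
v = np.zeros(n)
for i in range(1, n):
    v[i] = v[i - 1] * (1 - dt / tau) + xi[i]

def autocorr(x, max_lag):
    """Normalized autocorrelation function rho(k) for lags 0..max_lag-1."""
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) - k) / c0
                     for k in range(max_lag)])

rho = autocorr(v, max_lag=800)
T_L = np.sum(rho) * dt       # Lagrangian integral time: integral of rho
print(T_L)
```

The estimate converges to tau = 1 as the record length grows; truncating the integral at a few correlation times keeps the noisy tail of rho from dominating.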

  6. Decoding spike timing: the differential reverse correlation method

    PubMed Central

    Tkačik, Gašper; Magnasco, Marcelo O.

    2009-01-01

    It is widely acknowledged that the detailed timing of action potentials is used to encode information, for example in auditory pathways; however, the computational tools required to analyze encoding through timing are still in their infancy. We present a simple example of encoding, based on a recent model of time-frequency analysis, in which units fire action potentials when a certain condition is met, but the timing of the action potential also depends on other features of the stimulus. We show that, as a result, spike-triggered averages are smoothed so much that they do not represent the true features of the encoding. Inspired by this example, we present a simple method, differential reverse correlation, that can separate the analysis of what causes a neuron to spike from what controls its timing. We analyze the leaky integrate-and-fire neuron with this method and show that it accurately reconstructs the model's kernel. PMID:18597928

  7. Flexible Method for Inter-object Communication in C++

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Gould, Jack J.

    1994-01-01

    A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.

  8. Preservation and distribution of fungal cultures

    Treesearch

    Karen K. Nakasone; Stephen W. Peterson; Shung-Chang Jong

    2004-01-01

    Maintaining and preserving fungal cultures are essential elements of systematics and biodiversity studies. Because fungi are such a diverse group, several methods of cultivation and preservation are required to ensure the viability and morphological, physiological, and genetic integrity of the cultures over time. The cost and convenience of each method, however, also...

  9. A transformed path integral approach for solution of the Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Subramaniam, Gnana M.; Vedula, Prakash

    2017-10-01

    A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
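
    The transformed propagator itself is specific to the paper. The sketch below shows only the underlying path-integral idea it builds on: repeated application of a Gaussian short-time propagator on a fixed (untransformed) grid, here for an Ornstein-Uhlenbeck drift whose stationary variance D/theta is known in closed form.

```python
import numpy as np

# Fokker-Planck equation for dX = -theta*X dt + sqrt(2 D) dW, solved by
# repeated application of the Gaussian short-time propagator on a fixed grid
theta, D, dt = 1.0, 0.5, 0.01
x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]

# Short-time propagator K[i, j] ~= p(x_i, t+dt | x_j, t):
# Gaussian with Euler-drifted mean and variance 2*D*dt
mean = x - theta * x * dt
var = 2 * D * dt
K = np.exp(-(x[:, None] - mean[None, :]) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

p = np.exp(-(x - 2.0) ** 2 / 0.02)      # narrow initial PDF centered at x = 2
p /= np.sum(p) * dx
for _ in range(500):                    # evolve to t = 5, close to stationarity
    p = (K @ p) * dx                    # Chapman-Kolmogorov step (quadrature)
    p /= np.sum(p) * dx                 # renormalize against quadrature error

var_est = np.sum(x ** 2 * p) * dx - (np.sum(x * p) * dx) ** 2
print(var_est)
```

The fixed-grid version needs the grid to resolve the propagator width sqrt(2*D*dt); the transformed (TPI) formulation addresses exactly this kind of resolution problem by evolving the grid with the statistics of the PDF.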

  10. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients; consequently, there is no need for the near-to-far-field transformation (NTFFT), a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones, and pyramids. The method remains accurate even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, a common problem for computational electromagnetics codes.

  11. Efficient Meshfree Large Deformation Simulation of Rainfall Induced Soil Slope Failure

    NASA Astrophysics Data System (ADS)

    Wang, Dongdong; Li, Ling

    2010-05-01

    An efficient Lagrangian Galerkin meshfree framework is presented for large deformation simulation of rainfall-induced soil slope failure, with detailed coupled soil-rainfall seepage equations given for the proposed formulation. This nonlinear meshfree formulation features the Lagrangian stabilized conforming nodal integration method, which retains the low cost of the nodal integration approach while maintaining numerical stability. The initiation and evolution of progressive failure in the soil slope is modeled by the coupled constitutive equations of isotropic damage and Drucker-Prager pressure-dependent plasticity. The gradient smoothing in the stabilized conforming integration also serves as a non-local regularization of material instability, and consequently the present method is capable of effectively capturing shear band failure. The efficacy of the method is demonstrated by simulating the rainfall-induced failure of two typical soil slopes.

  12. [2-stage group psychotherapy with integrated autogenic training within the scope of a general integrated psychotherapy concept].

    PubMed

    Barolin, Gerhard S

    2003-01-01

    Group therapy and autogenic training in combination show mutual potentiation. Our results have confirmed this hypothesis, and we have also been able to explain it through an analysis of the neurophysiological and psychological findings concerning both methods. Our "model" has proved to be very economical in time and can be easily applied. It requires basic psychotherapeutic education but no special additional schooling. It is particularly well suited to rehabilitation patients, elderly patients, and geronto-rehabilitation patients. As the numbers of such patients are steadily increasing, it could soon become highly important, and in the technically dominated medicine of today, the particularly communicative component that we postulate in integrated psychotherapy could also grow in importance. When the two methods are combined, it is not the method but the patient that is at the centre of our endeavours.

  13. Dynamic Analysis With Stress Mode Animation by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1997-01-01

    Dynamic animation of stresses and displacements, which complement each other, can be a useful tool in the analysis and design of structural components. At the present time only displacement-mode animation is available through the popular stiffness formulation. This paper attempts to complete this valuable visualization tool by augmenting the existing art with stress mode animation. The reformulated method of forces, which in the literature is known as the integrated force method (IFM), became the analyzer of choice for the development of stress mode animation because stresses are the primary unknowns of its dynamic analysis. Animation of stresses and displacements, which have been developed successfully through the IFM analyzers, is illustrated in several examples along with a brief introduction to IFM dynamic analysis. The usefulness of animation in design optimization is illustrated considering the spacer structure component of the International Space Station as an example. An overview of the integrated force method analysis code (IFM/ANALYZERS) is provided in the appendix.

  14. Integrated navigation fusion strategy of INS/UWB for indoor carrier attitude angle and position synchronous tracking.

    PubMed

    Fan, Qigao; Wu, Yaheng; Hui, Jing; Wu, Lei; Yu, Zhenzhong; Zhou, Lijuan

    2014-01-01

    In some GPS failure conditions, positioning of a mobile target is difficult. This paper proposes a new method based on INS/UWB for synchronous tracking of the attitude angle and position of an indoor carrier. First, the error model of the INS/UWB integrated system is built, including the error equations of INS and UWB, and the combined filtering model is investigated; simulation results show that the two subsystems are complementary. Second, an integrated navigation data fusion strategy for INS/UWB based on Kalman filtering theory is proposed; simulation results show that the FAKF method outperforms conventional Kalman filtering. Finally, an indoor experimental platform geared to the needs of a coal mine working environment is established to verify the INS/UWB integrated navigation theory. Static and dynamic positioning results show that the INS/UWB integrated navigation system is stable and real-time; its positioning precision meets the requirements of the working condition and is better than that of either independent subsystem.
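
    The FAKF details are not given in the abstract. A minimal 1D sketch of the INS/UWB fusion idea, using a conventional Kalman filter with INS acceleration as the prediction input and a UWB position fix as the measurement; all parameter values and the motion profile are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, n = 0.1, 300
acc_noise, uwb_noise = 0.5, 0.3

# Ground truth: sinusoidal motion of the indoor carrier
t = np.arange(n) * dt
pos_true, vel_true, acc_true = np.sin(t), np.cos(t), -np.sin(t)

# Kalman filter: state [position, velocity], INS acceleration as control
# input, UWB-derived position as the measurement
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt ** 2, dt])
H = np.array([[1.0, 0.0]])
Q = np.outer(B, B) * acc_noise ** 2
R = np.array([[uwb_noise ** 2]])

xhat, P = np.array([0.0, 1.0]), np.eye(2)
x_ins = np.array([0.0, 1.0])               # INS-only dead reckoning for comparison
err_fused, err_ins = [], []
for k in range(n - 1):
    a_meas = acc_true[k] + acc_noise * rng.standard_normal()
    x_ins = F @ x_ins + B * a_meas         # INS alone drifts over time
    # Predict with the INS acceleration
    xhat = F @ xhat + B * a_meas
    P = F @ P @ F.T + Q
    # Update with the UWB position fix
    z = pos_true[k + 1] + uwb_noise * rng.standard_normal()
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    xhat = xhat + K @ (z - H @ xhat)
    P = (np.eye(2) - K @ H) @ P
    err_fused.append(abs(xhat[0] - pos_true[k + 1]))
    err_ins.append(abs(x_ins[0] - pos_true[k + 1]))

print(np.mean(err_fused), np.mean(err_ins))
```

The complementarity noted in the abstract shows up directly: INS error grows without bound between fixes, while the fused estimate stays bounded near the UWB noise level.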

  15. Adaptive fixed-time trajectory tracking control of a stratospheric airship.

    PubMed

    Zheng, Zewei; Feroskhan, Mir; Sun, Liang

    2018-05-01

    This paper addresses the fixed-time trajectory tracking control problem of a stratospheric airship. By extending the method of adding a power integrator to a novel adaptive fixed-time control method, convergence of the airship to its reference trajectory is guaranteed within a fixed time. The control algorithm is first formulated without considering external disturbances, to establish fixed-time stability of the closed-loop system and to demonstrate that the convergence time is essentially independent of the airship's initial conditions. Subsequently, a smooth adaptive law is incorporated into the proposed fixed-time control framework to provide the system with robustness to external disturbances. Theoretical analyses demonstrate that under the adaptive fixed-time controller, the tracking errors converge to a residual set in fixed time. The results of a comparative simulation study with other recent methods illustrate the remarkable performance and superiority of the proposed control method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between the fast spiking dynamics and the slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter ϵ (the ratio of the fast and slow time scales), which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit ϵ → 0). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in a fashion analogous to PDE models. Finally, we carry out a loop expansion of the path integral in the small parameter ϵ, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.
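
    The structure of such a stochastic hybrid model (slow piecewise-deterministic flow interrupted by a fast jump Markov process) can be sketched for a single population. The sigmoidal on-rate f(u) below is an illustrative choice, not the rate function of the paper; a direct simulation like this is the reference against which a small-ϵ Langevin/diffusion approximation would be checked:

```python
import numpy as np

def f(u):
    """Illustrative sigmoidal activation rate (an assumption, not from the paper)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def simulate_hybrid(tau=1.0, eps=0.05, w=2.0, dt=1e-3, t_end=200.0, seed=0):
    """One population of a stochastic hybrid model: a discrete activity
    variable n in {0, 1} jumps at O(1/eps) rates that depend on the synaptic
    variable u, while u relaxes deterministically between jumps."""
    rng = np.random.default_rng(seed)
    steps = int(t_end / dt)
    u, n = 0.0, 0
    us = np.empty(steps)
    for i in range(steps):
        rate = (f(u) if n == 0 else 1.0) / eps   # jump Markov process, rates ~ 1/eps
        if rng.random() < rate * dt:
            n = 1 - n
        u += dt * (-u + w * n) / tau             # piecewise-deterministic synaptic flow
        us[i] = u
    return us
```

    For small ϵ the activity variable equilibrates quickly, so u fluctuates around the mean-field fixed point u* = w f(u*)/(1 + f(u*)), here roughly 0.9.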

  17. An integrated error estimation and lag-aware data assimilation scheme for real-time flood forecasting

    USDA-ARS?s Scientific Manuscript database

    The performance of conventional filtering methods can be degraded by ignoring the time lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data assimilation...

  18. Modeling global vector fields of chaotic systems from noisy time series with the aid of structure-selection techniques.

    PubMed

    Xu, Daolin; Lu, Fangfang

    2006-12-01

    We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines implicit Adams integration with the structure-selection technique of an error reduction ratio is proposed for system identification and the corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidate basis functions and determines the optimal model through orthogonal characteristics of the data. Combining the technique with the Adams integration algorithm makes reconstruction feasible for data sampled at large time intervals. Numerical experiments on the Lorenz and Rössler systems show that the proposed strategy is effective for global vector field reconstruction from noisy time series.
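
    The library-regression-plus-structure-selection idea can be sketched in miniature on the Lorenz system. The snippet below substitutes explicit RK4 simulation, finite-difference derivatives, and a simple coefficient-magnitude threshold for the implicit Adams scheme and error-reduction-ratio criterion of the paper, so it is a simplified stand-in rather than the authors' method:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# simulate a trajectory to serve as the "measured" time series
dt, n = 1e-3, 20000
traj = np.empty((n, 3))
traj[0] = [1.0, 1.0, 1.0]
for i in range(1, n):
    traj[i] = rk4_step(lorenz, traj[i - 1], dt)

# central-difference derivative estimates
dX = (traj[2:] - traj[:-2]) / (2 * dt)
X = traj[1:-1]

# candidate library: constant, linear, and quadratic monomials
x, y, z = X.T
library = np.column_stack([np.ones(len(X)), x, y, z, x * y, x * z, y * z, x * x, y * y, z * z])
names = ["1", "x", "y", "z", "xy", "xz", "yz", "x^2", "y^2", "z^2"]

# least-squares fit of dx/dt, then keep only terms with significant coefficients
coef, *_ = np.linalg.lstsq(library, dX[:, 0], rcond=None)
selected = [names[i] for i in range(len(names)) if abs(coef[i]) > 0.5]
```

    The fit for the first component should single out the terms x and y with coefficients near -10 and +10, i.e. dx/dt = σ(y - x).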

  19. Learning from the experience: preliminary results of integration experiments within PRE-EARTHQUAKES EU-FP7 Project.

    NASA Astrophysics Data System (ADS)

    Tramutoli, V.; Inan, S.; Jakowski, N.; Pulinets, S.; Romanov, A.; Filizzola, C.; Shagimuratov, I.; Pergola, N.; Genzano, N.; Lisi, M.; Alparslan, E.; Wilken, V.; Tsybulia, K.; Romanov, A.; Paciello, R.; Balasco, M.; Zakharenkova, I.; Ouzounov, D.; Papadopoulos, G. A.; Parrot, M.

    2012-04-01

    The PRE-EARTHQUAKES (Processing Russian and European EARTH observations for earthQUAKE precursors Studies) EU-FP7 project is devoted to demonstrating, by integrating different observational data and by comparing and improving different data analysis methods, how the reliability of short-term seismic risk assessment can be progressively increased. Three main test areas were selected (Italy, Turkey, and Sakhalin) in order to concentrate observation and integration efforts, starting with a learning phase on selected past events devoted to identifying the most suitable parameters, observation technologies, and data analysis algorithms. To this end, events offering the greatest variety of integration possibilities were given particular consideration: the Abruzzo EQ (April 6th 2009, Mw 6.3) for Italy, the Elazig EQ (March 8th 2010, Mw 6.1) for Turkey, and the Nevelsk EQ (August 2nd 2007, Mw 6.2) for Sakhalin, without excluding other significant events that occurred during 2011, such as those of Tōhoku in Japan and Van in Turkey. For these events, different ground-based (80 radon and 29 spring water stations in the Turkey region, 2 magneto-telluric stations in Italy) and satellite-based (18 different systems) observations and 11 data analysis methods, for 7 measured parameters, have been compared and integrated. Results achieved by applying a validation/confutation approach, devoted to evaluating the presence/absence of anomalous space-time transients in single and/or integrated observation time series, will be discussed, also in comparison with results independently achieved by other authors.

  20. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations in order to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
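
    The kind of truncation-error comparison described above can be reproduced in miniature with an analytic wind field. The rigid-rotation field below is a hypothetical stand-in for the ECMWF data (the exact one-day trajectory returns to its starting point), which makes the convergence orders of the schemes directly measurable:

```python
import numpy as np

OMEGA = 2 * np.pi / 86400.0   # solid-body rotation: one revolution per day

def wind(p):
    """Idealized analytic wind field (rigid rotation), not an ECMWF field."""
    x, y = p
    return np.array([-OMEGA * y, OMEGA * x])

def midpoint_step(p, dt):
    """Second-order midpoint scheme."""
    return p + dt * wind(p + 0.5 * dt * wind(p))

def rk4_step(p, dt):
    """Classical fourth-order Runge-Kutta scheme."""
    k1 = wind(p)
    k2 = wind(p + 0.5 * dt * k1)
    k3 = wind(p + 0.5 * dt * k2)
    k4 = wind(p + dt * k3)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def advect(p0, dt, t_end, step):
    """Integrate a single trajectory through the wind field."""
    p = np.array(p0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        p = step(p, dt)
    return p

p0 = (1.0e6, 0.0)     # parcel 1000 km from the rotation centre
t_end = 86400.0       # one day; the exact trajectory ends back at p0

def error(p):
    return np.linalg.norm(p - np.array(p0))

err_mid_200 = error(advect(p0, 200.0, t_end, midpoint_step))
err_mid_100 = error(advect(p0, 100.0, t_end, midpoint_step))
err_rk4_200 = error(advect(p0, 200.0, t_end, rk4_step))
```

    For the second-order midpoint scheme, halving the time step cuts the transport deviation by roughly a factor of four, while the fourth-order scheme at the same step is far more accurate, mirroring the grouping of schemes by order reported above.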
