Sample records for higher order accuracy

  1. 3D Higher Order Modeling in the BEM/FEM Hybrid Formulation

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Wilton, D. R.

    2000-01-01

    Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the potential for accuracy of the approach is the accurate evaluation of all matrix elements. These involve products of high order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D, hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element, either a triangle or rectangle, is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate the self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve near arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented. The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number of unknowns is approximately equal. Also, convergence rates obtained using higher order bases are compared to those obtained with lower order bases for selected sample …
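
    The Duffy transformation the abstract leans on is easy to demonstrate in isolation. The sketch below (an illustration of the general technique, not the authors' BEM code) integrates a 1/r kernel over a reference triangle with the singularity at a vertex; the map x = u, y = u*v carries a Jacobian factor u that cancels the singularity, so ordinary Gauss-Legendre quadrature converges rapidly.

    ```python
    # Minimal sketch of the Duffy idea (not the authors' BEM code): the map
    # x = u, y = u*v sends the unit square to the triangle 0 <= y <= x <= 1
    # and its Jacobian u cancels a 1/r singularity at the vertex (0, 0).
    import numpy as np

    def duffy_integrate(f, n=8):
        """Integrate f(x, y) over the triangle 0 <= y <= x <= 1 with an
        n-point Gauss-Legendre rule per direction after the Duffy map."""
        t, w = np.polynomial.legendre.leggauss(n)
        u, wu = 0.5 * (t + 1.0), 0.5 * w        # map nodes/weights to [0, 1]
        total = 0.0
        for ui, wi in zip(u, wu):
            for vj, wj in zip(u, wu):
                total += wi * wj * ui * f(ui, ui * vj)  # factor u = Jacobian
        return total

    # The integral of 1/r over this triangle is asinh(1) = ln(1 + sqrt(2)).
    print(duffy_integrate(lambda x, y: 1.0 / np.hypot(x, y)))  # ~0.8813736
    print(np.arcsinh(1.0))
    ```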

  2. Higher-Order Adaptive Finite-Element Methods for Kohn-Sham Density Functional Theory

    DTIC Science & Technology

    2012-07-03

    … systems studied, we observe diminishing returns in computational savings beyond the sixth-order for accuracies commensurate with chemical accuracy. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of materials systems containing a …

  3. Higher-Order Compact Schemes for Numerical Simulation of Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Wilson, Robert V.; Demuren, Ayodeji O.; Carpenter, Mark

    1998-01-01

    A higher order accurate numerical procedure has been developed for solving incompressible Navier-Stokes equations for 2D or 3D fluid flow problems. It is based on low-storage Runge-Kutta schemes for temporal discretization and fourth and sixth order compact finite-difference schemes for spatial discretization. The particular difficulty of satisfying the divergence-free velocity field required in incompressible fluid flow is resolved by solving a Poisson equation for pressure. It is demonstrated that for consistent global accuracy, it is necessary to employ the same order of accuracy in the discretization of the Poisson equation. Special care is also required to achieve the formal temporal accuracy of the Runge-Kutta schemes. The accuracy of the present procedure is demonstrated by application to several pertinent benchmark problems.
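
    As a concrete example of the spatial discretization named above, here is the classical fourth-order Padé compact scheme for a first derivative on a periodic grid (a minimal sketch; the paper's solver also includes sixth-order variants, boundary closures, and the pressure Poisson treatment, none of which are reproduced here).

    ```python
    # Hedged sketch (not the authors' solver): the classical fourth-order
    # Pade compact scheme for a first derivative on a periodic grid,
    #   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/(4h)) (f_{i+1} - f_{i-1}).
    import numpy as np

    def compact_derivative(f, h):
        n = len(f)
        A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
        A[0, -1] = A[-1, 0] = 0.25                  # periodic wrap-around
        rhs = (np.roll(f, -1) - np.roll(f, 1)) * (3.0 / (4.0 * h))
        return np.linalg.solve(A, rhs)

    for n in (32, 64, 128):                         # error should fall ~16x per row
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        err = np.max(np.abs(compact_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
        print(n, err)
    ```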

  4. A fourth-order Cartesian grid embedded boundary method for Poisson’s equation

    DOE PAGES

    Devendran, Dharshi; Graves, Daniel; Johansen, Hans; ...

    2017-05-08

    In this paper, we present a fourth-order algorithm to solve Poisson's equation in two and three dimensions. We use a Cartesian grid, embedded boundary method to resolve complex boundaries. We use a weighted least squares algorithm to solve for our stencils. We use convergence tests to demonstrate accuracy and we show the eigenvalues of the operator to demonstrate stability. We compare accuracy and performance with an established second-order algorithm. We also discuss in depth strategies for retaining higher-order accuracy in the presence of nonsmooth geometries.
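
    The phrase "weighted least squares algorithm to solve for our stencils" admits a compact illustration: choose stencil weights so the discrete operator is exact on low-degree monomials, then solve the resulting small system by least squares. The sketch below derives a Laplacian stencil on a plain 3x3 neighbor set this way; it shows the idea only and is not the paper's embedded-boundary construction.

    ```python
    # Hedged sketch of stencil generation by least squares (the paper's actual
    # embedded-boundary stencils are not reproduced): choose weights w_j so that
    # sum_j w_j p(x_j, y_j) equals Laplacian(p)(0, 0) for every monomial
    # x^a y^b with a + b <= 3, here on a plain 3x3 neighbor set.
    import numpy as np

    h = 0.05
    pts = h * np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)

    rows, target = [], []
    for a in range(4):
        for b in range(4 - a):                      # all monomials with a + b <= 3
            rows.append(pts[:, 0]**a * pts[:, 1]**b)
            # Laplacian of x^a y^b at the origin: 2 for x^2 and for y^2, else 0
            target.append(2.0 if (a, b) in ((2, 0), (0, 2)) else 0.0)

    w, *_ = np.linalg.lstsq(np.array(rows), np.array(target), rcond=None)

    f = lambda x, y: np.exp(x + 2.0 * y)            # Laplacian = 5 * f, so 5 at 0
    print(w @ f(pts[:, 0], pts[:, 1]))              # ~5.0, accurate to O(h^2)
    ```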

  5. A fourth-order Cartesian grid embedded boundary method for Poisson’s equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devendran, Dharshi; Graves, Daniel; Johansen, Hans

    In this paper, we present a fourth-order algorithm to solve Poisson's equation in two and three dimensions. We use a Cartesian grid, embedded boundary method to resolve complex boundaries. We use a weighted least squares algorithm to solve for our stencils. We use convergence tests to demonstrate accuracy and we show the eigenvalues of the operator to demonstrate stability. We compare accuracy and performance with an established second-order algorithm. We also discuss in depth strategies for retaining higher-order accuracy in the presence of nonsmooth geometries.

  6. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified expansion solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivative of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process as described is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
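
    The core idea, trading time derivatives for spatial ones through the governing equations, can be shown on the 1D advection equation u_t = -c u_x, for which d^n u/dt^n = (-c)^n d^n u/dx^n. The sketch below is a generic Cauchy-Kovalevskaya/Lax-Wendroff illustration of that exchange, not the MESA scheme itself.

    ```python
    # Generic illustration (not the MESA scheme itself): for u_t = -c u_x the
    # exchange d^n u/dt^n = (-c)^n d^n u/dx^n turns a Taylor series in time
    # into spatial-derivative data, giving arbitrarily high temporal order.
    import numpy as np
    from math import factorial

    c, dt, x = 1.0, 0.2, 0.3
    u0 = lambda x: np.sin(2.0 * np.pi * x)          # smooth initial data

    def dnu_dx(x, n):                               # d^n/dx^n of sin(2 pi x)
        return (2.0 * np.pi)**n * np.sin(2.0 * np.pi * x + 0.5 * n * np.pi)

    exact = u0(x - c * dt)                          # exact advected solution
    for order in (1, 2, 4, 8):
        taylor = sum((-c * dt)**n / factorial(n) * dnu_dx(x, n)
                     for n in range(order + 1))
        print(order, abs(taylor - exact))           # error falls fast with order
    ```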

  7. Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1991-01-01

    A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.

  8. A fourth-order box method for solving the boundary layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1977-01-01

    A fourth order box method for calculating high accuracy numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations is presented. The method is the natural extension of the second order Keller Box scheme to fourth order and is demonstrated with application to the incompressible, laminar and turbulent boundary layer equations. Numerical results for high accuracy test cases show the method to be significantly faster than other higher order and second order methods.

  9. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required. Highlights: • Higher-order cubature points for degrees 7 to 9 are developed. • The effects of the quadrature rule on the mass and stiffness matrices are studied. • The cubature points always have positive integration weights. • The method avoids inversion of a wide-bandwidth mass matrix. • The accuracy of the TSEM is improved by about one order of magnitude.

  10. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.

  11. Conventional Energy and Macronutrient Variables Distort the Accuracy of Children’s Dietary Reports: Illustrative Data from a Validation Study of Effect of Order Prompts

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed which classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex-x-order prompt interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions Conventional variables overestimated reporting accuracy and masked order prompt and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308

  12. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.

  13. Construction of higher order accurate vortex and particle methods

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.

    1986-01-01

    The standard point vortex method has recently been shown to be of high order of accuracy for problems on the whole plane, when using a uniform initial subdivision for assigning the vorticity to the points. If obstacles are present in the flow, this high order deteriorates to first or second order. New vortex methods are introduced which are of arbitrary accuracy (under regularity assumptions) regardless of the presence of bodies and the uniformity of the initial subdivision.

  14. A three-dimensional parabolic equation model of sound propagation using higher-order operator splitting and Padé approximants.

    PubMed

    Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F

    2012-11-01

    An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.

  15. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    DOE PAGES

    Tao, Jianmin; Rappe, Andrew M.

    2016-01-20

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. As a result, inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.

  16. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  17. Higher Order Corrections in the CoLoRFulNNLO Framework

    NASA Astrophysics Data System (ADS)

    Somogyi, G.; Kardos, A.; Szőr, Z.; Trócsányi, Z.

    We discuss the CoLoRFulNNLO method for computing higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the calculation of event shapes and jet rates in three-jet production in electron-positron annihilation. We validate our code by comparing our predictions to previous results in the literature and present the jet cone energy fraction distribution at NNLO accuracy. We also present preliminary NNLO results for the three-jet rate using the Durham jet clustering algorithm matched to resummed predictions at NLL accuracy, and a comparison to LEP data.

  18. Exploiting Superconvergence in Discontinuous Galerkin Methods for Improved Time-Stepping and Visualization

    DTIC Science & Technology

    2016-09-08

    … Smoothness-Increasing Accuracy-Conserving (SIAC) filter when applied to nonuniform meshes; 2) theoretical and numerical demonstration of the (2k+1)-order accuracy of the SIAC filter … establishing a more theoretical and numerical understanding of a computationally efficient scaling for the SIAC filter for nonuniform meshes [7] … Li, “SIAC Filtering of DG Methods – Boundary and Nonuniform Mesh”, International Conference on Spectral and Higher Order Methods (ICOSAHOM) …

  19. High-order flux correction/finite difference schemes for strand grids

    NASA Astrophysics Data System (ADS)

    Katz, Aaron; Work, Dalon

    2015-02-01

    A novel high-order method combining unstructured flux correction along body surfaces and high-order finite differences normal to surfaces is formulated for unsteady viscous flows on strand grids. The flux correction algorithm is applied in each unstructured layer of the strand grid, and the layers are then coupled together via a source term containing derivatives in the strand direction. Strand-direction derivatives are approximated to high-order via summation-by-parts operators for first derivatives and second derivatives with variable coefficients. We show how this procedure allows for the proper truncation error canceling properties required for the flux correction scheme. The resulting scheme possesses third-order design accuracy, but often exhibits fourth-order accuracy when higher-order derivatives are employed in the strand direction, especially for highly viscous flows. We prove discrete conservation for the new scheme and time stability in the absence of the flux correction terms. Results in two dimensions are presented that demonstrate improvements in accuracy with minimal computational and algorithmic overhead over traditional second-order algorithms.
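
    The summation-by-parts (SBP) operators mentioned above satisfy a discrete analogue of integration by parts. The sketch below builds the classical second-order SBP first-derivative operator D = H^-1 Q with Q + Q^T = B (the paper uses higher-order SBP operators in the strand direction; this low-order one just makes the defining property concrete).

    ```python
    # Hedged sketch of the SBP idea (the paper uses higher-order operators):
    # the classical second-order SBP first derivative D = H^{-1} Q satisfies
    # Q + Q^T = B = diag(-1, 0, ..., 0, 1), a discrete integration by parts.
    import numpy as np

    def sbp_first_derivative(n, h):
        H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])   # diagonal norm
        Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
        Q[0, 0], Q[-1, -1] = -0.5, 0.5                     # boundary closure
        return H, Q, np.linalg.solve(H, Q)

    n = 20
    H, Q, D = sbp_first_derivative(n, 1.0 / (n - 1))
    B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
    print(np.allclose(Q + Q.T, B))                  # True: the SBP property
    x = np.linspace(0.0, 1.0, n)
    print(np.max(np.abs(D @ x - 1.0)))              # ~0: exact on linear data
    ```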

  20. Optical vector network analyzer with improved accuracy based on polarization modulation and polarization pulling.

    PubMed

    Li, Wei; Liu, Jian Guo; Zhu, Ning Hua

    2015-04-15

    We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.

  1. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system.

    PubMed

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-21

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26 cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th-order, including even- and odd-order terms or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients.
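
    The calibration loop reduces, in its simplest form, to a least-squares fit of distortion-model coefficients against known fiducial positions. The sketch below uses a synthetic 1D cubic polynomial as a stand-in for the paper's 3D spherical harmonic model, with made-up coefficients and noise levels; it shows the fitting step only, not the iterative image-correction procedure.

    ```python
    # Much-simplified sketch of the calibration idea: fit distortion-model
    # coefficients by least squares from (true, measured) fiducial pairs. A 1D
    # cubic stands in for the paper's 3D spherical harmonic model; all numbers
    # below are synthetic, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    true_pos = np.linspace(-0.13, 0.13, 40)         # fiducial positions (m)
    c_true = np.array([0.0, 1.0, 0.8, -5.0])        # hidden distortion model
    measured = np.polyval(c_true[::-1], true_pos)   # distorted positions
    measured += rng.normal(0.0, 1e-5, true_pos.size)  # estimation noise

    A = np.vander(true_pos, 4, increasing=True)     # 1, x, x^2, x^3 columns
    c_fit, *_ = np.linalg.lstsq(A, measured, rcond=None)
    rmse = np.sqrt(np.mean((A @ c_fit - measured) ** 2))
    print(c_fit, rmse)                              # recovers c_true; tiny residual
    ```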

  2. Technique for Very High Order Nonlinear Simulation and Validation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2001-01-01

    Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness exponentially decays unless higher accuracy is used. It is concluded that at least a 7th order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance that is low enough to capture both the mean flow and the acoustics.

  3. Automated computation of autonomous spectral submanifolds for nonlinear modal analysis

    NASA Astrophysics Data System (ADS)

    Ponsioen, Sten; Pedergnana, Tiemo; Haller, George

    2018-04-01

    We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.

  4. Energy-energy correlation in electron-positron annihilation at NNLL + NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Tulipánt, Zoltán; Kardos, Adam; Somogyi, Gábor

    2017-11-01

    We present the computation of energy-energy correlation in e^+e^- collisions in the back-to-back region at next-to-next-to-leading logarithmic accuracy matched with the next-to-next-to-leading order perturbative prediction. We study the effect of the fixed higher-order corrections in a comparison of our results to LEP and SLC data. The next-to-next-to-leading order correction has a sizable impact on the extracted value of α_S(M_Z), hence its inclusion is mandatory for a precise measurement of the strong coupling using energy-energy correlation.

  5. Critical study of higher order numerical methods for solving the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1978-01-01

    A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.
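
    One of the comparison methods named above, Richardson extrapolation, is worth a brief illustration: combining a second-order result at steps h and h/2 as (4*A(h/2) - A(h))/3 cancels the leading error term. The sketch below applies it to a central-difference derivative rather than the Keller box scheme, purely to show the mechanism.

    ```python
    # Illustration of Richardson extrapolation, one comparison method named in
    # the abstract (shown on a central-difference derivative, not the Keller
    # box scheme): (4*A(h/2) - A(h)) / 3 cancels the O(h^2) error term.
    import numpy as np

    f, df, x = np.sin, np.cos, 1.0
    central = lambda h: (f(x + h) - f(x - h)) / (2.0 * h)   # second order

    for h in (0.1, 0.05, 0.025):
        second = central(h)
        fourth = (4.0 * central(0.5 * h) - central(h)) / 3.0
        print(h, abs(second - df(x)), abs(fourth - df(x)))  # O(h^2) vs O(h^4)
    ```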

  6. Third-order dissipative hydrodynamics from the entropy principle

    NASA Astrophysics Data System (ADS)

    El, Andrej; Xu, Zhe; Greiner, Carsten

    2010-06-01

    We review the entropy based derivation of third-order hydrodynamic equations and compare their solutions in one-dimensional boost-invariant geometry with calculations by the partonic cascade BAMPS. We demonstrate that Grad's approximation, which underlies the derivation of both Israel-Stewart and third-order equations, describes the transverse spectra from BAMPS with high accuracy. At the same time solutions of third-order equations are much closer to BAMPS results than solutions of Israel-Stewart equations. Introducing a resummation scheme for all higher-order corrections to the one-dimensional hydrodynamic equation, we demonstrate the importance of higher-order terms if the Knudsen number is large.

  7. The effects of finite rate chemical processes on high enthalpy nozzle performance - A comparison between SPARK and SEAGULL

    NASA Technical Reports Server (NTRS)

    Carpenter, M. H.

    1988-01-01

    The generalized chemistry version of the computer code SPARK is extended to include two higher-order numerical schemes, yielding fourth-order spatial accuracy for the inviscid terms. The new and old formulations are used to study the influences of finite rate chemical processes on nozzle performance. A determination is made of the computationally optimum reaction scheme for use in high-enthalpy nozzles. Finite rate calculations are compared with the frozen and equilibrium limits to assess the validity of each formulation. In addition, the finite rate SPARK results are compared with the constant ratio of specific heats (gamma) SEAGULL code, to determine its accuracy in variable gamma flow situations. Finally, the higher-order SPARK code is used to calculate nozzle flows having species stratification. Flame quenching occurs at low nozzle pressures, while for high pressures, significant burning continues in the nozzle.

  8. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system

    PubMed Central

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-01

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer’s Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th-order, including even- and odd-order terms or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients. PMID:28033119

  9. Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models

    NASA Technical Reports Server (NTRS)

    Buchert, T.; Melott, A. L.; Weiss, A. G.

    1993-01-01

    We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony investigated and solved up to the third order is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA) as a subclass of the first-order Lagrangian perturbation solutions was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast of sigma is approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power-spectrum with power-index n = -1) using cross-correlation statistics employed in previous work. We found that for all statistical methods used the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen for all spatial scales, later stages retain this feature only above a certain scale which is increasing with time. However, third-order is not much improvement over second-order at any stage. The total breakdown of the perturbation approach is observed at the stage, where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at a considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power-spectrum in hierarchical models retains this improvement will be analyzed in a forthcoming work.

  10. Application of a symmetric total variation diminishing scheme to aerodynamics of rotors

    NASA Astrophysics Data System (ADS)

    Usta, Ebru

    2002-09-01

    The aerodynamic characteristics of rotors in hover have been studied on stretched non-orthogonal grids using spatially high order symmetric total variation diminishing (STVD) schemes. Several companion numerical viscosity terms have been tested. The effects of higher order metrics, higher order load integrations and turbulence effects on the rotor performance have been studied. Where possible, calculations for 1-D and 2-D benchmark problems have been done on uniform grids, and comparisons with exact solutions have been made to understand the dispersion and dissipation characteristics of these algorithms. A baseline finite volume methodology termed TURNS (Transonic Unsteady Rotor Navier-Stokes) is the starting point for this effort. The original TURNS solver solves the 3-D compressible Navier-Stokes equations in an integral form using a third order upwind scheme. It is first or second order accurate in time. In the modified solver, the inviscid flux at a cell face is decomposed into two parts. The first part of the flux is symmetric in space, while the second part consists of an upwind-biased numerical viscosity term. The symmetric part of the flux at the cell face is computed to fourth-, sixth- or eighth order accuracy in space. The numerical viscosity portion of the flux is computed using either a third order accurate MUSCL scheme or a fifth order WENO scheme. A number of results are presented for the two-bladed Caradonna-Tung rotor and for a four-bladed UH-60A rotor in hover. Comparisons with the original TURNS code, and experiments are given. Results are also presented on the effects of metrics calculations, load integration algorithms, and turbulence models on the solution accuracy. A total of 64 combinations were studied in this thesis work. For brevity, only a small subset of results highlighting the most important conclusions are presented. It should be noted that use of higher order formulations did not affect the temporal stability of the algorithm and did not require any reduction in the time step. The calculations show that the solution accuracy increases when the 3rd order upwind scheme in the baseline algorithm is replaced with 4th and 6th order accurate symmetric flux calculations. A point of diminishing returns is reached as increasingly larger stencils are used on highly stretched grids. The numerical viscosity term, when computed with the third order MUSCL scheme, is very dissipative, and does not resolve the tip vortex well. The WENO5 scheme, on the other hand, significantly improves the tip vortex capturing. The STVD6+WENO5 scheme, in particular, gave the best combination of solution accuracy and efficiency on stretched grids. Spatially fourth order accurate metric calculations were found to be beneficial, but should be used in conjunction with a limiter that drops the metric calculation to second order accuracy in the vicinity of grid discontinuities. High order integration of loads was found to have a beneficial, but small effect on the computed loads. Replacing the Baldwin-Lomax turbulence model with a one equation Spalart-Allmaras model resulted in higher than expected profile power contributions. Nevertheless the one-equation model is recommended for its robustness, its ability to model separated flows at high thrust settings, and the natural manner in which turbulence in the rotor wake may be treated.

  11. Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

    A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to preserve monotonicity; numerical experiments for advection as well as the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.
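
    The interface-value limiting described above can be sketched with the basic monotonicity-preserving bound: the high-order value is replaced by the median of itself, u_i, and u_i + minmod(u_{i+1} - u_i, alpha*(u_i - u_{i-1})). The code below is a simplified illustration in that spirit; it omits the accuracy-preserving treatment near extrema that the full scheme provides.

    ```python
    # Simplified sketch of monotonicity-preserving interface limiting: replace
    # a high-order interface value by the median of itself, u_i, and the MP
    # bound u_i + minmod(u_{i+1} - u_i, alpha*(u_i - u_{i-1})). The full
    # scheme's accuracy-preserving treatment near extrema is omitted here.
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0.0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def median(a, b, c):
        return a + minmod(b - a, c - a)

    def mp_interface(u, alpha=4.0):
        """Limited left-biased value at interface i+1/2 on a periodic grid."""
        um1, up1 = np.roll(u, 1), np.roll(u, -1)
        u_ho = (-um1 + 5.0 * u + 2.0 * up1) / 6.0   # third-order interface value
        u_mp = u + minmod(up1 - u, alpha * (u - um1))
        return median(u_ho, u, u_mp)

    u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])    # step: raw value overshoots
    print(mp_interface(u))                          # limited values stay in [0, 1]
    ```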

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray W. S.

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
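
    The mixed-precision principle, cheap low-precision work wrapped in a high-precision correction loop, is easiest to see in its classical linear-algebra form, iterative refinement. The sketch below is that analogue, not the SDC solver the record describes: the solves run in float32, the residuals in float64, and the refined answer still reaches double-precision accuracy.

    ```python
    # The mixed-precision principle in its classical linear-algebra form,
    # iterative refinement (an analogue, not the SDC solver described above):
    # solves run in float32, residuals in float64, and the refined answer
    # still reaches double-precision accuracy.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)     # well conditioned
    x_true = rng.standard_normal(n)
    b = A @ x_true

    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for sweep in range(4):
        r = b - A @ x                                   # residual in float64
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        print(sweep, np.max(np.abs(x - x_true)))        # drops toward ~1e-13
    ```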

  13. Communication: Understanding molecular representations in machine learning: The role of uniqueness and target similarity

    NASA Astrophysics Data System (ADS)

    Huang, Bing; von Lilienfeld, O. Anatole

    2016-10-01

    The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. Inspired by the postulates of quantum mechanics, we introduce a hierarchy of representations which meet uniqueness and target similarity criteria. To systematically control target similarity, we simply rely on interatomic many body expansions, as implemented in universal force-fields, including Bonding, Angular (BA), and higher order terms. Addition of higher order contributions systematically increases similarity to the true potential energy and predictive accuracy of the resulting ML models. We report numerical evidence for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional theory level of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heat capacity, zero-point vibrational energies, dipole-moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After training, BAML predicts energies or electronic properties of out-of-sample molecules with unprecedented accuracy and speed.

  14. Computational electrodynamics in material media with constraint-preservation, multidimensional Riemann solvers and sub-cell resolution - Part II, higher order FVTD schemes

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Garain, Sudip; Taflove, Allen; Montecinos, Gino

    2018-02-01

    The Finite Difference Time Domain (FDTD) scheme has served the computational electrodynamics community very well and part of its success stems from its ability to satisfy the constraints in Maxwell's equations. Even so, in the previous paper of this series we were able to present a second order accurate Godunov scheme for computational electrodynamics (CED) which satisfied all the same constraints and simultaneously retained all the traditional advantages of Godunov schemes. In this paper we extend the Finite Volume Time Domain (FVTD) schemes for CED in material media to better than second order of accuracy. From the FDTD method, we retain a somewhat modified staggering strategy of primal variables which enables a very beneficial constraint-preservation for the electric displacement and magnetic induction vector fields. This is accomplished with constraint-preserving reconstruction methods which are extended in this paper to third and fourth orders of accuracy. The idea of one-dimensional upwinding from Godunov schemes has to be significantly modified to use the multidimensionally upwinded Riemann solvers developed by the first author. In this paper, we show how they can be used within the context of a higher order scheme for CED. We also report on advances in timestepping. We show how Runge-Kutta IMEX schemes can be adapted to CED even in the presence of stiff source terms brought on by large conductivities as well as strong spatial variations in permittivity and permeability. We also formulate very efficient ADER timestepping strategies to endow our method with sub-cell resolving capabilities. As a result, our method can be stiffly-stable and resolve significant sub-cell variation in the material properties within a zone. Moreover, we present ADER schemes that are applicable to all hyperbolic PDEs with stiff source terms and at all orders of accuracy. Our new ADER formulation offers a treatment of stiff source terms that is much more efficient than previous ADER schemes. The computer algebra system scripts for generating ADER time update schemes for any general PDE with stiff source terms are also given in the electronic supplements to this paper. Second, third and fourth order accurate schemes for numerically solving Maxwell's equations in material media are presented in this paper. Several stringent tests are also presented to show that the method works and meets its design goals even when material permittivity and permeability vary by an order of magnitude over just a few zones. Furthermore, since the method is unconditionally stable and sub-cell-resolving in the presence of stiff source terms (i.e. for problems involving giant variations in conductivity over just a few zones), it can accurately handle such problems without any reduction in timestep. We also show that increasing the order of accuracy offers distinct advantages for resolving sub-cell variations in material properties. Most importantly, we show that when the accuracy requirements are stringent the higher order schemes offer the shortest time to solution. This makes a compelling case for the use of higher order, sub-cell resolving schemes in CED.

  15. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing

    2016-06-28

    Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) that is based on the Kohn–Sham density functional theory. Going against the intuition that a higher order of extrapolation possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy firstly increases and then decreases with respect to the order, and an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or more strict SCF convergence criteria. Through example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices on the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, thus the implementation of either of them does not lead to an essential difference in the extrapolation accuracy.
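
    The existence of an optimal extrapolation order has a simple scalar analogue: when the history being extrapolated carries a little noise (here standing in for finite SCF convergence), higher-order polynomial extrapolation first reduces the truncation error and then amplifies the noise. The toy sketch below, with invented signal and noise levels, reproduces that non-monotone behavior.

    ```python
    # Toy scalar analogue of the abstract's claim (not their wavefunction
    # code): extrapolating a smooth but slightly noisy history with a
    # degree-(k-1) polynomial first gains accuracy with order, then loses it
    # as the fit amplifies the noise, so an optimal order exists.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, noise = 0.1, 1e-6                   # invented step and "SCF" noise level
    signal = lambda t: np.cos(t) * np.exp(0.1 * t)

    t_next = 3.0
    for k in range(1, 9):                   # k previous samples, order k - 1
        s = -dt * np.arange(k, 0, -1)       # past times relative to t_next
        y = signal(t_next + s) + rng.normal(0.0, noise, k)
        coeffs = np.polyfit(s, y, k - 1)    # exact interpolation of the history
        err = abs(np.polyval(coeffs, 0.0) - signal(t_next))
        print(k - 1, err)                   # error first falls, then rises
    ```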

  16. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
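
    The Chebyshev spectral collocation ingredient named above rests on a dense differentiation matrix over the points x_j = cos(j*pi/N). The sketch below is the standard Trefethen-style construction of that matrix, shown on its own; the paper's quasilinearisation and bivariate Lagrange interpolation layers are not reproduced.

    ```python
    # Standard Trefethen-style Chebyshev differentiation matrix, the collocation
    # building block named in the abstract (the paper's quasilinearisation and
    # bivariate interpolation layers are not reproduced here).
    import numpy as np

    def cheb(N):
        """Points x_j = cos(j*pi/N), N >= 1, and matrix D with D @ u ~ u'(x)."""
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))         # diagonal from the row-sum condition
        return x, D

    x, D = cheb(16)
    print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))   # ~1e-10: spectral accuracy
    ```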

  17. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.

  18. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Jerome, Joseph; Osher, Stanley

    1989-01-01

    A micron n+-n-n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  19. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    NASA Astrophysics Data System (ADS)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher measurement accuracy for routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and the accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can effectively enhance the measuring accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The results of experiments show that the novel method can significantly improve the measurement accuracy of resistance without increasing the system cost or circuit complexity, which makes it valuable for application in electronic instruments.
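
    The oversampling half of the recipe can be demonstrated generically: with roughly an LSB of noise acting as dither, averaging 4^k quantized samples buys about k extra bits of resolution. The sketch below simulates that effect with an assumed 8-bit quantizer and invented values; the sawtooth excitation itself is not modeled.

    ```python
    # Generic sketch of the oversampling effect (the sawtooth excitation itself
    # is not modeled): with about one LSB of noise acting as dither, averaging
    # 4**k quantized samples gains roughly k extra bits of resolution. The
    # 8-bit converter and all values here are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    true_value = 0.61803                    # normalized quantity in [0, 1)
    lsb = 1.0 / 256.0                       # assumed 8-bit quantizer step

    def measure(n):
        noisy = true_value + rng.normal(0.0, lsb, n)    # noise dithers the ADC
        return np.mean(np.round(noisy / lsb) * lsb)     # quantize, then average

    for k in range(5):
        # error shrinks roughly 2x per row, on average
        print(4 ** k, abs(measure(4 ** k) - true_value))
    ```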

  20. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^-3 seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
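
    The reset treatment is easy to state in code. Below is a minimal sketch of the second-order variant for a leaky integrate-and-fire neuron (parameter values hypothetical): an RK2 step, a linear interpolant for the spike time when the threshold is crossed, and a restart from reset at that interpolated time.

      import numpy as np

      def lif_step(v, t, dt, tau=0.02, i_ext=1.2, v_th=1.0, v_reset=0.0):
          # dv/dt = (-v + i_ext) / tau; returns (new v, spike time or None)
          f = lambda v: (-v + i_ext) / tau
          k1 = f(v)
          k2 = f(v + dt * k1)
          v_new = v + 0.5 * dt * (k1 + k2)                 # RK2 (Heun) update
          if v_new >= v_th:                                # threshold crossed
              t_spike = t + dt * (v_th - v) / (v_new - v)  # linear interpolant
              remaining = (t + dt) - t_spike
              v_new = v_reset + remaining * f(v_reset)     # restart from reset
              return v_new, t_spike
          return v_new, None

      v, t = 0.0, 0.0
      for _ in range(200):
          v, spike = lif_step(v, t, dt=1e-3)
          if spike is not None:
              print("spike at t =", spike)
          t += 1e-3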

  1. Exact first order scattering correction for vector radiative transfer in coupled atmosphere and ocean systems

    NASA Astrophysics Data System (ADS)

    Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing

    2012-06-01

    We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented the analytical first order scattering treatment using the exact scattering matrix of the medium in the SOS code. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by the light reflected by and transmitted through the rough air-sea interface.

  2. A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations

    NASA Technical Reports Server (NTRS)

    Dyson, Roger W.; Goodrich, John W.

    2000-01-01

    Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to fourth-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse; this is ideal for resolving the varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.

  3. Higher-order adaptive finite-element methods for Kohn–Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motamarri, P.; Nowak, M.R.; Leiter, K.

    2013-11-15

    We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings (of the order of 1000-fold) relative to linear finite-elements can be realized, for both all-electron and local pseudopotential calculations, by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth-order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the finite-element basis is competitive with the plane-wave discretization for non-periodic local pseudopotential calculations, and compares to the Gaussian basis for all-electron calculations to within an order of magnitude. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, and good scalability of the present implementation up to 192 processors.
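
    The Chebyshev acceleration step mentioned above has a compact generic form. The sketch below is standard Chebyshev-filtered subspace iteration, not the paper's spectral finite-element implementation, and the toy matrix is hypothetical; the filter damps eigencomponents in an unwanted interval [a, b] so that repeated application converges to the occupied eigenspace:

      import numpy as np

      def chebyshev_filter(H, X, m, a, b):
          # Apply a degree-m Chebyshev polynomial in H to the block X,
          # damping the spectrum inside [a, b].
          e = (b - a) / 2.0                 # half-width of damped interval
          c = (b + a) / 2.0                 # its centre
          Y = (H @ X - c * X) / e
          for _ in range(2, m + 1):         # three-term Chebyshev recurrence
              Y_new = 2.0 * (H @ Y - c * Y) / e - X
              X, Y = Y, Y_new
          return Y

      rng = np.random.default_rng(1)
      A = rng.standard_normal((200, 200)); H = (A + A.T) / 2.0
      w = np.linalg.eigvalsh(H)
      X = rng.standard_normal((200, 10))
      Q, _ = np.linalg.qr(chebyshev_filter(H, X, m=10, a=w[10], b=w[-1]))
      # Q now (approximately) spans the 10 lowest eigenvectors of H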

  4. Precision studies of observables in $pp \rightarrow W \rightarrow l\nu$ and $pp \rightarrow Z/\gamma^* \rightarrow ll$ processes at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alioli, S.; Arbuzov, A. B.; Bardin, D. Yu.

    This report was prepared in the context of the LPCC "Electroweak Precision Measurements at the LHC WG" and summarizes the activity of a subgroup dedicated to the systematic comparison of public Monte Carlo codes, which describe the Drell-Yan processes at hadron colliders, in particular at the CERN Large Hadron Collider (LHC). This work represents an important step towards the definition of an accurate simulation framework necessary for very high-precision measurements of electroweak (EW) observables such as the $W$ boson mass and the weak mixing angle. All the codes considered in this report share at least next-to-leading-order (NLO) accuracy in the prediction of the total cross sections in an expansion either in the strong or in the EW coupling constant. The NLO fixed-order predictions have been scrutinized at the technical level, using exactly the same inputs, setup and perturbative accuracy, in order to quantify the level of agreement of different implementations of the same calculation. A dedicated comparison, again at the technical level, of three codes that reach next-to-next-to-leading-order (NNLO) accuracy in quantum chromodynamics (QCD) for the total cross section has also been performed. These fixed-order results are a well-defined reference that allows a classification of the impact of higher-order sets of radiative corrections. Several examples of higher-order effects due to the strong or the EW interaction are discussed in this common framework. Also the combination of QCD and EW corrections is discussed, together with the ambiguities that affect the final result, due to the choice of a specific combination recipe.

  5. Precision studies of observables in $pp \rightarrow W \rightarrow l\nu$ and $pp \rightarrow Z/\gamma^* \rightarrow ll$ processes at the LHC

    DOE PAGES

    Alioli, S.; Arbuzov, A. B.; Bardin, D. Yu.; ...

    2017-05-03

    This report was prepared in the context of the LPCC "Electroweak Precision Measurements at the LHC WG" and summarizes the activity of a subgroup dedicated to the systematic comparison of public Monte Carlo codes, which describe the Drell-Yan processes at hadron colliders, in particular at the CERN Large Hadron Collider (LHC). This work represents an important step towards the definition of an accurate simulation framework necessary for very high-precision measurements of electroweak (EW) observables such as the $W$ boson mass and the weak mixing angle. All the codes considered in this report share at least next-to-leading-order (NLO) accuracy in the prediction of the total cross sections in an expansion either in the strong or in the EW coupling constant. The NLO fixed-order predictions have been scrutinized at the technical level, using exactly the same inputs, setup and perturbative accuracy, in order to quantify the level of agreement of different implementations of the same calculation. A dedicated comparison, again at the technical level, of three codes that reach next-to-next-to-leading-order (NNLO) accuracy in quantum chromodynamics (QCD) for the total cross section has also been performed. These fixed-order results are a well-defined reference that allows a classification of the impact of higher-order sets of radiative corrections. Several examples of higher-order effects due to the strong or the EW interaction are discussed in this common framework. Also the combination of QCD and EW corrections is discussed, together with the ambiguities that affect the final result, due to the choice of a specific combination recipe.

  6. Performance of Low Dissipative High Order Shock-Capturing Schemes for Shock-Turbulence Interactions

    NASA Technical Reports Server (NTRS)

    Sandham, N. D.; Yee, H. C.

    1998-01-01

    Accurate and efficient direct numerical simulation of turbulence in the presence of shock waves represents a significant challenge for numerical methods. The objective of this paper is to evaluate the performance of high order compact and non-compact central spatial differencing employing total variation diminishing (TVD) shock-capturing dissipations as characteristic based filters for two model problems combining shock wave and shear layer phenomena. A vortex pairing model evaluates the ability of the schemes to cope with shear layer instability and eddy shock waves, while a shock wave impingement on a spatially-evolving mixing layer model studies the accuracy of computation of vortices passing through a sequence of shock and expansion waves. A drastic increase in accuracy is observed if a suitable artificial compression formulation is applied to the TVD dissipations. With this modification to the filter step the fourth-order non-compact scheme shows improved results in comparison to second-order methods, while retaining the good shock resolution of the basic TVD scheme. For this characteristic based filter approach, however, the benefits of compact schemes or schemes with higher than fourth order are not sufficient to justify the higher complexity near the boundary and/or the additional computational cost.

  7. Flux Renormalization in Constant Power Burnup Calculations

    DOE PAGES

    Isotalo, Aarno E.; Aalto Univ., Otaniemi; Davidson, Gregory G.; ...

    2016-06-15

    To more accurately represent the desired power in a constant power burnup calculation, the depletion steps of the calculation can be divided into substeps and the neutron flux renormalized on each substep to match the desired power. This paper explores how such renormalization should be performed, how large a difference it makes, and whether using renormalization affects results regarding the relative performance of different neutronics–depletion coupling schemes. When used with older coupling schemes, renormalization can provide a considerable improvement in overall accuracy. With previously published higher order coupling schemes, which are more accurate to begin with, renormalization has a much smaller effect. Finally, while renormalization narrows the differences in the accuracies of different coupling schemes, their order of accuracy is not affected.
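
    A one-group toy model makes the renormalization idea concrete (this is an illustrative sketch with hypothetical constants, not the paper's depletion code): on every substep the flux is rescaled so that the computed power matches the target before the nuclide field is advanced.

      import math

      def deplete_step(n, sigma_f, kappa, power_target, dt, substeps=4):
          # n: fissile nuclide density; power = kappa * sigma_f * phi * n
          h = dt / substeps
          for _ in range(substeps):
              phi = power_target / (kappa * sigma_f * n)  # renormalized flux
              n *= math.exp(-sigma_f * phi * h)           # deplete, phi frozen
          return n

      print(deplete_step(n=1.0e24, sigma_f=1.0e-24, kappa=3.2e-11,
                         power_target=100.0, dt=8.64e4))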

  8. Final Technical Report: Increasing Prediction Accuracy.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  9. Translation among Symbolic Representations in Problem-Solving. Report on Studies Project: Alternative Strategies for Measuring Higher Order Skills: The Role of Symbol Systems.

    ERIC Educational Resources Information Center

    Shavelson, Richard J.; And Others

    Some aspects of the relationships among the symbolic representations (Rs) of problems given to students to solve, the Rs that students use to solve problems, and the accuracy of the solutions were studied. Focus was on determining: the mental Rs that students used while solving problems, the kinds of translation that takes place, the accuracy of…

  10. Using bivariate signal analysis to characterize the epileptic focus: the benefit of surrogates.

    PubMed

    Andrzejak, R G; Chicharro, D; Lehnertz, K; Mormann, F

    2011-04-01

    The disease epilepsy is related to hypersynchronous activity of networks of neurons. While acute epileptic seizures are the most extreme manifestation of this hypersynchronous activity, an elevated level of interdependence of neuronal dynamics is thought to persist also during the seizure-free interval. In multichannel recordings from brain areas involved in the epileptic process, this interdependence can be reflected in an increased linear cross correlation but also in signal properties of higher order. Bivariate time series analysis comprises a variety of approaches, each with different degrees of sensitivity and specificity for interdependencies reflected in lower- or higher-order properties of pairs of simultaneously recorded signals. Here we investigate which approach is best suited to detect putatively elevated interdependence levels in signals recorded from brain areas involved in the epileptic process. For this purpose, we use the linear cross correlation that is sensitive to lower-order signatures of interdependence, a nonlinear interdependence measure that integrates both lower- and higher-order properties, and a surrogate-corrected nonlinear interdependence measure that aims to specifically characterize higher-order properties. We analyze intracranial electroencephalographic recordings of the seizure-free interval from 29 patients with an epileptic focus located in the medial temporal lobe. Our results show that all three approaches detect higher levels of interdependence for signals recorded from the brain hemisphere containing the epileptic focus as compared to signals recorded from the opposite hemisphere. For the linear cross correlation, however, these differences are not significant. For the nonlinear interdependence measure, results are significant but only of moderate accuracy with regard to the discriminative power for the focal and nonfocal hemispheres. The highest significance and accuracy is obtained for the surrogate-corrected nonlinear interdependence measure.

  11. Using bivariate signal analysis to characterize the epileptic focus: The benefit of surrogates

    NASA Astrophysics Data System (ADS)

    Andrzejak, R. G.; Chicharro, D.; Lehnertz, K.; Mormann, F.

    2011-04-01

    The disease epilepsy is related to hypersynchronous activity of networks of neurons. While acute epileptic seizures are the most extreme manifestation of this hypersynchronous activity, an elevated level of interdependence of neuronal dynamics is thought to persist also during the seizure-free interval. In multichannel recordings from brain areas involved in the epileptic process, this interdependence can be reflected in an increased linear cross correlation but also in signal properties of higher order. Bivariate time series analysis comprises a variety of approaches, each with different degrees of sensitivity and specificity for interdependencies reflected in lower- or higher-order properties of pairs of simultaneously recorded signals. Here we investigate which approach is best suited to detect putatively elevated interdependence levels in signals recorded from brain areas involved in the epileptic process. For this purpose, we use the linear cross correlation that is sensitive to lower-order signatures of interdependence, a nonlinear interdependence measure that integrates both lower- and higher-order properties, and a surrogate-corrected nonlinear interdependence measure that aims to specifically characterize higher-order properties. We analyze intracranial electroencephalographic recordings of the seizure-free interval from 29 patients with an epileptic focus located in the medial temporal lobe. Our results show that all three approaches detect higher levels of interdependence for signals recorded from the brain hemisphere containing the epileptic focus as compared to signals recorded from the opposite hemisphere. For the linear cross correlation, however, these differences are not significant. For the nonlinear interdependence measure, results are significant but only of moderate accuracy with regard to the discriminative power for the focal and nonfocal hemispheres. The highest significance and accuracy is obtained for the surrogate-corrected nonlinear interdependence measure.

  12. Solution algorithms for the two-dimensional Euler equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Whitaker, D. L.; Slack, David C.; Walters, Robert W.

    1990-01-01

    The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.

  13. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    PubMed

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

    This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transform (STFRFT) for the identification of complicated moving targets. The algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.

  14. Higher-order accurate space-time schemes for computational astrophysics—Part I: finite volume methods

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.

    2017-12-01

    As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
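
    For concreteness, here is the classic fifth-order WENO reconstruction kernel (Jiang-Shu weights) that the schemes above build on, as a hedged Python sketch rather than any particular production code:

      import numpy as np

      def weno5_reconstruct(v):
          # Left-biased interface value v_{i+1/2} from the five cell
          # averages v = (v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}).
          eps = 1e-6
          p0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6.0      # candidate stencils
          p1 = (-v[1] + 5*v[2] + 2*v[3]) / 6.0
          p2 = (2*v[2] + 5*v[3] - v[4]) / 6.0
          b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 0.25*(v[0]-4*v[1]+3*v[2])**2
          b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 0.25*(v[1]-v[3])**2
          b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 0.25*(3*v[2]-4*v[3]+v[4])**2
          a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
          w = a / a.sum()                             # nonlinear weights
          return w[0]*p0 + w[1]*p1 + w[2]*p2

      print(weno5_reconstruct(np.array([1.0, 1.0, 1.0, 0.0, 0.0])))
      # ~1.0: the discontinuous stencils get near-zero weight, no overshoot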

  15. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms. [for junction diodes simulation

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Osher, Stanley; Jerome, Joseph

    1991-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially nonoscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  16. Accurate and diverse recommendations via eliminating redundant correlations

    NASA Astrophysics Data System (ADS)

    Zhou, Tao; Su, Ri-Qi; Liu, Run-Ran; Jiang, Luo-Luo; Wang, Bing-Hong; Zhang, Yi-Cheng

    2009-12-01

    In this paper, based on a weighted projection of a bipartite user-object network, we introduce a personalized recommendation algorithm, called network-based inference (NBI), which has higher accuracy than the classical algorithm, namely collaborative filtering. In NBI, the correlation resulting from a specific attribute may be repeatedly counted in the cumulative recommendations from different objects. By considering the higher order correlations, we design an improved algorithm that can, to some extent, eliminate the redundant correlations. We test our algorithm on two benchmark data sets, MovieLens and Netflix. Compared with NBI, the algorithmic accuracy, measured by the ranking score, can be further improved by 23 per cent for MovieLens and 22 per cent for Netflix. The present algorithm can even outperform the Latent Dirichlet Allocation algorithm, which requires much longer computational time. Furthermore, most previous studies considered the algorithmic accuracy only; in this paper, we argue that the diversity and popularity, as two significant criteria of algorithmic performance, should also be taken into account. With more or less the same accuracy, an algorithm giving higher diversity and lower popularity is more favorable. Numerical results show that the present algorithm can outperform the standard one simultaneously in all five adopted metrics: lower ranking score and higher precision for accuracy, larger Hamming distance and lower intra-similarity for diversity, as well as smaller average degree for popularity.
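
    The baseline NBI step is compact enough to sketch: a minimal Python rendering of the two-step resource spreading on the bipartite matrix, without the paper's redundant-correlation elimination; the small matrix is hypothetical and no zero-degree nodes are assumed.

      import numpy as np

      def nbi_scores(A):
          # A: users x objects adjacency; spread resource object -> user -> object
          ku = A.sum(axis=1, keepdims=True)         # user degrees
          ko = A.sum(axis=0, keepdims=True)         # object degrees
          W = (A / ku).T @ (A / ko)                 # object-object transfer matrix
          scores = A @ W.T                          # per-user recommendation scores
          return np.where(A > 0, -np.inf, scores)   # mask already-collected objects

      A = np.array([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [1, 0, 1, 1]], dtype=float)
      print(nbi_scores(A))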

  17. Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.

    2012-01-01

    Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind biased differencing. For higher order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32 cubed to 192 cubed, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.

  18. Super-rogue waves in simulations based on weakly nonlinear and fully nonlinear hydrodynamic equations.

    PubMed

    Slunyaev, A; Pelinovsky, E; Sergeeva, A; Chabchoub, A; Hoffmann, N; Onorato, M; Akhmediev, N

    2013-07-01

    The rogue wave solutions (rational multibreathers) of the nonlinear Schrödinger equation (NLS) are tested in numerical simulations of weakly nonlinear and fully nonlinear hydrodynamic equations. Only the lowest order solutions from 1 to 5 are considered. A higher accuracy of wave propagation in space is reached using the modified NLS equation, also known as the Dysthe equation. This numerical modeling allowed us to directly compare simulations with recent results of laboratory measurements in Chabchoub et al. [Phys. Rev. E 86, 056601 (2012)]. In order to achieve even higher physical accuracy, we employed fully nonlinear simulations of potential Euler equations. These simulations provided us with basic characteristics of long time evolution of rational solutions of the NLS equation in the case of near-breaking conditions. The analytic NLS solutions are found to describe the actual wave dynamics of steep waves reasonably well.
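
    A split-step Fourier integrator for the focusing NLS, i u_t + u_xx/2 + |u|^2 u = 0, reproduces the modulational growth underlying these rational solutions; the sketch below (hypothetical grid and seed, not the authors' solver) perturbs a plane wave and watches the amplitude grow above the unit background:

      import numpy as np

      def ssfm(u0, L, dt, nsteps):
          n = u0.size
          k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral wavenumbers
          u = u0.astype(complex)
          for _ in range(nsteps):
              u *= np.exp(1j * dt * np.abs(u) ** 2)    # nonlinear half of split
              u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))
          return u

      x = np.linspace(-20, 20, 512, endpoint=False)
      K = 2 * np.pi * 3 / 40.0                         # unstable sideband, K < 2
      u0 = 1.0 + 0.01 * np.cos(K * x)                  # perturbed plane wave
      print(np.max(np.abs(ssfm(u0, L=40.0, dt=1e-3, nsteps=2000))))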

  19. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

    Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests performance similar to a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.

  20. Static bending deflection and free vibration analysis of moderate thick symmetric laminated plates using multidimensional wave digital filters

    NASA Astrophysics Data System (ADS)

    Tseng, Chien-Hsun

    2018-06-01

    This paper aims to develop a multidimensional wave digital filtering network for predicting the static and dynamic behavior of composite laminates based on the FSDT. The resultant network is thus an integrated platform that can perform not only free vibration but also bending deflection analysis of moderately thick symmetric laminated plates with low plate side-to-thickness ratios (≤ 20). Safeguarded by the Courant-Friedrichs-Lewy stability condition with the least restriction in terms of optimization technique, the present method offers high numerical accuracy, stability and efficiency over a wide range of modulus ratios for FSDT laminated plates. Instead of using a constant shear correction factor (SCF), which limits the numerical accuracy of the bending deflection, an optimum SCF is sought by looking for a minimum ratio of change in the transverse shear energy. In this way, the method predicts comparably accurate results for certain bending-deflection cases. Extensive simulation results for the prediction of maximum bending deflection demonstrate that the present method outperforms those based on the higher-order shear deformation and layerwise plate theories. To the best of our knowledge, this is the first work showing that an optimal selection of the SCF can significantly increase the accuracy of FSDT-based laminates, especially compared to the higher order theory, which requires no such correction. The overall solution accuracy is benchmarked against the 3D elasticity equilibrium solution.

  1. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-06-01

    In order to improve the tracking accuracy, model estimation accuracy and response speed of multiple model maneuvering target tracking, the interacting multiple model fifth-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple model (IMM) algorithm processes all the models through a Markov chain to simultaneously enhance the model tracking accuracy of target tracking. A fifth-degree cubature Kalman filter (5CKF) then evaluates the surface integral with a higher-degree deterministic odd-order spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and that it performs better than the interacting multiple model cubature Kalman filter (IMMCKF), the interacting multiple model unscented Kalman filter (IMMUKF), the 5CKF and the optimal mode transition matrix IMM (OMTM-IMM).

  2. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
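
    Richardson extrapolation, used above to isolate truncation error, takes only a few lines in any language; a minimal Python sketch with a hypothetical second-order example:

      import math

      def richardson(u_h, u_h2, p):
          # Combine step-h and step-h/2 results of a p-th order method to
          # cancel the leading error term (raising the observed order).
          return (2**p * u_h2 - u_h) / (2**p - 1)

      d = lambda h: (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)  # O(h^2)
      u1, u2 = d(0.1), d(0.05)
      print(abs(u1 - math.cos(1)))                       # ~9e-4
      print(abs(richardson(u1, u2, p=2) - math.cos(1)))  # ~4e-7, now O(h^4)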

  3. Integration of the Rotation of an Earth-like Body as a Perturbed Spherical Rotor

    NASA Astrophysics Data System (ADS)

    Ferrer, Sebastián; Lara, Martin

    2010-05-01

    For rigid bodies close to a sphere, we propose an analytical solution that is free from elliptic integrals and functions, and can be fundamental for application to perturbed problems. After reordering the Hamiltonian as a perturbed spherical rotor, the Lie-series solution is generated up to an arbitrary order. Using the inertia parameters of different solar system bodies, the comparison of the approximate series solution with the exact analytical one shows that the precision reached with relatively low orders is at the same level of the observational accuracy for the Earth and Mars. Thus, for instance, the periodic errors of the mathematical solution are confined to the microarcsecond level with a simple second-order truncation for the Earth. On the contrary, higher orders are required for the mathematical solution to reach a precision at the expected level of accuracy of proposed new theories for the rotational dynamics of the Moon.

  4. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that restricts the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem.

  5. Stokes waves revisited: Exact solutions in the asymptotic limit

    NASA Astrophysics Data System (ADS)

    Davies, Megan; Chattopadhyay, Amit K.

    2016-03-01

    The Stokes perturbative solution of the nonlinear (boundary value dependent) surface gravity wave problem is known to provide results of reasonable accuracy to engineers in estimating the phase speed and amplitudes of such nonlinear waves. The weakness in this structure, though, is the presence of an aperiodic "secular variation" in the solution that does not agree with the known periodic propagation of surface waves. This has historically necessitated increasingly higher-ordered (perturbative) approximations in the representation of the velocity profile. The present article ameliorates this long-standing theoretical insufficiency by invoking a compact exact n-ordered solution in the asymptotic infinite depth limit, primarily based on a representation structured around the third-ordered perturbative solution, that leads to a seamless extension to higher-order (e.g., fifth-order) forms existing in the literature. The result from this study is expected to improve phenomenological engineering estimates, now that any desired higher-ordered expansion may be compacted within the same representation, but without any aperiodicity in the spectral pattern of the wave guides.

  6. Image-based gradient non-linearity characterization to determine higher-order spherical harmonic coefficients for improved spatial position accuracy in magnetic resonance imaging.

    PubMed

    Weavers, Paul T; Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Tryggestad, Erik J; Gunter, Jeffrey L; McGee, Kiaran P; Litwiller, Daniel V; Hwang, Ken-Pin; Bernstein, Matt A

    2017-05-01

    Spatial position accuracy in magnetic resonance imaging (MRI) is an important concern for a variety of applications, including radiation therapy planning, surgical planning, and longitudinal studies of morphologic changes to study neurodegenerative diseases. Spatial accuracy is strongly influenced by gradient linearity. This work presents a method for characterizing the gradient non-linearity fields on a per-system basis, and using this information to provide improved and higher-order (9th vs. 5th) spherical harmonic coefficients for better spatial accuracy in MRI. A large fiducial phantom containing 5229 water-filled spheres in a grid pattern is scanned with the MR system, and the positions of all the fiducials are measured and compared to the corresponding ground truth fiducial positions as reported from a computed tomography (CT) scan of the object. Systematic errors from off-resonance (i.e., B0) effects are minimized with the use of increased receiver bandwidth (±125 kHz) and two acquisitions with reversed readout gradient polarity. The spherical harmonic coefficients are estimated using an iterative process, and can subsequently be used to correct for gradient non-linearity. Test-retest stability was assessed with five repeated measurements on a single scanner, and cross-scanner variation on four different, identically-configured 3T wide-bore systems. A decrease in the root-mean-square error (RMSE) over a 50 cm diameter spherical volume from 1.80 mm to 0.77 mm is reported here in the case of replacing the vendor's standard 5th order spherical harmonic coefficients with custom fitted 9th order coefficients, and from 1.5 mm to 1 mm by extending custom fitted 5th order correction to the 9th order. Minimum RMSE varied between scanners, but was stable with repeated measurements on the same scanner. The results suggest that the proposed methods may be used on a per-system basis to more accurately calibrate MR gradient non-linearity coefficients when compared to vendor standard corrections.

  7. Higher-order hybrid implicit/explicit FDTD time-stepping

    NASA Astrophysics Data System (ADS)

    Tierens, W.

    2016-12-01

    Both partially implicit FDTD methods, and symplectic FDTD methods of high temporal accuracy (3rd or 4th order), are well documented in the literature. In this paper we combine them: we construct a conservative FDTD method which is fourth order accurate in time and is partially implicit. We show that the stability condition for this method depends exclusively on the explicit part, which makes it suitable for use in e.g. modelling wave propagation in plasmas.

  8. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method can also rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
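
    The generic form of such automatic step control is step doubling: advance once with step h and twice with h/2, compare, and grow or shrink h to hold the local error near tolerance. A minimal Python sketch with an RK2 core and a toy gain equation standing in for the amplifier model (the paper's APA-specific update is not reproduced):

      import math

      def rk2(f, y, z, h):
          k1 = f(z, y)
          k2 = f(z + h, y + h * k1)
          return y + 0.5 * h * (k1 + k2)

      def integrate(f, y, z0, z1, h=1e-2, tol=1e-8):
          z = z0
          while z < z1:
              h = min(h, z1 - z)
              y_big = rk2(f, y, z, h)                           # one full step
              y_half = rk2(f, rk2(f, y, z, h/2), z + h/2, h/2)  # two half steps
              if abs(y_half - y_big) < tol:                     # accept, enlarge
                  z, y = z + h, y_half
                  h *= 1.5
              else:                                             # reject, shrink
                  h *= 0.5
          return y

      # dP/dz = g * P with hypothetical gain g = 2.3, versus the exact answer
      print(integrate(lambda z, p: 2.3 * p, 1e-3, 0.0, 1.0), 1e-3 * math.exp(2.3))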

  9. Fourier analysis algorithm for the posterior corneal keratometric data: clinical usefulness in keratoconus.

    PubMed

    Sideroudi, Haris; Labiris, Georgios; Georgantzoglou, Kimon; Ntonti, Panagiota; Siganos, Charalambos; Kozobolis, Vassilios

    2017-07-01

    To develop an algorithm for the Fourier analysis of posterior corneal videokeratographic data and to evaluate the derived parameters in the diagnosis of subclinical keratoconus (SKC) and keratoconus (KC). This was a cross-sectional, observational study that took place in the Eye Institute of Thrace, Democritus University, Greece. Eighty eyes formed the KC group, 55 eyes formed the SKC group, and 50 normal eyes populated the control group. A self-developed algorithm in Visual Basic for Microsoft Excel performed a Fourier series harmonic analysis of the posterior corneal sagittal curvature data. The algorithm decomposed the obtained curvatures into a spherical component, regular astigmatism, asymmetry and higher order irregularities for the averaged central 4 mm zone and for each individual ring separately (1, 2, 3 and 4 mm). The obtained values were evaluated for their diagnostic capacity using receiver operating characteristic (ROC) curves. Logistic regression was attempted for the identification of a combined diagnostic model. Significant differences were detected in regular astigmatism, asymmetry and higher order irregularities among groups. For the SKC group, the parameters with high diagnostic ability (AUC > 90%) were the higher order irregularities, the asymmetry and the regular astigmatism, mainly in the corneal periphery. Higher predictive accuracy was identified using diagnostic models that combined the asymmetry, regular astigmatism and higher order irregularities in the averaged 3 and 4 mm areas (AUC: 98.4%, sensitivity: 91.7%, specificity: 100%). Fourier decomposition of posterior keratometric data provides parameters with high accuracy in differentiating SKC from normal corneas and should be included in the prompt diagnosis of KC.
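
    The per-ring decomposition has a direct expression via the FFT; a sketch under assumed uniform meridian sampling (not the authors' Excel implementation): harmonic 0 is the spherical component, harmonic 1 the asymmetry, harmonic 2 the regular astigmatism, and the residual the higher-order irregularity.

      import numpy as np

      def ring_harmonics(curvature):
          # curvature: keratometric values sampled uniformly around one ring
          n = curvature.size
          c = np.fft.rfft(curvature) / n
          spherical = c[0].real                     # mean (0th harmonic)
          asymmetry = 2 * np.abs(c[1])              # 1st harmonic amplitude
          astigmatism = 2 * np.abs(c[2])            # 2nd harmonic amplitude
          higher_order = 2 * np.sum(np.abs(c[3:]))  # everything above 2nd
          return spherical, asymmetry, astigmatism, higher_order

      theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      ring = 43.0 + 0.3 * np.cos(theta) + 0.8 * np.cos(2 * theta)  # diopters
      print(ring_harmonics(ring))   # ~ (43.0, 0.3, 0.8, ~0)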

  10. Development of a three-dimensional high-order strand-grids approach

    NASA Astrophysics Data System (ADS)

    Tong, Oisin

    Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature based strand shortening strategy in order to qualitatively improve strand grid mesh quality.

  11. Sixth- and eighth-order Hermite integrator for N-body simulations

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro

    2008-10-01

    We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme for most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
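
    For reference, the fourth-order baseline these schemes extend fits in a few lines; a hedged Python sketch with softened gravity (the softening, step size and two-body setup are hypothetical):

      import numpy as np

      def acc_jerk(x, v, m, eps2=1e-4):
          # acceleration and its time derivative (jerk) for all particles
          a, j = np.zeros_like(x), np.zeros_like(x)
          for i in range(len(m)):
              for k in range(len(m)):
                  if i == k:
                      continue
                  dx, dv = x[k] - x[i], v[k] - v[i]
                  r2 = dx @ dx + eps2
                  r3 = r2 * np.sqrt(r2)
                  a[i] += m[k] * dx / r3
                  j[i] += m[k] * (dv / r3 - 3.0 * (dx @ dv) * dx / (r3 * r2))
          return a, j

      def hermite_step(x, v, m, dt):
          a0, j0 = acc_jerk(x, v, m)
          xp = x + dt*v + dt**2/2*a0 + dt**3/6*j0           # predictor
          vp = v + dt*a0 + dt**2/2*j0
          a1, j1 = acc_jerk(xp, vp, m)
          v1 = v + dt/2*(a0 + a1) + dt**2/12*(j0 - j1)      # corrector
          x1 = x + dt/2*(v + v1) + dt**2/12*(a0 - a1)
          return x1, v1

      m = np.array([1.0, 1e-3])                             # two-body test
      x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
      v = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
      for _ in range(100):
          x, v = hermite_step(x, v, m, dt=0.01)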

  12. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
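
    The linear-per-frequency cost follows from the block-diagonal modal form: each mode contributes an independent second-order term, so the whole frequency sweep is a vectorized sum rather than a dense resolvent solve. A minimal Python sketch with hypothetical modal data (not the paper's formulation):

      import numpy as np

      def modal_frf(omega_n, zeta, b, c, freqs):
          # H(jw) = sum_k c_k b_k / (w_k^2 - w^2 + 2j zeta_k w_k w)
          w = freqs[:, None]                              # (n_freq, 1)
          den = omega_n**2 - w**2 + 2j*zeta*omega_n*w     # (n_freq, n_modes)
          return ((c * b) / den).sum(axis=1)              # O(n) per frequency

      omega_n = np.array([10.0, 63.0, 140.0])             # modal frequencies, rad/s
      zeta = np.array([0.02, 0.01, 0.015])                # modal damping ratios
      b = np.array([1.0, 0.4, 0.1])                       # modal input gains
      c = np.array([0.8, 0.5, 0.2])                       # modal output gains
      H = modal_frf(omega_n, zeta, b, c, np.linspace(1.0, 200.0, 1000))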

  13. Second order symmetry-preserving conservative Lagrangian scheme for compressible Euler equations in two-dimensional cylindrical coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Juan, E-mail: cheng_juan@iapcm.ac.cn; Shu, Chi-Wang, E-mail: shu@dam.brown.edu

    In applications such as astrophysics and inertial confinement fusion, there are many three-dimensional cylindrical-symmetric multi-material problems which are usually simulated by Lagrangian schemes in the two-dimensional cylindrical coordinates. For this type of simulation, a critical issue for the schemes is to keep spherical symmetry in the cylindrical coordinate system if the original physical problem has this symmetry. In the past decades, several Lagrangian schemes with such symmetry property have been developed, but all of them are only first order accurate. In this paper, we develop a second order cell-centered Lagrangian scheme for solving compressible Euler equations in cylindrical coordinates, based on the control volume discretizations, which is designed to have uniformly second order accuracy and capability to preserve one-dimensional spherical symmetry in a two-dimensional cylindrical geometry when computed on an equal-angle-zoned initial grid. The scheme maintains several good properties such as conservation for mass, momentum and total energy, and the geometric conservation law. Several two-dimensional numerical examples in cylindrical coordinates are presented to demonstrate the good performance of the scheme in terms of accuracy, symmetry, non-oscillation and robustness. The advantage of higher order accuracy is demonstrated in these examples.

  14. Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3

    NASA Technical Reports Server (NTRS)

    Chakravarthy, Sukumar R.

    1990-01-01

    An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.

  15. Construction and accuracy of partial differential equation approximations to the chemical master equation.

    PubMed

    Grima, Ramon

    2011-11-01

    The mesoscopic description of chemical kinetics, the chemical master equation, can be exactly solved in only a few simple cases. The analytical intractability stems from the discrete character of the equation, and hence considerable effort has been invested in the development of Fokker-Planck equations, second-order partial differential equation approximations to the master equation. We here consider two different types of higher-order partial differential approximations, one derived from the system-size expansion and the other from the Kramers-Moyal expansion, and derive the accuracy of their predictions for chemical reactive networks composed of arbitrary numbers of unimolecular and bimolecular reactions. In particular, we show that the partial differential equation approximation of order Q from the Kramers-Moyal expansion leads to estimates of the mean number of molecules accurate to order Ω^(-(2Q-3)/2), of the variance of the fluctuations in the number of molecules accurate to order Ω^(-(2Q-5)/2), and of skewness accurate to order Ω^(-(Q-2)). We also show that for large Q, the accuracy in the estimates can be matched only by a partial differential equation approximation from the system-size expansion of approximate order 2Q. Hence, we conclude that partial differential approximations based on the Kramers-Moyal expansion generally lead to considerably more accurate estimates in the mean, variance, and skewness than approximations of the same order derived from the system-size expansion.

  16. Comparison of four machine learning methods for object-oriented change detection in high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan

    2018-03-01

    High resolution image change detection is one of the key technologies of remote sensing application, which is of great significance for resource survey, environmental monitoring, fine agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the possibility of different machine learning applications in change detection. To compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM achieves higher overall accuracy with small samples than RF, Adaboost, and DBN for binary and from-to change detection. As the number of samples increases, RF achieves higher overall accuracy than Adaboost, SVM and DBN.
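
    The comparison protocol is easy to reproduce on stand-in data; a hedged scikit-learn sketch (synthetic features instead of the paper's imagery, no DBN, identical splits per model) of accuracy versus training-set size:

      from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
      from sklearn.svm import SVC
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      for n_train in (50, 200, 1000):        # small to larger training samples
          Xtr, Xte, ytr, yte = train_test_split(
              X, y, train_size=n_train, random_state=0, stratify=y)
          for name, model in [("RF", RandomForestClassifier(random_state=0)),
                              ("SVM", SVC()),
                              ("AdaBoost", AdaBoostClassifier(random_state=0))]:
              acc = accuracy_score(yte, model.fit(Xtr, ytr).predict(Xte))
              print(n_train, name, round(acc, 3))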

  17. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. Such asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties when combined with multi-step and higher order temporal Runge-Kutta schemes. We also show that, for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. The stability of the AT schemes, which depends on the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.

  18. Quadratic canonical transformation theory and higher order density matrices.

    PubMed

    Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic

    2009-03-28

    Canonical transformation (CT) theory provides a rigorously size-extensive description of dynamic correlation in multireference systems, with an accuracy superior to and cost scaling lower than complete active space second order perturbation theory. Here we expand our previous theory by investigating (i) a commutator approximation that is applied at quadratic, as opposed to linear, order in the effective Hamiltonian, and (ii) incorporation of the three-body reduced density matrix in the operator and density matrix decompositions. The quadratic commutator approximation improves CT's accuracy when used with a single-determinant reference, repairing the previous formal disadvantage of the single-reference linear CT theory relative to singles and doubles coupled cluster theory. Calculations on the BH and HF binding curves confirm this improvement. In multireference systems, the three-body reduced density matrix increases the overall accuracy of the CT theory. Tests on the H2O and N2 binding curves yield results highly competitive with expensive state-of-the-art multireference methods, such as the multireference Davidson-corrected configuration interaction (MRCI+Q), averaged coupled pair functional, and averaged quadratic coupled cluster theories.

  19. An analysis of Landsat Thematic Mapper P-Product internal geometry and conformity to earth surface geometry

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.; Walker, R. E.; Gokhman, B.

    1985-01-01

    Performance requirements regarding geometric accuracy have been defined in terms of end product goals, but until recently no precise details have been given concerning the conditions under which that accuracy is to be achieved. In order to achieve higher spatial and spectral resolutions, the Thematic Mapper (TM) sensor was designed to image in both forward and reverse mirror sweeps in two separate focal planes. Both hardware and software have been augmented and changed during the course of the Landsat TM developments to achieve improved geometric accuracy. An investigation has been conducted to determine if the TM meets the National Map Accuracy Standards for geometric accuracy at larger scales. It was found that TM imagery, in terms of geometry, has come close to, and in some cases exceeded, its stringent specifications.

  20. DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2014-01-01

    Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.

  1. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques suffer from low measurement accuracy, complicated circuit structure, and large error, and cannot provide high-precision time interval data. In order to obtain higher-quality remote sensing cloud images based on time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling its voltage. Firstly, an approximate model of the capacitor voltage curve during the pulse's time of flight is fitted from the sampled data. Then, the whole charging time is obtained from the fitted function. The method requires only a high-speed A/D sampler and a capacitor in each receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
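
    A minimal sketch of the fitting step described above (the circuit constants and samples are hypothetical, and the ideal RC model is an assumption): fit v(t) = V0*(1 - exp(-t/tau)) to the sampled charging curve, then invert the fitted function to recover the elapsed charging time.

      import numpy as np
      from scipy.optimize import curve_fit

      def v_charge(t, v0, tau):
          """Ideal RC charging curve."""
          return v0 * (1.0 - np.exp(-t / tau))

      # Hypothetical A/D samples of the capacitor voltage (volts vs. seconds)
      t_s = np.linspace(0, 50e-9, 20)
      rng = np.random.default_rng(1)
      v_s = v_charge(t_s, 3.3, 30e-9) + rng.normal(0, 1e-3, t_s.size)

      (v0, tau), _ = curve_fit(v_charge, t_s, v_s, p0=(3.0, 20e-9))

      # Invert the fitted curve at the voltage reached when charging stopped
      v_stop = v_s[-1]
      t_interval = -tau * np.log(1.0 - v_stop / v0)
      print(f"estimated charging time: {t_interval * 1e9:.3f} ns")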

  2. Guidelines and Recommendations on the Use of Higher Order Finite Elements for Bending Analysis of Plates

    NASA Astrophysics Data System (ADS)

    Carrera, E.; Miglioretti, F.; Petrolo, M.

    2011-11-01

    This paper compares and evaluates various plate finite elements to analyse the static response of thick and thin plates subjected to different loading and boundary conditions. Plate elements are based on different assumptions for the displacement distribution along the thickness direction. Classical (Kirchhoff and Reissner-Mindlin), refined (Reddy and Kant), and other higher-order displacement fields are implemented up to fourth-order expansion. The Unified Formulation (UF) by the first author is used to derive the finite element matrices in terms of fundamental nuclei, which consist of 3×3 arrays. The MITC4 formulation, which is free of shear locking, is used for the FE approximation. The accuracy of a given plate element is established in terms of the error vs. the thickness-to-length parameter. A significant number of finite elements for plates are implemented and compared using displacement and stress variables for various plate problems. Reduced models that are able to detect the 3D solution are built, and a Best Plate Diagram (BPD) is introduced to give guidelines for the construction of plate theories based on a given accuracy and number of terms. It is concluded that the UF is a valuable tool for establishing, for a given plate problem, the most accurate FE able to furnish results within a certain accuracy range. This allows us to obtain guidelines and recommendations for building refined elements for the bending analysis of plates with various geometries, loadings, and boundary conditions.

  3. Study on Fuzzy Adaptive Fractional Order PIλDμ Control for Maglev Guiding System

    NASA Astrophysics Data System (ADS)

    Hu, Qing; Hu, Yuwei

    The mathematical model of the linear elevator maglev guiding system is analyzed in this paper. Because the linear elevator requires strong stability and robustness to run, the integer-order PID controller is extended to fractional order. To improve the steady-state precision, rapidity, and robustness of the system, and to enhance the accuracy of the parameters in the fractional-order PIλDμ controller, fuzzy control is combined with fractional-order PIλDμ control, with fuzzy logic used to adjust the parameters online. The simulations reveal that the system has a faster response, higher tracking precision, and stronger robustness to disturbances.
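
    For concreteness, a minimal sketch (not the paper's controller) of how the fractional-order terms of a PIλDμ law can be realized numerically with the Grünwald-Letnikov approximation; the gains, orders, and step size below are hypothetical.

      import numpy as np

      def gl_weights(alpha, n):
          """Grunwald-Letnikov binomial weights w_j = (-1)^j * C(alpha, j)."""
          w = np.empty(n + 1)
          w[0] = 1.0
          for j in range(1, n + 1):
              w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
          return w

      def gl_derivative(e_hist, alpha, h):
          """Approximate D^alpha of the error signal at the newest sample.

          e_hist: error samples, oldest first; h: sampling step.
          A negative alpha yields the fractional integral.
          """
          n = len(e_hist) - 1
          w = gl_weights(alpha, n)
          # Weighted sum over past samples, newest first, scaled by h^-alpha
          return h ** (-alpha) * np.dot(w, e_hist[::-1])

      # Toy PI^lambda D^mu control action with hypothetical gains
      Kp, Ki, Kd, lam, mu, h = 2.0, 1.0, 0.5, 0.9, 0.8, 1e-3
      e = np.sin(np.linspace(0, 0.1, 101))       # hypothetical error history
      u = (Kp * e[-1]
           + Ki * gl_derivative(e, -lam, h)      # I^lambda = D^(-lambda)
           + Kd * gl_derivative(e, mu, h))
      print("control output:", u)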

  4. The effects of missing data on global ozone estimates

    NASA Technical Reports Server (NTRS)

    Drewry, J. W.; Robbins, J. L.

    1981-01-01

    The effects of missing data and model truncation on estimates of the global mean, zonal distribution, and global distribution of ozone are considered. It is shown that missing data can introduce biased estimates with errors that are not accounted for in the accuracy calculations of empirical modeling techniques. Data-fill techniques are introduced and used for evaluating error bounds and constraining the estimate in areas of sparse and missing data. It is found that the accuracy of the global mean estimate is more dependent on data distribution than model size. Zonal features can be accurately described by 7th order models over regions of adequate data distribution. Data variance accounted for by higher order models appears to represent climatological features of columnar ozone rather than pure error. Data-fill techniques can prevent artificial feature generation in regions of sparse or missing data without degrading high order estimates over dense data regions.

  5. Working memory capacity and controlled serial memory search.

    PubMed

    Mızrak, Eda; Öztekin, Ilke

    2016-08-01

    The speed-accuracy trade-off (SAT) procedure was used to investigate the relationship between working memory capacity (WMC) and the dynamics of temporal order memory retrieval. High- and low-span participants (HSs, LSs) studied sequentially presented five-item lists, followed by two probes from the study list. Participants indicated the more recent probe. Overall, accuracy was higher for HSs compared to LSs. Crucially, in contrast to previous investigations that observed no impact of WMC on speed of access to item information in memory (e.g., Öztekin & McElree, 2010), recovery of temporal order memory was slower for LSs. While accessing an item's representation in memory can be direct, recovery of relational information such as temporal order information requires a more controlled serial memory search. Collectively, these data indicate that WMC effects are particularly prominent during high demands of cognitive control, such as serial search operations necessary to access temporal order information from memory.

  6. Ordered questions bias eyewitnesses and jurors.

    PubMed

    Michael, Robert B; Garry, Maryanne

    2016-04-01

    Eyewitnesses play an important role in the justice system. But suggestive questioning can distort eyewitness memory and confidence, and these distorted beliefs influence jurors (Loftus, Learning & Memory, 12, 361-366, 2005; Penrod & Cutler, Psychology, Public Policy, and Law, 1, 817-845, 1995). Recent research, however, hints that suggestion is not necessary: Simply changing the order of a set of trivia questions altered people's beliefs about their accuracy on those questions (Weinstein & Roediger, Memory & Cognition, 38, 366-376, 2010, Memory & Cognition, 40, 727-735, 2012). We wondered to what degree eyewitnesses' beliefs, and in turn the jurors who evaluate them, would be affected by this simple change to the order in which they answer questions. Across six experiments, we show that the order of questions matters. Eyewitnesses reported higher accuracy and were more confident about their memory when questions seemed initially easy than when they seemed initially difficult. Moreover, jurors' beliefs about eyewitnesses closely matched those of the eyewitnesses themselves. These findings have implications for eyewitness metacognition and for eyewitness questioning procedures.

  7. Higher order approximation to the Hill problem dynamics about the libration points

    NASA Astrophysics Data System (ADS)

    Lara, Martin; Pérez, Iván L.; López, Rosario

    2018-06-01

    An analytical solution to the Hill problem Hamiltonian expanded about the libration points has been obtained by means of perturbation techniques. In order to compute the higher orders of the perturbation solution that are needed to capture all the relevant periodic orbits originated from the libration points within a reasonable accuracy, the normalization is approached in complex variables. The validity of the solution extends to energy values considerably far away from that of the libration points and, therefore, can be used in the computation of Halo orbits as an alternative to the classical Lindstedt-Poincaré approach. Furthermore, the theory correctly predicts the existence of the two-lane bridge of periodic orbits linking the families of planar and vertical Lyapunov orbits.

  8. Design-order, non-conformal low-Mach fluid algorithms using a hybrid CVFEM/DG approach

    NASA Astrophysics Data System (ADS)

    Domino, Stefan P.

    2018-04-01

    A hybrid, design-order sliding mesh algorithm, which uses a control volume finite element method (CVFEM) in conjunction with a discontinuous Galerkin (DG) approach at non-conformal interfaces, is outlined in the context of a low-Mach fluid dynamics equation set. This novel hybrid DG approach is also demonstrated to be compatible with a classic edge-based vertex-centered (EBVC) scheme. For the CVFEM, element polynomial (P) promotion is used to extend the low-order P = 1 CVFEM method to higher order, i.e., P = 2. An equal-order low-Mach pressure-stabilized methodology, with emphasis on the non-conformal interface boundary condition, is presented. A fully implicit matrix solver approach that accounts for the full stencil connectivity across the non-conformal interface is employed. A complete suite of formal verification studies using the method of manufactured solutions (MMS) is performed to verify the order of accuracy of the underlying methodology. The chosen suite of analytical verification cases ranges from a simple steady diffusion system to a traveling viscous vortex across mixed-order non-conformal interfaces. Results from all verification studies demonstrate either second- or third-order spatial accuracy and, for transient solutions, second-order temporal accuracy. Significant accuracy gains in manufactured solution error norms are noted even with modest promotion of the underlying polynomial order. The paper also demonstrates the CVFEM/DG methodology on two production-like simulation cases that include an inner block subjected to solid rotation, i.e., each of the simulations includes a sliding mesh, non-conformal interface. The first production case presented is a turbulent flow past a high-rate-of-rotation cube (Re, 4000; RPM, 3600) on like and mixed-order polynomial interfaces. The final simulation case is a full-scale Vestas V27 225 kW wind turbine (tower and nacelle omitted) in which a hybrid-topology, low-order mesh is used. Both production simulations provide confidence in the underlying capability and demonstrate the viability of this hybrid method for deployment towards high-fidelity wind energy validation and analysis.
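
    The verification logic behind such MMS studies can be summarized in a few lines (a generic sketch, not the paper's code): run the solver against a manufactured solution on successively refined grids and compute the observed order of accuracy from consecutive error norms; the error values below are hypothetical.

      import math

      def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
          """Observed order of accuracy from errors on two grid levels."""
          return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

      # Hypothetical L2 error norms from three uniformly refined meshes
      errors = [4.1e-3, 1.05e-3, 2.6e-4]
      for e0, e1 in zip(errors, errors[1:]):
          print(f"observed order: {observed_order(e0, e1):.2f}")  # ~2 here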

  9. On the utility of GPU accelerated high-order methods for unsteady flow simulations: A comparison with industry-standard tools

    NASA Astrophysics Data System (ADS)

    Vermeire, B. C.; Witherden, F. D.; Vincent, P. E.

    2017-04-01

    First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost e.g. going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reduction respectively for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.

  11. Multispectral Image Compression for Improvement of Colorimetric and Spectral Reproducibility by Nonlinear Spectral Transform

    NASA Astrophysics Data System (ADS)

    Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2006-09-01

    The article proposes a multispectral image compression scheme using nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant and also that spectral accuracy can be improved simultaneously using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under other viewing illuminants than the defined one. Finally, we discuss the usage of the first-order Markov model to form the analysis vectors for the higher order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.

  12. A third-order computational method for numerical fluxes to guarantee nonnegative difference coefficients for advection-diffusion equations in a semi-conservative form

    NASA Astrophysics Data System (ADS)

    Sakai, K.; Watabe, D.; Minamidani, T.; Zhang, G. S.

    2012-10-01

    According to Godunov's theorem for numerical calculations of advection equations, in the family of polynomial schemes there exist no schemes with constant positive difference coefficients whose accuracy exceeds first order. We propose a third-order computational scheme for numerical fluxes that guarantees non-negative difference coefficients of the resulting finite difference equations for advection-diffusion equations in a semi-conservative form, in which there exist two kinds of numerical fluxes at a cell surface and these two fluxes are not always coincident in non-uniform velocity fields. The present scheme is optimized so as to minimize truncation errors for the numerical fluxes while fulfilling the positivity condition on the difference coefficients, which are variable and depend on the local Courant number and diffusion number. The distinguishing feature of the optimized scheme is that it maintains third-order accuracy everywhere without any numerical flux limiter. We extend the present method to multi-dimensional equations. Numerical experiments for advection-diffusion equations yielded non-oscillatory solutions.
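
    To make the positivity constraint concrete, a standard textbook example (not from the paper): the first-order upwind scheme for u_t + a u_x = 0 with a > 0,

      u_i^{n+1} = (1 - \nu) \, u_i^n + \nu \, u_{i-1}^n, \qquad \nu = \frac{a \, \Delta t}{\Delta x},

    has constant non-negative coefficients whenever the Courant number \nu lies in [0, 1]; Godunov's theorem states that no constant-coefficient polynomial scheme above first order can retain this property.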

  13. Fast sweeping methods for hyperbolic systems of conservation laws at steady state II

    NASA Astrophysics Data System (ADS)

    Engquist, Björn; Froese, Brittany D.; Tsai, Yen-Hsi Richard

    2015-04-01

    The idea of using fast sweeping methods for solving stationary systems of conservation laws has previously been proposed for efficiently computing solutions with sharp shocks. We further develop these methods to allow for a more challenging class of problems including problems with sonic points, shocks originating in the interior of the domain, rarefaction waves, and two-dimensional systems. We show that fast sweeping methods can produce higher-order accuracy. Computational results validate the claims of accuracy, sharp shock curves, and optimal computational efficiency.

  14. "MONSTROUS MOONSHINE" and Physics

    NASA Astrophysics Data System (ADS)

    Pushkin, A. V.

    The report presents some results obtained by the author on the quantum gravitation theory. The algebraic structure of this theory proves to be related to the commutative nonassociative Griess algebra. The theory's symmetry is the automorphism group of the Griess algebra: the "Monster" simple group. Knowledge of the theory's symmetry allows one to compute observed physical values in the `zero' approximation. The report presents such computed results for the values m_p/m_c and α; for the latter, the accuracy of the `zero' approximation, controlled by the theory, is one order of magnitude higher than the accuracy of modern measurements.

  15. Exploring Mouse Protein Function via Multiple Approaches.

    PubMed

    Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong

    2016-01-01

    Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach, and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although this accuracy was lower than that of the former approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.

  17. Zero-field magnetic response functions in Landau levels

    PubMed Central

    Gao, Yang; Niu, Qian

    2017-01-01

    We present a fresh perspective on the Landau level quantization rule; that is, by successively including zero-field magnetic response functions at zero temperature, such as zero-field magnetization and susceptibility, Onsager's rule can be corrected order by order. Such a perspective is further reinterpreted as a quantization of the semiclassical electron density in solids. Our theory not only reproduces Onsager's rule at zeroth order and the Berry phase and magnetic moment correction at first order but also explains the nature of higher-order corrections in a universal way. In applications, those higher-order corrections are expected to curve the linear relation between the level index and the inverse of the magnetic field, as already observed in experiments. Our theory then provides a way to extract the correct value of Berry phase as well as the magnetic susceptibility at zero temperature from Landau level fan diagrams in experiments. Moreover, it can be used theoretically to calculate Landau levels up to second-order accuracy for realistic models. PMID:28655849
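
    For orientation, the rule being corrected can be stated in its standard textbook form, including the first-order Berry-phase correction (the paper's higher-order terms go beyond this): the reciprocal-space area S(E_n) of the cyclotron orbit at the n-th Landau level satisfies

      S(E_n) = \frac{2 \pi e B}{\hbar} \left( n + \frac{1}{2} - \frac{\gamma}{2\pi} \right),

    where \gamma is the Berry phase accumulated along the orbit.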

  18. Double hard scattering without double counting

    NASA Astrophysics Data System (ADS)

    Diehl, Markus; Gaunt, Jonathan R.; Schönwald, Kay

    2017-06-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  19. A scheme to calculate higher-order homogenization as applied to micro-acoustic boundary value problems

    NASA Astrophysics Data System (ADS)

    Vagh, Hardik A.; Baghai-Wadji, Alireza

    2008-12-01

    Current technological challenges in materials science and the high-tech device industry require the solution of boundary value problems (BVPs) involving regions of various scales, e.g. multiple thin layers, fibre-reinforced composites, and nano/micro pores. In most cases straightforward application of standard variational techniques to BVPs of practical relevance necessarily leads to unsatisfactorily ill-conditioned analytical and/or numerical results. To remedy the computational challenges associated with sub-sectional heterogeneities, various sophisticated homogenization techniques need to be employed. Homogenization refers to the systematic process of smoothing out the sub-structural heterogeneities, leading to the determination of effective constitutive coefficients. Ordinarily, homogenization involves a sophisticated averaging and asymptotic order analysis to obtain solutions. In the majority of cases only zero-order terms are constructed due to the complexity of the processes involved. In this paper we propose a constructive scheme for obtaining homogenized solutions involving higher-order terms, and thus guaranteeing higher accuracy and greater robustness of the numerical results.

  20. Free vibration analysis of single-walled boron nitride nanotubes based on a computational mechanics framework

    NASA Astrophysics Data System (ADS)

    Yan, J. W.; Tong, L. H.; Xiang, Ping

    2017-12-01

    Free vibration behaviors of single-walled boron nitride nanotubes are investigated using a computational mechanics approach. The Tersoff-Brenner potential is used to describe the atomic interaction between boron and nitrogen atoms. The higher-order Cauchy-Born rule is employed to establish the constitutive relationship for single-walled boron nitride nanotubes on the basis of higher-order gradient continuum theory, bridging the gap between the nanoscale lattice structure and a continuum body. A mesh-free modeling framework is constructed, using the moving Kriging interpolation, which automatically satisfies the higher-order continuity, to implement the numerical simulation in order to match the higher-order constitutive model. In comparison with conventional atomistic simulation methods, the established atomistic-continuum multi-scale approach possesses advantages in tackling atomic structures with high accuracy and high efficiency. Free vibration characteristics of single-walled boron nitride nanotubes with different boundary conditions, tube chiralities, lengths, and radii are examined in case studies. It is pointed out that a critical radius exists for the evaluation of the fundamental vibration frequencies of boron nitride nanotubes; opposite trends can be observed prior to and beyond the critical radius. Simulation results are presented and discussed.

  1. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high-order single-step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kovalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high-order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher-order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high-order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
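
    The Taylor-expansion route can be illustrated for the 1D convection equation u_t + a u_x = 0 (a generic sketch of the idea, not the paper's multidimensional construction): since \partial_t^k u = (-a \partial_x)^k u, a temporal Taylor series becomes a single-step update of arbitrary order p,

      u(x, t + \Delta t) = \sum_{k=0}^{p} \frac{(\Delta t)^k}{k!} \, \partial_t^k u(x, t) = \sum_{k=0}^{p} \frac{(-a \, \Delta t)^k}{k!} \, \partial_x^k u(x, t) + O(\Delta t^{p+1}),

    with the spatial derivatives evaluated from the symmetric stencil data.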

  2. Extended Salecker-Wigner formula for optimal accuracy in reading a clock via a massive signal particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudaka, Shoju; Matsumoto, Shuichi

    2007-07-15

    In order to acquire an extended Salecker-Wigner formula from which to derive the optimal accuracy in reading a clock with a massive particle as the signal, von Neumann's classical measurement is employed, by which both position and momentum of the signal particle can be measured approximately at the same time. By an appropriate selection of the wave function for the initial state of the composite system (a clock and a signal particle), the formula is derived accurately. Valid ranges of the running time of a clock with a given optimal accuracy are also given. The extended formula means that, contrary to the Salecker-Wigner formula, there exists the possibility of a higher accuracy of time measurement, even if the mass of the clock is very small.

  3. Possibilities and limitations of rod-beam theories. [nonlinear distortion tensor and nonlinear stress tensors

    NASA Technical Reports Server (NTRS)

    Peterson, D.

    1979-01-01

    Rod-beam theories are founded on hypotheses such as Bernoulli's, which assumes that cross-sections remain flat under deformation. These assumptions, which make rod-beam theories possible, also limit the accuracy of their analysis. It is shown that from a certain order upward, terms of geometrically nonlinear deformations contradict the rod-beam hypotheses. Consistent application of differential geometry calculus also reveals differences from existing rod theories of higher order. These differences are explained by simple examples.

  4. Method of ultrasonic measurement of texture

    DOEpatents

    Thompson, R. Bruce; Smith, John F.; Lee, Seung S.; Li, Yan

    1993-10-12

    A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet.

  5. Semi-automatic method for ultrasonic measurement of texture

    DOEpatents

    Thompson, R. Bruce; Smith, John F.; Lee, Seung S.; Li, Yan

    1990-02-13

    A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet.

  8. Automated breast tissue density assessment using high order regional texture descriptors in mammography

    NASA Astrophysics Data System (ADS)

    Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun

    2014-03-01

    Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
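
    A minimal sketch of the kind of second-order texture statistics the method draws on (a generic illustration with hypothetical data, not the authors' pipeline): a gray-level co-occurrence matrix (GLCM) over a region yields features such as contrast and energy that can feed an ensemble classifier.

      import numpy as np

      def glcm_features(img, levels=8, dx=1, dy=0):
          """Contrast and energy from a gray-level co-occurrence matrix.

          img: 2D integer array already quantized to `levels` gray levels;
          (dx, dy): pixel offset defining the co-occurrence direction.
          """
          glcm = np.zeros((levels, levels))
          h, w = img.shape
          for y in range(h - dy):
              for x in range(w - dx):
                  glcm[img[y, x], img[y + dy, x + dx]] += 1
          glcm /= glcm.sum()                          # joint probabilities
          i, j = np.indices(glcm.shape)
          contrast = np.sum(glcm * (i - j) ** 2)      # local intensity variation
          energy = np.sum(glcm ** 2)                  # textural uniformity
          return contrast, energy

      # Hypothetical quantized mammogram patch
      patch = np.random.default_rng(2).integers(0, 8, size=(64, 64))
      print(glcm_features(patch))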

  9. A Handheld Open-Field Infant Keratometer (An American Ophthalmological Society Thesis)

    PubMed Central

    Miller, Joseph M.

    2010-01-01

    Purpose: To design and evaluate a new infant keratometer that incorporates an unobstructed view of the infant with both eyes (open-field design). Methods: The design of the open-field infant keratometer is presented, and details of its construction are given. The design incorporates a single-ring keratoscope for measurement of corneal astigmatism over a 4-mm region of the cornea and includes a rectangular grid target concentric within the ring to allow for the study of higher-order aberrations of the eye. In order to calibrate the lens and imaging system, a novel telecentric test object was constructed and used. The system was bench calibrated against steel ball bearings of known dimensions and evaluated for accuracy while being used in handheld mode in a group of 16 adult cooperative subjects. It was then evaluated for testability in a group of 10 infants and toddlers. Results: Results indicate that while the device achieved the goal of creating an open-field instrument containing a single-ring keratoscope with a concentric grid array for the study of higher-order aberrations, additional work is required to establish better control of the vertex distance. Conclusion: The handheld open-field infant keratometer demonstrates testability suitable for the study of infant corneal astigmatism. Use of collimated light sources in future iterations of the design must be incorporated in order to achieve the accuracy required for clinical investigation. PMID:21212850

  11. A review of fractional-order techniques applied to lithium-ion batteries, lead-acid batteries, and supercapacitors

    NASA Astrophysics Data System (ADS)

    Zou, Changfu; Zhang, Lei; Hu, Xiaosong; Wang, Zhenpo; Wik, Torsten; Pecht, Michael

    2018-06-01

    Electrochemical energy storage systems play an important role in diverse applications, such as electrified transportation and integration of renewable energy with the electrical grid. To facilitate model-based management for extracting full system potentials, proper mathematical models are imperative. Due to the extra degrees of freedom afforded by non-integer differentiation orders, fractional-order models may be able to better describe the dynamic behaviors of electrochemical systems. This paper provides a critical overview of fractional-order techniques for managing lithium-ion batteries, lead-acid batteries, and supercapacitors. Starting with the basic concepts and technical tools from fractional-order calculus, the modeling principles for these energy systems are presented by identifying disperse dynamic processes and using electrochemical impedance spectroscopy. Available battery/supercapacitor models are comprehensively reviewed, and the advantages of fractional types are discussed. Two case studies demonstrate the accuracy and computational efficiency of fractional-order models. These models offer 15-30% higher accuracy than their integer-order analogues, yet have reasonable complexity. Consequently, fractional-order models can be good candidates for the development of advanced battery/supercapacitor management systems. Finally, the main technical challenges facing electrochemical energy storage system modeling, state estimation, and control in the fractional-order domain, as well as future research directions, are highlighted.

  12. Very large radio surveys of the sky

    PubMed Central

    Condon, J. J.

    1999-01-01

    Recent advances in electronics and computing have made possible a new generation of large radio surveys of the sky that yield an order-of-magnitude higher sensitivity and positional accuracy. Combined with the unique properties of the radio universe, these quantitative improvements open up qualitatively different and exciting new scientific applications of radio surveys. PMID:10220365

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, Arno; Li, Z.; Ng, C.

    The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.

  14. Determination of intermediate perturbed orbits of Near-Earth asteroids from range and range rate measurements at three times

    NASA Astrophysics Data System (ADS)

    Shefer, V. A.

    2014-12-01

    Two methods that the author developed earlier for finding the intermediate perturbed orbit of a small celestial body from three pairs of range and range rate observations [1, 2] are applied to the determination of orbits of Near-Earth asteroids. The methods are based on using superosculating orbits with third- and fourth-order tangency. The degrees of approximation of the real motion by the constructed intermediate orbits near the middle measurement time are two and three orders of magnitude higher than by the Keplerian orbit determined with traditional methods. We calculated the orbits of the asteroids 99942 Apophis, 1566 Icarus, 4179 Toutatis, 2007 DN41, and 2012 DA14. For brevity, we refer to the method based on the orbit with third-order tangency as Algorithm A1 and to the method based on the orbit with fourth-order tangency as Algorithm A2. The results of the calculations are compared with those obtained with the version of these methods that constructs the unperturbed Keplerian orbit; we call this version Algorithm A. The observational data were simulated using the nominal trajectories of the selected asteroids. These trajectories were obtained by numerical integration of the differential equations of motion subject to perturbations from the eight major planets, Pluto, and the Moon. The integration was carried out with the 15th-order Everhart procedure [3]. The main results of the calculations are the following. When the reference time interval is shortened by half (for small sizes of this interval), the errors of the compared algorithms A, A1, and A2 decrease approximately by factors of 4, 16, and 64 in coordinates and by factors of 2, 8, and 16 in velocities, respectively. This behavior of the errors is seen most clearly for the asteroids 2007 DN41 and 2012 DA14. As the reference arc of the trajectory decreases, this leads to a significant increase in the accuracy of the approximation of the real motion by the intermediate orbits constructed with the A1 and A2 algorithms (2-4 orders of magnitude in coordinates and 4-7 orders of magnitude in velocities) compared with the accuracy of the approximation by Keplerian orbits. The smaller the topocentric distances, i.e., the greater the perturbations caused by the Earth's gravitation, the higher the efficiency of algorithms A1 and A2. The advantage in accuracy of Algorithm A2 over Algorithm A1 is approximately one order of magnitude. The minimal methodic errors of the position vector obtained with the A1 and A2 algorithms range from several meters in the case of the asteroid Apophis to several millimeters in the case of the asteroid 2012 DA14. Hence, the numerical examples analyzed in this work lead us to conclude that the methods proposed in [1, 2] for determining an intermediate perturbed orbit from range and range rate measurements at three times allow the accuracy of the calculated initial asteroid orbits to be raised significantly in comparison with the algorithm based on finding the unperturbed Keplerian orbit. The shorter the orbital arc specified by the extreme time points, the greater the advantage in accuracy of the suggested algorithms over those of the traditional approach. The advantage in accuracy of the suggested algorithms also increases with the perturbations, which is especially important for calculating the initial trajectories of space objects detected in the Earth's neighbourhood. The work was supported by the Ministry of Education and Science of the Russian Federation, project no. 2014/223(1567).

  15. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    NASA Astrophysics Data System (ADS)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    Workpieces are becoming more precise and more specialized, which results in more sophisticated structures and higher accuracy requirements for artifacts, and correspondingly higher demands on measuring accuracy and measuring methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) is widely used in many industries. In the course of calibrating a self-developed CMM with a self-made high-precision standard artifact, the parallelism of the base plate used to fix the standard artifact was found to be an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile methods for the parallelism measurement of workpieces were employed, using an existing high-precision CMM, gauge blocks, a dial gauge, and a marble platform, and the measurement results were compared. The experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. These three approaches suit different measurement conditions, providing a basis for rapid, high-precision measurement under different equipment conditions.
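
    In its simplest tactile form, the dial-gauge variant of such a measurement reduces to a few lines (a sketch under the usual definition of parallelism error as the spread of surface heights relative to the datum; the readings are hypothetical):

      import numpy as np

      # Hypothetical dial-gauge readings (mm) over a grid of points on the
      # base plate, measured relative to the marble reference platform.
      readings = np.array([
          [0.012, 0.013, 0.011],
          [0.014, 0.012, 0.013],
          [0.013, 0.015, 0.012],
      ])

      # Parallelism error: spread of surface heights relative to the datum.
      parallelism = readings.max() - readings.min()
      print(f"parallelism error: {parallelism * 1000:.1f} um")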

  16. A Reconstructed Discontinuous Galerkin Method for the Compressible Euler Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luquing Luo; Robert Nourgaliev

    2009-06-01

    A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available and yet invaluable information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate the accuracy and efficiency of the method. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than the second-order DG method and provides an increase in performance over the third-order DG method in terms of computing time and storage requirement.

  17. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order ionospheric terms in the refractive index formula, and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to the excess path length in addition to the free space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter-level accuracy using the proposed correction formulas.
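
    For context, the standard first-order relations that the dual-frequency method exploits (textbook formulas, not the paper's higher-order corrections): the first-order group delay scales with the total electron content as

      \Delta\rho^{(1)} = \frac{40.3 \, \mathrm{TEC}}{f^2}, \qquad \rho_{\mathrm{IF}} = \frac{f_1^2 \rho_1 - f_2^2 \rho_2}{f_1^2 - f_2^2},

    so the ionosphere-free combination \rho_{\mathrm{IF}} of ranges \rho_1, \rho_2 measured at frequencies f_1, f_2 cancels the first-order term exactly, leaving precisely the higher-order residuals quantified in the paper.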

  18. Higher-order QCD predictions for dark matter production at the LHC in simplified models with s-channel mediators.

    PubMed

    Backović, Mihailo; Krämer, Michael; Maltoni, Fabio; Martini, Antony; Mawatari, Kentarou; Pellen, Mathieu

    Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which allows higher-order QCD corrections to be included in realistic simulations and their effects to be studied systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on the shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties.

  19. Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation

    NASA Astrophysics Data System (ADS)

    Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.

    2016-06-01

    Calculation of ab initio potential energy surfaces for molecules to high accuracy is manageable for only a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 cm^-1), corrections must be made to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings, but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 cm^-1, whereas for higher levels it is on the order of 0.01-0.1 cm^-1, closely rivaling the uncertainties of the experimental data. As method development for calculating these various corrections progresses, it becomes necessary to improve the uncertainties of the experimental data in order to properly benchmark the calculations. Previously we measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all arising from low-lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow probing higher energies on the potential energy surface, but, through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, and J. Tennyson, Phys. Rev. A 88, 032506 (2013); O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, and A.G. Császár, Phil. Trans. R. Soc. A 370, 5014 (2012); J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, and B.J. McCall, J. Chem. Phys. 139, 164201 (2013); A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, and B.J. McCall, J. Molec. Spectrosc. 317, 71-73 (2015).

  20. Analysis of warping deformation modes using higher order ANCF beam element

    NASA Astrophysics Data System (ADS)

    Orzechowski, Grzegorz; Shabana, Ahmed A.

    2016-02-01

    Most classical beam theories assume that the beam cross section remains a rigid surface under an arbitrary loading condition. However, in the absolute nodal coordinate formulation (ANCF) continuum-based beams, this assumption can be relaxed, allowing for capturing deformation modes that couple the cross-section deformation and beam bending, torsion, and/or elongation. The deformation modes captured by ANCF finite elements depend on the interpolating polynomials used. The most widely used spatial ANCF beam element employs linear approximation in the transverse direction, thereby restricting the cross-section deformation and leading to locking problems. The objective of this investigation is to examine the behavior of a higher order ANCF beam element that includes quadratic interpolation in the transverse directions. This higher order element allows capturing warping and a non-uniform stretching distribution. Furthermore, this higher order element allows for increasing the degree of continuity at the element interface. It is shown in this paper that the higher order ANCF beam element can be used effectively to capture warping and eliminate the Poisson locking that characterizes lower order ANCF finite elements. It is also shown that increasing the degree of continuity requires special attention in order to obtain acceptable results. Because higher order elements can be more computationally expensive than lower order elements, the use of reduced integration for evaluating the stress forces and the use of explicit and implicit numerical integration to solve the nonlinear dynamic equations of motion are investigated in this paper. It is shown that the use of some of these integration methods can be very effective in reducing the CPU time without adversely affecting the solution accuracy.

  1. Synergies from using higher order symplectic decompositions both for ordinary differential equations and quantum Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matuttis, Hans-Georg; Wang, Xiaoxing

    Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them to both classical ordinary differential equations (ODEs) and quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
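
    The ODE side of such comparisons is easy to sketch (illustrative non-commuting matrices, with scipy's matrix exponential standing in for the exact sub-flows; not the paper's quantum benchmark): the first-order Lie and second-order Strang decompositions show their expected error scalings on refinement:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Non-commuting generators: the exact flow of du/dt = (A + B) u is expm(t(A+B)).
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0, 0.0], [-1.0, 0.3]])
    u0, T = np.array([1.0, 0.0]), 1.0

    def lie(dt):     # first-order:  e^{dt A} e^{dt B}
        return expm(dt * A) @ expm(dt * B)

    def strang(dt):  # second-order: e^{dt A/2} e^{dt B} e^{dt A/2}
        half = expm(0.5 * dt * A)
        return half @ expm(dt * B) @ half

    exact = expm(T * (A + B)) @ u0
    for n in (16, 32, 64):
        dt = T / n
        for name, step in (("Lie", lie), ("Strang", strang)):
            u, S = u0.copy(), step(dt)
            for _ in range(n):
                u = S @ u
            print(name, n, np.linalg.norm(u - exact))
    # Halving dt cuts the Lie error ~2x and the Strang error ~4x.
    ```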

  2. Severity of depressive symptoms and accuracy of dietary reporting among obese women with major depressive disorder seeking weight loss treatment.

    PubMed

    Whited, Matthew C; Schneider, Kristin L; Appelhans, Bradley M; Ma, Yunsheng; Waring, Molly E; DeBiasse, Michele A; Busch, Andrew M; Oleski, Jessica L; Merriam, Philip A; Olendzki, Barbara C; Crawford, Sybil L; Ockene, Ira S; Lemon, Stephenie C; Pagoto, Sherry L

    2014-01-01

    An elevation in symptoms of depression has previously been associated with greater accuracy of reported dietary intake; however, this association has not been investigated among individuals with a diagnosis of major depressive disorder. The purpose of this study was to investigate reporting accuracy of dietary intake among a group of women with major depressive disorder in order to determine whether reporting accuracy is similarly associated with depressive symptoms among depressed women. Reporting accuracy of dietary intake was calculated based on three 24-hour phone-delivered dietary recalls from the baseline phase of a randomized trial of weight loss treatment for 161 obese women with major depressive disorder. Regression models indicated that higher severity of depressive symptoms was associated with greater reporting accuracy, even when controlling for other factors traditionally associated with reporting accuracy (coefficient = 0.01, 95% CI = 0.01-0.02). Seventeen percent of the sample was classified as low energy reporters. Reporting accuracy of dietary intake increases along with depressive symptoms, even among individuals with major depressive disorder. These results suggest that any study investigating associations between diet quality and depression should also include an index of reporting accuracy of dietary intake, as accuracy varies with the severity of depressive symptoms.

  3. Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xia, E-mail: cui_xia@iapcm.ac.cn; Yuan, Guang-wei; Shen, Zhi-jun

    Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis of time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for spatially discrete schemes in one and two dimensions. First, in constructing the schemes, asymmetric second-order accurate spatial approximations are devised for the flux-limiters on the boundary, in contrast to traditional first-order approximations, and discrete schemes with second-order accuracy on the global spatial domain are consequently acquired. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property is shown for these schemes and, furthermore, for the fully discrete schemes. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence the theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. - Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second-order accurate schemes by an asymmetric approach for the boundary flux-limiter. • Show the first-order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle artificial solutions; verify accuracy and AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher order implicit schemes.

  4. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field results. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. Our approximations, while more computationally intensive than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.

  5. Intermediary LEO propagation including higher order zonal harmonics

    NASA Astrophysics Data System (ADS)

    Hautesserres, Denis; Lara, Martin

    2017-04-01

    Two new intermediary orbits of the artificial satellite problem are proposed. The analytical solutions include higher order effects of the geopotential and are obtained by means of a torsion transformation applied to the quasi-Keplerian system resulting after the elimination of the parallax simplification, for the first intermediary, and after the elimination of the parallax and perigee simplifications, for the second one. The new intermediaries perform notably well for low-Earth-orbit propagation, are free from special functions, and prove advantageous, in both accuracy and efficiency, when compared to the standard Cowell integration of the J_2 problem, thus providing appealing alternatives for onboard, short-term orbit propagation under limited computational resources.
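
    For context, the Cowell baseline referred to above amounts to direct numerical integration of the two-body acceleration plus the J2 term (a minimal sketch; the initial state, tolerances and time span are illustrative):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    MU, RE, J2 = 3.986004418e14, 6378137.0, 1.08263e-3  # Earth GM, radius, J2

    def accel_j2(t, s):
        """Cowell formulation: two-body acceleration plus the J2 zonal term."""
        r = s[:3]
        rn = np.linalg.norm(r)
        a = -MU * r / rn**3
        k = 1.5 * J2 * MU * RE**2 / rn**5
        z2 = (r[2] / rn)**2
        a += k * r * np.array([5*z2 - 1, 5*z2 - 1, 5*z2 - 3])
        return np.hstack([s[3:], a])

    # Near-circular LEO initial state; propagate roughly three revolutions.
    r0 = np.array([RE + 500e3, 0.0, 0.0])
    v0 = np.array([0.0, 6.0e3, 4.4e3])
    sol = solve_ivp(accel_j2, (0.0, 3 * 5670.0), np.hstack([r0, v0]),
                    rtol=1e-10, atol=1e-6)
    print(sol.y[:3, -1])   # position after ~3 orbital periods
    ```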

  6. Boosting brain connectome classification accuracy in Alzheimer's disease using higher-order singular value decomposition

    PubMed Central

    Zhan, Liang; Liu, Yashu; Wang, Yalin; Zhou, Jiayu; Jahanshad, Neda; Ye, Jieping; Thompson, Paul M.

    2015-01-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and its prodromal stage, mild cognitive impairment (MCI), is crucial. There is also a growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on brain structural networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease. PMID:26257601
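
    The decomposition underlying the feature-extraction step can be sketched in plain numpy (a toy subjects × regions × regions stack with illustrative ranks; the sparse-logistic-regression classifier of the paper is omitted):

    ```python
    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding of a tensor: mode axis first, the rest flattened."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd(T, ranks):
        """Truncated higher-order SVD: factor U_n from the left singular
        vectors of each unfolding, core by projecting T onto all of them."""
        Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
              for n, r in enumerate(ranks)]
        core = T.copy()
        for n, U in enumerate(Us):
            core = np.moveaxis(np.tensordot(U.T, core, axes=(1, n)), 0, n)
        return core, Us

    # Stack of synthetic "connectomes": subjects x regions x regions.
    T = np.random.rand(30, 68, 68)
    core, Us = hosvd(T, (10, 20, 20))
    print(core.shape)   # (10, 20, 20) core tensor of compressed features
    ```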

  7. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
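
    The regularized inversion itself is compact to state (a minimal sketch in which a generic smoothing kernel stands in for the multi-wavelength scattering kernel, and λ and the grid are illustrative), using the second-order differential matrix the study singles out:

    ```python
    import numpy as np

    def second_diff_matrix(n):
        """Second-order differential (smoothing) regularization matrix."""
        L = np.zeros((n - 2, n))
        for i in range(n - 2):
            L[i, i:i + 3] = [1.0, -2.0, 1.0]
        return L

    def tikhonov(A, b, lam, L):
        """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations."""
        return np.linalg.solve(A.T @ A + lam**2 * L.T @ L, A.T @ b)

    n = 60
    x = np.linspace(0, 1, n)
    A = np.exp(-((x[:, None] - x[None, :]) / 0.1)**2)   # stand-in kernel
    f_true = np.exp(-((x - 0.4) / 0.12)**2)             # smooth test PSD
    b = A @ f_true + 1e-4 * np.random.randn(n)
    f_rec = tikhonov(A, b, lam=1e-3, L=second_diff_matrix(n))
    print(np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
    ```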

  8. Effects of high-order correlations on personalized recommendations for bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Zhou, Tao; Che, Hong-An; Wang, Bing-Hong; Zhang, Yi-Cheng

    2010-02-01

    In this paper, we introduce a modified collaborative filtering (MCF) algorithm, which has remarkably higher accuracy than standard collaborative filtering. In the MCF, instead of the cosine similarity index, the user-user correlations are obtained by a diffusion process. Furthermore, by considering the second-order correlations, we design an effective algorithm that suppresses the influence of mainstream preferences. Simulation results show that the algorithmic accuracy, measured by the average ranking score, is further improved by 20.45% and 33.25% in the optimal cases of the MovieLens and Netflix data. More importantly, the optimal value of λ depends approximately monotonically on the sparsity of the training set. Given a real system, we could estimate the optimal parameter according to the data sparsity, which makes this algorithm easy to apply. In addition, two significant criteria of algorithmic performance, diversity and popularity, are also taken into account. Numerical results show that as the sparsity increases, the algorithm considering the second-order correlations can outperform the MCF in all three criteria simultaneously.

  9. A Case-Based Reasoning Method with Rank Aggregation

    NASA Astrophysics Data System (ADS)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper presents a new CBR framework built on the basic principle of rank aggregation. First, ranking methods are defined in each attribute subspace of a case, yielding the ordering relation between cases on each attribute and hence a ranking matrix. Second, the retrieval of similar cases from the ranking matrix is transformed into a rank aggregation optimization problem, solved using the Kemeny optimal ranking. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of the RA-CBR algorithm is higher than that of Euclidean-distance and Mahalanobis-distance CBR, so we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
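
    The aggregation step can be sketched directly (a brute-force Kemeny consensus over toy per-attribute rankings; the case names are hypothetical, and exhaustive search is only viable for small candidate sets):

    ```python
    from itertools import permutations

    def kendall_tau(r1, r2):
        """Number of pairwise disagreements between two rankings (lists of
        items in preference order)."""
        pos1 = {x: i for i, x in enumerate(r1)}
        pos2 = {x: i for i, x in enumerate(r2)}
        items = list(pos1)
        return sum(1 for i, a in enumerate(items) for b in items[i + 1:]
                   if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

    def kemeny(rankings):
        """Kemeny-optimal aggregate: the permutation minimizing the total
        Kendall tau distance to all input rankings."""
        return min(permutations(rankings[0]),
                   key=lambda p: sum(kendall_tau(list(p), r) for r in rankings))

    # Per-attribute orderings of four candidate cases, most similar first.
    rankings = [["c2", "c1", "c3", "c4"],
                ["c2", "c3", "c1", "c4"],
                ["c1", "c2", "c3", "c4"]]
    print(kemeny(rankings))   # consensus retrieval order: ('c2', 'c1', 'c3', 'c4')
    ```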

  10. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    DOE PAGES

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...

    2016-03-29

    Formulas for incremental or parallel computation of second-order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results and improve them with arbitrary-order, numerically stable one-pass formulas, which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above-mentioned formulas, utilizing the compound moments, for a practical large-scale scientific application.
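
    The order-two case conveys the flavor of these formulas (a minimal sketch of the standard pairwise combination, which also yields the one-pass update when one partition is a single sample; the paper's arbitrary-order, weighted and compound variants generalize the same pattern):

    ```python
    def combine(nA, meanA, M2A, nB, meanB, M2B):
        """Numerically stable pairwise combination of counts, means and second
        central moments (M2 = sum of squared deviations from the mean)."""
        n = nA + nB
        delta = meanB - meanA
        mean = meanA + delta * nB / n
        M2 = M2A + M2B + delta * delta * nA * nB / n
        return n, mean, M2

    def moments(xs):
        """One-pass accumulation; each sample is a partition of size 1."""
        n, mean, M2 = 0, 0.0, 0.0
        for x in xs:
            n, mean, M2 = combine(n, mean, M2, 1, x, 0.0)
        return n, mean, M2

    # Data with a huge offset, where the naive sum-of-squares formula loses
    # all precision but the incremental formulas keep full accuracy.
    data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]
    n, mean, M2 = moments(data)
    print(mean, M2 / (n - 1))    # 1e9+10, sample variance 30.0
    ```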

  11. Precision studies of observables in p p → W → lν _l and pp → γ ,Z → l^+ l^- processes at the LHC

    NASA Astrophysics Data System (ADS)

    Alioli, S.; Arbuzov, A. B.; Bardin, D. Yu.; Barzè, L.; Bernaciak, C.; Bondarenko, S. G.; Carloni Calame, C. M.; Chiesa, M.; Dittmaier, S.; Ferrera, G.; de Florian, D.; Grazzini, M.; Höche, S.; Huss, A.; Jadach, S.; Kalinovskaya, L. V.; Karlberg, A.; Krauss, F.; Li, Y.; Martinez, H.; Montagna, G.; Mück, A.; Nason, P.; Nicrosini, O.; Petriello, F.; Piccinini, F.; Płaczek, W.; Prestel, S.; Re, E.; Sapronov, A. A.; Schönherr, M.; Schwinn, C.; Vicini, A.; Wackeroth, D.; Was, Z.; Zanderighi, G.

    2017-05-01

    This report was prepared in the context of the LPCC Electroweak Precision Measurements at the LHC WG (https://lpcc.web.cern.ch/lpcc/index.php?page=electroweak_wg) and summarizes the activity of a subgroup dedicated to the systematic comparison of public Monte Carlo codes that describe the Drell-Yan processes at hadron colliders, in particular at the CERN Large Hadron Collider (LHC). This work represents an important step towards the definition of an accurate simulation framework necessary for very high-precision measurements of electroweak (EW) observables such as the W boson mass and the weak mixing angle. All the codes considered in this report share at least next-to-leading-order (NLO) accuracy in the prediction of the total cross sections in an expansion either in the strong or in the EW coupling constant. The NLO fixed-order predictions have been scrutinized at the technical level, using exactly the same inputs, setup and perturbative accuracy, in order to quantify the level of agreement of different implementations of the same calculation. A dedicated comparison, again at the technical level, of three codes that reach next-to-next-to-leading-order (NNLO) accuracy in quantum chromodynamics (QCD) for the total cross section has also been performed. These fixed-order results are a well-defined reference that allows a classification of the impact of higher-order sets of radiative corrections. Several examples of higher-order effects due to the strong or the EW interaction are discussed in this common framework. The combination of QCD and EW corrections is also discussed, together with the ambiguities that affect the final result due to the choice of a specific combination recipe. All the codes considered in this report have been run by the respective authors, and the results presented here constitute a benchmark that should always be checked and reproduced before any high-precision analysis is conducted based on these codes. In order to simplify these benchmarking procedures, the codes used in this report, together with the relevant input files and running instructions, can be found in a repository at https://twiki.cern.ch/twiki/bin/view/Main/DrellYanComparison.

  12. Fast higher-order MR image reconstruction using singular-vector separation.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P

    2012-07-01

    Magnetic resonance imaging (MRI) conventionally relies on spatially linear gradient fields for image encoding. However, in practice various sources of nonlinear fields can perturb the encoding process and give rise to artifacts unless they are suitably addressed at the reconstruction level. Accounting for field perturbations that are neither linear in space nor constant over time, i.e., dynamic higher-order fields, is particularly challenging. It was previously shown to be feasible with conjugate-gradient iteration; however, so far this approach has been relatively slow due to the need to carry out explicit matrix-vector multiplications in each cycle. In this work, it is proposed to accelerate higher-order reconstruction by expanding the encoding matrix such that the fast Fourier transform can be employed for more efficient matrix-vector computation. The underlying principle is to represent the perturbing terms as sums of separable functions of space and time. Compact representations with this property are found by singular-vector analysis of the perturbing matrix. Guidelines for balancing the accuracy and speed of the resulting algorithm are derived by error propagation analysis. The proposed technique is demonstrated for the case of higher-order field perturbations due to eddy currents caused by diffusion weighting. In this example, image reconstruction was accelerated by two orders of magnitude.
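
    The core idea is compact (a minimal sketch: a synthetic space-time phase perturbation, its singular-value decomposition, and the rapid decay of the separable-approximation error; the field shape is purely illustrative):

    ```python
    import numpy as np

    # Space-time phase perturbation phi(t, r), e.g. a decaying quadratic
    # eddy-current-like term plus a linear drift (hypothetical shapes).
    nt, nx = 256, 128
    t = np.linspace(0, 1, nt)[:, None]
    r = np.linspace(-1, 1, nx)[None, :]
    phi = 2.0 * np.exp(-3 * t) * r**2 + 0.5 * t * r

    E = np.exp(1j * phi)                  # perturbing factor in the encoding
    U, s, Vh = np.linalg.svd(E, full_matrices=False)

    for L in (1, 2, 3, 4):
        E_L = (U[:, :L] * s[:L]) @ Vh[:L]   # sum of L separable terms
        print(L, np.abs(E - E_L).max())
    # The error drops rapidly with L: a handful of separable (time x space)
    # terms suffices, so matrix-vector products reduce to L FFT evaluations.
    ```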

  13. Feature-fused SSD: fast detection for small objects

    NASA Astrophysics Data System (ADS)

    Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian

    2018-04-01

    Small object detection is a challenging task in computer vision due to the limited resolution and information of small objects. In order to solve this problem, the majority of existing methods sacrifice speed for improvement in accuracy. In this paper, we aim to detect small objects at a fast speed, using the Single Shot MultiBox Detector (SSD), the best object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method for introducing contextual information in SSD, in order to improve the accuracy for small objects. In the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way contextual information is added. Experimental results show that these two fusion modules obtain higher mAP on PASCAL VOC2007 than the baseline SSD by 1.6 and 1.7 points respectively, especially with 2-3 points of improvement on some small-object categories. Their testing speeds are 43 and 40 FPS respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.

  14. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    PubMed

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eriksen, Janus J., E-mail: janusje@chem.au.dk; Jørgensen, Poul; Matthews, Devin A.

    The accuracy at which total energies of open-shell atoms and organic radicals may be calculated is assessed for selected coupled cluster perturbative triples expansions, all of which augment the coupled cluster singles and doubles (CCSD) energy by a non-iterative correction for the effect of triple excitations. Namely, the second- through sixth-order models of the recently proposed CCSD(T–n) triples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the acclaimed CCSD(T) model for both unrestricted and restricted open-shell Hartree-Fock (UHF/ROHF) reference determinants. By comparing UHF- and ROHF-based statistical results for a test set of 18 modest-sized open-shell species with comparable RHF-based results, no behavioral differences are observed for the higher-order models of the CCSD(T–n) series in their correlated descriptions of closed- and open-shell species. In particular, we find that the convergence rate throughout the series towards the coupled cluster singles, doubles, and triples (CCSDT) solution is identical for the two cases. For the CCSD(T) model, on the other hand, not only its numerical consistency, but also its established, yet fortuitous cancellation of errors breaks down in the transition from closed- to open-shell systems. The higher-order CCSD(T–n) models (orders n > 3) thus offer a consistent and significant improvement in accuracy relative to CCSDT over the CCSD(T) model, equally for RHF, UHF, and ROHF reference determinants, albeit at an increased computational cost.

  16. Increasing the lensing figure of merit through higher order convergence moments

    NASA Astrophysics Data System (ADS)

    Vicinanza, Martina; Cardone, Vincenzo F.; Maoli, Roberto; Scaramella, Roberto; Er, Xinzhong

    2018-01-01

    The unprecedented quality, the increased data set, and the wide area of ongoing and near-future weak lensing surveys allow one to move beyond the standard two-point statistics, thus making it worthwhile to investigate higher order probes. As an interesting step in this direction, we explore the use of higher order moments (HOM) of the convergence field as a way to increase the lensing figure of merit (FoM). To this end, we rely on simulated convergence fields to first show that HOM can be measured and calibrated, so that it is indeed possible to predict them for a given cosmological model, provided suitable nuisance parameters are introduced and then marginalized over. We then forecast the accuracy on cosmological parameters from the use of HOM alone and in combination with standard shear power spectra tomography. It turns out that HOM allow one to break some common degeneracies, thus significantly boosting the overall FoM. We also qualitatively discuss possible systematics and how they can be dealt with.

  17. Measuring Parameters of Massive Black Hole Binaries with Partially Aligned Spins

    NASA Technical Reports Server (NTRS)

    Lang, Ryan N.; Hughes, Scott A.; Cornish, Neil J.

    2011-01-01

    The future space-based gravitational wave detector LISA will be able to measure parameters of coalescing massive black hole binaries, often to extremely high accuracy. Previous work has demonstrated that the black hole spins can have a strong impact on the accuracy of parameter measurement. Relativistic spin-induced precession modulates the waveform in a manner which can break degeneracies between parameters, in principle significantly improving how well they are measured. Recent studies have indicated, however, that spin precession may be weak for an important subset of astrophysical binary black holes: those in which the spins are aligned due to interactions with gas. In this paper, we examine how well a binary's parameters can be measured when its spins are partially aligned and compare results using waveforms that include higher post-Newtonian harmonics to those that are truncated at leading quadrupole order. We find that the weakened precession can substantially degrade parameter estimation, particularly for the "extrinsic" parameters sky position and distance. Absent higher harmonics, LISA typically localizes the sky position of a nearly aligned binary about an order of magnitude less accurately than one for which the spin orientations are random. Our knowledge of a source's sky position will thus be worst for the gas-rich systems which are most likely to produce electromagnetic counterparts. Fortunately, higher harmonics of the waveform can make up for this degradation. By including harmonics beyond the quadrupole in our waveform model, we find that the accuracy with which most of the binary's parameters are measured can be substantially improved. In some cases, the improvement is such that they are measured almost as well as when the binary spins are randomly aligned.

  18. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    We present a novel model based on high-order access dependencies for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability of data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
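
    In sketch form, an order-k access-dependency model is a table mapping length-k block-access contexts to next-block counts (a minimal, hypothetical illustration of the idea rather than the paper's implementation):

    ```python
    from collections import Counter, defaultdict

    class HighOrderPredictor:
        """Order-k model of data-block access sequences: the next block is
        predicted from the last k accessed blocks, not just the current one."""
        def __init__(self, k=2):
            self.k = k
            self.table = defaultdict(Counter)

        def train(self, sequence):
            for i in range(len(sequence) - self.k):
                ctx = tuple(sequence[i:i + self.k])
                self.table[ctx][sequence[i + self.k]] += 1

        def predict(self, recent):
            counts = self.table.get(tuple(recent[-self.k:]))
            return counts.most_common(1)[0][0] if counts else None

    # Toy access trace, standing in for traces gathered by tracing seeded
    # pathlines in the preprocessing stage.
    trace = [0, 1, 2, 5, 1, 2, 6, 1, 2, 5, 1, 2, 6, 1, 2, 5]
    p = HighOrderPredictor(k=2)
    p.train(trace)
    print(p.predict([1, 2]))   # most likely next block given context (1, 2): 5
    print(p.predict([5, 1]))   # -> 2
    ```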

  19. Endogenously and exogenously driven selective sustained attention: Contributions to learning in kindergarten children.

    PubMed

    Erickson, Lucy C; Thiessen, Erik D; Godwin, Karrie E; Dickerson, John P; Fisher, Anna V

    2015-10-01

    Selective sustained attention is vital for higher order cognition. Although endogenous and exogenous factors influence selective sustained attention, assessment of the degree to which these factors influence performance and learning is often challenging. We report findings from the Track-It task, a paradigm that aims to assess the contribution of endogenous and exogenous factors to selective sustained attention within the same task. Behavioral accuracy and eye-tracking data on the Track-It task were correlated with performance on an explicit learning task. Behavioral accuracy and fixations to distractors during the Track-It task did not predict learning when exogenous factors supported selective sustained attention. In contrast, when endogenous factors supported selective sustained attention, fixations to distractors were negatively correlated with learning. Similarly, when endogenous factors supported selective sustained attention, higher behavioral accuracy was correlated with greater learning. These findings suggest that endogenously and exogenously driven selective sustained attention, as measured through different conditions of the Track-It task, may support different kinds of learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Higgs boson decay into b-quarks at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán

    2015-04-01

    We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.

  1. Science Journals in the Garden: Developing the Skill of Observation in Elementary Age Students

    NASA Astrophysics Data System (ADS)

    Kelly, Karinsa Michelle

    The ability to make and record scientific observations is critical in order for students to engage in successful inquiry, and provides a sturdy foundation for children to develop higher order cognitive processes. Nevertheless, observation is taken for granted in the elementary classroom. This study explores how linking school garden experience with the use of science journals can support this skill. Students participated in a month-long unit in which they practiced their observation skills in the garden and recorded those observations in a science journal. Students' observational skills were assessed using pre- and post-assessments, student journals, and student interviews using three criteria: Accuracy, Detail, and Quantitative Data. Statistically significant improvements were found in the categories of Detail and Quantitative Data. Scores did improve in the category of Accuracy, but it was not found to be a statistically significant improvement.

  2. A Runge-Kutta discontinuous finite element method for high speed flows

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.; Oden, J. T.

    1991-01-01

    A Runge-Kutta discontinuous finite element method is developed for hyperbolic systems of conservation laws in two space variables. The discontinuous Galerkin spatial approximation to the conservation laws results in a system of ordinary differential equations which is marched in time using Runge-Kutta methods. Numerical results for the two-dimensional Burgers equation show that the method is (p+1)-order accurate in time and space, where p is the degree of the polynomial approximation of the solution within an element, and that it is capable of capturing shocks over a single element without oscillations. Results for this problem also show that the accuracy of the solution in smooth regions is unaffected by the local projection and that the accuracy in smooth regions increases as p increases. Numerical results for the Euler equations show that the method captures shocks without oscillations and with higher resolution than a first-order scheme.
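
    The time-marching ingredient is short enough to sketch (the classical third-order strong-stability-preserving Runge-Kutta step commonly paired with discontinuous Galerkin discretizations, not necessarily this paper's exact scheme; simple periodic upwind differences stand in for the spatial operator, and grid and CFL number are illustrative):

    ```python
    import numpy as np

    def ssp_rk3(u, L, dt):
        """Third-order SSP Runge-Kutta step (Shu-Osher form): a convex
        combination of forward-Euler stages, so it preserves the
        oscillation-control properties of the spatial operator."""
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

    # Linear advection semi-discretization on a periodic grid.
    n = 200
    dx = 1.0 / n
    L = lambda u: -(u - np.roll(u, 1)) / dx
    u = np.exp(-200 * (np.linspace(0, 1, n, endpoint=False) - 0.3)**2)
    for _ in range(100):
        u = ssp_rk3(u, L, dt=0.4 * dx)
    print(u.max())   # pulse advected without oscillations
    ```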

  3. Relativistic theory for time and frequency transfer to order c^-3

    NASA Astrophysics Data System (ADS)

    Blanchet, L.; Salomon, C.; Teyssandier, P.; Wolf, P.

    2001-04-01

    This paper is motivated by the current development of several space missions (e.g. ACES on the International Space Station) that will use Earth-orbiting laser-cooled atomic clocks, providing a time-keeping accuracy of the order of 5×10^-17 in fractional frequency. We show that to such accuracy, the theory of frequency transfer between Earth and space must be extended from the currently known relativistic order 1/c^2 (which has been needed in previous space experiments such as GP-A) to the next relativistic correction of order 1/c^3. We find that the frequency transfer includes the first- and second-order Doppler contributions, the Einstein gravitational red-shift and, at the order 1/c^3, a mixture of these effects. As for the time transfer, it contains the standard Shapiro time delay, and we present an expression also including the first- and second-order Sagnac corrections. Higher-order relativistic corrections, at least O(1/c^4), are numerically negligible for time and frequency transfers in these experiments, being for instance of order 10^-20 in fractional frequency. Particular attention is paid to the problem of frequency transfer in the two-way experimental configuration. In this case we find a simple theoretical expression which extends the previous formula (Vessot et al.) to the next order 1/c^3. In the Appendix we present the detailed proofs of all the formulas which will be needed in such experiments.
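
    For orientation, the lower-order pieces named above combine in the familiar way (a standard textbook expansion, reconstructed here rather than quoted from the paper; N is the unit vector along the photon path from emitter A to receiver B, v_A and v_B the coordinate velocities at emission and reception, and Φ the Newtonian potential):

    \[
    \frac{\nu_B}{\nu_A}
    =\frac{1-\mathbf{N}\cdot\mathbf{v}_B/c}{1-\mathbf{N}\cdot\mathbf{v}_A/c}
    \left[1+\frac{\Phi_A-\Phi_B}{c^{2}}+\frac{v_B^{2}-v_A^{2}}{2c^{2}}\right]
    +\mathcal{O}(c^{-3}),
    \]

    i.e. first-order Doppler, gravitational red-shift and second-order Doppler; the order-1/c^3 terms derived in the paper mix these contributions and become relevant at the targeted 5×10^-17 level.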

  4. Computer-Assisted Classification Patterns in Autoimmune Diagnostics: The AIDA Project

    PubMed Central

    Benammar Elgaaied, Amel; Cascio, Donato; Bruno, Salvatore; Ciaccio, Maria Cristina; Cipolla, Marco; Fauci, Alessandro; Morgante, Rossella; Taormina, Vincenzo; Gorgi, Yousr; Marrakchi Triki, Raja; Ben Ahmed, Melika; Louzir, Hechmi; Yalaoui, Sadok; Imene, Sfar; Issaoui, Yassine; Abidi, Ahmed; Ammar, Myriam; Bedhiafi, Walid; Ben Fraj, Oussama; Bouhaha, Rym; Hamdi, Khouloud; Soumaya, Koudhi; Neili, Bilel; Asma, Gati; Lucchese, Mariano; Catanzaro, Maria; Barbara, Vincenza; Brusca, Ignazio; Fregapane, Maria; Amato, Gaetano; Friscia, Giuseppe; Neila, Trai; Turkia, Souayeh; Youssra, Haouami; Rekik, Raja; Bouokez, Hayet; Vasile Simone, Maria; Fauci, Francesco; Raso, Giuseppe

    2016-01-01

    Antinuclear antibodies (ANAs) are significant biomarkers in the diagnosis of autoimmune diseases in humans, detected by means of the Indirect Immunofluorescence (IIF) method and evaluated by analyzing patterns and fluorescence intensity. This paper introduces the AIDA Project (autoimmunity: diagnosis assisted by computer), developed in the framework of an Italy-Tunisia cross-border cooperation, and its preliminary results. A database of interpreted IIF images is being collected through the exchange of images and double reporting, and a Gold Standard database containing around 1000 double-reported images has been established. The Gold Standard database is used for optimization of a CAD (Computer Aided Detection) solution and for the assessment of its added value, in order for it to be applied alongside an Immunologist as a second Reader in the detection of autoantibodies. This CAD system is able to identify the fluorescence intensity and the fluorescence pattern in IIF images. Preliminary results show that the CAD, used as a second Reader, appeared to perform better than Junior Immunologists and hence may significantly improve their efficacy; compared with two Junior Immunologists, the CAD system showed higher Intensity Accuracy (85.5% versus 66.0% and 66.0%), higher Pattern Accuracy (79.3% versus 48.0% and 66.2%), and higher Mean Class Accuracy (79.4% versus 56.7% and 64.2%). PMID:27042658

  5. Mental fatigue impairs soccer-specific decision-making skill.

    PubMed

    Smith, Mitchell R; Zeuwts, Linus; Lenoir, Matthieu; Hens, Nathalie; De Jong, Laura M S; Coutts, Aaron J

    2016-07-01

    This study aimed to investigate the impact of mental fatigue on soccer-specific decision-making. Twelve well-trained male soccer players performed a soccer-specific decision-making task on two occasions, separated by at least 72 h. The decision-making task was preceded in a randomised order by 30 min of the Stroop task (mental fatigue) or 30 min of reading from magazines (control). Subjective ratings of mental fatigue were measured before and after treatment, and mental effort (referring to treatment) and motivation (referring to the decision-making task) were measured after treatment. Performance on the soccer-specific decision-making task was assessed using response accuracy and time. Visual search behaviour was also assessed throughout the decision-making task. Subjective ratings of mental fatigue and effort were almost certainly higher following the Stroop task compared to the magazines. Motivation for the upcoming decision-making task was possibly higher following the Stroop task. Decision-making accuracy was very likely lower and response time likely higher in the mental fatigue condition. Mental fatigue had unclear effects on most visual search behaviour variables. The results suggest that mental fatigue impairs accuracy and speed of soccer-specific decision-making. These impairments are not likely related to changes in visual search behaviour.

  6. Building a knowledge-based statistical potential by capturing high-order inter-residue interactions and its applications in protein secondary structure assessment.

    PubMed

    Li, Yaohang; Liu, Hui; Rata, Ionel; Jakobsson, Eric

    2013-02-25

    The rapidly increasing number of protein crystal structures available in the Protein Data Bank (PDB) has made statistical analyses feasible for studying complex high-order inter-residue correlations. In this paper, we report a context-based secondary structure potential (CSSP) for assessing the quality of predicted protein secondary structures generated by various prediction servers. CSSP is a sequence-position-specific knowledge-based potential generated with the potentials-of-mean-force approach, in which high-order inter-residue interactions are taken into consideration. The CSSP potential is effective in identifying secondary structure predictions of good quality. In 56% of the targets in the CB513 benchmark, the optimal CSSP potential scores the native secondary structure, or a prediction with Q3 accuracy higher than 90%, best among the predicted secondary structures generated by 10 popularly used secondary structure prediction servers. In more than 80% of the CB513 targets, the predicted secondary structures with the lowest CSSP potential values yield higher than 80% Q3 accuracy. Similar performance of CSSP is found on the CASP9 targets as well. Moreover, our computational results also show that the CSSP potential using triplets outperforms the CSSP potential using doublets and is currently better than the CSSP potential using quartets.
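
    The inversion at the heart of such knowledge-based potentials is simple to sketch (a minimal potentials-of-mean-force estimate from toy counts; the triplet states and numbers are hypothetical, and CSSP's sequence-position-specific conditioning is omitted):

    ```python
    import math
    from collections import Counter

    def pmf_potential(observed, reference, kT=1.0, pseudocount=1.0):
        """Knowledge-based potential via the potentials-of-mean-force
        inversion E(s) = -kT * ln(P_obs(s) / P_ref(s)), with pseudocounts
        guarding against sparse counts."""
        n_obs = sum(observed.values())
        n_ref = sum(reference.values())
        states = set(observed) | set(reference)
        E = {}
        for s in states:
            p_obs = (observed[s] + pseudocount) / (n_obs + pseudocount * len(states))
            p_ref = (reference[s] + pseudocount) / (n_ref + pseudocount * len(states))
            E[s] = -kT * math.log(p_obs / p_ref)
        return E

    # Toy counts of (residue triplet, secondary structure) pairs versus a
    # context-free background.
    observed = Counter({("ALV", "H"): 120, ("ALV", "E"): 15, ("ALV", "C"): 40})
    reference = Counter({("ALV", "H"): 60, ("ALV", "E"): 45, ("ALV", "C"): 70})
    E = pmf_potential(observed, reference)
    print(min(E, key=E.get))   # lowest-energy (most favored) assignment
    ```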

  7. Speed-Accuracy Trade-Off in Skilled Typewriting: Decomposing the Contributions of Hierarchical Control Loops

    ERIC Educational Resources Information Center

    Yamaguchi, Motonori; Crump, Matthew J. C.; Logan, Gordon D.

    2013-01-01

    Typing performance involves hierarchically structured control systems: At the higher level, an outer loop generates a word or a series of words to be typed; at the lower level, an inner loop activates the keystrokes comprising the word in parallel and executes them in the correct order. The present experiments examined contributions of the outer-…

  8. Enhanced factoring with a Bose-Einstein condensate.

    PubMed

    Sadgrove, Mark; Kumar, Sanjay; Nakagawa, Ken'ichi

    2008-10-31

    We present a novel method to realize analog sum computation with a Bose-Einstein condensate in an optical lattice potential subject to controlled phase jumps. We use the method to implement the Gauss sum algorithm for factoring numbers. By exploiting higher order quantum momentum states, we are able to improve the algorithm's accuracy beyond the limits of the usual classical implementation.
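
    The underlying Gauss sum test is easy to reproduce classically (a minimal sketch; N, the truncation M and the 0.9 threshold are illustrative, with M chosen large enough to suppress spurious "ghost factors"):

    ```python
    import cmath

    def gauss_sum(N, ell, M=24):
        """Truncated Gauss sum A_N(l) = (1/M) * sum_m exp(2*pi*i*m^2*N/l).
        |A_N(l)| = 1 exactly when l divides N; for non-factors the quadratic
        phases interfere destructively."""
        return sum(cmath.exp(2j * cmath.pi * m * m * N / ell)
                   for m in range(M)) / M

    N = 156809   # = 233 * 673
    factors = [ell for ell in range(2, 700)
               if abs(gauss_sum(N, ell)) > 0.9]
    print(factors)   # expected: [233, 673]
    ```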

  9. Seismic waves in heterogeneous material: subcell resolution of the discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Castro, Cristóbal E.; Käser, Martin; Brietzke, Gilbert B.

    2010-07-01

    We present an important extension of the arbitrary high-order discontinuous Galerkin (DG) finite-element method to model 2-D elastic wave propagation in highly heterogeneous material. In this new approach we include space-variable coefficients to describe smooth or discontinuous material variations inside each element, using the same numerical approximation strategy as for the velocity-stress variables in the formulation of the elastic wave equation. The combination of the DG method with a time integration scheme based on the solution of arbitrary-accuracy derivative Riemann problems still provides an explicit, one-step scheme which achieves arbitrary high-order accuracy in space and time. Compared to previous formulations, the new scheme contains two additional terms in the form of volume integrals. We show that the increasing computational cost per element can be overcompensated by the improved material representation inside each element, as coarser meshes can be used, which reduces the total number of elements and therefore the computational time needed to reach a desired error level. We confirm the accuracy of the proposed scheme by performing convergence tests and several numerical experiments considering smooth and highly heterogeneous material. As the approximation of the velocity and stress variables in the wave equation and of the material properties in the model can be chosen independently, we investigate the influence of the polynomial material representation on the accuracy of the synthetic seismograms with respect to computational cost. Moreover, we study the behaviour of the new method on strong material discontinuities, in the case where the mesh is not aligned with such a material interface. In this case a second-order linear material approximation seems to be the best choice, with higher-order intra-cell approximation leading to potentially unstable behaviour. For all test cases we validate our solution against the well-established standard fourth-order finite difference and spectral element methods.

  10. Researches on the Orbit Determination and Positioning of the Chinese Lunar Exploration Program

    NASA Astrophysics Data System (ADS)

    Li, P. J.

    2015-07-01

    This dissertation studies the precise orbit determination (POD) and positioning of the Chinese lunar exploration spacecraft, emphasizing the variety of VLBI (very long baseline interferometry) technologies applied to deep-space exploration and their contributions to the methods and accuracies of precise orbit determination and positioning. In summary, the main contents are as follows. Using real tracking data of the CE-2 (Chang'E-2) probe, the accuracy of orbit determination is analyzed for a domestic lunar probe under present conditions, and the role played by the VLBI tracking data is reassessed through precise orbit determination experiments for CE-2. The short-arc orbit determination experiments show that combining the ranging and VLBI data over an arc of 15 minutes improves the accuracy by 1-1.5 orders of magnitude compared to using the ranging data alone over an arc of 3 hours. The orbital accuracy is assessed through orbital overlap analysis, and the results show that the VLBI data contribute to CE-2's long-arc POD especially in the along-track and orbit-normal directions. For CE-2's 100 km × 100 km lunar orbit, the position errors are better than 30 meters, and for CE-2's 15 km × 100 km orbit, better than 45 meters. The observational data with delta differential one-way ranging (ΔDOR) from CE-2's X-band monitoring and control system experiment are analyzed. It is concluded that the accuracy of the ΔDOR delay is dramatically improved, with a noise level better than 0.1 ns, and that the systematic errors are well calibrated. Although the tracking data of CE-2 cannot support the development of an independent lunar gravity model, they provided evaluations of different lunar gravity models through POD, with the accuracies examined in terms of orbit-to-orbit solution differences for several gravity models. It is found that for the 100 km × 100 km lunar orbit, with a degree and order expansion up to 165, JPL's gravity model LP165P does not show noticeable improvement over Japan's SGM series models (100 × 100), but for the 15 km × 100 km lunar orbit, a higher degree-order model can significantly improve the orbit accuracy. After accomplishing its nominal mission, CE-2 carried out its extended missions, involving the L2 mission and the 4179 Toutatis mission. During the flight of the extended missions, the regime offers very little dynamics and thus requires an extensive amount of time and tracking data in order to attain a solution. The overlap errors are computed, and the results indicate that the use of VLBI measurements increases the accuracy and reduces the total amount of tracking time required. An orbit determination method based on polynomial fitting is proposed for CE-3's planned lunar soft-landing mission. In this method, dynamic modeling of the spacecraft is unnecessary, and the noise reduction is expected to be better than that of the point positioning method because full use is made of the observational data over the whole arc. Simulation experiments and real data processing showed that the optimal description of CE-1's free-fall landing trajectory is a set of fifth-order polynomial functions for each of the position components as well as the velocity components in J2000.0. The combination of the VLBI delay, the delay-rate data, and the USB (unified S-band) ranging data significantly improved the accuracy compared with the use of USB data alone.
    In order to determine the position of the CE-3 lunar lander, a kinematic statistical method is proposed. This method uses both ranging and VLBI measurements to the lander over a continuous arc, combined with precise knowledge of the motion of the Moon as provided by a planetary ephemeris, to estimate the lander's position on the lunar surface with high accuracy. Application of a lunar digital elevation model (DEM) as a constraint in the lander positioning is helpful. The positioning method for the traverse of the lunar rover is also investigated; the integrated delay-rate method achieves more precise positioning results than the point positioning method and broadens the applications of the VLBI data. The automated sample-return mission involves lunar-orbit rendezvous and docking, so precise orbit determination using same-beam VLBI (SBI) measurements of two spacecraft at the same time is analyzed. The simulation results showed that the SBI data improve the absolute and relative orbit accuracy for the two targets by 1-2 orders of magnitude. In order to verify the simulation results and test the two-target POD software developed by SHAO (Shanghai Astronomical Observatory), the real SBI data of SELENE (Selenological and Engineering Explorer) are processed. The POD results for Rstar and Vstar showed that the combination of SBI data significantly improves the accuracy for the two spacecraft, especially for Vstar with its sparser ranging data, whose POD accuracy is improved by approximately one order of magnitude, approaching that of Rstar.
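
    The polynomial-fitting idea for the landing arc is easy to illustrate (a one-component toy descent profile with noise; the numbers are hypothetical, and the real method fits all position and velocity components to combined VLBI and ranging data):

    ```python
    import numpy as np

    t = np.linspace(0, 600.0, 301)                   # tracking epochs (s)
    z_true = 1.5e6 - 80.0*t - 0.8*t**2 + 1e-4*t**3   # hypothetical descent (m)
    z_obs = z_true + 5.0 * np.random.randn(t.size)   # measurement noise

    coeffs = np.polyfit(t, z_obs, deg=5)             # least-squares polynomial fit
    z_fit = np.polyval(coeffs, t)
    v_fit = np.polyval(np.polyder(coeffs), t)        # velocity from the same fit

    # Using the whole arc averages the noise down, the advantage claimed over
    # epoch-by-epoch point positioning.
    print(np.std(z_fit - z_true))
    ```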

  11. Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.

    PubMed

    Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M

    2017-05-15

    We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first and second moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy with SH arrays that have a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for applications in low light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher order aberrations, and that SABRE-M can match the performance of SABRE on an SH grid of halved sampling.

  12. Variable High Order Multiblock Overlapping Grid Methods for Mixed Steady and Unsteady Multiscale Viscous Flows

    NASA Technical Reports Server (NTRS)

    Sjogreen, Bjoern; Yee, H. C.

    2007-01-01

    Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets in other parts, are difficult to capture accurately and efficiently employing the same numerical scheme, even under a multiblock grid or adaptive grid refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low order and high order shock-capturing schemes for such flows, a multiblock overlapping grid with different orders of accuracy on different blocks is proposed. Test cases illustrating the performance of the new solver are included.

  13. Calorie labeling and consumer estimation of calories purchased

    PubMed Central

    2014-01-01

    Background: Studies rarely find fewer calories purchased following calorie labeling implementation. However, few studies consider whether estimates of the number of calories purchased improved following calorie labeling legislation. Findings: Researchers surveyed customers and collected purchase receipts at fast food restaurants in the United States cities of Philadelphia (which implemented calorie labeling policies) and Baltimore (a matched comparison city) in December 2009 (pre-implementation) and June 2010 (post-implementation). A difference-in-difference design was used to examine the difference between estimated and actual calories purchased, and the odds of underestimating calories. Participants in both cities, both pre- and post-calorie labeling, tended to underestimate calories purchased, by an average 216–409 calories. Adjusted difference-in-differences in estimated-actual calories were significant for individuals who ordered small meals and those with some college education (accuracy in Philadelphia improved by 78 and 231 calories, respectively, relative to Baltimore, p = 0.03-0.04). However, categorical accuracy was similar; the adjusted odds ratio [AOR] for underestimation by >100 calories was 0.90 (p = 0.48) in difference-in-difference models. Accuracy was most improved for subjects with a BA or higher education (AOR = 0.25, p < 0.001) and for individuals ordering small meals (AOR = 0.54, p = 0.001). Accuracy worsened for females (AOR = 1.38, p < 0.001) and for individuals ordering large meals (AOR = 1.27, p = 0.028). Conclusions: We concluded that the odds of underestimating calories varied by subgroup, suggesting that at some level, consumers may incorporate labeling information. PMID:25015547

  14. Calorie labeling and consumer estimation of calories purchased.

    PubMed

    Taksler, Glen B; Elbel, Brian

    2014-07-12

    Studies rarely find fewer calories purchased following calorie labeling implementation. However, few studies consider whether estimates of the number of calories purchased improved following calorie labeling legislation. Researchers surveyed customers and collected purchase receipts at fast food restaurants in the United States cities of Philadelphia (which implemented calorie labeling policies) and Baltimore (a matched comparison city) in December 2009 (pre-implementation) and June 2010 (post-implementation). A difference-in-difference design was used to examine the difference between estimated and actual calories purchased, and the odds of underestimating calories. Participants in both cities, both pre- and post-calorie labeling, tended to underestimate calories purchased, by an average 216-409 calories. Adjusted difference-in-differences in estimated-actual calories were significant for individuals who ordered small meals and those with some college education (accuracy in Philadelphia improved by 78 and 231 calories, respectively, relative to Baltimore, p = 0.03-0.04). However, categorical accuracy was similar; the adjusted odds ratio [AOR] for underestimation by >100 calories was 0.90 (p = 0.48) in difference-in-difference models. Accuracy was most improved for subjects with a BA or higher education (AOR = 0.25, p < 0.001) and for individuals ordering small meals (AOR = 0.54, p = 0.001). Accuracy worsened for females (AOR = 1.38, p < 0.001) and for individuals ordering large meals (AOR = 1.27, p = 0.028). We concluded that the odds of underestimating calories varied by subgroup, suggesting that at some level, consumers may incorporate labeling information.

  15. A reduced-order model for compressible flows with buffeting condition using higher order dynamic mode decomposition with a mode selection criterion

    NASA Astrophysics Data System (ADS)

    Kou, Jiaqing; Le Clainche, Soledad; Zhang, Weiwei

    2018-01-01

    This study proposes an improvement in the performance of reduced-order models (ROMs) based on dynamic mode decomposition to model the flow dynamics of the attractor from a transient solution. By combining higher order dynamic mode decomposition (HODMD) with an efficient mode selection criterion, the HODMD with criterion (HODMDc) ROM is able to identify dominant flow patterns with high accuracy. This helps us to develop a more parsimonious ROM structure, allowing better predictions of the attractor dynamics. The method is tested in the solution of a NACA0012 airfoil buffeting in transonic flow, and its good performance in both the reconstruction of the original solution and the prediction of the permanent dynamics is shown. In addition, the robustness of the method has been successfully tested using different types of parameters, indicating that the proposed ROM approach is a promising tool for use with both numerical simulations and experimental data.
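
    The delay-embedding step that makes DMD "higher order" is easy to sketch. A minimal version follows; the rank r and delay depth d are user choices, and the paper's mode selection criterion (e.g., discarding low-amplitude or spurious modes) would be applied to the outputs.

    ```python
    import numpy as np

    def hodmd(X, d, r):
        """Delay-embed the snapshots (the 'higher order' step), then apply
        standard exact DMD with rank-r truncation.
        X: (n_state, n_time) snapshot matrix; d: number of delays."""
        n, m = X.shape
        Z = np.vstack([X[:, i:m - d + 1 + i] for i in range(d)])  # delay embedding
        Z1, Z2 = Z[:, :-1], Z[:, 1:]
        U, s, Vh = np.linalg.svd(Z1, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        Atilde = U.conj().T @ Z2 @ Vh.conj().T / s    # reduced linear operator
        eigvals, W = np.linalg.eig(Atilde)            # DMD eigenvalues
        modes = Z2 @ Vh.conj().T / s @ W              # modes in delay space
        return eigvals, modes[:n]                     # keep the physical block
    ```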

  16. The refractive index in electron microscopy and the errors of its approximations.

    PubMed

    Lentzen, M

    2017-05-01

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Development of cognitive processing and judgments of knowledge in medical students: Analysis of progress test results.

    PubMed

    Cecilio-Fernandes, Dario; Kerdijk, Wouter; Jaarsma, A D Debbie C; Tio, René A

    2016-11-01

    Besides acquiring knowledge, medical students should also develop the ability to apply and reflect on it, which requires higher-order cognitive processing. Ideally, students should have reached higher-order cognitive processing when they enter the clinical program. Whether this is the case is unknown. We investigated students' cognitive processing and awareness of their knowledge during medical school. Data were gathered from 347 first-year preclinical and 196 first-year clinical students concerning the 2008 and 2011 Dutch progress tests. Questions were classified based upon Bloom's taxonomy: "simple questions" requiring lower-order and "vignette questions" requiring higher-order cognitive processing. Subsequently, we compared students' performance and awareness of their knowledge in 2008 to that in 2011 for each question type. Students' performance on each type of question increased as students progressed. Preclinical and first-year clinical students performed better on simple questions than on vignette questions. Third-year clinical students performed better on vignette questions than on simple questions. The accuracy of students' judgment of knowledge decreased over time. The progress test is a useful tool to assess students' cognitive processing and awareness of their knowledge. At the end of medical school, students achieved higher-order cognitive processing but their awareness of their knowledge had decreased.

  18. Low power and high accuracy spike sorting microprocessor with on-line interpolation and re-alignment in 90 nm CMOS process.

    PubMed

    Chen, Tung-Chien; Ma, Tsung-Chuan; Chen, Yun-Yu; Chen, Liang-Gee

    2012-01-01

    Accurate spike sorting is an important issue for neuroscientific and neuroprosthetic applications. The sorting of spikes depends on the features extracted from the neural waveforms, and better sorting performance usually comes with a higher sampling rate (SR). However, for long-duration experiments on free-moving subjects, miniaturized and wireless neural recording ICs are the current trend, and sorting accuracy is usually compromised by a lower SR chosen for lower power consumption. In this paper, we implement an on-chip spike sorting processor with integrated interpolation hardware in order to improve the performance in terms of power versus accuracy. According to the fabrication results in a 90 nm process, if the interpolation is appropriately performed during spike sorting, a system operated at an SR of 12.5 k samples per second (sps) can outperform one without interpolation at 25 ksps in both accuracy and power.
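
    As a rough illustration of why interpolation helps at a low SR: upsampling a detected spike and re-centering on the interpolated peak removes much of the sampling jitter that degrades feature extraction. This is a hedged sketch, not the chip's algorithm; the upsampling factor `up` is an assumption.

    ```python
    import numpy as np
    from scipy.signal import resample

    def align_spike(waveform, up=4):
        """Upsample a detected spike by FFT (sinc) interpolation, then re-center
        on the interpolated peak to reduce sampling jitter before sorting."""
        fine = resample(waveform, up * len(waveform))
        peak = int(np.argmax(np.abs(fine)))
        return np.roll(fine, len(fine) // 2 - peak)  # crude circular re-alignment
    ```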

  19. Speeding up spin-component-scaled third-order perturbation theory with the chain of spheres approximation: the COSX-SCS-MP3 method

    NASA Astrophysics Data System (ADS)

    Izsák, Róbert; Neese, Frank

    2013-07-01

    The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of the spin-component-scaled third-order Møller-Plesset perturbation (SCS-MP3) theory. The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.

  20. Case studies in pathophysiology: The development and evaluation of an interactive online learning environment to develop higher order thinking and argumentation

    NASA Astrophysics Data System (ADS)

    Titterington, Lynda C.

    2007-12-01

    This study presents a framework for examining the effects of higher order thinking on the achievement of allied health students enrolled in a pathophysiology course. A series of clinical case studies was developed and published in an enriched online environment that guided students through the process of developing a solution and supporting it through data analysis and interpretation. The series of case study modules scaffolded argumentation through question prompts. The modules began with a simple, direct problem and became progressively more complex throughout the quarter. A control group was assigned a pencil-and-paper case study based upon recall. The case studies were scored for content accuracy and evidence of higher order thinking skills. Higher order thinking was measured using a rubric based upon the Toulmin argumentation pattern. The results indicated that implementing a case study in either the online or the traditional format was associated with significant gains in achievement. The Web-enhanced case studies were associated with modest gains in knowledge acquisition. The argumentation scores across the series followed two trends: directed case studies were associated with higher levels of argumentation than ill-structured case studies, and there appeared to be an inverse relationship between the students' argumentation and content scores. The protocols developed for this study can serve as a template for a larger, extended investigation into student learning in the online environment.

  1. Fourth-Grade Children are Less Accurate in Reporting School Breakfast than School Lunch during 24-Hour Dietary Recalls

    PubMed Central

    Baxter, Suzanne Domel; Royer, Julie A.; Hardin, James W.; Guinn, Caroline H.; Smith, Albert F.

    2008-01-01

    Objective To compare reporting accuracy for breakfast and lunch in two studies. Design Children were observed eating school meals and interviewed the following morning about the previous day. Study 1 – 104 children were each interviewed one to three times with ≥25 days separating any two interviews. Study 2 – 121 children were each interviewed once in forward (morning-to-evening) and once in reverse (evening-to-morning) order, separated by ≥29 days. Setting 12 schools. Participants Fourth-grade children. Main Outcome Measures For each meal: food-item variables – observed number, reported number, omission rate, intrusion rate, total inaccuracy; kilocalorie variables – observed, reported, correspondence rate, inflation ratio. Analysis General linear mixed-models. Results For each study, observed and reported numbers of items and kilocalories, and correspondence rate (reporting accuracy), were greater for lunch than breakfast; omission rate, intrusion rate, and inflation ratio (measures of reporting error) were greater for breakfast than lunch. Study 1 – for each meal over interviews, total inaccuracy decreased and correspondence rate increased. Study 2 – for each meal for boys for reverse and girls for forward order, omission rate was lower and correspondence rate was higher. Conclusions and Implications Breakfast was reported less accurately than lunch. Despite improvement over interviews (Study 1) and differences for order × sex (Study 2), reporting accuracy was low for breakfast and lunch. PMID:17493562
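
    The food-item measures named above can be illustrated with simple set arithmetic. The definitions below are plausible stand-ins, not necessarily the authors' exact formulas.

    ```python
    def recall_accuracy(observed, reported):
        """Omission rate, intrusion rate, and correspondence rate (percentages)."""
        observed, reported = set(observed), set(reported)
        omission = 100.0 * len(observed - reported) / len(observed)
        intrusion = 100.0 * len(reported - observed) / len(reported)
        correspondence = 100.0 * len(observed & reported) / len(observed)
        return omission, intrusion, correspondence

    print(recall_accuracy(["milk", "pizza", "corn", "apple"],
                          ["milk", "pizza", "cookie"]))  # (50.0, 33.3..., 50.0)
    ```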

  2. Proper Analytic Point Spread Function for Lateral Modulation

    NASA Astrophysics Data System (ADS)

    Chikayoshi Sumi; Kunio Shimizu; Norihiko Matsui

    2010-07-01

    For ultrasonic lateral modulation for the imaging and measurement of tissue motion, envelope shapes of the point spread function (PSF) better than that of a parabolic function are searched for among analytic functions and windows, on the basis of previously obtained knowledge of the ideal PSF shape, i.e., a large full width at half maximum and short feet. Through simulation of displacement vector measurement, better shapes are determined. As a better shape, a new window is obtained from a Tukey window by replacing its Hanning tapers with power functions of order larger than two. The order of measurement accuracies obtained is as follows: the new window > rectangular window > power function with a higher order > parabolic function > Akaike window.
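
    A hedged sketch of the kind of window described: a Tukey-like window whose cosine (Hanning) tapers are replaced by power-function tapers of order p > 2. The paper's exact construction may differ; the parameterization below is an assumption.

    ```python
    import numpy as np

    def power_tukey(n, alpha=0.5, p=3.0):
        """Tukey-like window with the cosine tapers replaced by power-function
        tapers of order p; alpha is the total fraction of the window tapered."""
        w = np.ones(n)
        edge = int(alpha * (n - 1) / 2)
        if edge == 0:
            return w
        t = np.linspace(0.0, 1.0, edge, endpoint=False)
        w[:edge] = t ** p            # rising taper
        w[-edge:] = (t ** p)[::-1]   # falling taper
        return w

    w = power_tukey(256, alpha=0.5, p=3.0)  # p > 2, as the abstract suggests
    ```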

  3. Verification of Software: The Textbook and Real Problems

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2006-01-01

    The process of verification, or determining the order of accuracy of computational codes, can be problematic when working with large, legacy computational methods that have been used extensively in industry or government. Verification does not ensure that the computer program is producing a physically correct solution; it ensures merely that the observed order of accuracy of solutions is the same as the theoretical order of accuracy. The Method of Manufactured Solutions (MMS) is one of several ways of determining the order of accuracy. MMS is used to verify a series of computer codes progressing in sophistication from "textbook" to "real life" applications. The degree of numerical precision in the computations considerably influenced the range of mesh density needed to achieve the theoretical order of accuracy, even for 1-D problems. The choice of manufactured solutions and mesh form shifted the observed order in specific areas but not in general. Solution residual (iterative) convergence was not always achieved for 2-D Euler manufactured solutions. L2-norm convergence differed from variable to variable; therefore, an observed order of accuracy could not be determined conclusively in all cases. The cause is currently under investigation.
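
    The observed order of accuracy referred to here is typically estimated from errors on two systematically refined meshes, p_obs = log(e_coarse/e_fine)/log(r). A minimal sketch with illustrative error values:

    ```python
    import numpy as np

    def observed_order(e_coarse, e_fine, r=2.0):
        """Observed order of accuracy from errors on two meshes refined by ratio r."""
        return np.log(e_coarse / e_fine) / np.log(r)

    # e.g. discretization errors of a nominally 2nd-order code against an MMS solution
    print(observed_order(4.1e-3, 1.05e-3))  # ~1.97, consistent with design order 2
    ```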

  4. Program VSAERO theory document: A computer program for calculating nonlinear aerodynamic characteristics of arbitrary configurations

    NASA Technical Reports Server (NTRS)

    Maskew, Brian

    1987-01-01

    The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.

  5. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increase the computational cost greatly.

  6. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
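
    To make the modeling step concrete: a sketch of evaluating a third-order Legendre basis on standardized days in milk and mapping a coefficient covariance matrix to a (co)variance surface over the lactation. The coefficient covariances below are hypothetical placeholders.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    dim = np.linspace(5, 305, 61)                                # days in milk (DIM)
    x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0  # map DIM to [-1, 1]
    order = 3
    Phi = np.stack([legendre.legval(x, np.eye(order + 1)[k])     # P_0..P_3 at each x
                    for k in range(order + 1)], axis=1)          # (61, 4) basis matrix

    K = np.diag([2.0, 0.5, 0.2, 0.05])  # hypothetical coefficient (co)variances
    G = Phi @ K @ Phi.T                 # implied (co)variance surface over DIM
    ```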

  7. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
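
    A hedged sketch of the statistical core of such an analysis: reduced factorial moments of a measured multiplicity distribution, the quantities behind singles/doubles/triples and the higher-order quads and pents named above. Dead time correction is omitted here, and the histogram is a hypothetical stand-in.

    ```python
    import numpy as np

    def factorial_moments(counts, max_order=5):
        """Reduced factorial moments E[n(n-1)...(n-k+1)] for k = 1..max_order,
        from a histogram `counts` of event multiplicities n = 0, 1, 2, ..."""
        n = np.arange(len(counts), dtype=float)
        p = counts / counts.sum()
        moments = []
        for k in range(1, max_order + 1):
            falling = np.ones_like(n)
            for j in range(k):
                falling *= np.clip(n - j, 0.0, None)  # falling factorial of n
            moments.append(float((p * falling).sum()))
        return moments  # singles-, doubles-, ..., pents-like quantities

    hist = np.array([50.0, 120.0, 90.0, 40.0, 12.0, 3.0])  # hypothetical histogram
    print(factorial_moments(hist))
    ```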

  8. For numerical differentiation, dimensionality can be a blessing!

    NASA Astrophysics Data System (ADS)

    Anderssen, Robert S.; Hegland, Markus

    Finite difference methods, such as the mid-point rule, have been applied successfully to the numerical solution of ordinary and partial differential equations. If such formulas are applied to observational data, in order to determine derivatives, the results can be disastrous. The reason for this is that measurement errors, and even rounding errors in computer approximations, are strongly amplified in the differentiation process, especially if small step-sizes are chosen and higher derivatives are required. A number of authors have examined the use of various forms of averaging which allows the stable computation of low order derivatives from observational data. The size of the averaging set acts like a regularization parameter and has to be chosen as a function of the grid size h. In this paper, it is initially shown how first (and higher) order single-variate numerical differentiation of higher dimensional observational data can be stabilized with a smaller loss of accuracy than occurs for the corresponding differentiation of one-dimensional data. The result is then extended to the multivariate differentiation of higher dimensional data. The nature of the trade-off between convergence and stability is explicitly characterized, and the complexity of various implementations is examined.
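
    The instability and its cure by averaging can be demonstrated in a few lines; the window size w plays the role of the regularization parameter discussed above. All values below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    h = 1e-3
    x = np.arange(0.0, 1.0, h)
    f = np.sin(2 * np.pi * x) + rng.normal(0.0, 1e-3, x.size)  # noisy samples

    raw = (f[2:] - f[:-2]) / (2 * h)                  # noise amplified ~ sigma/h
    w = 25                                            # averaging set size (regularization)
    fs = np.convolve(f, np.ones(w) / w, mode="same")  # local averaging first
    smooth = (fs[2:] - fs[:-2]) / (2 * h)

    true = 2 * np.pi * np.cos(2 * np.pi * x[1:-1])
    i = slice(w, -w)                                  # ignore window edge effects
    print(np.abs(raw - true)[i].max())                # O(1) error from noise
    print(np.abs(smooth - true)[i].max())             # much smaller, at the cost of bias
    ```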

  9. Higher order QCD predictions for associated Higgs production with anomalous couplings to gauge bosons

    NASA Astrophysics Data System (ADS)

    Mimasu, Ken; Sanz, Verónica; Williams, Ciaran

    2016-08-01

    We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which Electroweak symmetry breaking is manifestly linear and the resulting operators arise through a six-dimensional effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.

  10. Integrated and differential accuracy in resummed cross sections

    DOE PAGES

    Bertolini, Daniele; Solon, Mikhail P.; Walsh, Jonathan R.

    2017-03-30

    Standard QCD resummation techniques provide precise predictions for the spectrum and the cumulant of a given observable. The integrated spectrum and the cumulant differ by higher-order terms which, however, can be numerically significant. In this paper we propose a method, which we call the σ-improved scheme, to resolve this issue. It consists of two steps: (i) include higher-order terms in the spectrum to improve the agreement with the cumulant central value, and (ii) employ profile scales that encode correlations between different points to give robust uncertainty estimates for the integrated spectrum. We provide a generic algorithm for determining such profile scales, and show the application to the thrust distribution in e+e- collisions at NLL'+NLO and NNLL'+NNLO.

  11. Boosting Classification Accuracy of Diffusion MRI Derived Brain Networks for the Subtypes of Mild Cognitive Impairment Using Higher Order Singular Value Decomposition

    PubMed Central

    Zhan, L.; Liu, Y.; Zhou, J.; Ye, J.; Thompson, P.M.

    2015-01-01

    Mild cognitive impairment (MCI) is an intermediate stage between normal aging and Alzheimer's disease (AD), and around 10-15% of people with MCI develop AD each year. More recently, MCI has been further subdivided into early and late stages, and there is interest in identifying sensitive brain imaging biomarkers that help to differentiate stages of MCI. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying early versus late MCI. PMID:26413202
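
    A minimal sketch of the truncated higher order SVD step; in the paper's pipeline, the flattened core tensor would then feed the sparse logistic regression. The tensor shape and ranks below are arbitrary illustrations.

    ```python
    import numpy as np

    def hosvd(T, ranks):
        """Truncated higher order SVD: SVD each mode unfolding, keep the leading
        singular vectors, then project T onto them to form the core tensor."""
        factors = []
        for mode, r in enumerate(ranks):
            unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfold, full_matrices=False)
            factors.append(U[:, :r])
        core = T
        for U in factors:  # each contraction consumes axis 0 and appends the new axis
            core = np.tensordot(core, U.conj(), axes=(0, 0))
        return core, factors

    T = np.random.default_rng(0).normal(size=(10, 10, 6))  # e.g. stacked network matrices
    core, factors = hosvd(T, ranks=(4, 4, 3))
    features = core.ravel()  # would feed a sparse logistic regression
    ```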

  12. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, Qi

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation is proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm is selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument is selected and its measuring accuracy evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument is satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector is obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure are calculated with the selected RCS simulation algorithm. The final RCS accuracy is the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can easily be applied outdoors, avoiding the correlation among the plate edge length, plate orthogonality, and plate curvature errors. The accuracy of this method is higher than that of the method using a distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  13. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
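
    For context, a generic minmod-limited second-order upwind step of the kind such schemes build on; this is a standard MUSCL-type sketch for linear advection, not the paper's exact stencil-selection logic.

    ```python
    import numpy as np

    def minmod(a, b):
        """Limited slope: the smaller of a, b when signs agree, else zero."""
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def tvd_step(u, c):
        """One periodic step for u_t + a u_x = 0 with a > 0, c = a*dt/dx <= 1."""
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
        ul = u + 0.5 * (1.0 - c) * du                       # left-biased face values
        flux = c * ul
        return u - (flux - np.roll(flux, 1))

    u = np.where((np.arange(200) > 50) & (np.arange(200) < 100), 1.0, 0.0)
    for _ in range(100):
        u = tvd_step(u, 0.5)   # the square wave advects without new oscillations
    ```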

  14. Initial overview of the San Francisco Bay and Santa Cruz mountains ground motion

    USGS Publications Warehouse

    Brady, A. Gerald

    1990-01-01

    The strong-motion accelerograms from the Loma Prieta earthquake are analyzed for their long-period content in order to obtain a clearer picture of the long-period wave propagation details. Shear waves having periods in the 3.5 to 4 sec and 5 to 7 sec ranges travel across four groups of stations with satisfactory coherency. Displacement accuracies are of the order of 0.5 cm for most of these data, with signal amplitudes an order of magnitude higher than the noise. Resonances associated with shear waves of 1.5 sec period are responsible for about 3/4 of the differential displacement necessary to unseat the 15 m section of the Bay Bridge.

  15. Development of quadrilateral spline thin plate elements using the B-net method

    NASA Astrophysics Data System (ADS)

    Chen, Juan; Li, Chong-Jun

    2013-08-01

    The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of isoparametric quadrilateral elements drops significantly under mesh distortions. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which can be insensitive to mesh distortions and possesses second order completeness in the Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.

  16. Accuracy versus convergence rates for a three dimensional multistage Euler code

    NASA Technical Reports Server (NTRS)

    Turkel, Eli

    1988-01-01

    Using a central difference scheme, it is necessary to add an artificial viscosity in order to reach a steady state. This viscosity usually consists of a linear fourth difference to eliminate odd-even oscillations and a nonlinear second difference to suppress oscillations in the neighborhood of steep gradients. There are free constants in these differences. As one increases the artificial viscosity, the high modes are dissipated more and the scheme converges more rapidly. However, this higher level of viscosity smooths the shocks and eliminates other features of the flow. Thus, there is a conflict between the requirements of accuracy and efficiency. Examples are presented for a variety of three-dimensional inviscid solutions over isolated wings.
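
    A hedged sketch of this blended dissipation for a periodic scalar field: a JST-style model with a gradient-switched second difference and a background fourth difference. The free constants k2 and k4 are the tuning knobs the abstract refers to; raising them speeds convergence but smears shocks.

    ```python
    import numpy as np

    def jst_dissipation(u, k2=0.5, k4=1.0 / 32.0):
        """Blended artificial dissipation for a periodic scalar field u. Returns
        the term added to a central-difference residual; exact signs and
        constants vary between codes."""
        d1 = np.roll(u, -1) - u                               # first differences at faces
        d2 = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)         # second differences
        nu = np.abs(d2) / (np.abs(np.roll(u, -1)) + 2.0 * np.abs(u)
                           + np.abs(np.roll(u, 1)) + 1e-12)   # scalar shock sensor
        eps2 = k2 * np.maximum(nu, np.roll(nu, -1))           # strong near steep gradients
        eps4 = np.maximum(0.0, k4 - eps2)                     # switched off at shocks
        d3 = np.roll(d2, -1) - d2                             # third differences at faces
        face_flux = eps2 * d1 - eps4 * d3
        return face_flux - np.roll(face_flux, 1)
    ```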

  17. Accuracy Study of the Space-Time CE/SE Method for Computational Aeroacoustics Problems Involving Shock Waves

    NASA Technical Reports Server (NTRS)

    Wang, Xiao Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.

    1999-01-01

    The space-time conservation element and solution element (CE/SE) method is used to study the sound-shock interaction problem. The order of accuracy of numerical schemes is investigated. The linear model problem, governed by the 1-D scalar convection equation, the sound-shock interaction problem, governed by the 1-D Euler equations, and the 1-D shock-tube problem, which involves moving shock waves and contact surfaces, are solved to investigate the order of accuracy of numerical schemes. It is concluded that the accuracy of the CE/SE numerical scheme with designed 2nd-order accuracy becomes 1st order when a moving shock wave exists. However, the absolute error in the CE/SE solution downstream of the shock wave is of the same order as that obtained using a fourth-order accurate essentially nonoscillatory (ENO) scheme. No special techniques are used for either high-frequency low-amplitude waves or shock waves.

  18. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machines (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.

  19. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.

    2004-01-01

    A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion on the Discontinuous Spectral Difference (SD) Method, locations of the unknowns and flux points and numerical results are also presented.

  20. Identification of Phragmites australis and Spartina alterniflora in the Yangtze Estuary between Bayes and BP neural network using hyper-spectral data

    NASA Astrophysics Data System (ADS)

    Liu, Pudong; Zhou, Jiayuan; Shi, Runhe; Zhang, Chao; Liu, Chaoshun; Sun, Zhibin; Gao, Wei

    2016-09-01

    The aim of this work was to discriminate between coastal wetland plants using Bayes and BP neural network classifiers on hyperspectral data, in order to optimize the classification method. For this purpose, we chose two dominant plants (invasive S. alterniflora and native P. australis) in the Yangtze Estuary; the leaf spectral reflectance of P. australis and S. alterniflora was measured with an ASD field spectrometer. We tested the Bayes method and a BP neural network for the identification of these two species. Results showed that three bands (i.e., 555 nm, 711 nm and 920 nm) could be identified as the sensitive bands providing the input parameters for the two methods. Both the Bayes method and the BP neural network prediction model performed well (88.57% accuracy for the Bayes prediction and about 80% for the BP neural network), but the Bayes method gave higher accuracy and stability.
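
    The Bayes side of such a comparison reduces to a Gaussian naive Bayes classifier over the three sensitive bands. A sketch with synthetic reflectance values standing in for the ASD measurements (all numbers are hypothetical):

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    # synthetic leaf reflectance at the three sensitive bands (555, 711, 920 nm)
    spartina = rng.normal([0.08, 0.25, 0.55], 0.03, (60, 3))
    phragmites = rng.normal([0.10, 0.30, 0.48], 0.03, (60, 3))
    X = np.vstack([spartina, phragmites])
    y = np.array([0] * 60 + [1] * 60)  # 0 = S. alterniflora, 1 = P. australis

    clf = GaussianNB().fit(X[::2], y[::2])           # train on every other sample
    print("accuracy:", clf.score(X[1::2], y[1::2]))  # test on the rest
    ```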

  1. NLO renormalization in the Hamiltonian truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

    Hamiltonian truncation (also known as the "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate variant of Hamiltonian truncation to date, which implements renormalization at cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as the result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.

  2. On the feasibility of sub-100 nm rad emittance measurement in plasma accelerators using permanent magnetic quadrupoles

    NASA Astrophysics Data System (ADS)

    Li, F.; Wu, Y. P.; Nie, Z.; Guo, B.; Zhang, X. H.; Huang, S.; Zhang, J.; Cheng, Z.; Ma, Y.; Fang, Y.; Zhang, C. J.; Wan, Y.; Xu, X. L.; Hua, J. F.; Pai, C. H.; Lu, W.; Gu, Y. Q.

    2018-01-01

    Low emittance (sub-100 nm rad) measurement of electron beams in plasma accelerators has been a challenging issue for a while. Among various measurement schemes, a measurement based on a single-shot quad-scan using permanent magnetic quadrupoles (PMQs) was recently reported, with emittance as low as ~200 nm rad (Weingartner 2012 Phys. Rev. Spec. Top. Accel. Beams 15 111302). However, the accuracy and reliability of this method have not been systematically analyzed. Such analysis is critical for evaluating the potential of sub-100 nm rad emittance measurement using any scheme. In this paper, we analyze the effects of various nonideal physical factors on the accuracy and reliability of the PMQ method. These factors include aberration induced by high order fields, PMQ misalignment, and angular fluctuation of incoming beams. Our conclusions are as follows: (i) the aberrations caused by high order fields of PMQs are relatively weak for low emittance measurement as long as the PMQs are properly constructed. A series of PMQs were manufactured and measured at Tsinghua University, and numerical simulations found their high order field effects to be negligible. (ii) The largest measurement error of emittance is caused by the angular misalignment between PMQs. For low emittance measurement of ~100 MeV beams, an angular alignment accuracy of 0.1° is necessary. This requirement can be eased for beams with higher energies. (iii) The transverse position misalignment of PMQs and the angular fluctuation of incoming beams only cause translational and rotational shifts of the measured signals, respectively; therefore, they have no effect on the measured value of emittance. (iv) The spatial resolution and efficiency of the detection system need to be properly designed to guarantee the accuracy of sub-100 nm rad emittance measurement.

  3. Numerical investigation of implementation of air-earth boundary by acoustic-elastic boundary approach

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2007-01-01

    The need for incorporating the traction-free condition at the air-earth boundary for finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations for the air-earth boundary were undertaken using the (2,2) (the finite-difference operators are second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, a scheme that modifies material properties based on a transversely isotropic medium, the acoustic-elastic boundary approach, and an analytical approach. The method proposed achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for the SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy; 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersive analysis of simulated seismographs from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. © 2007 Society of Exploration Geophysicists.

  4. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better lower order approximation that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely, power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  5. Dynamics and Control of Tethered Antennas/Reflectors in Orbit

    DTIC Science & Technology

    1992-02-01

    reflector system. The optimal linear quadratic Gaussian (LQG) digital control of the orbiting tethered antenna/reflector system is analyzed. The...flexibility of both the antenna and the tether are included in this high order system model. With eight point actuators optimally positioned together with...able to maintain satisfactory pointing accuracy for low and moderate altitude orbits under the influence of solar pressure. For the higher altitudes a

  6. Artificial Intelligence (Al) Center of Excellence at the University of Pennsylvania

    DTIC Science & Technology

    1995-07-01

    Approach and repel behaviors were implemented in order to study higher level behavioral simulation. Parallel algorithms for motion planning (as a...of decision-making accuracy can be specified for this graph-reduction process. We have also developed a mixed qualitative/quantitative simulation...system, called QobiSIM. QobiSIM has been used to develop a cardiovascular simulation to be incorporated into the TraumAID system. This cardiovascular

  7. Assessment of mean-field microkinetic models for CO methanation on stepped metal surfaces using accelerated kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten

    2017-10-01

    First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.

  8. The elimination of influence of disturbing bodies' coordinates and derivatives discontinuity on the accuracy of asteroid motion simulation

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.; Votchel, I. A.

    2013-12-01

    The problem of asteroid motion simulation has been considered. At present this simulation is performed by means of numerical integration, taking into account the perturbations from the planets and the Moon using planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain coefficients of Chebyshev polynomials for a large number of equal interpolation intervals. However, the ephemerides have been constructed to keep, at the junctions of adjacent intervals, continuity only of the coordinates and their first derivatives (and only in the 16-digit decimal format corresponding to 64-bit floating-point numbers). The second and higher order derivatives have breaks at these junctions. These breaks, if they fall within an integration step, decrease the accuracy of the numerical integration. In the 34-digit format (128-bit floating-point numbers), the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the junctions of the interpolation intervals. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order are continuous at the junctions. The smoothing algorithm is based on conditional least-squares fitting of the coefficients of the Chebyshev polynomials, the conditions being equality of the coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to smooth the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions never lie within a step and always coincide with its end. This way may be applied only at 16-digit decimal precision, because it assumes continuity of the planets' coordinates and their first derivatives. Both ways were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14 by means of the 15th- and 31st-order Everhart method at 16- and 34-digit decimal precision, respectively. The DE430 ephemerides (in original and smoothed form) were used for the calculation of perturbations. The results indicate that the integration step correction increases the numerical integration accuracy by 3-4 orders of magnitude. If, in addition, the original ephemerides are replaced by the smoothed ones, the accuracy increases by approximately 10 orders of magnitude.
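
    The junction breaks are easy to reproduce: fit Chebyshev pieces independently on adjacent intervals and compare derivatives at the shared endpoint; the mismatch grows with derivative order. A sketch (the test function and degree are arbitrary illustrations, not ephemeris data):

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = lambda x: np.sin(10.0 * x)                # arbitrary smooth test function
    left = np.linspace(-1.0, 0.0, 200)
    right = np.linspace(0.0, 1.0, 200)
    cl = C.Chebyshev.fit(left, f(left), deg=12)   # independent piecewise fits,
    cr = C.Chebyshev.fit(right, f(right), deg=12) # as in an ephemeris file

    for k in range(4):  # mismatch of value and derivatives at the junction x = 0
        print(k, cl.deriv(k)(0.0) - cr.deriv(k)(0.0))
    ```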

  9. The Voronoi Implicit Interface Method for computing multiphase physics

    PubMed Central

    Saye, Robert I.; Sethian, James A.

    2011-01-01

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269

  10. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values |e|_ave and the standard deviation of the calibration equation e_std, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
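
    A sketch of the evaluation procedure with the two criteria, using synthetic voltage-temperature pairs in place of real thermocouple data; the sensitivity and noise figures below are assumptions, loosely inspired by a type-K device.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = np.linspace(0.0, 400.0, 80)                               # deg C
    V = 41e-3 * T + 4e-8 * T**2 + rng.normal(0.0, 0.02, T.size)   # mV, hypothetical

    for deg in (1, 2, 3, 4):
        c = np.polyfit(V, T, deg)                           # inverse calibration T(V)
        e = T - np.polyval(c, V)                            # calibration errors
        print(deg, np.abs(e).mean(), e.std(ddof=deg + 1))   # |e|_ave and e_std
    ```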

  11. The Voronoi Implicit Interface Method for computing multiphase physics.

    PubMed

    Saye, Robert I; Sethian, James A

    2011-12-06

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method's accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann's law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.

  12. The Voronoi Implicit Interface Method for computing multiphase physics

    DOE PAGES

    Saye, Robert I.; Sethian, James A.

    2011-11-21

    In this paper, we introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. Finally, we test the method's accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann's law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.

  13. The speed of metacognition: taking time to get to know one's structural knowledge.

    PubMed

    Mealor, Andy D; Dienes, Zoltan

    2013-03-01

    The time course of different metacognitive experiences of knowledge was investigated using artificial grammar learning. Experiment 1 revealed that when participants are aware of the basis of their judgments (conscious structural knowledge) decisions are made most rapidly, followed by decisions made with conscious judgment but without conscious knowledge of underlying structure (unconscious structural knowledge), and guess responses (unconscious judgment knowledge) were made most slowly, even when controlling for differences in confidence and accuracy. In experiment 2, short response deadlines decreased the accuracy of unconscious but not conscious structural knowledge. Conversely, the deadline decreased the proportion of conscious structural knowledge in favour of guessing. Unconscious structural knowledge can be applied rapidly but becomes more reliable with additional metacognitive processing time whereas conscious structural knowledge is an all-or-nothing response that cannot always be applied rapidly. These dissociations corroborate quite separate theories of recognition (dual-process) and metacognition (higher order thought and cross-order integration). Copyright © 2012 Elsevier Inc. All rights reserved.

  14. The Accuracy of Shock Capturing in Two Spatial Dimensions

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Casper, Jay H.

    1997-01-01

    An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.

  15. Sea ice motion measurements from Seasat SAR images

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Raggam, J.; Elachi, C.; Campbell, W. J.

    1983-01-01

    Data from the Seasat synthetic aperture radar (SAR) experiment are analyzed in order to determine the accuracy of this information for mapping the distribution of sea ice and its motion. Data from observations of sea ice in the Beaufort Sea from seven sequential orbits of the satellite were selected to study the capabilities and limitations of spaceborne radar application to sea-ice mapping. Results show that there is no difficulty in identifying homologue ice features on sequential radar images, and the accuracy is entirely controlled by the accuracy of the orbit data and the geometric calibration of the sensor. Conventional radargrammetric methods are found to serve well for satellite radar ice mapping, while ground control points can be used to calibrate the ice location and motion measurements in cases where orbit data and sensor calibration are lacking. The ice motion was determined to be approximately 6.4 ± 0.5 km/day. In addition, the accuracy of pixel location was determined over land areas. The use of one control point in 10,000 sq km produced an accuracy of about ±150 m, while with a higher density of control points (7 in 1000 sq km) the location accuracy improves to the image resolution of ±25 m. This is found to be applicable for both optical and digital data.

  16. Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations

    NASA Astrophysics Data System (ADS)

    Hopper, Seth

    2018-03-01

    Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.

  17. Buckling Analysis of Angle-ply Composite and Sandwich Plates by Combination of Geometric Stiffness Matrix

    NASA Astrophysics Data System (ADS)

    Zhen, Wu; Wanji, Chen

    2007-05-01

    The buckling response of angle-ply laminated composite and sandwich plates is analyzed using the global-local higher order theory in combination with a geometric stiffness matrix. This global-local theory completely fulfills the free surface conditions and the displacement and stress continuity conditions at interfaces. Moreover, the number of unknowns in this theory is independent of the number of layers in the laminate. Based on this global-local theory, a three-noded triangular element satisfying C1 continuity conditions has also been proposed. The bending part of this element is constructed from the concept of the DKT element. In order to improve the accuracy of the analysis, a modified geometric stiffness matrix method has been introduced. Numerical results show that the present theory not only computes the buckling response of general laminated composite plates accurately but also predicts the critical buckling loads of soft-core sandwiches well. However, global higher-order theories as well as first order theories may encounter difficulties and overestimate the critical buckling loads for soft-core sandwich plates.

  18. Reliable prediction of three-body intermolecular interactions using dispersion-corrected second-order Møller-Plesset perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yuanhang; Beran, Gregory J. O., E-mail: gregory.beran@ucr.edu

    2015-07-28

    Three-body and higher intermolecular interactions can play an important role in molecular condensed phases. Recent benchmark calculations found problematic behavior for many widely used density functional approximations in treating 3-body intermolecular interactions. Here, we demonstrate that the combination of second-order Møller-Plesset (MP2) perturbation theory plus short-range damped Axilrod-Teller-Muto (ATM) dispersion accurately describes 3-body interactions with reasonable computational cost. The empirical damping function used in the ATM dispersion term compensates both for the absence of higher-order dispersion contributions beyond the triple-dipole ATM term and for the non-additive short-range exchange terms which arise in third-order perturbation theory and beyond. Empirical damping enables this simple model to out-perform a non-expanded coupled Kohn-Sham dispersion correction for 3-body intermolecular dispersion. The MP2 plus ATM dispersion model approaches the accuracy of O(N^6) methods like MP2.5 or even spin-component-scaled coupled cluster models for 3-body intermolecular interactions with only O(N^5) computational cost.
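
    As an illustration of the triple-dipole term and its short-range damping, the sketch below evaluates the ATM energy for a single trimer. The coefficient c9 and the damping parameters r0 and beta are placeholders, not the fitted values of the model described above; the damping form is one generic choice.

```python
import numpy as np

def atm_triple_dipole(ra, rb, rc, c9, r0=4.0, beta=16):
    """Damped Axilrod-Teller-Muto triple-dipole energy for one trimer.

    ra, rb, rc: 3D coordinates of the three monomer centers.
    c9: triple-dipole dispersion coefficient (system specific).
    r0, beta: parameters of a generic short-range damping function;
    placeholders, not the values fitted in the paper.
    """
    rab = np.linalg.norm(ra - rb)
    rbc = np.linalg.norm(rb - rc)
    rca = np.linalg.norm(rc - ra)

    # Interior angles of the trimer triangle from the law of cosines.
    cos_a = (rab**2 + rca**2 - rbc**2) / (2 * rab * rca)
    cos_b = (rab**2 + rbc**2 - rca**2) / (2 * rab * rbc)
    cos_c = (rbc**2 + rca**2 - rab**2) / (2 * rbc * rca)

    # Undamped ATM triple-dipole term.
    e_atm = c9 * (1 + 3 * cos_a * cos_b * cos_c) / (rab * rbc * rca) ** 3

    # Generic smooth damping that switches the correction off at short range.
    def f(r):
        return 1.0 / (1.0 + 6.0 * (r / r0) ** (-beta))

    return f(rab) * f(rbc) * f(rca) * e_atm
```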

  19. Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.

    PubMed

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2017-05-26

    In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, of lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between emission at the source and detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ at the level of accuracy targeted by future S4 surveys.

  20. Characterization of high order spatial discretizations and lumping techniques for discontinuous finite element SN transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, P. G.; Ragusa, J. C.; Morel, J. E.

    2013-07-01

    We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd power polynomials in 1-D slab geometry. By changing the interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree polynomial solution representation in a pure absorber medium in 1-D slab geometry. Further, there is no inherent limit to the local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high order accuracy numerical quadrature schemes. (authors)
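
    The self-lumping observation can be verified directly: when the Lagrange interpolation points coincide with the quadrature points, each basis function vanishes at every quadrature point except its own, so the quadrature-evaluated mass matrix collapses to a diagonal. A minimal numpy sketch (illustrative, not the authors' code):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def mass_matrix(interp_pts, quad_pts, quad_wts):
    """Mass matrix M_ij = sum_q w_q L_i(x_q) L_j(x_q) for Lagrange bases L_i."""
    n = len(interp_pts)
    L = np.zeros((n, len(quad_pts)))
    for i in range(n):
        # Evaluate the i-th Lagrange basis at the quadrature points.
        li = np.ones_like(quad_pts)
        for k in range(n):
            if k != i:
                li *= (quad_pts - interp_pts[k]) / (interp_pts[i] - interp_pts[k])
        L[i] = li
    return (L * quad_wts) @ L.T

x, w = leggauss(4)                       # 4-point Gauss-Legendre rule on [-1, 1]
M_self = mass_matrix(x, x, w)            # interpolation at the quadrature points
M_equi = mass_matrix(np.linspace(-1, 1, 4), x, w)  # equally spaced points

print(np.allclose(M_self, np.diag(np.diag(M_self))))  # True: diagonal (self-lumped)
print(np.allclose(M_equi, np.diag(np.diag(M_equi))))  # False: full mass matrix
```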

  1. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

    Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncated error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.

  2. Stability and natural vibration analysis of laminated plates by using a mixed element based on a refined plate theory

    NASA Technical Reports Server (NTRS)

    Putcha, N. S.; Reddy, J. N.

    1986-01-01

    A mixed shear flexible finite element, with relaxed continuity, is developed for the geometrically linear and nonlinear analysis of layered anisotropic plates. The element formulation is based on a refined higher order theory which satisfies the zero transverse shear stress boundary conditions on the top and bottom faces of the plate and requires no shear correction coefficients. The mixed finite element developed herein consists of eleven degrees of freedom per node which include three displacements, two rotations and six moment resultants. The element is evaluated for its accuracy in the analysis of the stability and vibration of anisotropic rectangular plates with different lamination schemes and boundary conditions. The mixed finite element described here for the higher order theory gives very accurate results for buckling loads and natural frequencies.

  3. A modified anomaly detection method for capsule endoscopy images using non-linear color conversion and Higher-order Local Auto-Correlation (HLAC).

    PubMed

    Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro

    2013-01-01

    Capsule endoscopy is a patient-friendly form of endoscopy broadly utilized in gastrointestinal examination. However, the efficacy of diagnosis is restricted by the large quantity of images. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine are expected to be detected. To achieve this goal, the paper introduces feature extraction using a non-linear color conversion and Higher-order Local Auto-Correlation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments were conducted on several major anomaly types using combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.

  4. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation, a superiority of the BDM1 approach over the RT0 approach is observed, which is however less significant for the accompanying solute transport.

  5. Efficient Reformulation of the Thermoelastic Higher-order Theory for Fgms

    NASA Technical Reports Server (NTRS)

    Bansal, Yogesh; Pindera, Marek-Jerzy; Arnold, Steven M. (Technical Monitor)

    2002-01-01

    Functionally graded materials (FGMs) are characterized by spatially variable microstructures which are introduced to satisfy given performance requirements. The microstructural gradation gives rise to continuously or discretely changing material properties which complicate FGM analysis. Various techniques have been developed during the past several decades for analyzing traditional composites, and many of these have been adapted for the analysis of FGMs. Most of the available techniques use the so-called uncoupled approach to analyze graded structures. These techniques ignore the effect of microstructural gradation by employing specific spatial material property variations that are either assumed or obtained by local homogenization. The higher-order theory for functionally graded materials (HOTFGM) is a coupled approach developed by Aboudi et al. (1999) which takes the effect of microstructural gradation into consideration and does not ignore the local-global interaction of the spatially variable inclusion phase(s). Despite its demonstrated utility, however, the original formulation of the higher-order theory is computationally intensive. Herein, an efficient reformulation of the original higher-order theory for two-dimensional elastic problems is developed and validated. The local-global conductivity and local-global stiffness matrix approach is used to reduce the number of equations involved. In this approach, surface-averaged quantities are the primary variables, replacing the volume-averaged quantities employed in the original formulation. The reformulation decreases the size of the global conductivity and stiffness matrices by approximately sixty percent. Various thermal, mechanical, and combined thermomechanical problems are analyzed in order to validate the accuracy of the reformulated theory through comparison with analytical and finite-element solutions. The presented results illustrate the efficiency of the reformulation and its advantages in analyzing functionally graded materials.

  6. A method of extracting impervious surface based on rule algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Shuangyun; Hong, Liang; Xu, Quanli

    2018-02-01

    The impervious surface has become an important index for evaluating urban environmental quality and measuring the level of urbanization. At present, remote sensing technology is the main way to extract the impervious surface. In this paper, a method for extracting the impervious surface based on a rule algorithm is proposed. The main idea of the method is to use the rule-based algorithm to extract the impervious surface based on the characteristic differences between the impervious surface and the other three types of objects (water, soil and vegetation) in the seven original bands, NDWI and NDVI. The procedure can be divided into three steps: 1) first, vegetation is extracted according to the principle that vegetation reflects more strongly in the near-infrared band than in the other bands; 2) then, water is extracted according to the characteristic that water has the highest NDWI and the lowest NDVI; 3) finally, the impervious surface is extracted based on the fact that it has a higher NDWI value and a lower NDVI value than soil. In order to test the accuracy of the rule algorithm, this paper applies the linear spectral mixture decomposition algorithm, the CART algorithm and the NDII index algorithm to extract the impervious surface from six remote sensing images of the Dianchi Lake Basin from 1999 to 2014. Then, the accuracy of the above three methods is compared with that of the rule algorithm using the overall classification accuracy. It is found that the accuracy of the rule-based extraction method is markedly higher than that of the other three methods.
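
    The three-step cascade above can be written as a compact set of vectorized rules. A minimal sketch follows, with all thresholds as illustrative placeholders rather than the values calibrated in the paper; priority is enforced by assigning higher-priority classes last.

```python
import numpy as np

def classify(nir, red, green, bands_max):
    """Rule-based pixel labels: 0 vegetation, 1 water, 2 impervious, 3 soil.

    nir, red, green: reflectance arrays for the relevant bands;
    bands_max: per-pixel maximum over the other original bands.
    Threshold values below are illustrative, not those of the paper.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)
    ndwi = (green - nir) / (green + nir + 1e-9)

    label = np.full(nir.shape, 3)               # default: soil
    label[(ndwi > 0.1) & (ndvi < 0.0)] = 2      # impervious: higher NDWI, low NDVI
    label[(ndwi > 0.3) & (ndvi < -0.1)] = 1     # water: highest NDWI, lowest NDVI
    label[nir > bands_max] = 0                  # vegetation: NIR exceeds other bands
    return label
```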

  7. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing

    PubMed Central

    Wen, Tailai; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-01

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detecting applications. During the learning process in which the E-nose predicts the types of different odors, the prediction accuracy was not quite satisfying because the raw features extracted from the sensors' responses were fed to a classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose's classification accuracy, in this paper a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, is presented to reprocess the original feature matrix. In addition, we have also compared the proposed method with quite a few previously existing ones, including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results proved that QWKFDA is an effective feature extraction method for an E-nose for predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the contrast methods. PMID:29382146

  9. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
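
    The order-raising mechanism described above can be sketched in a few dozen lines for a scalar ODE. The version below uses equispaced nodes and forward-Euler correction sweeps; it is a minimal illustration, not the authors' implementation, and it omits the residual-based resilience logic.

```python
import numpy as np

def integration_matrix(nodes):
    """S[m, j] = integral of the j-th Lagrange basis over [nodes[m], nodes[m+1]]."""
    M = len(nodes)
    S = np.zeros((M - 1, M))
    for j in range(M):
        c, denom = np.ones(1), 1.0
        for i in range(M):
            if i != j:
                c = np.convolve(c, [1.0, -nodes[i]])
                denom *= nodes[j] - nodes[i]
        P = np.poly1d(c / denom).integ()
        S[:, j] = [P(nodes[m + 1]) - P(nodes[m]) for m in range(M - 1)]
    return S

def sdc_step(f, y0, dt, nodes, sweeps):
    """One SDC time step: forward-Euler sweeps against a spectral quadrature."""
    M = len(nodes)
    S = integration_matrix(nodes) * dt        # quadrature over sub-intervals
    y = np.empty(M)
    y[0] = y0
    for m in range(M - 1):                    # provisional first-order solution
        y[m + 1] = y[m] + dt * (nodes[m + 1] - nodes[m]) * f(y[m])
    for _ in range(sweeps):                   # each sweep raises the formal order
        fk = np.array([f(v) for v in y])      # f at the previous iterate
        for m in range(M - 1):
            dtm = dt * (nodes[m + 1] - nodes[m])
            y[m + 1] = y[m] + dtm * (f(y[m]) - fk[m]) + S[m] @ fk
    return y[-1]

# Convergence check on y' = -y, y(0) = 1, over one step dt = 0.1.
nodes = np.array([0.0, 0.5, 1.0])
for sweeps in range(4):
    err = abs(sdc_step(lambda y: -y, 1.0, 0.1, nodes, sweeps) - np.exp(-0.1))
    print(sweeps, err)    # the error shrinks with each extra sweep
```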

  10. Reproducibility of UAV-based earth topography reconstructions based on Structure-from-Motion algorithms

    NASA Astrophysics Data System (ADS)

    Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof

    2016-05-01

    Combination of UAV-based aerial pictures and the Structure-from-Motion (SfM) algorithm provides an efficient, low-cost and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high accuracy of measurements and reproducibility of the methodology, as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with a UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and the location of georeferencing targets between flights. Comparison of the different SfM-derived topography datasets shows that the precision of measurements is on the order of centimetres for identical replications, which highlights the excellent performance of the SfM workflow, all parameters being equal. The precision degrades by one order of magnitude for 3D topographic reconstructions involving independent sets of ground control points, which results from the fact that the accuracy of the localisation of ground control points propagates strongly into the final results.

  13. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
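
    The distinction drawn above between second-order and higher-order statistics can be made concrete with a toy example: PCA decorrelates a linear mixture but leaves its components mixed, while ICA recovers the independent non-Gaussian sources. A sketch under these toy assumptions (synthetic sources, not orbit-determination data):

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA, FastICA

# Two independent, strongly non-Gaussian sources mixed linearly,
# a stand-in for a non-Gaussian orbit-uncertainty particle cloud.
rng = np.random.default_rng(0)
s = np.column_stack([rng.laplace(size=5000), rng.uniform(-1, 1, 5000)])
X = s @ np.array([[1.0, 0.4], [0.6, 1.0]])   # mixed observations

pca = PCA(n_components=2).fit_transform(X)
ica = FastICA(n_components=2, random_state=0).fit_transform(X)

# Excess kurtosis (a higher-order statistic): the ICA components recover
# the Laplace/uniform signatures, while the PCA components remain mixtures.
print(kurtosis(s), kurtosis(pca), kurtosis(ica))
```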

  14. Classifying the hierarchy of nonlinear-Schrödinger-equation rogue-wave solutions.

    PubMed

    Kedziora, David J; Ankiewicz, Adrian; Akhmediev, Nail

    2013-07-01

    We present a systematic classification for higher-order rogue-wave solutions of the nonlinear Schrödinger equation, constructed as the nonlinear superposition of first-order breathers via the recursive Darboux transformation scheme. This hierarchy is subdivided into structures that exhibit varying degrees of radial symmetry, all arising from independent degrees of freedom associated with physical translations of component breathers. We reveal the general rules required to produce these fundamental patterns. Consequently, we are able to extrapolate the general shape for rogue-wave solutions beyond order 6, at which point accuracy limitations due to current standards of numerical generation become non-negligible. Furthermore, we indicate how a large set of irregular rogue-wave solutions can be produced by hybridizing these fundamental structures.

  15. Numerical Methods for Nonlinear Fokker-Planck Collision Operator in TEMPEST

    NASA Astrophysics Data System (ADS)

    Kerbel, G.; Xiong, Z.

    2006-10-01

    Early implementations of the Fokker-Planck collision operator and moment computations in TEMPEST used low order polynomial interpolation schemes to reuse conservative operators developed for speed/pitch-angle (v, θ) coordinates. When this approach proved to be too inaccurate, we developed an alternative higher order interpolation scheme for the Rosenbluth potentials and a high order finite volume method in the TEMPEST coordinates. The collision operator is thus generated by using the expansion technique in (v, θ) coordinates for the diffusion coefficients only, and the fluxes for the conservative differencing are then computed directly in the TEMPEST coordinates. Combined with a cut-cell treatment at the turning-point boundary, this new approach is shown to have much better accuracy and conservation properties.

  16. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced for the selection of the temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; this new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used for the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with a residual error smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.

  17. Accurate disulfide-bonding network predictions improve ab initio structure prediction of cysteine-rich proteins

    PubMed Central

    Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin

    2015-01-01

    Motivation: Cysteine-rich proteins cover many important families in nature but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for proteins with higher-order connections, e.g. >3 bonds, is too low to effectively assist structure assembly simulations. Results: We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified, and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve the ab initio structure modeling for cysteine-rich proteins. Availability and implementation: http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ Contact: zhng@umich.edu or hbshen@sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26254435

  18. The modelling of the flow-induced vibrations of periodic flat and axial-symmetric structures with a wave-based method

    NASA Astrophysics Data System (ADS)

    Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.

    2018-06-01

    The stochastic response of periodic flat and axial-symmetric structures, subjected to random and spatially-correlated loads, is here analysed through an approach based on the combination of a wave finite element method and a transfer matrix method. Although it has a lower computational cost, the present approach keeps the same accuracy as classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies, without increasing the time of calculation. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated both for simple and complex structural shapes, under deterministic and random loads.

  19. Optimetrics for Precise Navigation

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Heckler, Gregory; Gramling, Cheryl

    2017-01-01

    Optimetrics for Precise Navigation will be implemented on existing optical communication links. The ranging and Doppler measurements are conducted over the communication data frame and clock. The measurement accuracy is two orders of magnitude better than TDRSS. The approach has further advantages. The high optical carrier frequency provides (1) immunity from the ionospheric and interplanetary plasma noise floor, which is a performance limitation for RF tracking, and (2) high antenna gain, which reduces terminal size and volume and enables high-precision tracking on a CubeSat and on a deep-space smallsat. High optical pointing precision provides spacecraft orientation, and minimal additional hardware is required to implement precise optimetrics over the optical comm link. Continuous optical carrier phase measurement will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.

  20. GENERAL: Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin

    2008-07-01

    The symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy, for both stable motion and chaotic motion. The result is compared with those of the Runge-Kutta algorithm and a symplectic algorithm at fourth order, which shows that SADA has higher accuracy than the others in long-term calculations of the CR3BP.

  1. New trends in Taylor series based applications

    NASA Astrophysics Data System (ADS)

    Kocina, Filip; Šátek, Václav; Veigend, Petr; Nečasová, Gabriela; Valenta, Václav; Kunovský, Jiří

    2016-06-01

    The paper deals with the solution of large systems of linear ODEs when minimal communication among parallel processors is required. The Modern Taylor Series Method (MTSM) is used. The MTSM allows the use of a higher order during the computation, which means a larger integration step size can be taken while keeping the desired accuracy. As an example of a complex system we take the Telegraph Equation Model. Symbolic and numeric solutions are compared when a harmonic input signal is used.
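
    A minimal sketch of the variable-order idea for a linear system y' = Ay follows: Taylor terms are generated by a simple recurrence and summed until they fall below a tolerance, so the order used adapts to the chosen step size. This is an illustration of the principle, not the MTSM code itself.

```python
import numpy as np

def taylor_step(A, y, h, tol=1e-12, max_order=60):
    """One step of a variable-order Taylor method for y' = A y.

    Terms are generated recursively, T_{k+1} = h * A @ T_k / (k + 1),
    and summed until the latest term drops below the tolerance; the
    order actually used thus adapts to the step size h.
    """
    term = y.copy()
    result = y.copy()
    for k in range(1, max_order + 1):
        term = h * (A @ term) / k
        result = result + term
        if np.linalg.norm(term) < tol:
            return result, k      # k is the Taylor order that was needed
    raise RuntimeError("tolerance not reached; reduce h or raise max_order")

# Harmonic oscillator y'' = -y written as a first-order system.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y, order = taylor_step(A, np.array([1.0, 0.0]), h=0.5)
print(order, y, [np.cos(0.5), -np.sin(0.5)])  # compare with the exact solution
```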

  2. Jet production in the CoLoRFulNNLO method: Event shapes in electron-positron collisions

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Szőr, Zoltán; Trócsányi, Zoltán; Tulipánt, Zoltán

    2016-10-01

    We present the CoLoRFulNNLO method to compute higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the computation of event shape observables in electron-positron collisions at NNLO accuracy and validate our code by comparing our predictions to previous results in the literature. We also calculate for the first time jet cone energy fraction at NNLO.

  3. Higher-order time integration of Coulomb collisions in a plasma using Langevin equations

    DOE PAGES

    Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...

    2013-02-08

    The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^{1/2})] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. This method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
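
    The strong-convergence gap quoted above is easy to reproduce in a scalar setting where no area integrals arise. The sketch below compares Euler-Maruyama and Milstein on geometric Brownian motion, for which the exact solution along the same Brownian path is known; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def strong_error(n_steps, n_paths=20000, T=1.0, mu=0.05, sigma=0.5, x0=1.0):
    """Strong error at time T for dX = mu X dt + sigma X dW,
    comparing Euler-Maruyama and Milstein on shared Brownian paths."""
    dt = T / n_steps
    x_em = np.full(n_paths, x0)
    x_mil = np.full(n_paths, x0)
    w = np.zeros(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        w += dw
        x_em = x_em + mu * x_em * dt + sigma * x_em * dw
        # Milstein adds the 0.5 * sigma^2 * X * (dW^2 - dt) correction.
        x_mil = (x_mil + mu * x_mil * dt + sigma * x_mil * dw
                 + 0.5 * sigma**2 * x_mil * (dw**2 - dt))
    # Exact solution driven by the same Brownian path.
    exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * w)
    return np.mean(np.abs(x_em - exact)), np.mean(np.abs(x_mil - exact))

for n in (8, 16, 32, 64):
    print(n, strong_error(n))  # Milstein error shrinks ~O(dt), EM ~O(dt^{1/2})
```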

  4. Doppler Radar Vital Signs Detection Method Based on Higher Order Cyclostationary.

    PubMed

    Yu, Zhibin; Zhao, Duo; Zhang, Zhiqiang

    2017-12-26

    Due to their non-contact nature, Doppler radar sensors used to detect vital signs such as the heart and respiration rates of a human subject are receiving more and more attention. However, the related detection-method research faces many challenges due to electromagnetic interference, clutter and random motion interference. In this paper, a novel third-order cyclic cumulant (TOCC) detection method, which is insensitive to Gaussian interference and non-cyclic signals, is proposed to investigate the heart and respiration rate based on continuous-wave Doppler radars. The k-th order cyclostationary properties of the radar signal with hidden periodicities and random motions are analyzed. The third-order cyclostationary detection theory of the heart and respiration rate is studied. Experimental results show that the third-order cyclostationary approach has better estimation accuracy for detecting the vital signs from the received radar signal under low SNR, strong clutter noise and random motion interference.

  5. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    NASA Astrophysics Data System (ADS)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: FDTD depends too much on the grids and time steps, while FETD requires a large number of grids for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can deliver an accurate result by increasing the order of the polynomials and suppressing spurious solutions. The BE method is a stable time discretization technique that has no limitation on time steps and can guarantee higher accuracy during the iteration process. To minimize the number of non-zeros in the sparse matrix and obtain a diagonal mass matrix, we apply the reduced order integration technique. A direct solver, with speed independent of the condition number, is adopted for quickly solving the large-scale sparse linear system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model over the time range 10^{-6}-10^{-2} s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and the magnetic induction are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both the 1D and 3D models, we draw the following conclusions: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order yields little improvement in modeling accuracy, but the time interval plays an important role. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). Figure 1: (a) AEM system over a 3D earth model; (b) magnetic field Bz; (c) magnetic induction dBz/dt.

  6. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in the principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The reduced-order method stems from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, such as the tracer spectrum and fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  7. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    From direct observations, facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. In these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method named higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel means of efficiently estimating human affective states.
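
    In its second-order, two-variable instance, such a polynomial model reduces to ordinary least squares on an augmented design matrix. A minimal sketch with synthetic stand-in data (not the skin-conductance recordings used in the study):

```python
import numpy as np

def poly2_design(x1, x2):
    """Second-order design matrix in two variables:
    columns [1, x1, x2, x1^2, x1*x2, x2^2]."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x1 * x2, x2**2])

# Synthetic stand-in for affective features and valence ratings.
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
valence = (0.8 * x1 - 0.3 * x2 + 0.5 * x1 * x2 - 0.2 * x2**2
           + rng.normal(0, 0.05, 200))

X = poly2_design(x1, x2)
coef, *_ = np.linalg.lstsq(X, valence, rcond=None)  # least-squares fit
pred = X @ coef
print(np.corrcoef(pred, valence)[0, 1])  # correlation of fit vs. observations
```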

  8. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  9. Blind motion image deblurring using nonconvex higher-order total variation model

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo

    2016-09-01

    We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect in the deblurred image; meanwhile, we employ an image sparse prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and the H1 norm as the blur kernel regularization term, accounting for the sparsity and smoothness of the motion blur kernel. Third, because the numerical computational complexity of the proposed model is difficult to handle owing to its intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximating scheme in the outer iteration and a split Bregman algorithm in the inner iteration, and we discuss the convergence of this strategy. Last, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both quality of visual perception and quantitative measurement.

  11. A Kinematic Calibration Process for Flight Robotic Arms

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.

  12. Comparison of Diagnostic Accuracy of Radiation Dose-Equivalent Radiography, Multidetector Computed Tomography and Cone Beam Computed Tomography for Fractures of Adult Cadaveric Wrists

    PubMed Central

    Neubauer, Jakob; Benndorf, Matthias; Reidelbach, Carolin; Krauß, Tobias; Lampert, Florian; Zajonc, Horst; Kotter, Elmar; Langer, Mathias; Fiebich, Martin; Goerke, Sebastian M.

    2016-01-01

    Purpose To compare the diagnostic accuracy of radiography, radiography-equivalent-dose multidetector computed tomography (RED-MDCT) and radiography-equivalent-dose cone beam computed tomography (RED-CBCT) for wrist fractures. Methods As study subjects we obtained 10 cadaveric human hands from body donors. Distal radius, distal ulna and carpal bones (n = 100) were artificially fractured in random order in a controlled experimental setting. We performed radiation-dose-equivalent radiography (settings as in standard clinical care), RED-MDCT in a 320-row MDCT with single shot mode, and RED-CBCT in a device dedicated to musculoskeletal imaging. Three raters independently evaluated the resulting images for fractures and the level of confidence for each finding. The gold standard was established by consensus reading of a high-dose MDCT. Results Pooled sensitivity was higher in RED-MDCT with 0.89 and RED-CBCT with 0.81 compared to radiography with 0.54 (P < .004). No significant differences were detected among the modalities' specificities (P = .98). Raters' confidence was higher in RED-MDCT and RED-CBCT compared to radiography (P < .001). Conclusion The diagnostic accuracy of RED-MDCT and RED-CBCT for wrist fractures proved to be similar and in parts even higher compared to radiography. Readers are more confident in their reporting with the cross-sectional modalities. Dose-equivalent cross-sectional computed tomography of the wrist could replace plain radiography for fracture diagnosis in the long run. PMID:27788215

  13. Subgraph augmented non-negative tensor factorization (SANTF) for modeling clinical narrative text

    PubMed Central

    Xin, Yu; Hochberg, Ephraim; Joshi, Rohit; Uzuner, Ozlem; Szolovits, Peter

    2015-01-01

    Objective Extracting medical knowledge from electronic medical records requires automated approaches to combat scalability limitations and selection biases. However, existing machine learning approaches are often regarded by clinicians as black boxes. Moreover, training data for these automated approaches are often sparsely annotated at best. The authors target unsupervised learning for modeling clinical narrative text, aiming at improving both accuracy and interpretability. Methods The authors introduce a novel framework named subgraph augmented non-negative tensor factorization (SANTF). In addition to relying on atomic features (e.g., words in clinical narrative text), SANTF automatically mines higher-order features (e.g., relations of lymphoid cells expressing antigens) from clinical narrative text by converting sentences into a graph representation and identifying important subgraphs. The authors compose a tensor using patients, higher-order features, and atomic features as its respective modes, then apply non-negative tensor factorization to cluster patients and simultaneously identify latent groups of higher-order features that link to patient clusters, as in clinical guidelines where a panel of immunophenotypic features and laboratory results is used to specify diagnostic criteria. Results and Conclusion SANTF demonstrated over 10% improvement in averaged F-measure on patient clustering compared to widely used non-negative matrix factorization (NMF) and k-means clustering methods. Multiple baselines were established by modeling patient data using patient-by-feature matrices with different feature configurations and then performing NMF or k-means to cluster patients. Feature analysis identified latent groups of higher-order features that lead to medical insights. The authors also found that the latent groups of atomic features help to better correlate the latent groups of higher-order features. PMID:25862765
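
    For orientation, the NMF baseline mentioned above is compact to write down. The sketch below clusters a synthetic non-negative patient-by-feature count matrix by its dominant latent factor; it illustrates the baseline only, not the SANTF tensor method itself.

```python
import numpy as np
from sklearn.decomposition import NMF

# Non-negative patient-by-feature matrix (synthetic stand-in for counts
# of atomic and higher-order features extracted from clinical narratives).
rng = np.random.default_rng(0)
V = rng.poisson(1.0, size=(60, 40)).astype(float)

model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)                  # patient loadings on latent groups
H = model.components_                       # latent groups over features

clusters = W.argmax(axis=1)                 # NMF-baseline patient clustering
top_features = H.argsort(axis=1)[:, -5:]    # strongest features per latent group
print(clusters[:10])
print(top_features)
```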

  14. Development of an automated assessment tool for MedWatch reports in the FDA adverse event reporting system.

    PubMed

    Han, Lichy; Ball, Robert; Pamer, Carol A; Altman, Russ B; Proestel, Scott

    2017-09-01

    As the US Food and Drug Administration (FDA) receives over a million adverse event reports associated with medication use every year, a system is needed to aid FDA safety evaluators in identifying reports most likely to demonstrate causal relationships to the suspect medications. We combined text mining with machine learning to construct and evaluate such a system to identify medication-related adverse event reports. FDA safety evaluators assessed 326 reports for medication-related causality. We engineered features from these reports and constructed random forest, L1 regularized logistic regression, and support vector machine models. We evaluated model accuracy and further assessed utility by generating report rankings that represented a prioritized report review process. Our random forest model showed the best performance in report ranking and accuracy, with an area under the receiver operating characteristic curve of 0.66. The generated report ordering assigns reports with a higher probability of medication-related causality a higher rank and is significantly correlated with a perfect report ordering, with a Kendall's tau of 0.24 (P = .002). Our models produced prioritized report orderings that enable FDA safety evaluators to focus on reports that are more likely to contain valuable medication-related adverse event information. Applying our models to all FDA adverse event reports has the potential to streamline the manual review process and greatly reduce reviewer workload.
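
    A sketch of the report-ranking idea with a random forest follows, using synthetic stand-ins for the engineered features and the causality labels; only standard scikit-learn and SciPy calls are used, and the feature construction here is purely illustrative.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for engineered report features and causality labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(326, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=326) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
prob = clf.predict_proba(X)[:, 1]   # estimated probability of causality

queue = np.argsort(-prob)           # review queue: most likely causal first
print(roc_auc_score(y, prob))       # discrimination of the ranking
tau, pval = kendalltau(prob, y)     # agreement of scores with labels
print(tau, pval)
# Note: evaluating on the training set overstates performance; a real
# assessment would use cross-validation or a held-out set.
```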

  15. Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    1997-01-01

    An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + a U_x = alpha U_xx. Accuracy is also verified on the nonlinear problem, U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
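
    The interior-operator test problem quoted above is easy to reproduce in miniature. Below is a hedged stand-in, pairing classical RK4 with second-order central differences (the paper's operators are higher order and include compact schemes) and checking the result against the exact decaying travelling wave:

    ```python
    # Integrate u_t + a u_x = alpha u_xx on a periodic domain with RK4 in
    # time and 2nd-order central differences in space, then compare with
    # the exact solution u = exp(-alpha k^2 t) sin(k (x - a t)).
    import numpy as np

    a, alpha, k = 1.0, 0.01, 2 * np.pi
    N, T = 128, 1.0
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    h = x[1] - x[0]
    dt = 0.2 * h  # comfortably inside the stability limit for these values

    def rhs(u):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
        return -a * ux + alpha * uxx

    u, t = np.sin(k * x), 0.0
    while t < T - 1e-12:
        s = min(dt, T - t)
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * s * k1)
        k3 = rhs(u + 0.5 * s * k2)
        k4 = rhs(u + s * k3)
        u += s / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += s

    exact = np.exp(-alpha * k**2 * T) * np.sin(k * (x - a * T))
    print("max error:", np.abs(u - exact).max())
    ```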

  16. Stability and stabilization of the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Brownlee, R. A.; Gorban, A. N.; Levesley, J.

    2007-03-01

    We revisit the classical stability versus accuracy dilemma for the lattice Boltzmann methods (LBM). Our goal is a stable method of second-order accuracy for fluid dynamics based on the lattice Bhatnagar-Gross-Krook method (LBGK). The LBGK scheme can be recognized as a discrete dynamical system generated by free flight and entropic involution. In this framework the stability and accuracy analyses are more natural. We find the necessary and sufficient conditions for second-order accurate fluid dynamics modeling. In particular, it is proven that in order to guarantee second-order accuracy the distribution should belong to a distinguished surface, the invariant film (up to second order in the time step). This surface is the trajectory of the (quasi)equilibrium distribution surface under free flight. The main instability mechanisms are identified. The simplest recipes for stabilization add no artificial dissipation (up to second order) and provide second-order accuracy of the method. Two other prescriptions add some artificial dissipation locally and prevent the system from loss of positivity and local blowup. Demonstrations of the proposed stable LBGK schemes are provided by the numerical simulation of a one-dimensional (1D) shock tube and the unsteady 2D flow around a square cylinder up to Reynolds number Re ≈ 20000.
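
    For readers new to LBGK, a minimal D1Q3 sketch for 1D diffusion is shown below; this is not the entropic or stabilized variants analyzed in the paper, just the plain BGK collide-and-stream cycle, whose macroscopic diffusivity D = cs2 * (tau - 1/2), with cs2 = 1/3 in lattice units, is the standard second-order-accuracy relation the paper builds on.

    ```python
    # Minimal D1Q3 LBGK for 1D diffusion: equilibrium f_i^eq = w_i * rho,
    # BGK relaxation towards it, then streaming of the moving populations.
    import numpy as np

    N, tau, steps = 200, 0.8, 400
    w = np.array([2/3, 1/6, 1/6])      # weights for velocities c = 0, +1, -1
    cs2 = 1.0 / 3.0
    D = cs2 * (tau - 0.5)              # effective diffusivity, lattice units

    rho = np.ones(N)
    rho[N//2 - 5:N//2 + 5] = 2.0       # initial density bump
    f = w[:, None] * rho[None, :]      # start at equilibrium

    for _ in range(steps):
        rho = f.sum(axis=0)
        feq = w[:, None] * rho[None, :]
        f += -(f - feq) / tau          # BGK collision
        f[1] = np.roll(f[1], 1)        # stream c = +1
        f[2] = np.roll(f[2], -1)       # stream c = -1

    print("mass conserved:", np.isclose(f.sum(), N + 10.0))
    print("effective diffusivity D =", D)
    ```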

  17. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Trócsányi, Zoltán; Somogyi, Gábor

    2008-10-01

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement so that they can be defined at any order in perturbation theory. We give a status report on the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

  18. Improved finite-difference computation of the van der Waals force: One-dimensional case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinto, Fabrizio

    2009-10-15

    We present an improved demonstration of the calculation of Casimir forces in one-dimensional systems based on the recently proposed numerical imaginary frequency Green's function computation approach. The dispersion force on two thick lossy dielectric slabs separated by an empty gap and placed within a perfectly conducting cavity is obtained from the Green's function of the modified Helmholtz equation by means of an ordinary finite-difference method. In order to demonstrate the possibility to develop algorithms to explore complex geometries in two and three dimensions to higher order in the mesh spacing, we generalize existing classical electromagnetism algebraic methods to generate the difference equations for dielectric boundaries not coinciding with any grid points. Diagnostic tests are presented to monitor the accuracy of our implementation of the method and follow-up applications in higher dimensions are introduced.
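
    The Green's-function step has a compact 1D analogue. The sketch below, a simplified stand-in without the dielectric slabs or the imaginary-frequency integration, solves (d2/dx2 - kappa^2) G(x, x0) = -delta(x - x0) between conducting walls with an ordinary second-order finite-difference solve and compares the midpoint value against the free-space kernel exp(-kappa |x - x0|) / (2 kappa):

    ```python
    # Second-order finite differences for the modified Helmholtz Green's
    # function with Dirichlet (conducting-wall) boundaries; the discrete
    # delta source carries weight 1/h at a single node.
    import numpy as np

    N, L, kappa = 201, 1.0, 3.0
    x = np.linspace(0.0, L, N)
    h = x[1] - x[0]
    j0 = N // 2                        # source location index

    # Interior operator: (G[i-1] - 2 G[i] + G[i+1]) / h^2 - kappa^2 G[i]
    A = np.zeros((N - 2, N - 2))
    np.fill_diagonal(A, -2.0 / h**2 - kappa**2)
    idx = np.arange(N - 3)
    A[idx, idx + 1] = 1.0 / h**2
    A[idx + 1, idx] = 1.0 / h**2

    b = np.zeros(N - 2)
    b[j0 - 1] = -1.0 / h               # discrete delta at the source node

    G = np.zeros(N)
    G[1:-1] = np.linalg.solve(A, b)

    # Free-space kernel for comparison (walls only perturb it weakly here):
    exact = np.exp(-kappa * np.abs(x - x[j0])) / (2 * kappa)
    print("midpoint values:", G[j0], exact[j0])
    ```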

  19. The Reference Ability Neural Network Study: Life-time stability of reference-ability neural networks derived from task maps of young adults.

    PubMed

    Habeck, C; Gazes, Y; Razlighi, Q; Steffener, J; Brickman, A; Barulli, D; Salthouse, T; Stern, Y

    2016-01-15

    Analyses of large test batteries administered to individuals ranging from young to old have consistently yielded a set of latent variables representing reference abilities (RAs) that capture the majority of the variance in age-related cognitive change: Episodic Memory, Fluid Reasoning, Perceptual Processing Speed, and Vocabulary. In a previous paper (Stern et al., 2014), we introduced the Reference Ability Neural Network Study, which administers 12 cognitive neuroimaging tasks (3 for each RA) to healthy adults age 20-80 in order to derive unique neural networks underlying these 4 RAs and investigate how these networks may be affected by aging. We used a multivariate approach, linear indicator regression, to derive a unique covariance pattern or Reference Ability Neural Network (RANN) for each of the 4 RAs. The RANNs were derived from the neural task data of 64 younger adults of age 30 and below. We then prospectively applied the RANNs to fMRI data from the remaining sample of 227 adults of age 31 and above in order to classify each subject-task map into one of the 4 possible reference domains. Overall classification accuracy across subjects in the sample age 31 and above was 0.80±0.18. Classification accuracy by RA domain was also good, but variable; memory: 0.72±0.32; reasoning: 0.75±0.35; speed: 0.79±0.31; vocabulary: 0.94±0.16. Classification accuracy was not associated with cross-sectional age, suggesting that these networks, and their specificity to the respective reference domain, might remain intact throughout the age range. Higher mean brain volume was correlated with increased overall classification accuracy; better overall performance on the tasks in the scanner was also associated with classification accuracy. For the RANN network scores, we observed for each RANN that a higher score was associated with a higher corresponding classification accuracy for that reference ability. Despite the absence of behavioral performance information in the derivation of these networks, we also observed some brain-behavioral correlations, notably for the fluid-reasoning network whose network score correlated with performance on the memory and fluid-reasoning tasks. While age did not influence the expression of this RANN, the slope of the association between network score and fluid-reasoning performance was negatively associated with higher ages. These results provide support for the hypothesis that a set of specific, age-invariant neural networks underlies these four RAs, and that these networks maintain their cognitive specificity and level of intensity across age. Activation common to all 12 tasks was identified as another activation pattern resulting from a mean-contrast Partial-Least-Squares technique. This common pattern did show associations with age and some subject demographics for some of the reference domains, lending support to the overall conclusion that aspects of neural processing that are specific to any cognitive reference ability stay constant across age, while aspects that are common to all reference abilities differ across age. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation.

    PubMed

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distances of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and in empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N - 1)/(π ∑ R²) but not 28N/(π ∑ R²), and of PCQM3 is 4(12N - 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process. Since, in practice, the spatial pattern of a plant association remains unknown before starting a vegetation survey, for field applications the use of PCQM3 along with the corrected estimator is recommended. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 would be applied. During application of PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' but not 'the summation of inverse squared distances' as erroneously published.
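
    The corrected estimators quoted above transcribe directly into code. A small helper, written as a sketch of the formulas exactly as stated (field-sampling caveats aside):

    ```python
    # Corrected PCQM density estimators: note the inverse of the summed
    # squared distances, not the sum of inverse squared distances.
    import numpy as np

    def pcqm_density(distances, order):
        """Corrected PCQM density from nearest (order=1), second (2) or
        third (3) nearest-plant distances, one per quadrant per point."""
        r2_sum = np.sum(np.asarray(distances, dtype=float) ** 2)
        n_points = len(distances) // 4     # four quadrants per sample point
        coef = {1: 4, 2: 8, 3: 12}[order]
        return 4 * (coef * n_points - 1) / (np.pi * r2_sum)

    # Example: 50 sample points -> 200 nearest-plant distances for PCQM1.
    rng = np.random.default_rng(0)
    r = rng.uniform(0.5, 3.0, size=200)
    print("estimated density:", pcqm_density(r, order=1))
    ```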

  1. Approximate solution of space and time fractional higher order phase field equation

    NASA Astrophysics Data System (ADS)

    Shamseldeen, S.

    2018-03-01

    This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.

  2. Surface charge method for molecular surfaces with curved areal elements I. Spherical triangles

    NASA Astrophysics Data System (ADS)

    Yu, Yi-Kuo

    2018-03-01

    Parametrizing a curved surface with flat triangles in electrostatics problems creates a diverging electric field. One way to avoid this is to have curved areal elements. However, charge density integration over curved patches appears difficult. This paper, dealing with spherical triangles, is the first in a series aiming to solve this problem. Here, we lay the groundwork for employing curved patches for applying the surface charge method to electrostatics. We show analytically how one may control the accuracy by expanding in powers of the arc length (multiplied by the curvature). To accommodate curved areal elements that are not extremely small, we have provided enough details to include the higher order corrections that are needed for better accuracy when slightly larger surface elements are used.

  3. The Complex-Step-Finite-Difference method

    NASA Astrophysics Data System (ADS)

    Abreu, Rafael; Stich, Daniel; Morales, Jose

    2015-07-01

    We introduce the Complex-Step-Finite-Difference method (CSFDM) as a generalization of the well-known Finite-Difference method (FDM) for solving the acoustic and elastic wave equations. We have found a direct relationship between modelling the second-order wave equation by the FDM and the first-order wave equation by the CSFDM in 1-D, 2-D and 3-D acoustic media. We present the numerical methodology in order to apply the introduced CSFDM and show an example for wave propagation in simple homogeneous and heterogeneous models. The CSFDM may be implemented as an extension into pre-existing numerical techniques in order to obtain fourth- or sixth-order accurate results with compact three time-level stencils. We compare advantages of imposing various types of initial motion conditions of the CSFDM and demonstrate its higher-order accuracy under the same computational cost and dispersion-dissipation properties. The introduced method can be naturally extended to solve different partial differential equations arising in other fields of science and engineering.
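
    The complex-step kernel that the CSFDM generalizes fits in a few lines: for an analytic function f, f'(x) = Im f(x + ih) / h + O(h²), with no subtractive cancellation, so h can be taken extremely small. A minimal demonstration:

    ```python
    # Complex-step differentiation: machine-precision first derivatives
    # for analytic functions, with no step-size dilemma.
    import numpy as np

    def cs_derivative(f, x, h=1e-30):
        return np.imag(f(x + 1j * h)) / h

    f = lambda x: np.exp(x) * np.sin(x)
    x0 = 0.7
    exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
    print(cs_derivative(f, x0), exact)   # agree to machine precision
    ```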

  4. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  5. Three-Jet Production in Electron-Positron Collisions at Next-to-Next-to-Leading Order Accuracy

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Trócsányi, Zoltán

    2016-10-01

    We introduce a completely local subtraction method for fully differential predictions at next-to-next-to-leading order (NNLO) accuracy for jet cross sections and use it to compute event shapes in three-jet production in electron-positron collisions. We validate our method on two event shapes, thrust and C parameter, which are already known in the literature at NNLO accuracy and compute for the first time oblateness and the energy-energy correlation at the same accuracy.

  6. Three-Jet Production in Electron-Positron Collisions at Next-to-Next-to-Leading Order Accuracy.

    PubMed

    Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Trócsányi, Zoltán

    2016-10-07

    We introduce a completely local subtraction method for fully differential predictions at next-to-next-to-leading order (NNLO) accuracy for jet cross sections and use it to compute event shapes in three-jet production in electron-positron collisions. We validate our method on two event shapes, thrust and C parameter, which are already known in the literature at NNLO accuracy and compute for the first time oblateness and the energy-energy correlation at the same accuracy.

  7. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.

  8. Von Neumann stability analysis of globally divergence-free RKDG schemes for the induction equation using multidimensional Riemann solvers

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Käppeli, Roger

    2017-05-01

    In this paper we focus on the numerical solution of the induction equation using Runge-Kutta Discontinuous Galerkin (RKDG)-like schemes that are globally divergence-free. The induction equation plays a role in numerical MHD and other systems like it. It ensures that the magnetic field evolves in a divergence-free fashion; and that same property is shared by the numerical schemes presented here. The algorithms presented here are based on a novel DG-like method as it applies to the magnetic field components in the faces of a mesh. (I.e., this is not a conventional DG algorithm for conservation laws.) The other two novel building blocks of the method include divergence-free reconstruction of the magnetic field and multidimensional Riemann solvers, both of which have been developed in recent years by the first author. Since the method is linear, a von Neumann stability analysis is carried out in two dimensions to understand its stability properties. The von Neumann stability analysis that we develop in this paper relies on transcribing from a modal to a nodal DG formulation in order to develop discrete evolutionary equations for the nodal values. These are then coupled to a suitable Runge-Kutta timestepping strategy so that one can analyze the stability of the entire scheme, which is suitably high order in space and time. We show that our scheme permits CFL numbers that are comparable to those of traditional RKDG schemes. We also analyze the wave propagation characteristics of the method and show that with increasing order of accuracy the wave propagation becomes more isotropic and free of dissipation for a larger range of long wavelength modes. This makes a strong case for investing in higher order methods. We also use the von Neumann stability analysis to show that the divergence-free reconstruction and multidimensional Riemann solvers are essential algorithmic ingredients of a globally divergence-free RKDG-like scheme. Numerical accuracy analyses of the RKDG-like schemes are presented and compared with the accuracy of PNPM schemes. It is found that PNPM schemes retrieve much of the accuracy of the RKDG-like schemes while permitting a larger CFL number.
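
    The analysis pipeline described above can be illustrated on a scalar analogue. The sketch below, a simplification and not the paper's full nodal-DG update matrix, feeds the semi-discrete symbol of first-order upwinding for u_t + a u_x = 0 through a third-order Runge-Kutta amplification polynomial and scans for the largest stable CFL number:

    ```python
    # Von Neumann analysis in miniature: per-wavenumber eigenvalue of the
    # spatial operator times dt goes through the RK3 stability polynomial;
    # the scheme is stable when max |R(z)| <= 1 over all wavenumbers.
    import numpy as np

    theta = np.linspace(0.0, 2 * np.pi, 721)     # grid wavenumbers k*h
    lam = -(1.0 - np.exp(-1j * theta))           # upwind symbol, times h/a

    def rk3_amp(z):
        return 1 + z + z**2 / 2 + z**3 / 6       # 3rd-order RK polynomial

    def stable(cfl):
        return np.max(np.abs(rk3_amp(cfl * lam))) <= 1.0 + 1e-12

    cfls = np.linspace(0.01, 3.0, 300)
    print("max stable CFL ~", max(c for c in cfls if stable(c)))
    ```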

  9. Is multiple-sequence alignment required for accurate inference of phylogeny?

    PubMed

    Höhl, Michael; Ragan, Mark A

    2007-04-01

    The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
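
    As a concrete example of the word-based family of methods compared above, here is a minimal k-mer profile distance, with illustrative parameter choices only (the paper's pattern-based approach and its tuned word lengths are not reproduced):

    ```python
    # Normalised k-mer frequency profiles and pairwise Euclidean distances;
    # the resulting matrix can feed a distance-based tree builder such as
    # neighbour joining.
    from itertools import product
    import numpy as np

    def kmer_profile(seq, k, alphabet="ACGT"):
        words = ["".join(p) for p in product(alphabet, repeat=k)]
        index = {w: i for i, w in enumerate(words)}
        v = np.zeros(len(words))
        for i in range(len(seq) - k + 1):
            w = seq[i:i + k]
            if w in index:
                v[index[w]] += 1
        return v / max(v.sum(), 1)

    seqs = {"s1": "ACGTACGTGGCATGCA", "s2": "ACGTACGAGGCATGCA",
            "s3": "TTTTGGGGCCCCAAAA"}
    profiles = {n: kmer_profile(s, k=3) for n, s in seqs.items()}
    names = list(profiles)
    D = np.array([[np.linalg.norm(profiles[a] - profiles[b]) for b in names]
                  for a in names])
    print(names)
    print(np.round(D, 3))
    ```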

  10. Best Practices for Mudweight Window Generation and Accuracy Assessment between Seismic Based Pore Pressure Prediction Methodologies for a Near-Salt Field in Mississippi Canyon, Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Mannon, Timothy Patrick, Jr.

    Improving well design has been and always will be the primary goal in drilling operations in the oil and gas industry. Oil and gas plays are continuing to move into increasingly hostile drilling environments, including near- and/or sub-salt proximities. The ability to reduce the risk and uncertainty involved in drilling operations in unconventional geologic settings starts with improving the techniques for mudweight window modeling. To address this issue, an analysis of wellbore stability and well design improvement has been conducted. This study will show a systematic approach to well design by focusing on best practices for mudweight window projection for a field in Mississippi Canyon, Gulf of Mexico. The field includes depleted reservoirs and is in close proximity to salt intrusions. Analysis of offset wells has been conducted in the interest of developing an accurate picture of the subsurface environment by making connections between depth, non-productive time (NPT) events, and mudweights used. Commonly practiced petrophysical methods of pore pressure, fracture pressure, and shear failure gradient prediction have been applied to key offset wells in order to enhance the well design for two proposed wells. For the first time in the literature, the accuracy of the commonly accepted, seismic interval velocity based methodology and the relatively new, seismic frequency based methodology for pore pressure prediction are qualitatively and quantitatively compared. Accuracy standards are based on the agreement of the seismic outputs with pressure data obtained while drilling and with petrophysically based pore pressure outputs for each well. The results show significantly higher accuracy for the seismic frequency based approach in wells that were in near/sub-salt environments and higher overall accuracy for all of the wells in the study as a whole.

  11. A comparison among several P300 brain-computer interface speller paradigms.

    PubMed

    Fazel-Rezai, Reza; Gavett, Scott; Ahmad, Waqas; Rabbi, Ahmed; Schneider, Eric

    2011-10-01

    Since the brain-computer interface (BCI) speller was first proposed by Farwell and Donchin, there have been modifications in the visual aspects of P300 paradigms. Most of the changes are based on the original matrix format such as changes in the number of rows and columns, font size, flash/blank time, and flash order. The improvement in the resulting accuracy and speed of such systems has always been the ultimate goal. In this study, we have compared several different speller paradigms including row-column, single character flashing, and two region-based paradigms which are not based on the matrix format. In the first region-based paradigm, at the first level, characters and symbols are distributed over seven regions alphabetically, while in the second region-based paradigm they are distributed in the most frequently used order. At the second level, each one of the regions is further subdivided into seven subsets. The experimental results showed that the average accuracy and user acceptability for two region-based paradigms were higher than those for traditional paradigms such as row/column and single character.

  12. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.

  13. Fast and high-order numerical algorithms for the solution of multidimensional nonlinear fractional Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Mohebbi, Akbar

    2018-02-01

    In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE, we propose a method which has fourth-order accuracy in time and spectral accuracy in space, and for a nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in time. Due to the use of the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
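
    The split-step idea itself is easy to sketch. The code below applies Strang splitting to a fractional nonlinear Schrödinger equation, i u_t = (-Δ)^{α/2} u - |u|² u, as a stand-in for the FGLE (the paper's fourth-order and Crank-Nicolson schemes are not reproduced); both substeps are exact, since the fractional Laplacian is diagonal in Fourier space and the nonlinear substep leaves |u| unchanged:

    ```python
    # Strang split-step Fourier method for a fractional NLS model problem.
    import numpy as np

    N, L, alpha, dt, steps = 256, 2 * np.pi * 10, 1.5, 1e-3, 1000
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    lin = np.exp(-1j * np.abs(xi) ** alpha * dt)  # exact linear propagator

    u = np.exp(-x**2) * (1.0 + 0j)                # smooth initial pulse
    for _ in range(steps):
        u *= np.exp(1j * np.abs(u) ** 2 * dt / 2)  # half nonlinear step (exact)
        u = np.fft.ifft(lin * np.fft.fft(u))       # full linear step (exact)
        u *= np.exp(1j * np.abs(u) ** 2 * dt / 2)  # half nonlinear step

    print("mass (conserved):", (np.abs(u) ** 2).sum() * (L / N))
    ```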

  14. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high order finite difference approximations, and semi-analytical calculation of boundary conditions are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
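
    The advantage of a fourth-order scheme at tight error tolerances is easy to see with generic central differences (an illustration only, not the electromagnet solver itself):

    ```python
    # Error of second- vs fourth-order central differences for d/dx sin(x)
    # at x = 1; the fourth-order error shrinks 16x per halving of h.
    import numpy as np

    f, x0 = np.sin, 1.0
    for h in [0.1, 0.05, 0.025, 0.0125]:
        d2 = (f(x0 + h) - f(x0 - h)) / (2 * h)                    # O(h^2)
        d4 = (-f(x0 + 2 * h) + 8 * f(x0 + h)
              - 8 * f(x0 - h) + f(x0 - 2 * h)) / (12 * h)         # O(h^4)
        print(f"h={h:<7} e2={abs(d2 - np.cos(x0)):.2e}"
              f"  e4={abs(d4 - np.cos(x0)):.2e}")
    ```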

  15. Engineering Inertial and Primary-Frequency Response for Distributed Energy Resources: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop

    We propose a framework to engineer synthetic-inertia and droop-control parameters for distributed energy resources (DERs) so that the system frequency in a network composed of DERs and synchronous generators conforms to prescribed transient and steady-state performance specifications. Our approach is grounded in a second-order lumped-parameter model that captures the dynamics of synchronous generators and frequency-responsive DERs endowed with inertial and droop control. A key feature of this reduced-order model is that its parameters can be related to those of the originating higher-order dynamical model. This allows one to systematically design the DER inertial and droop-control coefficients leveraging classical frequency-domain response characteristics of second-order systems. Time-domain simulations validate the accuracy of the model-reduction method and demonstrate how DER controllers can be designed to meet steady-state-regulation and transient-performance specifications.
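
    As a hedged illustration of the second-order design rules alluded to above (the mapping onto the paper's actual lumped model is an assumption here), one can pick the inertia M and damping D of M x'' + D x' + K x = u to hit a target damping ratio and natural frequency, then verify with a step response:

    ```python
    # Classical second-order design: wn = sqrt(K/M), zeta = D/(2 sqrt(K M));
    # K plays the role of a given synchronising coefficient.
    import numpy as np
    from scipy import signal

    K = 1.0
    zeta_target, wn_target = 0.7, 2.0

    M = K / wn_target**2
    D = 2 * zeta_target * np.sqrt(K * M)

    sys = signal.TransferFunction([K], [M, D, K])
    t, y = signal.step(sys)
    print(f"M = {M:.3f}, D = {D:.3f}, peak overshoot = {y.max() - 1:.3f}")
    ```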

  16. A comparison of two emergency medical dispatch protocols with respect to accuracy.

    PubMed

    Torlén, Klara; Kurland, Lisa; Castrén, Maaret; Olanders, Knut; Bohm, Katarina

    2017-12-29

    Emergency medical dispatching should be as accurate as possible in order to ensure patient safety and optimize the use of ambulance resources. This study aimed to compare the accuracy, measured as priority level, between two Swedish dispatch protocols - the three-graded priority protocol Medical Index and a newly developed prototype, the four-graded priority protocol, RETTS-A. A simulation study was carried out at the Emergency Medical Communication Centre (EMCC) in Stockholm, Sweden, between October and March 2016. Fifty-three voluntary telecommunicators working at SOS Alarm were recruited nationally. Each telecommunicator handled 26 emergency medical calls, simulated by experienced standard patients. Manuscripts for the scenarios were based on recorded real-life calls, representing the six most common complaints. A cross-over design with 13 + 13 calls was used. Priority level and medical condition for each scenario was set through expert consensus and used as gold standard in the study. A total of 1293 calls were included in the analysis. For priority level, n = 349 (54.0%) of the calls were assessed correctly with Medical Index and n = 309 (48.0%) with RETTS-A (p = 0.012). Sensitivity for the highest priority level was 82.6% (95% confidence interval: 76.6-87.3%) in the Medical Index and 54.0% (44.3-63.4%) in RETTS-A. Overtriage was 37.9% (34.2-41.7%) in the Medical Index and 28.6% (25.2-32.2%) in RETTS-A. The corresponding proportion of undertriage was 6.3% (4.7-8.5%) and 23.4% (20.3-26.9%) respectively. In this simulation study we demonstrate that Medical Index had a higher accuracy for priority level and less undertriage than the new prototype RETTS-A. The overall accuracy of both protocols is to be considered as low. Overtriage challenges resource utilization, while undertriage threatens patient safety. The results suggest that both protocols need revision in order to guarantee safe emergency medical dispatching and improve patient safety.
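
    For readers unfamiliar with the triage metrics, the sketch below shows how they follow from a priority confusion matrix; the conventional definitions used here (rows = true priority, columns = assigned priority, index 0 = most urgent) are an assumption, not taken from the paper's methods:

    ```python
    # Accuracy, sensitivity for the most urgent level, and over-/undertriage
    # from a toy three-level priority confusion matrix.
    import numpy as np

    cm = np.array([[80, 15,  5],
                   [20, 60, 20],
                   [ 5, 25, 70]])

    total = cm.sum()
    accuracy = np.trace(cm) / total
    sens_highest = cm[0, 0] / cm[0].sum()          # most urgent level
    overtriage = np.tril(cm, k=-1).sum() / total   # assigned more urgent than true
    undertriage = np.triu(cm, k=1).sum() / total   # assigned less urgent than true
    print(accuracy, sens_highest, overtriage, undertriage)
    ```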

  17. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  18. Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Thomas, James

    2008-01-01

    Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
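
    A generic unweighted least-squares gradient reconstruction, of the type evaluated in the study (the mapped variant with a distance function is not reproduced), makes the aspect-ratio sensitivity easy to observe:

    ```python
    # Fit grad u at a point from differences to neighbours by least squares,
    # then watch the error grow as the stencil aspect ratio increases while
    # the solution stays fixed.
    import numpy as np

    def ls_gradient(xc, xs, us, uc):
        A = xs - xc                    # rows are neighbour offsets
        du = us - uc
        g, *_ = np.linalg.lstsq(A, du, rcond=None)
        return g

    u = lambda p: np.sin(p[..., 0]) + 0.5 * p[..., 1] ** 2
    xc = np.array([0.3, 0.2])
    offsets = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]], float)
    exact = np.array([np.cos(0.3), 0.2])

    for AR in (1.0, 100.0, 10000.0):
        xs = xc + offsets * np.array([1e-2, 1e-2 / AR])  # anisotropic stencil
        g = ls_gradient(xc, xs, u(xs), u(xc))
        print(f"AR={AR:>8.0f}  grad error = {np.linalg.norm(g - exact):.2e}")
    ```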

  19. Low Dissipative High Order Shock-Capturing Methods Using Characteristic-Based Filters

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sandham, N. D.; Djomehri, M. J.

    1998-01-01

    An approach which closely maintains the non-dissipative nature of classical fourth or higher- order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.

  1. A Fast Neural Network Approach to Predict Lung Tumor Motion during Respiration for Radiation Therapy Applications

    PubMed Central

    Slama, Matous; Benes, Peter M.; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdomen cancers, for example, lung cancers, respiratory motion moves the target tumor and thus adversely affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. To compensate for the latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion on a classical linear model, a perceptron model, and on a class of higher-order neural network model that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved a 3D mean absolute error of less than one millimeter within one hundred seconds of total treatment time. PMID:25893194
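
    A hedged stand-in for the prediction stage is sketched below: a quadratic input expansion plays the role of the higher-order neural unit, refit over a sliding window by ordinary least squares to mimic the real-time retraining (the paper's Levenberg-Marquardt batch training is not reproduced):

    ```python
    # Sliding-window retrained predictor on a synthetic breathing-like trace;
    # quadratic cross-terms of the tap vector supply the higher-order inputs.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(2000) * 0.02
    sig = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)

    n_taps, horizon, window = 8, 50, 400   # 1 s horizon at 50 Hz, say

    def features(x):                       # bias + linear + quadratic terms
        quad = np.outer(x, x)[np.triu_indices(len(x))]
        return np.concatenate(([1.0], x, quad))

    errs = []
    # Evaluate every 10th sample to keep the demo quick.
    for i in range(window + n_taps, len(sig) - horizon, 10):
        lo = i - window
        X = np.array([features(sig[j - n_taps:j]) for j in range(lo, i)])
        y = sig[lo + horizon:i + horizon]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)   # "retrain" on the window
        pred = features(sig[i - n_taps:i]) @ w
        errs.append(abs(pred - sig[i + horizon]))
    print("mean absolute prediction error:", np.mean(errs))
    ```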

  2. An Application of the Quadrature-Free Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Atkins, Harold L.

    2000-01-01

    The process of generating a block-structured mesh with the smoothness required for high-accuracy schemes is still a time-consuming process often measured in weeks or months. Unstructured grids about complex geometries are more easily generated, and for this reason, methods using unstructured grids have gained favor for aerodynamic analyses. The discontinuous Galerkin (DG) method is a compact finite-element projection method that provides a practical framework for the development of a high-order method using unstructured grids. Higher-order accuracy is obtained by representing the solution as a high-degree polynomial whose time evolution is governed by a local Galerkin projection. The traditional implementation of the discontinuous Galerkin method uses quadrature for the evaluation of the integral projections and is prohibitively expensive. Atkins and Shu introduced the quadrature-free formulation in which the integrals are evaluated a priori and exactly for a similarity element. The approach has been demonstrated to possess the accuracy required for acoustics even in cases where the grid is not smooth. Other issues such as boundary conditions and the treatment of non-linear fluxes have also been studied in earlier work. This paper describes the application of the quadrature-free discontinuous Galerkin method to a two-dimensional shear layer problem. First, a brief description of the method is given. Next, the problem is described and the solution is presented. Finally, the resources required to perform the calculations are given.

  3. A fast neural network approach to predict lung tumor motion during respiration for radiation therapy applications.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdomen cancers, for example, lung cancers, respiratory motion moves the target tumor and thus adversely affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. To compensate for the latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion on a classical linear model, a perceptron model, and on a class of higher-order neural network model that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved a 3D mean absolute error of less than one millimeter within one hundred seconds of total treatment time.

  4. Staggered Mesh Ewald: An extension of the Smooth Particle-Mesh Ewald method adding great versatility

    PubMed Central

    Cerutti, David S.; Duke, Robert E.; Darden, Thomas A.; Lybrand, Terry P.

    2009-01-01

    We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations. PMID:20174456

  5. New Models for Velocity/Pressure-Gradient Correlations in Turbulent Boundary Layers

    NASA Astrophysics Data System (ADS)

    Poroseva, Svetlana; Murman, Scott

    2014-11-01

    To improve the performance of Reynolds-Averaged Navier-Stokes (RANS) turbulence models, one has to improve the accuracy of models for three physical processes: turbulent diffusion, interaction of turbulent pressure and velocity fluctuation fields, and dissipative processes. The accuracy of modeling the turbulent diffusion depends on the order of a statistical closure chosen as a basis for a RANS model. When the Gram-Charlier series expansions for the velocity correlations are used to close the set of RANS equations, no assumption on Gaussian turbulence is invoked and no unknown model coefficients are introduced into the modeled equations. In such a way, this closure procedure reduces the modeling uncertainty of fourth-order RANS (FORANS) closures. Experimental and direct numerical simulation data confirmed the validity of using the Gram-Charlier series expansions in various flows including boundary layers. We will address modeling the velocity/pressure-gradient correlations. New linear models will be introduced for the second- and higher-order correlations applicable to two-dimensional incompressible wall-bounded flows. Results of models' validation with DNS data in a channel flow and in a zero-pressure gradient boundary layer over a flat plate will be demonstrated. A part of the material is based upon work supported by NASA under award NNX12AJ61A.

  6. Propagation of coherent light pulses with PHASE

    NASA Astrophysics Data System (ADS)

    Bahrdt, J.; Flechsig, U.; Grizzoli, W.; Siewert, F.

    2014-09-01

    The current status of the software package PHASE for the propagation of coherent light pulses along a synchrotron radiation beamline is presented. PHASE is based on an asymptotic expansion of the Fresnel-Kirchhoff integral (stationary phase approximation), which is usually truncated at the 2nd order. The limits of this approximation as well as possible extensions to higher orders are discussed. The accuracy is benchmarked against a direct integration of the Fresnel-Kirchhoff integral. Long range slope errors of optical elements can be included by means of 8th order polynomials in the optical element coordinates w and l. Only recently, a method for the description of short range slope errors has been implemented. The accuracy of this method is evaluated and examples for realistic slope errors are given. PHASE can be run either from a built-in graphical user interface or from any script language. The latter method provides substantial flexibility. Optical elements including apertures can be combined. Complete wave packages can be propagated, as well. Fourier propagators are included in the package; thus, the user may choose between a variety of propagators. Several means to speed up the computation time were tested, among them the parallelization in a multi-core environment and the parallelization on a cluster.

  7. A technique for increasing the accuracy of the numerical inversion of the Laplace transform with applications

    NASA Technical Reports Server (NTRS)

    Berger, B. S.; Duangudom, S.

    1973-01-01

    A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.

  8. Wakefield Simulation of CLIC PETS Structure Using Parallel 3D Finite Element Time-Domain Solver T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic time-domain code T3P. Higher-order Finite Element methods on conformal unstructured meshes and massively parallel processing allow unprecedented simulation accuracy for wakefield computations and simulations of transient effects in realistic accelerator structures. Applications include simulation of wakefield damping in the Compact Linear Collider (CLIC) power extraction and transfer structure (PETS).

  9. Generalised model-independent characterisation of strong gravitational lenses. II. Transformation matrix between multiple images

    NASA Astrophysics Data System (ADS)

    Wagner, J.; Tessore, N.

    2018-05-01

    We determine the transformation matrix that maps multiple images with identifiable resolved features onto one another and that is based on a Taylor-expanded lensing potential in the vicinity of a point on the critical curve within our model-independent lens characterisation approach. From the transformation matrix, the same information about the properties of the critical curve at fold and cusp points can be derived as we previously found when using the quadrupole moment of the individual images as observables. In addition, we read off the relative parities between the images, so that the parity of all images is determined when one is known. We compare all retrievable ratios of potential derivatives to the actual values and to those obtained by using the quadrupole moment as observable for two- and three-image configurations generated by a galaxy-cluster scale singular isothermal ellipse. We conclude that using the quadrupole moments as observables, the properties of the critical curve are retrieved to a higher accuracy at the cusp points and to a lower accuracy at the fold points; the ratios of second-order potential derivatives are retrieved to comparable accuracy. We also show that the approach using ratios of convergences and reduced shear components is equivalent to ours in the vicinity of the critical curve, but yields more accurate results and is more robust because it does not require a special coordinate system as the approach using potential derivatives does. The transformation matrix is determined by mapping manually assigned reference points in the multiple images onto one another. If the assignment of the reference points is subject to measurement uncertainties under the influence of noise, we find that the confidence intervals of the lens parameters can be as large as the values themselves when the uncertainties are larger than one pixel. In addition, observed multiple images with resolved features are more extended than unresolved ones, so that higher-order moments should be taken into account to improve the reconstruction precision and accuracy.

  10. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require minimal use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher-order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.

  11. Fast object detection algorithm based on HOG and CNN

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wang, Dandan; Zhang, Yanduo

    2018-04-01

    In the field of computer vision, object classification and object detection are widely used in many areas. Traditional object detection has two main problems: the sliding-window region selection strategy has high time complexity and produces redundant windows, and the hand-crafted features are not sufficiently robust. To solve these problems, a Region Proposal Network (RPN) is used to select candidate regions instead of the selective search algorithm. Compared with traditional algorithms and selective search, the RPN has higher efficiency and accuracy. We combine HOG features with a convolutional neural network (CNN) to extract features, and use an SVM for classification. For TorontoNet, our algorithm's mAP is 1.6 percentage points higher; for OxfordNet, it is 1.3 percentage points higher.
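
    A minimal sketch of the hybrid feature pipeline described above, assuming scikit-image and scikit-learn are available; cnn_features is a hypothetical stand-in for a trained network's feature extractor, and the random patches and labels exist only to make the snippet runnable.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import SVC

      def cnn_features(image):
          # Hypothetical placeholder: in practice, the penultimate-layer
          # activations of a trained CNN would be returned here.
          return np.zeros(128)

      def combined_features(image):
          # HOG descriptor concatenated with the (stand-in) CNN features
          h = hog(image, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
          return np.concatenate([h, cnn_features(image)])

      images = [np.random.rand(64, 64) for _ in range(20)]  # dummy grayscale patches
      labels = np.array([0, 1] * 10)                        # dummy class labels
      X = np.array([combined_features(im) for im in images])
      clf = SVC(kernel='linear').fit(X, labels)             # SVM classification stage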

  12. Characterization of depressive States in bipolar patients using wearable textile technology and instantaneous heart rate variability assessment.

    PubMed

    Valenza, Gaetano; Citi, Luca; Gentili, Claudio; Lanata, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2015-01-01

    The analysis of cognitive and autonomic responses to emotionally relevant stimuli could provide a viable solution for the automatic recognition of different mood states, both in normal and pathological conditions. In this study, we present a methodological application describing a novel system based on wearable textile technology and instantaneous nonlinear heart rate variability assessment, able to characterize the autonomic status of bipolar patients from electrocardiogram recordings alone. As a proof of concept, our study presents results obtained from eight bipolar patients during their normal daily activities while being elicited according to a specific emotional protocol based on the presentation of emotionally relevant pictures. Linear and nonlinear features were computed using a novel point-process-based nonlinear autoregressive integrative model and compared with traditional algorithmic methods. The estimated indices were used as the input of a multilayer perceptron to discriminate the depressive from the euthymic status. Results show that our system achieves much higher accuracy than the traditional techniques. Moreover, the inclusion of instantaneous higher order spectra features significantly improves the accuracy in successfully recognizing depression from euthymia.
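
    A minimal sketch of the classification stage only, assuming scikit-learn; the feature matrix stands in for the point-process HRV indices (which are not reproduced here), so the random data and the printed score are purely illustrative.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(80, 12))           # rows: ECG segments; cols: HRV indices
      y = np.array([0, 1] * 40)               # 0 = euthymic, 1 = depressive (dummy)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy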

  13. 3D T2-weighted and Gd-EOB-DTPA-enhanced 3D T1-weighted MR cholangiography for evaluation of biliary anatomy in living liver donors.

    PubMed

    Cai, Larry; Yeh, Benjamin M; Westphalen, Antonio C; Roberts, John; Wang, Zhen J

    2017-03-01

    To investigate whether the addition of gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced 3D T1-weighted MR cholangiography (T1w-MRC) to 3D T2-weighted MRC (T2w-MRC) improves the confidence and diagnostic accuracy of biliary anatomy assessment in living liver donors. Two abdominal radiologists retrospectively and independently reviewed pre-operative MR studies in 58 consecutive living liver donors. Second-order bile duct visualization on T1w- and T2w-MRC images was rated on a 4-point scale. The readers also independently recorded the biliary anatomy and their diagnostic confidence using (1) combined T1w- and T2w-MRC, and (2) T2w-MRC alone. In the 23 right lobe donors, the biliary anatomy and the imaging-predicted number of duct orifices were compared to intra-operative findings. T1w-MRC had a higher proportion of excellent visualization than T2w-MRC: 66% vs. 45% for reader 1 and 60% vs. 31% for reader 2. The median confidence score for biliary anatomy diagnosis was significantly higher with combined T1w- and T2w-MRC than with T2w-MRC alone for both readers (reader 1: 3 vs. 2, p < 0.001; reader 2: 3 vs. 1, p < 0.001). Compared to intra-operative findings, the accuracy of the imaging-predicted number of duct orifices using combined T1w- and T2w-MRC was significantly higher than that using T2w-MRC alone (p = 0.034 for reader 1, p = 0.0082 for reader 2). The addition of Gd-EOB-DTPA-enhanced 3D T1w-MRC to 3D T2w-MRC improves second-order bile duct visualization and increases the confidence in biliary anatomy diagnosis and the accuracy of the imaging-predicted number of duct orifices obtained during right lobe harvesting.

  14. Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu

    2016-02-15

    We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
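
    A toy sketch of the fixed-point acceleration idea mentioned above: depth-1 Anderson mixing for a generic iteration x <- g(x). This is a standard numerical recipe under stated assumptions, not the paper's OF-DFT implementation.

      import numpy as np

      def anderson1(g, x0, tol=1e-10, maxit=200):
          # Depth-1 Anderson mixing: combine the two most recent residuals
          # f_k = g(x_k) - x_k to extrapolate toward the fixed point.
          x_prev, x = x0, g(x0)
          for _ in range(maxit):
              f, f_prev = g(x) - x, g(x_prev) - x_prev
              df = f - f_prev
              theta = np.dot(f, df) / max(np.dot(df, df), 1e-30)
              x_new = (x + f) - theta * ((x + f) - (x_prev + f_prev))
              if np.linalg.norm(x_new - x) < tol:
                  return x_new
              x_prev, x = x, x_new
          return x

      # usage: the contraction g(x) = cos(x) converges to ~0.739085
      print(anderson1(np.cos, np.array([1.0])))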

  15. A comprehensive evaluation of two MODIS evapotranspiration products over the conterminous United States: using point and gridded FLUXNET and water balance ET

    USGS Publications Warehouse

    Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.

    2013-01-01

    Remote sensing datasets are increasingly being used to provide spatially explicit large scale evapotranspiration (ET) estimates. Extensive evaluation of such large scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation and climate zone. Basin scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both the ET products was also found to be comparable over different climate zones. However, SSEBop data showed higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty) illustrating the reliability of MODIS ET products for basin-scale ET estimation. Results from this research would guide the additional parameter refinement required for the MOD16 and SSEBop algorithms in order to further improve their accuracy and performance for agro-hydrologic applications.

  16. A new solution method for wheel/rail rolling contact.

    PubMed

    Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei

    2016-01-01

    To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed with the explicit software ANSYS/LS-DYNA. To improve solution speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of the implicit and explicit algorithms. The solution method first calculates the pre-loading of wheel/rail rolling contact with the explicit algorithm; the results then serve as the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.

  17. A New High-Order Spectral Difference Method for Simulating Viscous Flows on Unstructured Grids with Mixed Elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mao; Qiu, Zihua; Liang, Chunlei

    In the present study, a new spectral difference (SD) method is developed for viscous flows on meshes with a mixture of triangular and quadrilateral elements. The standard SD method for triangular elements, which employs Lagrangian interpolating functions for fluxes, is not stable when the designed accuracy of spatial discretization is third-order or higher. Unlike the standard SD method, the method examined here uses vector interpolating functions in the Raviart-Thomas (RT) spaces to construct continuous flux functions on reference elements. Studies have been performed for the 2D wave equation and the Euler equations. Our results demonstrate that the SDRT method is stable and high-order accurate for a number of test problems using triangular, quadrilateral, and mixed-element meshes.

  18. CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Owen, John Michael; Raskin, Cody; Frontiere, Nicholas

    2018-01-01

    The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; lack of obvious symmetries or a simplified spatial geometry to exploit necessitate 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.
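
    A one-dimensional sketch of the reproducing-kernel idea underlying CRKSPH's accuracy claim, under simplifying assumptions (Gaussian base kernel, no conservation machinery): the kernel is corrected point by point so that constant and linear fields are interpolated exactly, which is what gives the higher-order interpolation theory relative to standard SPH.

      import numpy as np

      def rk_interpolate(xq, x, f, vol, h):
          # Linearly consistent reproducing-kernel interpolation at xq: correct a
          # Gaussian SPH kernel with a + b*(x - xq) so that sum(vol*W') = 1 and
          # sum(vol*W'*(x - xq)) = 0 hold exactly.
          w = np.exp(-((x - xq) / h) ** 2)
          dx = x - xq
          m0, m1, m2 = (np.sum(vol * w * dx**k) for k in range(3))
          det = m0 * m2 - m1 * m1
          a, b = m2 / det, -m1 / det
          return np.sum(vol * w * (a + b * dx) * f)

      x = np.linspace(0.0, 1.0, 41)
      vol = np.full_like(x, x[1] - x[0])      # per-point volumes
      f = 2.0 + 3.0 * x                       # a linear field
      print(rk_interpolate(0.5, x, f, vol, h=0.08))   # 3.5 up to round-off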

  19. Accuracy of analytic energy level formulas applied to hadronic spectroscopy of heavy mesons

    NASA Technical Reports Server (NTRS)

    Badavi, Forooz F.; Norbury, John W.; Wilson, John W.; Townsend, Lawrence W.

    1988-01-01

    Linear and harmonic potential models are used in the nonrelativistic Schroedinger equation to obtain article mass spectra for mesons as bound states of quarks. The main emphasis is on the linear potential where exact solutions of the S-state eigenvalues and eigenfunctions and the asymptotic solution for the higher order partial wave are obtained. A study of the accuracy of two analytical energy level formulas as applied to heavy mesons is also included. Cornwall's formula is found to be particularly accurate and useful as a predictor of heavy quarkonium states. Exact solution for all partial waves of eigenvalues and eigenfunctions for a harmonic potential is also obtained and compared with the calculated discrete spectra of the linear potential. Detailed derivations of the eigenvalues and eigenfunctions of the linear and harmonic potentials are presented in appendixes.

  20. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, combined with a second-order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th-order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th-order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th-order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high-order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second-order TVD scheme at resolution changes. For spherical grids the new schemes are only second-order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high-order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high-order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three-dimensional time-dependent simulations this means that the high-order scheme is almost 10 times faster and requires 8 times less storage than the second-order method.
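
    A small sketch of the kind of verification test mentioned above: the observed order of accuracy is recovered from errors measured on successively refined grids. The error numbers below are invented for illustration and are not BATS-R-US output.

      import numpy as np

      def observed_order(err_coarse, err_fine, refinement=2.0):
          # p such that err ~ C * h**p across a grid refinement by 'refinement'
          return np.log(err_coarse / err_fine) / np.log(refinement)

      errs = [1.2e-3, 7.8e-5, 4.9e-6]   # e.g. L1 errors on h, h/2, h/4 grids
      print([observed_order(a, b) for a, b in zip(errs, errs[1:])])  # close to 4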

  1. Configurational forces in electronic structure calculations using Kohn-Sham density functional theory

    NASA Astrophysics Data System (ADS)

    Motamarri, Phani; Gavini, Vikram

    2018-04-01

    We derive the expressions for configurational forces in Kohn-Sham density functional theory, which correspond to the generalized variational force computed as the derivative of the Kohn-Sham energy functional with respect to the position of a material point x. These configurational forces that result from the inner variations of the Kohn-Sham energy functional provide a unified framework to compute atomic forces as well as the stress tensor for geometry optimization. Importantly, owing to the variational nature of the formulation, these configurational forces inherently account for the Pulay corrections. The formulation presented in this work treats both pseudopotential and all-electron calculations in a single framework, and employs a local variational real-space formulation of Kohn-Sham density functional theory (DFT) expressed in terms of the nonorthogonal wave functions that is amenable to reduced-order scaling techniques. We demonstrate the accuracy and performance of the proposed configurational force approach on benchmark all-electron and pseudopotential calculations conducted using higher-order finite-element discretization. To this end, we examine the rates of convergence of the finite-element discretization in the computed forces and stresses for various materials systems, and, further, verify the accuracy from finite differencing the energy. Wherever applicable, we also compare the forces and stresses with those obtained from Kohn-Sham DFT calculations employing plane-wave basis (pseudopotential calculations) and Gaussian basis (all-electron calculations). Finally, we verify the accuracy of the forces on large materials systems involving a metallic aluminum nanocluster containing 666 atoms and an alkane chain containing 902 atoms, where the Kohn-Sham electronic ground state is computed using a reduced-order scaling subspace projection technique [P. Motamarri and V. Gavini, Phys. Rev. B 90, 115127 (2014), 10.1103/PhysRevB.90.115127].
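
    A minimal sketch of the finite-difference verification mentioned above, with a toy energy function standing in for the Kohn-Sham functional: the force on a coordinate should match the negative central difference of the energy to O(h^2).

      import numpy as np

      def fd_force(energy, x, i, h=1.0e-4):
          # Central-difference estimate of F_i = -dE/dx_i
          xp, xm = x.copy(), x.copy()
          xp[i] += h
          xm[i] -= h
          return -(energy(xp) - energy(xm)) / (2.0 * h)

      energy = lambda x: 0.5 * np.sum(x**2)    # toy quadratic energy
      x = np.array([0.3, -0.1, 0.7])
      print(fd_force(energy, x, 0), -x[0])     # the two numbers should agree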

  2. Impact of Linearity and Write Noise of Analog Resistive Memory Devices in a Neural Algorithm Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.

    Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally-intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2 and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. This suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.
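
    A hedged sketch of how such a nonlinear, noisy analog update can be modeled in simulation; the exponential saturation law and the noise magnitude are illustrative assumptions, not parameters fitted to the Ta-TaOx, Cu-SiO2 or Ag/chalcogenide devices of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      G_MIN, G_MAX = 1.0e-6, 1.0e-4   # assumed conductance range (siemens)

      def potentiate(G, alpha=0.05, beta=3.0, noise=0.02):
          # One analog SET pulse: a state-dependent (nonlinear) increment that
          # saturates as G approaches G_MAX, perturbed by multiplicative write noise.
          dG = alpha * (G_MAX - G_MIN) * np.exp(-beta * (G - G_MIN) / (G_MAX - G_MIN))
          dG *= 1.0 + noise * rng.normal()
          return np.clip(G + dG, G_MIN, G_MAX)

      G = G_MIN
      for pulse in range(50):
          G = potentiate(G)   # early pulses move G far more than late ones
      print(G)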

  3. Impact of Linearity and Write Noise of Analog Resistive Memory Devices in a Neural Algorithm Accelerator

    DOE PAGES

    Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.; ...

    2017-12-01

    Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally-intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2 and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. This suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.

  4. High Order Approximations for Compressible Fluid Dynamics on Unstructured and Cartesian Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy (Editor); Deconinck, Herman (Editor)

    1999-01-01

    The development of high-order accurate numerical discretization techniques for irregular domains and meshes is often cited as one of the remaining challenges facing the field of computational fluid dynamics. In structural mechanics, the advantages of high-order finite element approximation are widely recognized. This is especially true when high-order element approximation is combined with element refinement (h-p refinement). In computational fluid dynamics, high-order discretization methods are infrequently used in the computation of compressible fluid flow. The hyperbolic nature of the governing equations and the presence of solution discontinuities makes high-order accuracy difficult to achieve. Consequently, second-order accurate methods are still predominately used in industrial applications even though evidence suggests that high-order methods may offer a way to significantly improve the resolution and accuracy for these calculations. To address this important topic, a special course was jointly organized by the Applied Vehicle Technology Panel of NATO's Research and Technology Organization (RTO), the von Karman Institute for Fluid Dynamics, and the Numerical Aerospace Simulation Division at the NASA Ames Research Center. The NATO RTO sponsored course entitled "Higher Order Discretization Methods in Computational Fluid Dynamics" was held September 14-18, 1998 at the von Karman Institute for Fluid Dynamics in Belgium and September 21-25, 1998 at the NASA Ames Research Center in the United States. During this special course, lecturers from Europe and the United States gave a series of comprehensive lectures on advanced topics related to the high-order numerical discretization of partial differential equations with primary emphasis given to computational fluid dynamics (CFD). Additional consideration was given to topics in computational physics such as the high-order discretization of the Hamilton-Jacobi, Helmholtz, and elasticity equations. This volume consists of five articles prepared by the special course lecturers. These articles should be of particular relevance to those readers with an interest in numerical discretization techniques which generalize to very high-order accuracy. The articles of Professors Abgrall and Shu consider the mathematical formulation of high-order accurate finite volume schemes utilizing essentially non-oscillatory (ENO) and weighted essentially non-oscillatory (WENO) reconstruction together with upwind flux evaluation. These formulations are particularly effective in computing numerical solutions of conservation laws containing solution discontinuities. Careful attention is given by the authors to implementational issues and techniques for improving the overall efficiency of these methods. The article of Professor Cockburn discusses the discontinuous Galerkin finite element method. This method naturally extends to high-order accuracy and has an interpretation as a finite volume method. Cockburn addresses two important issues associated with the discontinuous Galerkin method: controlling spurious extrema near solution discontinuities via "limiting" and the extension to second order advective-diffusive equations (joint work with Shu). The articles of Dr. Henderson and Professor Schwab consider the mathematical formulation and implementation of the h-p finite element methods using hierarchical basis functions and adaptive mesh refinement. 
These methods are particularly useful in computing high-order accurate solutions containing perturbative layers and corner singularities. Additional flexibility is obtained using a mortar FEM technique whereby nonconforming elements are interfaced together. Numerous examples are given by Henderson applying the h-p FEM to the simulation of turbulence and turbulence transition.

  5. Reduced fMRI activity predicts relapse in patients recovering from stimulant dependence.

    PubMed

    Clark, Vincent P; Beatty, Gregory K; Anderson, Robert E; Kodituwakku, Piyadassa; Phillips, John P; Lane, Terran D R; Kiehl, Kent A; Calhoun, Vince D

    2014-02-01

    Relapse presents a significant problem for patients recovering from stimulant dependence. Here we examined the hypothesis that patterns of brain function obtained at an early stage of abstinence differentiate patients who later relapse from those who remain abstinent. Forty-five recently abstinent stimulant-dependent patients were tested using a randomized event-related functional MRI (ER-fMRI) design that was developed in order to replicate a previous ERP study of relapse using a selective attention task, and were then monitored until 6 months of verified abstinence or stimulant use occurred. SPM revealed smaller absolute blood oxygen level-dependent (BOLD) response amplitude in bilateral ventral posterior cingulate and right insular cortex in 23 patients positive for relapse to stimulant use compared with 22 who remained abstinent. ER-fMRI, psychiatric, neuropsychological, demographic, and personal and family history of drug use measures were compared in order to form predictive models. ER-fMRI was found to predict abstinence with higher accuracy than any other single measure obtained in this study. Logistic regression using fMRI amplitude in right posterior cingulate and insular cortex predicted abstinence with 77.8% accuracy, which increased to 89.9% accuracy when history of mania was included. Using 10-fold cross-validation, Bayesian logistic regression and multilayer perceptron algorithms provided the highest accuracy of 84.4%. These results, combined with previous studies, suggest that the functional organization of paralimbic brain regions, including ventral anterior and posterior cingulate and right insula, is related to patients' ability to maintain abstinence. Novel therapies designed to target these paralimbic regions identified using ER-fMRI may improve treatment outcome.
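
    A minimal sketch of the cross-validated prediction setup described above, assuming scikit-learn; the random matrix stands in for the fMRI amplitudes and the history-of-mania covariate, so the printed accuracy is meaningless except as a demonstration of the procedure.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(45, 3))             # e.g. cingulate/insula BOLD + mania history
      y = np.array([0, 1] * 22 + [0])          # 1 = relapsed within 6 months (dummy)
      acc = cross_val_score(LogisticRegression(), X, y, cv=10, scoring='accuracy')
      print(acc.mean())                        # 10-fold cross-validated accuracy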

  6. Slowly repeated evoked pain (SREP) as a marker of central sensitization in fibromyalgia: diagnostic accuracy and reliability in comparison with temporal summation of pain.

    PubMed

    de la Coba, Pablo; Bruehl, Stephen; Gálvez-Sánchez, Carmen María; Reyes Del Paso, Gustavo A

    2018-05-01

    This study examined the diagnostic accuracy and test-retest reliability of a novel dynamic evoked pain protocol (slowly repeated evoked pain; SREP) compared to temporal summation of pain (TSP), a standard index of central sensitization. Thirty-five fibromyalgia (FM) and 30 rheumatoid arthritis (RA) patients completed, in pseudorandomized order, a standard mechanical TSP protocol (10 stimuli of 1 s duration at the thenar eminence using a 300 g monofilament with a 1 s interstimulus interval) and the SREP protocol (9 suprathreshold pressure stimuli of 5 s duration applied to the fingernail with a 30 s interstimulus interval). In order to evaluate the reliability of both protocols, they were repeated in a second session 4-7 days later. Evidence for significant pain sensitization over trials (increasing pain intensity ratings) was observed for SREP in FM (p < .001) but not in RA (p = .35), whereas significant sensitization was observed in both diagnostic groups for the TSP protocol (ps < .008). Compared to TSP, SREP demonstrated higher overall diagnostic accuracy (87.7% vs. 64.6%), greater sensitivity (0.89 vs. 0.57), and greater specificity (0.87 vs. 0.73) in discriminating between FM and RA patients. Test-retest reliability of SREP sensitization was good in FM (ICC: 0.80) and moderate in RA (ICC: 0.68). SREP appears to be a dynamic evoked pain index tapping into pain sensitization that allows for greater diagnostic accuracy in identifying FM patients compared to a standard TSP protocol. Further research is needed to study the mechanisms underlying SREP and the potential utility of adding SREP to standard pain evaluation protocols.
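
    The diagnostic figures quoted above reduce to simple 2x2-table arithmetic; a small helper makes the relationship explicit. The example counts are chosen to be consistent with the 35 FM / 30 RA sample and the sensitivities reported above, not taken from the study's raw data.

      def diagnostic_summary(tp, fn, tn, fp):
          # Sensitivity, specificity and overall accuracy from a 2x2 table
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          acc = (tp + tn) / (tp + fn + tn + fp)
          return sens, spec, acc

      # e.g. 31 of 35 FM patients flagged, 26 of 30 RA patients correctly excluded
      print(diagnostic_summary(tp=31, fn=4, tn=26, fp=4))   # ~ (0.89, 0.87, 0.877)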

  7. A novel upwind stabilized discontinuous finite element angular framework for deterministic dose calculations in magnetic fields.

    PubMed

    Yang, R; Zelyak, O; Fallone, B G; St-Aubin, J

    2018-01-30

    Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.

  8. A novel upwind stabilized discontinuous finite element angular framework for deterministic dose calculations in magnetic fields

    NASA Astrophysics Data System (ADS)

    Yang, R.; Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-02-01

    Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.

  9. Research in computational fluid dynamics and analysis of algorithms

    NASA Technical Reports Server (NTRS)

    Gottlieb, David

    1992-01-01

    Recently, higher-order compact schemes have seen increasing use in the DNS (Direct Numerical Simulation) of the Navier-Stokes equations. Although they do not have the spatial resolution of spectral methods, they offer significant increases in accuracy over conventional second-order methods. They can be used on any smooth grid, and do not have an overly restrictive CFL dependence compared with the O(N(exp -2)) CFL dependence observed in Chebyshev spectral methods on finite domains. In addition, they are generally more robust and less costly than spectral methods. The issue of the relative cost of higher-order schemes (accuracy weighted against physical and numerical cost) is a far more complex one, depending ultimately on what features of the solution are sought and how accurately they must be resolved. In any event, the further development of the underlying stability theory of these schemes is important. The approach of devising suitable boundary closures and then testing them with various stability techniques (such as finding the norm) is entirely the wrong approach when dealing with high-order methods. Very seldom are high-order boundary closures stable, making stable ones difficult to isolate. An alternative approach is to begin with a norm which satisfies all the stability criteria for the hyperbolic system, and look for the boundary closure forms which match the norm exactly. This method was used recently by Strand to isolate stable boundary closure schemes for the explicit central fourth- and sixth-order schemes. The norm used was an energy norm mimicking the norm for the differential equations. Further research should be devoted to boundary conditions for high-order schemes in order to make sure that the results obtained are reliable. The compact fourth-order and sixth-order finite difference schemes have been incorporated into a code to simulate flow past circular cylinders. This code will serve as a verification of the full spectral codes. A detailed stability analysis by Carpenter (from the Fluid Mechanics Division) and Gottlieb gave analytic conditions for stability as well as asymptotic stability. These have been incorporated in the code in the form of stable boundary conditions. Effects of cylinder rotation have been studied; the results differ from the known theoretical results, and we are in the middle of analyzing them. A detailed analysis of the effect of heating the cylinder on the shedding frequency has been carried out using the above schemes. It has been found that the shedding frequency decreases when the wire is heated. Experimental work is being carried out to confirm this result.
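
    A minimal illustration of the accuracy gap discussed above, using explicit (non-compact) central stencils for simplicity: the fourth-order first-derivative stencil's error falls much faster under grid refinement than the second-order one.

      import numpy as np

      def deriv_errors(n):
          # Max errors of 2nd- and 4th-order central differences of sin(x)
          x, h = np.linspace(0.0, 2.0 * np.pi, n, retstep=True)
          f, exact = np.sin(x), np.cos(x)
          i = np.arange(2, n - 2)
          d2 = (f[i + 1] - f[i - 1]) / (2.0 * h)
          d4 = (-f[i + 2] + 8.0 * f[i + 1] - 8.0 * f[i - 1] + f[i - 2]) / (12.0 * h)
          return np.max(np.abs(d2 - exact[i])), np.max(np.abs(d4 - exact[i]))

      for n in (33, 65, 129):
          print(n, deriv_errors(n))   # 2nd-order error ~ h^2, 4th-order ~ h^4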

  10. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
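
    A minimal sketch of the h-adaptivity logic described above: an elementary error controller for a method of order p, whose local error behaves like C*h**(p+1). The paper's order-selection stability test and quarantine mechanism are not reproduced here.

      def next_stepsize(h, err, tol, p, safety=0.9, grow=2.0, shrink=0.2):
          # Choose the next step so the local error estimate stays near tol
          factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (p + 1))
          return h * min(grow, max(shrink, factor))

      # usage: a BDF2 step whose error estimate overshot the tolerance
      print(next_stepsize(h=1e-2, err=5e-6, tol=1e-6, p=2))   # the step is reduced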

  11. Distribution and mitigation of higher-order ionospheric effects on precise GNSS processing

    NASA Astrophysics Data System (ADS)

    Hernández-Pajares, Manuel; Aragón-Ángel, Àngela; Defraigne, Pascale; Bergeot, Nicolas; Prieto-Cerdeira, Roberto; García-Rigo, Alberto

    2014-04-01

    Higher-order ionospheric effects (I2+) are one of the main limiting factors in very precise Global Navigation Satellite Systems (GNSS) processing, for applications where millimeter accuracy is demanded. This paper summarizes a comprehensive study of the I2+ effects in range and in GNSS precise products such as receiver position and clock, tropospheric delay, geocenter offset, and GNSS satellite position and clock. All the relevant higher-order contributions are considered: second and third orders, geometric bending, and slant total electron content (dSTEC) bending (i.e., the difference between the STEC for straight and bent paths). Using a realistic simulation with representative solar maximum conditions on GPS signals, both the effects and the mitigation errors are analyzed. The use of multifrequency L-band observation combinations has to be rejected due to their increased noise level. The results of the study show that the main two effects in range are the second-order ionospheric and dSTEC terms, with peak values up to 2 cm. Their combined impact on the precise GNSS satellite products affects the satellite Z coordinates (up to +1 cm) and satellite clocks (by more than ±20 ps). Other precise products are affected at the millimeter level. After correction, the impact on all the precise GNSS products is reduced below 5 mm. We finally show that the I2+ impact on a Precise Point Positioning (PPP) user is lower than the current uncertainties of the PPP solutions, after consistently applying the precise products (satellite orbits and clocks) obtained under I2+ correction.

  12. Comparison between multi-constellation ambiguity-fixed PPP and RTK for maritime precise navigation

    NASA Astrophysics Data System (ADS)

    Tegedor, Javier; Liu, Xianglin; Ørpen, Ole; Treffers, Niels; Goode, Matthew; Øvstedal, Ola

    2015-06-01

    In order to achieve high-accuracy positioning, either Real-Time Kinematic (RTK) or Precise Point Positioning (PPP) techniques can be used. While RTK normally delivers higher accuracy with shorter convergence times, PPP has been an attractive technology for maritime applications, as it delivers uniform positioning performance without the direct need for a nearby reference station. Traditional PPP has been based on ambiguity-float solutions using the GPS and Glonass constellations. However, the addition of new satellite systems, such as Galileo and BeiDou, and the possibility of fixing integer carrier-phase ambiguities (PPP-AR) make it possible to increase PPP accuracy. In this article, a performance assessment is presented comparing RTK, PPP and PPP-AR, using GNSS data collected from two antennas installed on a ferry navigating in Oslo (Norway). RTK solutions have been generated using short, medium and long baselines (up to 290 km). For the generation of PPP-AR solutions, Uncalibrated Hardware Delays (UHDs) for GPS, Galileo and BeiDou have been estimated using reference stations in Oslo and Onsala. The performances of RTK and of multi-constellation PPP and PPP-AR are presented.

  13. Speed-accuracy trade-off in skilled typewriting: decomposing the contributions of hierarchical control loops.

    PubMed

    Yamaguchi, Motonori; Crump, Matthew J C; Logan, Gordon D

    2013-06-01

    Typing performance involves hierarchically structured control systems: At the higher level, an outer loop generates a word or a series of words to be typed; at the lower level, an inner loop activates the keystrokes comprising the word in parallel and executes them in the correct order. The present experiments examined contributions of the outer- and inner-loop processes to the control of speed and accuracy in typewriting. Experiments 1 and 2 involved discontinuous typing of single words, and Experiments 3 and 4 involved continuous typing of paragraphs. Across experiments, typists were able to trade speed for accuracy but were unable to type at rates faster than 100 ms/keystroke, implying limits to the flexibility of the underlying processes. The analyses of the component latencies and errors indicated that the majority of the trade-offs were due to inner-loop processing. The contribution of outer-loop processing to the trade-offs was small, but it resulted in large costs in error rate. Implications for strategic control of automatic processes are discussed.

  14. Development of a high sensitivity pinhole type gamma camera using semiconductors for low dose rate fields

    NASA Astrophysics Data System (ADS)

    Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo

    2018-06-01

    We developed a pinhole type gamma camera, using a compact detector module of a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method to set the discrimination level lower, a high sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera we had previously developed for high dose rate fields. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.

  15. Working memory components that predict word problem solving: Is it merely a function of reading, calculation, and fluid intelligence?

    PubMed

    Fung, Wenson; Swanson, H Lee

    2017-07-01

    The purpose of this study was to assess whether the differential effects of working memory (WM) components (the central executive, phonological loop, and visual-spatial sketchpad) on math word problem-solving accuracy in children (N = 413, ages 6-10) are completely mediated by reading, calculation, and fluid intelligence. The results indicated that all three WM components predicted word problem solving in the nonmediated model, but only the storage component of WM yielded a significant direct path to word problem-solving accuracy in the fully mediated model. Fluid intelligence was found to moderate the relationship between WM and word problem solving, whereas reading, calculation, and related skills (naming speed, domain-specific knowledge) completely mediated the influence of the executive system on problem-solving accuracy. Our results are consistent with findings suggesting that storage eliminates the predictive contribution of executive WM to various measures (Colom, Rebollo, Abad, & Shih, Memory & Cognition, 34: 158-171, 2006). The findings suggest that the storage component of WM, rather than the executive component, has a direct path to higher-order processing in children.

  16. Prediction of Spirometric Forced Expiratory Volume (FEV1) Data Using Support Vector Regression

    NASA Astrophysics Data System (ADS)

    Kavitha, A.; Sujatha, C. M.; Ramakrishnan, S.

    2010-01-01

    In this work, prediction of the forced expiratory volume in 1 second (FEV1) in the pulmonary function test is carried out using a spirometer and support vector regression analysis. Pulmonary function data were measured with a flow-volume spirometer from volunteers (N=175) using a standard data acquisition protocol. The acquired data were then used to predict FEV1. Support vector machines with polynomial kernel functions of four different orders were employed to predict the values of FEV1. The performance is evaluated by computing the average prediction accuracy for normal and abnormal cases. Results show that support vector machines are capable of predicting FEV1 in both normal and abnormal cases, and the average prediction accuracy for normal subjects was higher than that for abnormal subjects. Accuracy in prediction was found to be high for a regularization constant of C=10. Since FEV1 is the most significant parameter in the analysis of spirometric data, it appears that this method of assessment is useful in diagnosing pulmonary abnormalities with incomplete data and data with poor recording.
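
    A minimal sketch of the regression setup, assuming scikit-learn; the synthetic predictors stand in for the spirometric measurements, and only the four kernel orders and the regularization constant C=10 are taken from the abstract.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X = rng.normal(size=(175, 6))                              # stand-in predictors
      y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=175)    # stand-in FEV1 values

      for degree in (1, 2, 3, 4):                 # the four polynomial kernel orders
          model = SVR(kernel='poly', degree=degree, C=10.0).fit(X, y)
          print(degree, model.score(X, y))        # in-sample R^2 per kernel order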

  17. Extended bounds limiter for high-order finite-volume schemes on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tsoutsanis, Panagiotis

    2018-06-01

    This paper explores the impact of the definition of the bounds of the limiter proposed by Michalak and Ollivier-Gooch in [56] (2009), for higher-order Monotone-Upstream Central Scheme for Conservation Laws (MUSCL) numerical schemes on unstructured meshes in the finite-volume (FV) framework. A new modification of the limiter is proposed where the bounds are redefined by utilising all the spatial information provided by all the elements in the reconstruction stencil. Numerical results obtained on smooth and discontinuous test problems of the Euler equations on unstructured meshes, highlight that the newly proposed extended bounds limiter exhibits superior performance in terms of accuracy and mesh sensitivity compared to the cell-based or vertex-based bounds implementations.
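
    A sketch of the family of limiters being modified, under simplifying assumptions: a Barth-Jespersen-type factor in which the admissible bounds are taken from the full reconstruction stencil rather than only the face neighbours. This illustrates the idea of extended bounds, not the paper's exact formulation.

      import numpy as np

      def limiter_factor(u_i, u_faces, u_stencil):
          # u_i: cell average; u_faces: unlimited reconstructed face values;
          # u_stencil: averages of all cells in the reconstruction stencil.
          u_min, u_max = np.min(u_stencil), np.max(u_stencil)
          phi = 1.0
          for u_f in u_faces:
              d = u_f - u_i
              if d > 0.0:
                  phi = min(phi, (u_max - u_i) / d)
              elif d < 0.0:
                  phi = min(phi, (u_min - u_i) / d)
          return max(0.0, min(1.0, phi))

      # usage: a reconstruction overshooting the stencil maximum gets scaled back
      print(limiter_factor(1.0, u_faces=[1.6, 0.8], u_stencil=[0.7, 1.0, 1.4]))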

  18. Computational wave dynamics for innovative design of coastal structures

    PubMed Central

    GOTOH, Hitoshi; OKAYASU, Akio

    2017-01-01

    For innovative designs of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equation for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are overviewed. In the former half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the latter part of this article, recent improvements of the particle method are shown as one of the cores of NWFs. Methods for attenuating unphysical pressure fluctuation and improving accuracy, such as the CMPS method for momentum conservation, a Higher-order Source of the Poisson Pressure Equation (PPE), a Higher-order Laplacian, an Error-Compensating Source in the PPE, and Gradient Correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of the accurate particle method, including Dynamic Stabilization for providing the minimum-required artificial repulsive force to improve stability of computation, and Space Potential Particle for describing the exact free-surface boundary condition, is described.

  19. Analysis and automatic identification of sleep stages using higher order spectra.

    PubMed

    Acharya, U Rajendra; Chua, Eric Chern-Pin; Chua, Kuang Chua; Min, Lim Choo; Tamura, Toshiyo

    2010-12-01

    Electroencephalogram (EEG) signals are widely used to study the activity of the brain, such as to determine sleep stages. These EEG signals are nonlinear and non-stationary in nature. It is difficult to perform sleep staging by visual interpretation and linear techniques. Thus, we use a nonlinear technique, higher order spectra (HOS), to extract hidden information in the sleep EEG signal. In this study, unique bispectrum and bicoherence plots for various sleep stages are proposed. These can be used as a visual aid for various diagnostic applications. A number of HOS-based features were extracted from these plots during the various sleep stages (wakefulness, Rapid Eye Movement (REM), and Non-REM stages 1-4), and they were found to be statistically significant, with p-values lower than 0.001 using an ANOVA test. These features were fed to a Gaussian mixture model (GMM) classifier for automatic identification. Our results indicate that the proposed system is able to identify sleep stages with an accuracy of 88.7%.
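
    A minimal numpy sketch of the direct (segment-averaged) bispectrum estimate underlying such HOS features; the derived feature set (e.g. bispectral entropies) and the GMM classification stage are omitted, and the random signal is illustrative only.

      import numpy as np

      def bispectrum(x, nfft=128):
          # Direct estimate of B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)],
          # averaged over non-overlapping Hanning-windowed segments.
          segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
          X = np.fft.fft(segs * np.hanning(nfft), axis=1)
          n = nfft // 2
          B = np.zeros((n, n), dtype=complex)
          for f1 in range(n):
              for f2 in range(n - f1):
                  B[f1, f2] = np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]))
          return B

      # usage: bispectrum of a dummy "EEG" trace
      B = bispectrum(np.random.randn(1024))
      print(B.shape, np.abs(B).max())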

  20. Nonlinear circuits for naturalistic visual motion estimation

    PubMed Central

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator.
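
    A sketch of the canonical pairwise model the abstract contrasts with: a Hassenstein-Reichardt correlator built from two neighbouring signals, one delayed, in opponent configuration. The higher-order-correlation extensions discussed above are not implemented here.

      import numpy as np

      def hrc_response(s1, s2, delay):
          # Opponent correlator: delayed copy of one input multiplied by the
          # undelayed neighbour, minus the mirror-image term; the slice [delay:]
          # discards samples wrapped around by np.roll.
          d1, d2 = np.roll(s1, delay), np.roll(s2, delay)
          return np.mean(d1[delay:] * s2[delay:] - s1[delay:] * d2[delay:])

      # usage: a drifting sinusoid gives opposite signs for opposite directions
      t = np.arange(500)
      right = (np.sin(0.2 * t), np.sin(0.2 * (t - 5)))   # the signal reaches s2 later
      print(hrc_response(*right, delay=5), hrc_response(*right[::-1], delay=5))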

  1. A supervised learning approach for Crohn's disease detection using higher-order image statistics and a novel shape asymmetry measure.

    PubMed

    Mahapatra, Dwarikanath; Schueffler, Peter; Tielbeek, Jeroen A W; Buhmann, Joachim M; Vos, Franciscus M

    2013-10-01

    The increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive, while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive alternative. Current MRI approaches assess the rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features like intensity and texture are used together with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and on higher-order statistics like skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
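
    A small sketch of the higher-order intensity statistics named above, assuming scipy is available; the texture descriptors and the entropy-based shape-asymmetry measure are omitted, and the random region is a placeholder for real MRI intensities.

      import numpy as np
      from scipy.stats import skew, kurtosis

      def region_features(intensities):
          # First- to fourth-order intensity statistics for one candidate region
          v = np.asarray(intensities, dtype=float).ravel()
          return np.array([v.mean(), v.std(), skew(v), kurtosis(v)])

      # usage: feature vector for a dummy 32x32 region
      print(region_features(np.random.rand(32, 32)))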

  2. Excitation energies from particle-particle random phase approximation with accurate optimized effective potentials

    NASA Astrophysics Data System (ADS)

    Jin, Ye; Yang, Yang; Zhang, Du; Peng, Degao; Yang, Weitao

    2017-10-01

    The optimized effective potential (OEP) that gives accurate Kohn-Sham (KS) orbitals and orbital energies can be obtained from a given reference electron density. These OEP-KS orbitals and orbital energies are used here for calculating electronic excited states with the particle-particle random phase approximation (pp-RPA). Our calculations allow the examination of pp-RPA excitation energies with the exact KS density functional theory (DFT). Various input densities are investigated. Specifically, the excitation energies using the OEP with the electron densities from the coupled-cluster singles and doubles method display the lowest mean absolute error from the reference data for the low-lying excited states. This study probes into the theoretical limit of the pp-RPA excitation energies with the exact KS-DFT orbitals and orbital energies. We believe that higher-order correlation contributions beyond the pp-RPA bare Coulomb kernel are needed in order to achieve even higher accuracy in excitation energy calculations.

  3. Development and performance validation of a cryogenic linear stage for SPICA-SAFARI verification

    NASA Astrophysics Data System (ADS)

    Ferrari, Lorenza; Smit, H. P.; Eggens, M.; Keizer, G.; de Jonge, A. W.; Detrain, A.; de Jonge, C.; Laauwen, W. M.; Dieleman, P.

    2014-07-01

    In the context of the SAFARI instrument (SpicA FAR-infrared Instrument), SRON is developing a test environment to verify the SAFARI performance. The characterization of the detector focal plane will be performed with a back-illuminated pinhole over a reimaged SAFARI focal plane by an XYZ scanning mechanism that consists of three linear stages stacked together. In order to reduce background radiation that can couple into the high-sensitivity cryogenic detectors (goal NEP of 2×10^-19 W/√Hz and saturation power of a few femtowatts), the scanner is mounted inside the cryostat in the 4 K environment. The required readout accuracy is 3 μm and the reproducibility 1 μm along the total travel of 32 mm. The stage will be operated in "on the fly" mode to prevent vibrations of the scanner mechanism and will move at a constant speed, selectable from 60 μm/s to 400 μm/s. In order to meet the requirements of large stroke, low dissipation (low friction) and high accuracy, a DC motor plus spindle stage solution has been chosen. In this paper we present the stage design and characterization, also describing the measurement setup. The room temperature performance has been measured with a 3D measuring machine cross-calibrated with a laser interferometer and a 2-axis tilt sensor. The low temperature verification has been performed in a wet 4 K cryostat, using a laser interferometer to measure the linear displacements and a theodolite to measure the angular displacements. The angular displacements can be calibrated with a precision of 4 arcsec and the position can be determined with high accuracy. The presence of friction caused higher values of torque than predicted and consequently higher dissipation. The thermal model of the stage has also been verified at 4 K.

  4. Mesoscale modelling methodology based on nudging to increase accuracy in WRA

    NASA Astrophysics Data System (ADS)

    Mylonas Dirdiris, Markos; Barbouchi, Sami; Hermmann, Hugo

    2016-04-01

    Offshore wind energy has recently become a rapidly growing renewable energy resource worldwide, with several offshore wind projects at different planning stages. Despite this, a better understanding of atmospheric interactions within the marine atmospheric boundary layer (MABL) is needed in order to contribute to better energy capture and cost-effectiveness. Observational nudging has recently attracted attention as an innovative method to increase the accuracy of wind flow modelling. This study focuses on the observational nudging capability of the Weather Research and Forecasting (WRF) model and on ways the uncertainty of wind flow modelling in wind resource assessment (WRA) can be reduced. Finally, an alternative way to calculate the model uncertainty is pinpointed. Approach: The WRF mesoscale model will be nudged with observations from FINO3 at three different heights. The model simulations with and without observational nudging will be verified against FINO1 measurement data at 100 m. In order to evaluate the observational nudging capability of WRF, two ways to derive the model uncertainty will be described: a global uncertainty, and an uncertainty per wind speed bin derived using the recommended practice of the IEA, in order to link the model uncertainty to a wind energy production uncertainty. This study assesses the observational data assimilation capability of the WRF model within the same vertical gridded atmospheric column. The principal aim is to investigate whether having observations up to one height can improve the simulation at a higher vertical level. The study will use objective analysis implementing a Cressman scheme to interpolate the observations in time and in space (keeping the horizontal component constant) to the gridded analysis. The WRF model core will then incorporate the interpolated variables into the "first guess" to develop a nudged simulation. Consequently, WRF with and without observational nudging will be validated against the higher level of the FINO1 met mast using statistical verification metrics such as the root mean square error (RMSE), the standard deviation of the mean error (ME Std), the mean error (bias) and the Pearson correlation coefficient (R). The same process will be followed for different atmospheric stratification regimes in order to evaluate the sensitivity of the method to atmospheric stability. Finally, since wind speed does not have an equally distributed impact on the power yield, the uncertainty will be measured in two ways, resulting in a global uncertainty and one per wind speed bin based on a wind turbine power curve, in order to evaluate WRF for the purposes of wind power generation. Conclusion: This study shows the higher accuracy of the WRF model after nudging observational data. In a next step these results will be compared with traditional vertical extrapolation methods such as the power and log laws. The larger goal of this work is to nudge observations from a short offshore met mast so that WRF can accurately reconstruct the entire wind profile of the atmosphere up to hub height, an important step towards reducing the cost of offshore WRA. Learning objectives: 1. The audience will get a clear view of the added value of observational nudging; 2. An interesting way to calculate WRF uncertainty will be described, linking wind speed uncertainty to energy uncertainty.
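
    For orientation, a small sketch of the verification metrics listed above; the synthetic arrays stand in for nudged WRF wind speeds and FINO1 met-mast observations.

    ```python
    import numpy as np

    def verify(model: np.ndarray, obs: np.ndarray) -> dict:
        err = model - obs
        return {
            "rmse": float(np.sqrt(np.mean(err**2))),    # root mean square error
            "bias": float(np.mean(err)),                # mean error
            "me_std": float(np.std(err)),               # std. dev. of the error
            "r": float(np.corrcoef(model, obs)[0, 1]),  # Pearson correlation
        }

    obs = 8 + 2 * np.random.rand(1000)               # synthetic 100 m wind speeds
    model = obs + 0.3 + 0.5 * np.random.randn(1000)  # synthetic model output
    print(verify(model, obs))
    ```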

  5. Exact lower and upper bounds on stationary moments in stochastic biochemical systems

    NASA Astrophysics Data System (ADS)

    Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai

    2017-08-01

    In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
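
    A minimal sketch of the core constraint the method exploits: for a nonnegative random variable with moments m_k = E[X^k], the Hankel matrices built from the moment sequence must be positive semidefinite. The paper couples such constraints with the steady-state moment equations in a semidefinite program; the snippet below only illustrates the PSD test on a known-valid sequence.

    ```python
    import numpy as np

    def hankel_moment_matrix(moments):
        """H[i, j] = m_{i+j}; positive semidefinite for any valid moment sequence."""
        n = (len(moments) + 1) // 2
        return np.array([[moments[i + j] for j in range(n)] for i in range(n)])

    mom = [1.0, 1.0, 2.0, 6.0, 24.0]  # m_0..m_4 of an Exponential(1) variable (m_k = k!)
    H = hankel_moment_matrix(mom)
    print(np.linalg.eigvalsh(H))      # all nonnegative -> consistent moment sequence
    ```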

  6. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms have been developed for the numerical integration of systems of nonhomogeneous, nonlinear, first-order ordinary differential equations. In comparison with previous integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, enabling stability and accuracy to be retained when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
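
    A hedged sketch of the property highlighted above, using the exponential-Euler rule on a stiff linear test equation (not the article's viscoplasticity algorithms): an asymptotically correct update remains stable and converges to the correct limit even with step sizes far beyond the explicit stability bound.

    ```python
    import numpy as np

    lam, g = 50.0, 1.0                # stiff test problem y' = -lam*y + g

    def exp_euler(y, h):              # exact for constant g: asymptotically correct
        return y * np.exp(-lam * h) + (g / lam) * (1 - np.exp(-lam * h))

    def fwd_euler(y, h):              # unstable once h > 2/lam
        return y + h * (-lam * y + g)

    y1 = y2 = 1.0
    for _ in range(10):               # h = 0.5 is far beyond the explicit limit 0.04
        y1, y2 = exp_euler(y1, 0.5), fwd_euler(y2, 0.5)
    print(y1, y2)                     # exp-Euler -> g/lam = 0.02; forward Euler blows up
    ```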

  7. Precise and absolute measurements of complex third-order optical susceptibility

    NASA Astrophysics Data System (ADS)

    Santran, Stephane; Canioni, Lionel; Cardinal, Thierry; Fargin, Evelyne; Le Flem, Gilles; Rouyer, Claude; Sarger, Laurent

    2000-11-01

    We present precise and absolute measurements of the full complex third-order optical susceptibility of different fused silica samples and of original glasses composed of tellurium, titanium, niobium, and erbium. These materials are designed to be key components for applications ranging from high-power laser systems to optoelectronics; their nonlinear index of refraction is a major property and thus must be accurately known. Owing to the accuracy and sensitivity of our technique, we have been able to find a large dispersion (more than 30%) of the nonlinear index of fused silica glasses as a function of their processing mode. On the other hand, measurements on tellurium glasses have shown very strong nonlinearities (40 times higher than fused silica), which can be linked to the configurations of their cations and anions. Although the titanium and niobium glasses are less nonlinear, they can be promising matrices for the addition of luminescent entities such as erbium, leading to very interesting laser amplification materials. The experimental set-up is a collinear pump-probe (orthogonally polarized) experiment using a transient absorption technique, built around a 100-femtosecond laser oscillator. A fast oscillating delay between the pump and the probe allows us to measure the electronic nonlinearity in quasi real time. The experiment has the following specifications: an absolute measurement accuracy below 10%, mainly due to the characterization of the laser parameters; a relative measurement accuracy of 1%; and a resolution below 5×10⁻²⁴ m²/V² (50 times smaller than the nonlinearity of fused silica).

  8. SIRGAS: the core geodetic infrastructure in Latin America and the Caribbean

    NASA Astrophysics Data System (ADS)

    Sanchez, L.; Brunini, C.; Drewes, H.; Mackern, V.; da Silva, A.

    2013-05-01

    Studying, understanding, and modelling geophysical phenomena, such as global change and geodynamics, require geodetic reference frames with (1) an order of accuracy higher than the magnitude of the effects under study, (2) consistency and reliability worldwide (the same accuracy everywhere), and (3) long-term stability (the same order of accuracy at any time). The definition, realisation, maintenance, and wide utilisation of the International Terrestrial Reference System (ITRS) are oriented to guarantee a globally unified geometric reference frame with reliability at the mm level, i.e. the International Terrestrial Reference Frame (ITRF). The densification of the global ITRF in Latin America and the Caribbean is provided by SIRGAS (Sistema de Referencia Geocéntrico para Las Américas), whose primary objective is to provide the most precise coordinates in the region. SIRGAS is therefore the backbone for all regional projects based on the generation, use, and analysis of geo-referenced data at both the national and the international level. Besides providing the reference for a wide range of scientific applications, such as the monitoring of Earth's crust deformations, vertical movements, sea level variations, atmospheric studies, etc., SIRGAS is also the platform for practical applications such as engineering projects, digital administration of geographical data, geospatial data infrastructures, etc. Accordingly, the present contribution describes the main features of SIRGAS, with particular attention to the challenges of continuing to provide the best possible long-term stable, high-precision reference frame for Latin America and the Caribbean.

  9. Numerical integration and optimization of motions for multibody dynamic systems

    NASA Astrophysics Data System (ADS)

    Aguilar Mayans, Joan

    This thesis considers the optimization and simulation of motions involving rigid body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space to which we map existing real world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid body dynamics. We analyze the accuracy and stability of the different integrators in the three-dimensional space and the rotation space SO(3). Integrators with an accuracy higher than first order perform more efficiently than integrators with first order accuracy, even in the presence of contacts. The third part uses Finite-time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that timescale separation causes for alternative solution methods.
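
    A compact sketch of the kind of order-of-accuracy measurement behind such comparisons, on a scalar test equation rather than rigid body dynamics: log2 ratios of errors under step halving recover order ≈ 1 for forward Euler and ≈ 4 for classical RK4.

    ```python
    import numpy as np

    def f(t, y):
        return -y                      # test problem y' = -y, y(0) = 1

    def step_euler(t, y, h):
        return y + h * f(t, y)

    def step_rk4(t, y, h):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    for step in (step_euler, step_rk4):
        errs = []
        for n in (50, 100, 200):       # successively halved step sizes
            h, t, y = 1.0 / n, 0.0, 1.0
            for _ in range(n):
                y, t = step(t, y, h), t + h
            errs.append(abs(y - np.exp(-1.0)))
        orders = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
        print(step.__name__, orders)   # ~1 for Euler, ~4 for RK4
    ```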

  10. Calibration and Testing of Digital Zenith Camera System Components

    NASA Astrophysics Data System (ADS)

    Ulug, Rasit; Halicioglu, Kerem; Tevfik Ozludemir, M.; Albayrak, Muge; Basoglu, Burak; Deniz, Rasim

    2017-04-01

    Since the beginning of the new millennium, thanks to Charge-Coupled Device (CCD) technology, fully or partly automatic zenith camera systems have been designed and used to determine the astro-geodetic deflection of the vertical components in several countries, including Germany, Switzerland, Serbia, Latvia, Poland, Austria, China and Turkey. The Digital Zenith Camera System (DZCS) of Turkey has performed successful observations, yet it needs to be improved in terms of automation and observation accuracy. In order to optimize the observation time and improve the system, some modifications have been implemented. Through the modification process that started at the beginning of 2016, some DZCS components have been replaced with new ones and some additional components have been installed. In this presentation, the ongoing calibration and testing of the DZCS are summarized. In particular, one of the tested system components, the High Resolution Tiltmeter (HRTM), which enables orientation of the DZCS orthogonal to the direction of the plumb line, is discussed. For the calibration of these components, two tiltmeters with different accuracies (1 nrad and 0.001 mrad) were observed for nearly 30 days. The data, recorded under different environmental conditions, were divided into hourly, daily, and weekly subsets. In addition to the effects of temperature and humidity, the interoperability of the two tiltmeters was also investigated. Results show that with the integration of the HRTM and the other implementations, the modified DZCS provides higher accuracy for the determination of vertical deflections.

  11. The efficacy of the reverse contrast mode in digital radiography for the detection of proximal dentinal caries

    PubMed Central

    Miri, Shimasadat; Mehralizadeh, Sandra; Sadri, Donya; Motamedi, Mahmood Reza Kalantar

    2015-01-01

    Purpose: This study evaluated the diagnostic accuracy of the reverse contrast mode in intraoral digital radiography for the detection of proximal dentinal caries, in comparison with the original digital radiographs. Materials and Methods: Eighty extracted premolars with no clinically apparent caries were selected, and digital radiographs of them were taken separately under standard conditions. Four observers examined the original radiographs and the same radiographs in the reverse contrast mode with the goal of identifying proximal dentinal caries. Microscopic sections 5 µm in thickness were prepared from the teeth in the mesiodistal direction. Four slides prepared from each sample were used as the diagnostic gold standard. The data were analyzed using SPSS (α=0.05). Results: For the identification of proximal dentinal caries, the original radiographs had a sensitivity of 72.5%, specificity of 90%, positive predictive value of 87.2%, negative predictive value of 76.5%, and accuracy of 80.9%. For the reverse contrast mode, the corresponding values were 63.1%, 89.4%, 87.1%, 73.5%, and 78.8%, respectively. The sensitivity of the original digital radiographs for detecting proximal dentinal caries was significantly higher than that of the reverse contrast mode (p<0.05). However, no statistically significant differences were found regarding specificity, positive predictive value, negative predictive value, or accuracy (p>0.05). Conclusion: The sensitivity of the original digital radiographs for detecting proximal dentinal caries was significantly higher than that of the reversed contrast images. However, no statistically significant differences were found between these techniques regarding specificity, positive predictive value, negative predictive value, or accuracy. PMID:26389055
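
    For reference, how the five reported indices follow from raw true/false positive/negative counts; the counts below are illustrative, not the study's data.

    ```python
    def diagnostic_indices(tp: int, fp: int, tn: int, fn: int) -> dict:
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),                    # positive predictive value
            "npv": tn / (tn + fn),                    # negative predictive value
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
        }

    print(diagnostic_indices(tp=58, fp=8, tn=72, fn=22))
    ```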

  12. The Laguerre finite difference one-way equation solver

    NASA Astrophysics Data System (ADS)

    Terekhov, Andrew V.

    2017-05-01

    This paper presents a new finite difference algorithm for solving the 2D one-way wave equation, with a preliminary approximation of a pseudo-differential operator by a system of partial differential equations. As opposed to existing approaches, the integral Laguerre transform is used instead of the Fourier transform. After the approximation of the spatial variables, it is possible to obtain systems of linear algebraic equations with better computational properties and to reduce the computational cost of their solution. High accuracy is attained by employing finite difference approximations of higher order of accuracy, based on the dispersion-relationship-preserving method and on Richardson extrapolation in the downward continuation direction. Numerical experiments have verified that, compared with the spectral difference method based on the Fourier transform, the new algorithm calculates wave fields with a higher degree of accuracy and a lower level of numerical noise and artifacts, including for non-smooth velocity models. In the context of the geophysical problem, post-stack migration has been carried out for velocity models of the Syncline and Sigsbee2A types. It is shown that the images obtained contain less noise and are considerably better focused than those obtained by the well-known Fourier Finite Difference and Phase-Shift Plus Interpolation methods. There is an opinion that purely finite difference approaches do not allow the seismic migration procedure to be carried out with sufficient accuracy; however, the results obtained disprove this statement. For the supercomputer implementation, it is proposed to use the parallel dichotomy algorithm when solving systems of linear algebraic equations with block-tridiagonal matrices.
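
    A minimal illustration of the Richardson-extrapolation idea the solver applies in the downward continuation direction, shown here for a finite-difference derivative rather than the migration operator itself.

    ```python
    import numpy as np

    def d_central(f, x, h):            # second-order central difference
        return (f(x + h) - f(x - h)) / (2 * h)

    def d_richardson(f, x, h):         # Richardson combination cancels the h^2 term
        return (4 * d_central(f, x, h / 2) - d_central(f, x, h)) / 3

    x, h = 1.0, 0.1
    print(abs(d_central(np.sin, x, h) - np.cos(x)))     # O(h^2) error
    print(abs(d_richardson(np.sin, x, h) - np.cos(x)))  # O(h^4) error, much smaller
    ```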

  13. Error-proneness as a handicap signal.

    PubMed

    De Jaegher, Kris

    2003-09-21

    This paper describes two discrete signalling models in which the error-proneness of signals can serve as a handicap signal. In the first model, the direct handicap of sending a high-quality signal is not large enough to ensure that a low-quality signaller will not send it. However, if the receiver sometimes mistakes a high-quality signal for a low-quality one, then there is an indirect handicap to sending a high-quality signal, and the total handicap of sending such a signal may then still be such that a low-quality signaller would not want to send it. In the second model, there is no direct handicap of sending signals, so that nothing would seem to stop a signaller from always sending a high-quality signal. However, the receiver sometimes fails to detect signals, and this causes an indirect handicap of sending a high-quality signal that still stops the low-quality signaller from sending such a signal. The conditions for honesty are that the probability of an error of detection is higher for a high-quality than for a low-quality signal, and that the receiver who does not detect a signal adopts a response that is bad for the signaller. In both our models, we thus obtain the result that signal accuracy should not lie above a certain level in order for honest signalling to be possible. Moreover, we show that the maximal accuracy that can be achieved is higher the lower the degree of conflict between signaller and receiver. Finally, we show that it is the conditions for honest signalling that may be constraining signal accuracy, rather than the signaller trying to make honest signals as effective as possible given receiver psychology, or the signaller adapting the accuracy of honest signals depending on his interests.

  14. Analysis the Accuracy of Digital Elevation Model (DEM) for Flood Modelling on Lowland Area

    NASA Astrophysics Data System (ADS)

    Zainol Abidin, Ku Hasna Zainurin Ku; Razi, Mohd Adib Mohammad; Bukari, Saifullizan Mohd

    2018-04-01

    Flooding is a type of natural disaster that occurs almost every year in Malaysia, and lowland areas are commonly the worst affected. Such disasters can be managed when accurate data are available for developing solutions, and elevation data are among the data used to produce flood solutions. Research on the application of Digital Elevation Models (DEM) in hydrology has recently increased; this kind of model identifies the elevation of the areas of interest. Universiti Tun Hussein Onn Malaysia is a lowland area that faced flooding in 2006. Therefore, this area was chosen for DEM generation, focusing on the University Health Centre (PKU) and the drainage area around the Civil and Environmental Faculty (FKAAS). An unmanned aerial vehicle was used to collect aerial photographs, from which DEMs were generated at three accuracy and quality settings in the Agisoft PhotoScan software. The higher the accuracy and quality level of the DEM produced, the longer the time taken to generate it. The errors recorded while producing the DEMs differed by almost 0.01. Several important parameters influencing the accuracy of the DEM were thereby identified.

  15. Contextual interference effect on perceptual-cognitive skills training.

    PubMed

    Broadbent, David P; Causer, Joe; Ford, Paul R; Williams, A Mark

    2015-06-01

    The contextual interference (CI) effect predicts that a random order of practice for multiple skills is superior for learning compared to a blocked order. We report a novel attempt to examine the CI effect during the acquisition and transfer of anticipatory judgments from simulation training to an applied sport situation. Participants were required to anticipate tennis shots under either a random or a blocked practice schedule. Response accuracy was recorded for both groups in a pretest, during acquisition, and on a 7-day retention test. Transfer of learning was assessed through a field-based tennis protocol that attempted to assess performance in an applied sport setting. The random practice group had significantly higher response accuracy scores on the 7-day laboratory retention test compared to the blocked group. Moreover, during the transfer of anticipatory judgments to an applied sport situation, the decision times of the random practice group were significantly lower compared to the blocked group. The CI effect thus extends to the training of anticipatory judgments through simulation techniques. Furthermore, we demonstrate for the first time that the CI effect increases transfer of learning from simulation training to an applied sport task, highlighting the importance of using appropriate practice schedules during simulation training.

  16. Towards Investigating Global Warming Impact on Human Health Using Derivatives of Photoplethysmogram Signals

    PubMed Central

    Elgendi, Mohamed; Norton, Ian; Brearley, Matt; Fletcher, Richard R.; Abbott, Derek; Lovell, Nigel H.; Schuurmans, Dale

    2015-01-01

    Recent clinical studies show that the contour of the photoplethysmogram (PPG) wave contains valuable information for characterizing cardiovascular activity. However, analyzing the PPG wave contour is difficult; therefore, researchers have applied first- or higher-order derivatives to emphasize and conveniently quantify subtle changes in the filtered PPG contour. Our hypothesis is that analyzing the whole PPG recording, rather than each PPG wave contour on a beat-by-beat basis, can detect heat-stressed subjects and that, consequently, we will be able to investigate the impact of global warming on human health. Here, we explore the most suitable derivative order for heat stress assessment based on the energy and entropy of the whole PPG recording. The results of our study indicate that the entropy of the seventh derivative of the filtered PPG signal shows promising results in detecting heat stress using 20-second recordings, with an overall accuracy of 71.6%. Moreover, combining the entropy of the seventh derivative of the filtered PPG signal with the root mean square of successive differences, or RMSSD (a traditional heart rate variability index of heat stress), improved the detection of heat stress to 88.9% accuracy. PMID:26473907
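
    A hedged sketch of the feature described: Shannon entropy of the seventh derivative (approximated by repeated differencing) of a PPG segment. The synthetic signal, sampling rate, and bin count are assumptions for illustration.

    ```python
    import numpy as np

    def entropy_of_derivative(signal: np.ndarray, order: int = 7, bins: int = 32) -> float:
        d = np.diff(signal, n=order)               # order-th finite difference
        counts, _ = np.histogram(d, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-np.sum(p * np.log2(p)))      # Shannon entropy of the amplitudes

    fs = 100                                       # assumed sampling rate (Hz)
    t = np.arange(0, 20, 1 / fs)                   # 20-second recording
    ppg = np.sin(2*np.pi*1.2*t) + 0.3*np.sin(2*np.pi*2.4*t)  # toy PPG stand-in
    print(entropy_of_derivative(ppg))
    ```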

  17. An Automatic Prediction of Epileptic Seizures Using Cloud Computing and Wireless Sensor Networks.

    PubMed

    Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar

    2016-11-01

    Epilepsy is one of the most common neurological disorders, characterized by the spontaneous and unforeseeable occurrence of seizures. Automatic prediction of seizures can protect patients from accidents and save their lives. In this article, we propose a mobile-based framework that automatically predicts seizures using the information contained in electroencephalography (EEG) signals. Wireless sensor technology is used to capture the EEG signals of patients, and cloud-based services are used to collect and analyze the EEG data from the patient's mobile phone. Features are extracted from the EEG signal using the fast Walsh-Hadamard transform (FWHT), and higher-order spectral analysis (HOSA) is applied to the FWHT coefficients in order to select the feature set relevant to the normal, preictal, and ictal states of seizure. We subsequently exploit the selected features as input to a k-means classifier to detect epileptic seizure states in a reasonable time. The performance of the proposed model is tested on the Amazon EC2 cloud and evaluated in terms of execution time and accuracy. The findings show that with the selected HOSA-based features, we were able to achieve a classification accuracy of 94.6%.
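
    A minimal sketch of two stages of the described pipeline, a fast Walsh-Hadamard transform followed by k-means clustering, applied to synthetic EEG windows; the crude magnitude-based feature selection stands in for the paper's HOSA step.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def fwht(a: np.ndarray) -> np.ndarray:
        """Fast Walsh-Hadamard transform; len(a) must be a power of two."""
        a = a.astype(float).copy()
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    rng = np.random.default_rng(0)
    segments = rng.standard_normal((100, 256))     # 100 synthetic EEG windows
    coeffs = np.array([fwht(s) for s in segments])
    feats = np.abs(coeffs[:, :16])                 # stand-in for HOSA-based selection
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
    print(labels[:10])
    ```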

  18. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, such representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model can efficiently replicate the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method for replicating complex and diverse synaptic transmission within neuron network simulations.

  19. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, such representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model can efficiently replicate the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method for replicating complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
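
    A toy sketch of a discrete second-order Volterra series of the kind used to capture the mechanistic model's input-output relationship; the kernels and the Poisson-like spike input here are arbitrary stand-ins, not the published model.

    ```python
    import numpy as np

    def volterra2(u, h0, h1, h2):
        """y[n] = h0 + sum_i h1[i]*u[n-i] + sum_{i,j} h2[i,j]*u[n-i]*u[n-j]."""
        M = len(h1)
        up = np.concatenate([np.zeros(M - 1), u])   # zero-padded input history
        y = np.zeros(len(u))
        for n in range(len(u)):
            w = up[n:n + M][::-1]                   # u[n], u[n-1], ..., u[n-M+1]
            y[n] = h0 + h1 @ w + w @ h2 @ w
        return y

    M = 8
    h1 = np.exp(-np.arange(M) / 2.0)                # decaying first-order kernel
    h2 = 0.05 * np.outer(h1, h1)                    # toy second-order kernel
    spikes = (np.random.rand(100) < 0.1).astype(float)
    print(volterra2(spikes, 0.0, h1, h2)[:10])
    ```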

  20. Fully coupled approach to modeling shallow water flow, sediment transport, and bed evolution in rivers

    NASA Astrophysics Data System (ADS)

    Li, Shuangcai; Duffy, Christopher J.

    2011-03-01

    Our ability to predict complex environmental fluid flow and transport hinges on accurate and efficient simulations of multiple physical phenomena operating simultaneously over a wide range of spatial and temporal scales, including overbank floods, coastal storm surge events, drying and wetting bed conditions, and simultaneous bed-form evolution. This research implements a fully coupled strategy for solving shallow water hydrodynamics, sediment transport, and morphological bed evolution in rivers and floodplains (PIHM_Hydro) and applies the model to field and laboratory experiments that cover a wide range of spatial and temporal scales. The model uses a standard upwind finite volume method and Roe's approximate Riemann solver for unstructured grids. A multidimensional linear reconstruction and slope limiter are implemented, achieving second-order spatial accuracy. Model efficiency and stability are treated using an explicit-implicit method for temporal discretization with operator splitting. Laboratory- and field-scale experiments were compiled in which coupled processes across a range of scales were observed and in which higher-order spatial and temporal accuracy might be needed for accurate and efficient solutions. These experiments demonstrate the ability of the fully coupled strategy to capture the dynamics of field-scale flood waves and small-scale drying-wetting processes.
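
    A one-dimensional sketch of the limited linear reconstruction step, with a minmod limiter assumed for simplicity; the model's multidimensional limiter on unstructured grids is more involved.

    ```python
    import numpy as np

    def minmod(a, b):
        """Zero at sign changes, else the argument of smaller magnitude."""
        return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

    def reconstruct(q, dx):
        """Limited slopes and west/east face values from cell averages q."""
        dql = np.diff(q, prepend=q[0])      # backward differences
        dqr = np.diff(q, append=q[-1])      # forward differences
        slope = minmod(dql, dqr) / dx
        return q - 0.5 * dx * slope, q + 0.5 * dx * slope

    q = np.array([1.0, 1.0, 2.0, 4.0, 4.5])
    print(reconstruct(q, dx=1.0))
    ```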

  1. Mammalian cell culture monitoring using in situ spectroscopy: Is your method really optimised?

    PubMed

    André, Silvère; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Duponchel, Ludovic

    2017-03-01

    In recent years, as a result of the process analytical technology initiative of the US Food and Drug Administration, many different studies have been carried out on direct and in situ monitoring of critical parameters for mammalian cell cultures by Raman spectroscopy and multivariate regression techniques. However, despite interesting results, the proposed monitoring strategies cannot be said to be truly optimized in ways that would reduce the errors of the regression models and thus the confidence limits of the predictions. Hence, the aim of this article is to optimize some critical steps of spectroscopic acquisition and data treatment in order to reach a higher level of accuracy and robustness in bioprocess monitoring. We first propose an original strategy to assess the most suitable Raman acquisition time for the processes involved. We then demonstrate the impact of interbatch variability on the accuracy of the predictive models, with a particular focus on the adjustment of the optical probes. Finally, we propose a methodology for optimizing the selection of spectral variables in order to decrease the prediction errors of the multivariate regressions. © 2017 American Institute of Chemical Engineers. Biotechnol. Prog., 33:308-316, 2017.
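
    An illustrative sketch of the kind of multivariate regression and spectral-variable selection being optimized, using scikit-learn PLS on synthetic spectra; the coefficient-magnitude selection rule is a simple stand-in, not the authors' methodology.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 500))                   # 60 spectra x 500 wavenumbers
    y = 2.0 * X[:, 100] + 0.1 * rng.standard_normal(60)  # one informative band

    pls = PLSRegression(n_components=3).fit(X, y)
    weights = np.abs(pls.coef_).ravel()
    keep = np.argsort(weights)[-50:]                     # keep 50 most-weighted variables
    pls_sel = PLSRegression(n_components=3).fit(X[:, keep], y)
    print(pls_sel.score(X[:, keep], y))                  # R^2 on the training spectra
    ```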

  2. A new family of high-order compact upwind difference schemes with good spectral resolution

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Yao, Zhaohui; He, Feng; Shen, M. Y.

    2007-12-01

    This paper presents a new family of high-order compact upwind difference schemes. The unknowns in the proposed schemes are not only the values of the function but also those of its first and higher derivatives, and the derivative terms appear only on the upwind side of the stencil. When the boundary conditions of the problem are non-periodic, all the first derivatives can be calculated exactly as in explicit schemes. When the proposed schemes are applied to periodic problems, only periodic bi-diagonal or periodic block-bi-diagonal matrix inversions are required. Resolution optimization is used to enhance the spectral representation of the first derivative, producing a scheme with the highest spectral accuracy among all known compact schemes. For non-periodic boundary conditions, boundary schemes constructed by virtue of an assistant scheme make the schemes not only stable for any selective length scale at every point in the computational domain but also compliant with the principle of optimal resolution. An improved shock-capturing method is also developed. Finally, both the effectiveness of the new hybrid method and the accuracy of the proposed schemes are verified on four benchmark test cases.
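
    For context, a sketch of the classical fourth-order compact (Padé) first-derivative approximation, (1/4)f'_{i-1} + f'_i + (1/4)f'_{i+1} = (3/(4h))(f_{i+1} - f_{i-1}), with simple one-sided boundary closures; the paper's schemes are upwind and carry derivative unknowns, which this centred tridiagonal example does not reproduce.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def compact_derivative(f, h):
        n = len(f)
        rhs = np.empty(n)
        rhs[1:-1] = 3.0 / (4.0 * h) * (f[2:] - f[:-2])
        rhs[0] = (-3*f[0] + 4*f[1] - f[2]) / (2*h)   # one-sided boundary closure
        rhs[-1] = (3*f[-1] - 4*f[-2] + f[-3]) / (2*h)
        ab = np.zeros((3, n))                        # banded (1,1) matrix storage
        ab[1, :] = 1.0                               # main diagonal
        ab[0, 2:] = 0.25                             # superdiagonal (interior rows only)
        ab[2, :-2] = 0.25                            # subdiagonal (interior rows only)
        return solve_banded((1, 1), ab, rhs)

    x = np.linspace(0.0, 2.0 * np.pi, 41)
    err = compact_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)
    print(np.max(np.abs(err)))
    ```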

  3. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
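
    A one-dimensional toy of the quadrature bottleneck discussed above: moments of a maximum-entropy-type density evaluated with Gauss-Hermite quadrature. The Lagrange multipliers and node count are arbitrary illustration choices; the paper's 35-moment case performs analogous evaluations in higher dimension, which is what makes fine-grained parallelism attractive.

    ```python
    import numpy as np

    nodes, weights = np.polynomial.hermite_e.hermegauss(40)  # weight exp(-x^2/2)
    lam = np.array([0.0, 0.3, -1.0])                         # toy Lagrange multipliers

    def moments(lam, max_order=4):
        """Moments of rho(x) = exp(lam0 + lam1*x + lam2*x^2) by quadrature."""
        # the +x^2/2 term cancels the quadrature weight, giving plain integrals of rho
        rho = np.exp(lam[0] + lam[1] * nodes + lam[2] * nodes**2 + nodes**2 / 2)
        return [float(np.sum(weights * rho * nodes**k)) for k in range(max_order + 1)]

    print(moments(lam))  # in a closure, lam is iterated until these match target moments
    ```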

  4. Recursive regularization step for high-order lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10⁴ to 10⁶, and where a thorough analysis of the case at Re = 3×10⁴ is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach compared with the standard one.
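
    For orientation, a sketch of the standard (non-recursive) regularization step on a D2Q9 lattice: the nonequilibrium populations are replaced by their projection onto the second-order Hermite (stress) contribution. The recursive procedure of the paper extends this projection to higher orders.

    ```python
    import numpy as np

    # D2Q9 lattice: discrete velocities, weights, squared sound speed
    e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    cs2 = 1.0 / 3.0

    def regularize(f_neq: np.ndarray) -> np.ndarray:
        """Project f_neq onto its second-order Hermite (stress) contribution."""
        Pi = np.einsum("i,ia,ib->ab", f_neq, e, e)            # nonequilibrium stress
        H2 = np.einsum("ia,ib->iab", e, e) - cs2 * np.eye(2)  # 2nd-order Hermite tensor
        return w * np.einsum("iab,ab->i", H2, Pi) / (2 * cs2**2)

    f_neq = 1e-3 * np.random.randn(9)
    print(regularize(f_neq).sum())   # ~0: the projected part carries no mass
    ```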

  5. Total Top-Quark Pair-Production Cross Section at Hadron Colliders Through O(αS4)

    NASA Astrophysics Data System (ADS)

    Czakon, Michał; Fiedler, Paul; Mitov, Alexander

    2013-06-01

    We compute the next-to-next-to-leading order (NNLO) quantum chromodynamics (QCD) correction to the total cross section for the reaction gg→tt̄+X. Together with the partonic channels we computed previously, the result derived in this Letter completes the set of NNLO QCD corrections to the total top pair-production cross section at hadron colliders. Supplementing the fixed order results with soft-gluon resummation with next-to-next-to-leading logarithmic accuracy, we estimate that the theoretical uncertainty of this observable due to unknown higher order corrections is about 3% at the LHC and 2.2% at the Tevatron. We observe a good agreement between the standard model predictions and the available experimental measurements. The very high theoretical precision of this observable allows a new level of scrutiny in parton distribution functions and new physics searches.

  6. Total top-quark pair-production cross section at hadron colliders through O(αS(4)).

    PubMed

    Czakon, Michał; Fiedler, Paul; Mitov, Alexander

    2013-06-21

    We compute the next-to-next-to-leading order (NNLO) quantum chromodynamics (QCD) correction to the total cross section for the reaction gg → tt̄ + X. Together with the partonic channels we computed previously, the result derived in this Letter completes the set of NNLO QCD corrections to the total top pair-production cross section at hadron colliders. Supplementing the fixed order results with soft-gluon resummation with next-to-next-to-leading logarithmic accuracy, we estimate that the theoretical uncertainty of this observable due to unknown higher order corrections is about 3% at the LHC and 2.2% at the Tevatron. We observe a good agreement between the standard model predictions and the available experimental measurements. The very high theoretical precision of this observable allows a new level of scrutiny in parton distribution functions and new physics searches.

  7. Derivation and application of a class of generalized impedance boundary conditions, part 2

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Senior, T. B. A.; Jin, J.-M.

    1989-01-01

    Boundary conditions involving higher order derivatives are presented for simulating surfaces whose reflection coefficients are known analytically, numerically, or experimentally. Procedures for determining the coefficients of the derivatives are discussed, along with the effect of displacing the surface on which the boundary conditions are applied. Provided the coefficients satisfy a duality relation, equivalent forms of the boundary conditions involving tangential field components are deduced, and these provide the natural extension to non-planar surfaces. As an illustration, the simulation of metal-backed uniform and three-layer dielectric coatings is given. It is shown that fourth order conditions are capable of providing an accurate simulation for a uniform coating at least a quarter of a wavelength in thickness. Provided, though, that some compromise in accuracy is acceptable, it is also shown that a third order condition may be sufficient for practical purposes when simulating uniform coatings.

  8. Efficient kinetic method for fluid simulation beyond the Navier-Stokes equation.

    PubMed

    Zhang, Raoyang; Shan, Xiaowen; Chen, Hudong

    2006-10-01

    We present a further theoretical extension to the kinetic-theory-based formulation of the lattice Boltzmann method of Shan [J. Fluid Mech. 550, 413 (2006)]. In addition to the higher-order projection of the equilibrium distribution function and a sufficiently accurate Gauss-Hermite quadrature in the original formulation, a regularization procedure is introduced in this paper. This procedure ensures a consistent order of accuracy control over the nonequilibrium contributions in the Galerkin sense. Using this formulation, we construct a specific lattice Boltzmann model that accurately incorporates up to third-order hydrodynamic moments. Numerical evidence demonstrates that the extended model overcomes some major defects present in conventionally known lattice Boltzmann models, so that fluid flows at finite Knudsen number Kn can be simulated more quantitatively. Results from force-driven Poiseuille flow simulations predict the Knudsen minimum and the asymptotic behavior of the flow flux at large Kn.

  9. A Demand-Driven Approach for a Multi-Agent System in Supply Chain Management

    NASA Astrophysics Data System (ADS)

    Kovalchuk, Yevgeniya; Fasli, Maria

    This paper presents the architecture of a multi-agent decision support system for Supply Chain Management (SCM) which has been designed to compete in the TAC SCM game. The behaviour of the system is demand-driven and the agents plan, predict, and react dynamically to changes in the market. The main strength of the system lies in the ability of the Demand agent to predict customer winning bid prices - the highest prices the agent can offer customers and still obtain their orders. This paper investigates the effect of the ability to predict customer order prices on the overall performance of the system. Four strategies are proposed and compared for predicting such prices. The experimental results reveal which strategies are better and show that there is a correlation between the accuracy of the models' predictions and the overall system performance: the more accurate the prediction of customer order prices, the higher the profit.

  10. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle using the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservation properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows a larger time step size in numerical integrations.
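
    An illustrative sketch, not the paper's scheme: the classical nonrelativistic Boris push, a widely used volume-preserving splitting for a charged particle in electromagnetic fields; the paper constructs higher-order relativistic analogues via splitting with processing.

    ```python
    import numpy as np

    def boris_step(x, v, h, qm, E, B):
        """Advance x and v by one step h; qm is the charge-to-mass ratio."""
        v_minus = v + 0.5 * h * qm * E(x)          # half electric kick
        t = 0.5 * h * qm * B(x)                    # magnetic rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_plus = v_minus + np.cross(v_minus + np.cross(v_minus, t), s)
        v_new = v_plus + 0.5 * h * qm * E(x)       # second half electric kick
        return x + h * v_new, v_new

    E = lambda x: np.zeros(3)
    B = lambda x: np.array([0.0, 0.0, 1.0])        # uniform field: circular orbit
    x, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    for _ in range(1000):
        x, v = boris_step(x, v, 0.05, 1.0, E, B)
    print(np.linalg.norm(v))                       # speed preserved (~1.0)
    ```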

  11. High order filtering methods for approximating hyberbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1990-01-01

    In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high order accuracy, except where a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method. High order accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a significant speed-up, generally a factor of almost three, over the full ENO method.

  12. Multi-scale Eulerian model within the new National Environmental Modeling System

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Janjic, Tijana; Vasic, Ratko

    2010-05-01

    The unified Non-hydrostatic Multi-scale Model on the Arakawa B grid (NMMB) is being developed at NCEP within the National Environmental Modeling System (NEMS). The finite-volume horizontal differencing employed in the model preserves important properties of differential operators and conserves a variety of basic and derived dynamical and quadratic quantities. Among these, conservation of energy and enstrophy improves the accuracy of nonlinear dynamics of the model. Within further model development, advection schemes of fourth order of formal accuracy have been developed. It is argued that higher order advection schemes should not be used in the thermodynamic equation in order to preserve consistency with the second order scheme used for computation of the pressure gradient force. Thus, the fourth order scheme is applied only to momentum advection. Three sophisticated second order schemes were considered for upgrade. Two of them, proposed in Janjic(1984), conserve energy and enstrophy, but with enstrophy calculated differently. One of them conserves enstrophy as computed by the most accurate second order Laplacian operating on stream function. The other scheme conserves enstrophy as computed from the B grid velocity. The third scheme (Arakawa 1972) is arithmetic mean of the former two. It does not conserve enstrophy strictly, but it conserves other quadratic quantities that control the nonlinear energy cascade. Linearization of all three schemes leads to the same second order linear advection scheme. The second order term of the truncation error of the linear advection scheme has a special form so that it can be eliminated by simply preconditioning the advected quantity. Tests with linear advection of a cone confirm the advantage of the fourth order scheme. However, if a localized, large amplitude and high wave-number pattern is present in initial conditions, the clear advantage of the fourth order scheme disappears. In real data runs, problems with noisy data may appear due to mountains. Thus, accuracy and formal accuracy may not be synonymous. The nonlinear fourth order schemes are quadratic conservative and reduce to the Arakawa Jacobian in case of non-divergent flow. In case of general flow the conservation properties of the new momentum advection schemes impose stricter constraint on the nonlinear cascade than the original second order schemes. However, for non-divergent flow, the conservation properties of the fourth order schemes cannot be proven in the same way as those of the original second order schemes. Therefore, nonlinear tests were carried out in order to check how well the fourth order schemes control the nonlinear energy cascade. In the tests nonlinear shallow water equations are solved in a rotating rectangular domain (Janjic, 1984). The domain is covered with only 17 x 17 grid points. A diagnostic quantity is used to monitor qualitative changes in the spectrum over 116 days of simulated time. All schemes maintained meaningful solutions throughout the test. Among the second order schemes, the best result was obtained with the scheme that conserved enstrophy as computed by the second order Laplacian of the stream function. It was closely followed by the Arakawa (1972) scheme, while the remaining scheme was distant third. The fourth order schemes ranked in the same order, and were competitive throughout the experiments with their second order counterparts in preventing accumulation of energy at small scales. 
    Finally, the impact of the fourth order momentum advection on global medium-range forecasts was examined. The 500 mb anomaly correlation coefficient is used as the measure of forecast skill. References: Arakawa, A., 1972: Design of the UCLA general circulation model. Tech. Report No. 7, Department of Meteorology, University of California, Los Angeles, 116 pp. Janjic, Z. I., 1984: Non-linear advection schemes and energy cascade on semi-staggered grids. Monthly Weather Review, 112, 1234-1245.
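
    A minimal sketch, in an assumed 1D periodic setting, contrasting second- and fourth-order centred advection of a cone profile over one revolution, in the spirit of the linear advection test described above (RK4 time stepping; all parameters are illustration choices).

    ```python
    import numpy as np

    n, c = 128, 1.0
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    q0 = np.maximum(0.0, 1.0 - 20.0 * np.abs(x - 0.3))   # cone initial condition

    def rhs2(q):   # second-order centred difference
        return -c * (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)

    def rhs4(q):   # fourth-order centred difference
        return -c * (8 * (np.roll(q, -1) - np.roll(q, 1))
                     - (np.roll(q, -2) - np.roll(q, 2))) / (12 * dx)

    def advect(rhs, q, dt, steps):                       # classical RK4 in time
        for _ in range(steps):
            k1 = rhs(q); k2 = rhs(q + dt/2*k1); k3 = rhs(q + dt/2*k2); k4 = rhs(q + dt*k3)
            q = q + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        return q

    for rhs in (rhs2, rhs4):                             # one full revolution each
        print(np.max(np.abs(advect(rhs, q0, 0.002, 500) - q0)))  # 4th order: smaller error
    ```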

  13. An extended UTD analysis for the scattering and diffraction from cubic polynomial strips

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    Spline and polynomial type surfaces are commonly used in high frequency modeling of complex structures such as aircraft, ships, reflectors, etc. It is therefore of interest to develop an efficient and accurate solution to describe the scattered fields from such surfaces. An extended Uniform Geometrical Theory of Diffraction (UTD) solution for the scattering and diffraction from perfectly conducting cubic polynomial strips is derived and involves the incomplete Airy integrals as canonical functions. This new solution is universal in nature and can be used to effectively describe the scattered fields from flat, strictly concave or convex, and concave convex boundaries containing edges. The classic UTD solution fails to describe the more complicated field behavior associated with higher order phase catastrophes and therefore a new set of uniform reflection and first-order edge diffraction coefficients is derived. Also, an additional diffraction coefficient associated with a zero-curvature (inflection) point is presented. Higher order effects such as double edge diffraction, creeping waves, and whispering gallery modes are not examined. The extended UTD solution is independent of the scatterer size and also provides useful physical insight into the various scattering and diffraction processes. Its accuracy is confirmed via comparison with some reference moment method results.

  14. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    NASA Astrophysics Data System (ADS)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of the coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy, whereas optical methods gather high-density data of the whole object in a short time but with an accuracy at least one order of magnitude lower than contact measurements. Thus the drawback of contact methods is the low density of the data, while for non-contact methods it is low accuracy. In this paper, a method for fusing data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method, the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both data sets. In each pair, the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation mapping the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade, and an engine cover. For the planar surface, the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces, the improvement was higher for raw data than for data after the creation of a triangle mesh.
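
    A minimal sketch of the registration step implied by the virtual markers: a least-squares rigid transform (Kabsch method) estimated from corresponding points and applied to the full cloud. The marker sets below are synthetic; the paper's marker construction and weighting are not reproduced.

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares R, t with R @ src_i + t ~ dst_i (Kabsch algorithm)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det = +1
        return R, cd - R @ cs

    th = 0.2
    R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                       [np.sin(th),  np.cos(th), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.1, -0.2, 0.05])
    src = np.random.rand(10, 3)                   # virtual markers, optical frame
    dst = src @ R_true.T + t_true                 # same markers, contact frame
    R, t = rigid_fit(src, dst)
    cloud = np.random.rand(1000, 3)               # full HDLA cloud
    cloud_corrected = cloud @ R.T + t             # correction applied to all points
    print(np.allclose(R, R_true), np.round(t, 3))
    ```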

  15. A technical challenge for robot-assisted minimally invasive surgery: precision surgery on soft tissue.

    PubMed

    Stallkamp, J; Schraft, R D

    2005-01-01

    In minimally invasive surgery, a higher degree of accuracy is required by surgeons both for current and for future applications. This could be achieved using either a manipulator or a robot which would undertake selected tasks during surgery. However, a manually-controlled manipulator cannot fully exploit the maximum accuracy and feasibility of three-dimensional motion sequences. Therefore, apart from being used to perform simple positioning tasks, manipulators will probably be replaced by robot systems more and more in the future. However, in order to use a robot, accurate, up-to-date and extensive data is required which cannot yet be acquired by typical sensors such as CT, MRI, US or common x-ray machines. This paper deals with a new sensor and a concept for its application in robot-assisted minimally invasive surgery on soft tissue which could be a solution for data acquisition in future. Copyright 2005 Robotic Publications Ltd.

  16. Use of noncrystallographic symmetry for automated model building at medium to low resolution.

    PubMed

    Wiegels, Tim; Lamzin, Victor S

    2012-04-01

    A novel method is presented for the automatic detection of noncrystallographic symmetry (NCS) in macromolecular crystal structure determination which does not require the derivation of molecular masks or the segmentation of density. It was found that throughout structure determination the NCS-related parts may be differently pronounced in the electron density. This often results in the modelling of molecular fragments of variable length and accuracy, especially during automated model-building procedures. These fragments were used to identify NCS relations in order to aid automated model building and refinement. In a number of test cases higher completeness and greater accuracy of the obtained structures were achieved, specifically at a crystallographic resolution of 2.3 Å or poorer. In the best case, the method allowed the building of up to 15% more residues automatically and a tripling of the average length of the built fragments.

  17. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880
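
    A hedged sketch of the landmark-extraction stage with OpenCV (SIFT requires opencv-python 4.4 or later): keypoint matching between a home and a current panoramic view, with Lowe's ratio test as a simple mismatch-elimination stand-in. The paper's own elimination rule, based on landmark distribution in the catadioptric image, is not reproduced, and the blurred-noise images are synthetic placeholders.

    ```python
    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    home = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (5, 5), 0)
    current = np.roll(home, 15, axis=1)    # simulated robot displacement

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(home, None)
    kp2, des2 = sift.detectAndCompute(current, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    landmarks = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
    print(len(landmarks), "landmark pairs")
    ```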

  18. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
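
    A small sketch of the MMS workflow with SymPy, on a 1D advection-diffusion model problem rather than the combustion system: choose a manufactured solution, push it through the PDE operator, and obtain the source term with which the code must be forced.

    ```python
    import sympy as sp

    x, t, u0, a, nu = sp.symbols("x t u0 a nu")
    u = u0 * sp.sin(x - a * t)                    # manufactured solution
    residual = sp.diff(u, t) + a * sp.diff(u, x) - nu * sp.diff(u, x, 2)
    source = sp.simplify(residual)                # forcing term to add to the solver
    print(source)                                 # -> nu*u0*sin(x - a*t)
    ```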

  19. New high-precision drift-tube detectors for the ATLAS muon spectrometer

    NASA Astrophysics Data System (ADS)

    Kroha, H.; Fakhrutdinov, R.; Kozhin, A.

    2017-06-01

    Small-diameter muon drift tube (sMDT) detectors have been developed for upgrades of the ATLAS muon spectrometer. With a tube diameter of 15 mm, they provide about an order of magnitude higher rate capability than the present ATLAS muon tracking detectors, the MDT chambers with 30 mm tube diameter. The drift-tube design and the construction methods have been optimised for mass production and allow for the complex shapes required for maximising the acceptance. A record sense-wire positioning accuracy of 5 μm has been achieved with the new design. In the serial production, the wire positioning accuracy is routinely better than 10 μm. 14 new sMDT chambers are already operational in ATLAS, and a further 16 are under construction for installation in the 2019-2020 LHC shutdown. For the upgrade of the barrel muon spectrometer for the High-Luminosity LHC, 96 sMDT chambers will be constructed between 2020 and 2024.

  20. Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Li, R.; Zhang, T.; Geng, R.; Wang, L.

    2018-04-01

    In order to classify high spatial resolution images more accurately, in this research, a hierarchical rule-based object-based classification framework was developed based on a high-resolution image with airborne Light Detection and Ranging (LiDAR) data. The eCognition software is employed to conduct the whole process. In detail, firstly, the FBSP optimizer (Fuzzy-based Segmentation Parameter) is used to obtain the optimal scale parameters for different land cover types. Secondly, using the segmented regions as basic units, the classification rules for various land cover types are established according to the spectral, morphological and texture features extracted from the optical images, and the height feature from LiDAR. Finally, the object classification results are evaluated using the confusion matrix, overall accuracy and Kappa coefficient. As a result, the method combining the aerial image with the airborne LiDAR data shows higher accuracy.
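
    The accuracy assessment described here reduces to a few lines of arithmetic on the confusion matrix. A small sketch follows; the three classes and their counts are made up for illustration.

    ```python
    import numpy as np

    def overall_accuracy_and_kappa(cm):
        """Overall accuracy and Cohen's kappa from a square confusion matrix
        (rows = reference classes, columns = mapped classes)."""
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                       # observed agreement
        pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
        return po, (po - pe) / (1.0 - pe)

    # Illustrative 3-class matrix (e.g. buildings, vegetation, roads).
    cm = [[50, 3, 2],
          [4, 60, 1],
          [2, 2, 40]]
    oa, kappa = overall_accuracy_and_kappa(cm)
    print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")
    ```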

  1. NASA astrophysics - Optical systems to explore the universe

    NASA Technical Reports Server (NTRS)

    Pellerin, C. J., Jr.; Stencel, R. E.

    1983-01-01

    Major and minor NASA astrophysical research efforts in the near term are outlined, together with projections of direction for future projects. The Space Telescope is being readied for a 1986 launch and will feature an f/24, 2.4 m aperture and an MgF2-coated mirror figured to better than 1/60-wavelength accuracy, and will be diffraction-limited in the UV. Pointing accuracy is designed to be 0.007 arcsec for 24 hr. Optical, spectrometric, and photometric equipment will be included. Around 1990, Shuttle-based missions will include an IR telescope and a subarcsec solar surface imaging device. A free-flying X-ray observatory (AXAF) is planned and will include a sensitivity that exceeds that of the HEAO-2 spacecraft by two orders of magnitude. Instruments are under development for higher resolution UV, gamma-ray, and IR studies. In-orbit interferometry is being studied and will depend on in-orbit assembly and servicing of stable structures with segmented optics.

  2. Chronnectome fingerprinting: Identifying individuals and predicting higher cognitive functions using dynamic brain connectivity patterns.

    PubMed

    Liu, Jin; Liao, Xuhong; Xia, Mingrui; He, Yong

    2018-02-01

    The human brain is a large, interacting dynamic network, and its architecture of coupling among brain regions varies across time (termed the "chronnectome"). However, very little is known about whether and how the dynamic properties of the chronnectome can characterize individual uniqueness, such as identifying individuals as a "fingerprint" of the brain. Here, we employed multiband resting-state functional magnetic resonance imaging data from the Human Connectome Project (N = 105) and a sliding time-window dynamic network analysis approach to systematically examine individual time-varying properties of the chronnectome. We revealed stable and remarkable individual variability in three dynamic characteristics of brain connectivity (i.e., strength, stability, and variability), which was mainly distributed in three higher order cognitive systems (i.e., default mode, dorsal attention, and fronto-parietal) and in two primary systems (i.e., visual and sensorimotor). Intriguingly, the spatial patterns of these dynamic characteristics of brain connectivity could successfully identify individuals with high accuracy and could further significantly predict individual higher cognitive performance (e.g., fluid intelligence and executive function), which was primarily contributed by the higher order cognitive systems. Together, our findings highlight that the chronnectome captures inherent functional dynamics of individual brain networks and provides implications for individualized characterization of health and disease. © 2017 Wiley Periodicals, Inc.
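
    A sliding time-window analysis of the kind described can be sketched in a few lines: segment the regional time series into overlapping windows, compute a connectivity matrix per window, and summarize each connection's dynamics. This is a generic illustration on synthetic data, not the HCP pipeline; the window length and step are arbitrary, and the paper's specific stability measure is not reproduced.

    ```python
    import numpy as np

    def sliding_window_connectivity(ts, win, step):
        """Time-varying connectivity from regional time series.
        ts: (T, R) array; returns one (R, R) correlation matrix per window."""
        T, R = ts.shape
        mats = []
        for start in range(0, T - win + 1, step):
            mats.append(np.corrcoef(ts[start:start + win].T))
        return np.array(mats)              # shape (n_windows, R, R)

    rng = np.random.default_rng(0)
    ts = rng.standard_normal((1200, 90))   # synthetic: 1200 volumes, 90 regions
    dyn = sliding_window_connectivity(ts, win=100, step=10)

    # Per-connection dynamic summaries in the spirit of the abstract:
    strength = dyn.mean(axis=0)            # mean connectivity across windows
    variability = dyn.std(axis=0)          # fluctuation across windows
    ```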

  3. Effects of shade tab arrangement on the repeatability and accuracy of shade selection.

    PubMed

    Yılmaz, Burak; Yuzugullu, Bulem; Cınar, Duygu; Berksun, Semih

    2011-06-01

    Appropriate and repeatable shade matching using visual shade selection remains a challenge for the restorative dentist. The purpose of this study was to evaluate the effect of different arrangements of a shade guide on the repeatability and accuracy of visual shade selection by restorative dentists. Three Vitapan Classical shade guides were used for shade selection. Seven shade tabs from one shade guide were used as target shades for the testing (A1, A4, B2, B3, C2, C4, and D3); the other 2 guides were used for shade selection by the subjects. One shade guide was arranged according to hue and chroma and the second was arranged according to value. Thirteen male and 22 female restorative dentists were asked to match the target shades using shade guide tabs arranged in the 2 different orders. The sessions were performed twice with each guide in a viewing booth. Collected data were analyzed with Fisher's exact test to compare the accuracy and repeatability of the shade selection (α=.05). There were no significant differences observed in the accuracy or repeatability of the shade selection results obtained with the 2 different arrangements. When the hue/chroma-ordered shade guide was used, 58% of the shade selections were accurate. This ratio was 57.6% when the value-ordered shade guide was used. The observers repeated 55.5% of the selections accurately with the hue/chroma-ordered shade guide and 54.3% with the value-ordered shade guide. The accuracy and repeatability of shade selections by restorative dentists were similar when different arrangements (hue/chroma-ordered and value-ordered) of the Vitapan Classical shade guide were used. Copyright © 2011 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  4. A family of high-order gas-kinetic schemes and its comparison with Riemann solver based high-order methods

    NASA Astrophysics Data System (ADS)

    Ji, Xing; Zhao, Fengxiang; Shyy, Wei; Xu, Kun

    2018-03-01

    Most high order computational fluid dynamics (CFD) methods for compressible flows are based on Riemann solver for the flux evaluation and Runge-Kutta (RK) time stepping technique for temporal accuracy. The advantage of this kind of space-time separation approach is the easy implementation and stability enhancement by introducing more middle stages. However, the nth-order time accuracy needs no less than n stages for the RK method, which can be very time and memory consuming due to the reconstruction at each stage for a high order method. On the other hand, the multi-stage multi-derivative (MSMD) method can be used to achieve the same order of time accuracy using less middle stages with the use of the time derivatives of the flux function. For traditional Riemann solver based CFD methods, the lack of time derivatives in the flux function prevents its direct implementation of the MSMD method. However, the gas kinetic scheme (GKS) provides such a time accurate evolution model. By combining the second-order or third-order GKS flux functions with the MSMD technique, a family of high order gas kinetic methods can be constructed. As an extension of the previous 2-stage 4th-order GKS, the 5th-order schemes with 2 and 3 stages will be developed in this paper. Based on the same 5th-order WENO reconstruction, the performance of gas kinetic schemes from the 2nd- to the 5th-order time accurate methods will be evaluated. The results show that the 5th-order scheme can achieve the theoretical order of accuracy for the Euler equations, and present accurate Navier-Stokes solutions as well due to the coupling of inviscid and viscous terms in the GKS formulation. In comparison with Riemann solver based 5th-order RK method, the high order GKS has advantages in terms of efficiency, accuracy, and robustness, for all test cases. The 4th- and 5th-order GKS have the same robustness as the 2nd-order scheme for the capturing of discontinuous solutions. The current high order MSMD GKS is a multi-dimensional scheme with incorporation of both normal and tangential spatial derivatives of flow variables at a cell interface in the flux evaluation. The scheme can be extended straightforwardly to viscous flow computation in unstructured mesh. It provides a promising direction for the development of high-order CFD methods for the computation of complex flows, such as turbulence and acoustics with shock interactions.

  5. An index of refraction algorithm for seawater over temperature, pressure, salinity, density, and wavelength

    NASA Astrophysics Data System (ADS)

    Millard, R. C.; Seaver, G.

    1990-12-01

    A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m⁻³ which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m⁻³ accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
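
    The salinometer inversion described here, solving n(S; T, p, λ) = n_obs for S by Newton-Raphson, can be sketched generically as below. The 27-term algorithm itself is not reproduced; `n_toy` is a hypothetical linear stand-in used only to exercise the iteration.

    ```python
    def invert_for_salinity(n_obs, n_of_S, S0=35.0, tol=1e-10, max_iter=50):
        """Newton-Raphson inversion of an index-of-refraction relation:
        find S such that n_of_S(S) = n_obs at fixed T, p and wavelength.
        n_of_S is any smooth callable; the derivative is finite-differenced."""
        S = S0
        for _ in range(max_iter):
            f = n_of_S(S) - n_obs
            dS = 1e-4
            fprime = (n_of_S(S + dS) - n_of_S(S - dS)) / (2 * dS)
            S_new = S - f / fprime
            if abs(S_new - S) < tol:
                return S_new
            S = S_new
        return S

    # Toy stand-in for the 27-term algorithm: a linear n(S) near seawater values.
    n_toy = lambda S: 1.3330 + 1.85e-4 * S
    print(invert_for_salinity(n_toy(34.7), n_toy))   # recovers ~34.7
    ```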

  6. Detection of proximal caries using digital radiographic systems with different resolutions.

    PubMed

    Nikneshan, Sima; Abbas, Fatemeh Mashhadi; Sabbagh, Sedigheh

    2015-01-01

    Dental radiography is an important tool for detection of caries, and digital radiography is the latest advancement in this regard. Spatial resolution is a characteristic of digital receptors used for describing the quality of images. This diagnostic accuracy study aimed to compare two digital radiographic systems with three different resolutions for detection of noncavitated proximal caries. Seventy premolar teeth were mounted in 14 gypsum blocks. Digora Optime and RVG Access were used for obtaining digital radiographs. Six observers evaluated the proximal surfaces in radiographs for each resolution in order to determine the depth of caries based on a 4-point scale. The teeth were then histologically sectioned, and the results of histologic analysis were considered as the gold standard. Data were entered using SPSS version 18 software and the Kruskal-Wallis test was used for data analysis. P < 0.05 was considered as statistically significant. No significant difference was found between different resolutions for detection of proximal caries (P > 0.05). The RVG Access system had the highest specificity (87.7%) and Digora Optime at high resolution had the lowest specificity (84.2%). Furthermore, Digora Optime had higher sensitivity for detection of caries exceeding the outer half of enamel. Judgment of oral radiologists for detection of the depth of caries had higher reliability than that of restorative dentistry specialists. The three resolutions of Digora Optime and RVG Access had similar accuracy in detection of noncavitated proximal caries.

  7. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
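
    The comparison between tessellation plots and linearly interpolated estimates can be reproduced in miniature with SciPy: nearest-neighbour interpolation mimics a Voronoi tessellation of the sample sites, while linear interpolation gives the interpolated estimate. A sketch on a synthetic low-complexity map follows; the map function and sample count are arbitrary choices, not the paper's artificial maps.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(1)
    # Synthetic "penetration sites" over a unit patch of cortex, sampling a
    # smooth hypothetical frequency map f(x, y).
    pts = rng.uniform(0, 1, size=(100, 2))
    true_map = lambda x, y: 2.0 + 6.0 * x + np.sin(2 * np.pi * y)
    vals = true_map(pts[:, 0], pts[:, 1])

    gx, gy = np.mgrid[0:1:200j, 0:1:200j]
    linear = griddata(pts, vals, (gx, gy), method="linear")    # interpolated estimate
    nearest = griddata(pts, vals, (gx, gy), method="nearest")  # tessellation-like

    err_lin = np.nanmean(np.abs(linear - true_map(gx, gy)))
    err_nn = np.nanmean(np.abs(nearest - true_map(gx, gy)))
    print(err_lin, err_nn)   # linear interpolation typically wins
    ```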

  8. Toward chemical accuracy in the description of ion-water interactions through many-body representations. Alkali-water dimer potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Riera, Marc; Mardirossian, Narbe; Bajaj, Pushp; Götz, Andreas W.; Paesani, Francesco

    2017-10-01

    This study presents the extension of the MB-nrg (Many-Body energy) theoretical/computational framework of transferable potential energy functions (PEFs) for molecular simulations of alkali metal ion-water systems. The MB-nrg PEFs are built upon the many-body expansion of the total energy and include the explicit treatment of one-body, two-body, and three-body interactions, with all higher-order contributions described by classical induction. This study focuses on the MB-nrg two-body terms describing the full-dimensional potential energy surfaces of the M+(H2O) dimers, where M+ = Li+, Na+, K+, Rb+, and Cs+. The MB-nrg PEFs are derived entirely from "first principles" calculations carried out at the explicitly correlated coupled-cluster level including single, double, and perturbative triple excitations [CCSD(T)-F12b] for Li+ and Na+ and at the CCSD(T) level for K+, Rb+, and Cs+. The accuracy of the MB-nrg PEFs is systematically assessed through an extensive analysis of interaction energies, structures, and harmonic frequencies for all five M+(H2O) dimers. In all cases, the MB-nrg PEFs are shown to be superior to both polarizable force fields and ab initio models based on density functional theory. As previously demonstrated for halide-water dimers, the MB-nrg PEFs achieve higher accuracy by correctly describing short-range quantum-mechanical effects associated with electron density overlap as well as long-range electrostatic many-body interactions.
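
    For reference, the many-body expansion underlying the MB-nrg PEFs has the standard form (a schematic of the general expansion, not the specific MB-nrg parameterization):

    $$E_N = \sum_{i} \varepsilon^{(1)}_i + \sum_{i<j} \varepsilon^{(2)}_{ij} + \sum_{i<j<k} \varepsilon^{(3)}_{ijk} + \cdots$$

    where each n-body term is defined recursively from the energies of the subclusters, e.g. $\varepsilon^{(2)}_{ij} = E_{ij} - \varepsilon^{(1)}_i - \varepsilon^{(1)}_j$. MB-nrg fits the one-, two- and three-body terms explicitly and folds all higher orders into classical induction.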

  9. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  10. Point-of-care ultrasound versus auscultation in determining the position of double-lumen tube

    PubMed Central

    Hu, Wei-Cai; Xu, Lei; Zhang, Quan; Wei, Li; Zhang, Wei

    2018-01-01

    This study was designed to assess the accuracy of point-of-care ultrasound in determining the position of double-lumen tubes (DLTs). A total of 103 patients who required DLT intubation were enrolled into the study. After tracheal intubation with a DLT in the supine position, an auscultation researcher and an ultrasound researcher were sequentially invited into the operating room to conduct their evaluation of the DLT. After the end of their evaluation, fiberscope researchers (FRs) were invited into the operating room to evaluate the position of the DLT using a fiberscope. After the patients were changed to the lateral position, the same evaluation process was repeated. These 3 researchers were blind to each other when they made their conclusions. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were obtained by statistical analysis. When left DLTs (LDLTs) were used, the accuracy of ultrasound (84.2% [72.1%, 92.5%]) was higher than the accuracy of auscultation (59.7% [45.8%, 72.4%]) (P < .01). When right DLTs (RDLTs) were used, the accuracy of ultrasound (89.1% [76.4%, 96.4%]) was higher than the accuracy of auscultation (67.4% [52.0%, 80.5%]) (P < .01). When LDLTs were used in the lateral position, the accuracy of ultrasound (75.4% [62.2%, 85.9%]) was higher than the accuracy of auscultation (54.4% [40.7%, 67.6%]) (P < .05). When RDLTs were used, the accuracy of ultrasound (73.9% [58.9%, 85.7%]) was higher than the accuracy of auscultation (47.8% [32.9%, 63.1%]) (P < .05). Assessment via point-of-care ultrasound is superior to auscultation in determining the position of DLTs. PMID:29595696

  11. Point-of-care ultrasound versus auscultation in determining the position of double-lumen tube.

    PubMed

    Hu, Wei-Cai; Xu, Lei; Zhang, Quan; Wei, Li; Zhang, Wei

    2018-03-01

    This study was designed to assess the accuracy of point-of-care ultrasound in determining the position of double-lumen tubes (DLTs). A total of 103 patients who required DLT intubation were enrolled into the study. After tracheal intubation with a DLT in the supine position, an auscultation researcher and an ultrasound researcher were sequentially invited into the operating room to conduct their evaluation of the DLT. After the end of their evaluation, fiberscope researchers (FRs) were invited into the operating room to evaluate the position of the DLT using a fiberscope. After the patients were changed to the lateral position, the same evaluation process was repeated. These 3 researchers were blind to each other when they made their conclusions. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were obtained by statistical analysis. When left DLTs (LDLTs) were used, the accuracy of ultrasound (84.2% [72.1%, 92.5%]) was higher than the accuracy of auscultation (59.7% [45.8%, 72.4%]) (P < .01). When right DLTs (RDLTs) were used, the accuracy of ultrasound (89.1% [76.4%, 96.4%]) was higher than the accuracy of auscultation (67.4% [52.0%, 80.5%]) (P < .01). When LDLTs were used in the lateral position, the accuracy of ultrasound (75.4% [62.2%, 85.9%]) was higher than the accuracy of auscultation (54.4% [40.7%, 67.6%]) (P < .05). When RDLTs were used, the accuracy of ultrasound (73.9% [58.9%, 85.7%]) was higher than the accuracy of auscultation (47.8% [32.9%, 63.1%]) (P < .05). Assessment via point-of-care ultrasound is superior to auscultation in determining the position of DLTs.

  12. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2010-01-01

    Discretization of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Tests from the first class indicate the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that accuracies of the node centered and the best cell-centered schemes are comparable at equivalent number of degrees of freedom.
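
    To make the least-squares gradient reconstruction concrete, the sketch below solves the standard unweighted fit of a cell gradient from neighbour differences. It is a generic 2-D illustration, not the paper's exact stencils or weighting.

    ```python
    import numpy as np

    def lsq_gradient(xc, uc, xn, un):
        """Unweighted least-squares gradient at a cell from neighbour data.
        xc: (2,) cell centroid; uc: value there; xn: (k, 2) neighbour centroids;
        un: (k,) neighbour values. Solves min ||A g - b|| for g = (u_x, u_y)."""
        A = xn - xc                 # displacement vectors to neighbours
        b = un - uc                 # corresponding value differences
        g, *_ = np.linalg.lstsq(A, b, rcond=None)
        return g

    # The fit is exact for linear fields: u = 2 + 3x - y  ->  gradient (3, -1).
    xc = np.array([0.0, 0.0])
    xn = np.array([[1.0, 0.1], [-0.2, 1.0], [0.3, -0.9], [-1.0, -0.2]])
    u = lambda p: 2 + 3 * p[..., 0] - p[..., 1]
    print(lsq_gradient(xc, u(xc), xn, u(xn)))   # ~[ 3. -1.]
    ```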

  13. Practical Aerodynamic Design Optimization Based on the Navier-Stokes Equations and a Discrete Adjoint Method

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard

    1999-01-01

    The technical details are summarized below: Compressible and incompressible versions of a three-dimensional unstructured mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve this consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. It has been found that none of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations, this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes. An efficient surface parameterization based on a free-form deformation technique has been utilized and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing.

  14. SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.

    PubMed

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-07-15

    In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.

  15. Current Status of Astrometry Satellite missions in Japan: JASMINE project series

    NASA Astrophysics Data System (ADS)

    Yano, T.; Gouda, N.; Kobayashi, Y.; Tsujimoto, T.; Hatsutori, Y.; Murooka, J.; Niwa, Y.; Yamada, Y.

    Astrometry satellites share common technological issues. (A) Astrometry satellites are required to measure the positions of stars with high accuracy from the huge amount of data gathered during the observational period. (B) High stabilization of the thermal environment in the telescope is required. (C) Attitude-pointing stability with sub-pixel accuracy is also required. Measurement of the positions of stars from a huge amount of data is the essence of astrometry; systematic errors must be adequately excluded for each stellar image in order to obtain accurate positions. We have carried out a centroiding experiment for determining the positions of stars from about 10 000 image data. The following two points are important for the JASMINE mission system to achieve our aim. For the small-JASMINE, we require thermal stabilization of the telescope in order to obtain a high astrometric accuracy of about 10 micro-arcsec. In order to measure the positions of stars with high accuracy, we must model the distortion of the image on the focal plane to an accuracy of better than 0.1 nm. We have verified numerically that this requirement is achieved if the thermal variation is within about 1 K / 0.75 h. We also require an attitude-pointing stability of about 200 mas / 7 s. The utilization of a tip-tilt mirror will make it possible to achieve such stable pointing.

  16. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  17. ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.

    2018-07-01

    We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved space-times. In this paper, we assume the background space-time to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time accurate local time-stepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed space-times. Finally, we present performance and accuracy comparisons between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.

  18. ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.

    2018-03-01

    We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved spacetimes. In this paper we assume the background spacetime to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully-discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time accurate local timestepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a-posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed spacetimes. Finally, we present performance and accuracy comparisons between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.

  19. Baseline heartbeat perception accuracy and short-term outcome of brief cognitive-behaviour therapy for panic disorder with agoraphobia.

    PubMed

    Masdrakis, Vasilios G; Legaki, Emilia-Maria; Vaidakis, Nikolaos; Ploumpidis, Dimitrios; Soldatos, Constantin R; Papageorgiou, Charalambos; Papadimitriou, George N; Oulis, Panagiotis

    2015-07-01

    Increased heartbeat perception accuracy (HBP-accuracy) may contribute to the pathogenesis of Panic Disorder (PD) without or with Agoraphobia (PDA). Extant research suggests that HBP-accuracy is a rather stable individual characteristic, moreover predictive of worse long-term outcome in PD/PDA patients. However, it remains still unexplored whether HBP-accuracy adversely affects patients' short-term outcome after structured cognitive behaviour therapy (CBT) for PD/PDA. To explore the potential association between HBP-accuracy and the short-term outcome of a structured brief-CBT for the acute treatment of PDA. We assessed baseline HBP-accuracy using the "mental tracking" paradigm in 25 consecutive medication-free, CBT-naive PDA patients. Patients then underwent a structured, protocol-based, 8-session CBT by the same therapist. Outcome measures included the number of panic attacks during the past week, the Agoraphobic Cognitions Questionnaire (ACQ), and the Mobility Inventory-Alone subscale (MI-alone). No association emerged between baseline HBP-accuracy and posttreatment changes concerning number of panic attacks. Moreover, higher baseline HBP-accuracy was associated with significantly larger reductions in the scores of the ACQ and the MI-alone scales. Our results suggest that in PDA patients undergoing structured brief-CBT for the acute treatment of their symptoms, higher baseline HBP-accuracy is not associated with worse short-term outcome concerning panic attacks. Furthermore, higher baseline HBP-accuracy may be associated with enhanced therapeutic gains in agoraphobic cognitions and behaviours.

  20. An efficient and accurate two-stage fourth-order gas-kinetic scheme for the Euler and Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan

    2016-12-01

    For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from a discontinuous piecewise linear flow distribution around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time-stepping method has recently been proposed in the design of a fourth-order time accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function, while a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations due to the change of governing equations from hyperbolic to parabolic type and the initial interface discontinuity; this problem is particularly acute for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term. The time-dependent GKS flux function provides a dynamic process of evolution from kinetic-scale particle free transport to hydrodynamic-scale wave propagation, which provides the physics for constructing the non-equilibrium numerical shock structure through to the near-equilibrium NS solution. As a result, with the implementation of the fifth-order WENO initial reconstruction, in the smooth region the current two-stage GKS provides an accuracy of O((Δx)^5, (Δt)^4) for the Euler equations, and O((Δx)^5, τ^2 Δt) for the NS equations, where τ is the time between particle collisions. Many numerical tests, including difficult ones for Navier-Stokes solvers, have been used to validate the current method. Accurate numerical solutions are obtained from the high Reynolds number boundary layer to the hypersonic viscous heat-conducting flow. Following the two-stage time-stepping framework, the third-order GKS flux function can be used as well to construct a fifth-order method with the use of both first-order and second-order time derivatives of the flux function. The use of a time-accurate flux function may have great advantages for the development of higher-order CFD methods.
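
    For orientation, the two-stage fourth-order time stepping referred to here has the generic L-W-type form found in the GRP/GKS literature, written schematically for $\partial_t W = \mathcal{L}(W)$ (a sketch of the standard construction, not a transcription of [21]):

    $$W^{n+1/2} = W^n + \frac{\Delta t}{2}\,\mathcal{L}(W^n) + \frac{(\Delta t)^2}{8}\,\partial_t \mathcal{L}(W^n),$$
    $$W^{n+1} = W^n + \Delta t\,\mathcal{L}(W^n) + \frac{(\Delta t)^2}{6}\left[\partial_t \mathcal{L}(W^n) + 2\,\partial_t \mathcal{L}(W^{n+1/2})\right].$$

    Each stage requires one reconstruction and one evaluation of the flux together with its time derivative, hence the "two reconstructions" of the two-stage method versus four for a classical four-stage RK4.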

  1. Next-to-next-to-leading order gravitational spin-squared potential via the effective field theory for spinning objects in the post-Newtonian scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levi, Michele; Steinhoff, Jan, E-mail: michele.levi@upmc.fr, E-mail: jan.steinhoff@aei.mpg.de

    2016-01-01

    The next-to-next-to-leading order spin-squared interaction potential for generic compact binaries is derived for the first time via the effective field theory for gravitating spinning objects in the post-Newtonian scheme. The spin-squared sector is an intricate one, as it requires the consideration of the point particle action beyond minimal coupling, and mainly involves the spin-squared worldline couplings, which are quite complex compared to the worldline couplings from the minimal coupling part of the action. This sector also involves the linear in spin couplings, as we go up in the nonlinearity of the interaction, and in the loop order. Hence, there is an excessive increase in the number of Feynman diagrams, of which more are higher loop ones. We provide all the Feynman diagrams and their values. The beneficial "nonrelativistic gravitational" fields are employed in the computation. This spin-squared correction, which enters at the fourth post-Newtonian order for rapidly rotating compact objects, completes the conservative sector up to the fourth post-Newtonian accuracy. The robustness of the effective field theory for gravitating spinning objects is shown here once again, as demonstrated in a recent series of papers by the authors, which obtained all spin dependent sectors required up to the fourth post-Newtonian accuracy. The effective field theory of spinning objects allows one to obtain directly the equations of motion and the Hamiltonians, and these will be derived for the potential obtained here in a forthcoming paper.

  2. Coupling between shear and bending in the analysis of beam problems: Planar case

    NASA Astrophysics Data System (ADS)

    Shabana, Ahmed A.; Patel, Mohil

    2018-04-01

    The interpretation of invariants, such as curvatures which uniquely define the bending and twist of space curves and surfaces, is fundamental in the formulation of the beam and plate elastic forces. Accurate representations of curve and surface invariants, which enter into the definition of the strain energy equations, is particularly important in the case of large displacement analysis. This paper discusses this important subject in view of the fact that shear and bending are independent modes of deformation and do not have kinematic coupling; this is despite the fact that kinetic coupling may exist. The paper shows, using simple examples, that shear without bending and bending without shear at an arbitrary point and along a certain direction are scenarios that higher-order finite elements (FE) can represent with a degree of accuracy that depends on the order of interpolation and/or mesh size. The FE representation of these two kinematically uncoupled modes of deformation is evaluated in order to examine the effect of the order of the polynomial interpolation on the accuracy of representing these two independent modes. It is also shown in this paper that not all the curvature vectors contribute to bending deformation. In view of the conclusions drawn from the analysis of simple beam problems, the material curvature used in several previous investigations is evaluated both analytically and numerically. The problems associated with the material curvature matrix, obtained using the rotation of the beam cross-section, and the fundamental differences between this material curvature matrix and the Serret-Frenet curvature matrix are discussed.

  3. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  4. Identifying and reducing error in cluster-expansion approximations of protein energies.

    PubMed

    Hahn, Seungsoo; Ashenberg, Orr; Grigoryan, Gevorg; Keating, Amy E

    2010-12-01

    Protein design involves searching a vast space for sequences that are compatible with a defined structure. This can pose significant computational challenges. Cluster expansion is a technique that can accelerate the evaluation of protein energies by generating a simple functional relationship between sequence and energy. The method consists of several steps. First, for a given protein structure, a training set of sequences with known energies is generated. Next, this training set is used to expand energy as a function of clusters consisting of single residues, residue pairs, and higher order terms, if required. The accuracy of the sequence-based expansion is monitored and improved using cross-validation testing and iterative inclusion of additional clusters. As a trade-off for evaluation speed, the cluster-expansion approximation causes prediction errors, which can be reduced by including more training sequences, including higher order terms in the expansion, and/or reducing the sequence space described by the cluster expansion. This article analyzes the sources of error and introduces a method whereby accuracy can be improved by judiciously reducing the described sequence space. The method is applied to describe the sequence-stability relationship for several protein structures: coiled-coil dimers and trimers, a PDZ domain, and T4 lysozyme as examples with computationally derived energies, and SH3 domains in amphiphysin-1 and endophilin-1 as examples where the expanded pseudo-energies are obtained from experiments. Our open-source software package Cluster Expansion Version 1.0 allows users to expand their own energy function of interest and thereby apply cluster expansion to custom problems in protein design. © 2010 Wiley Periodicals, Inc.
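
    The sequence-to-energy expansion can be illustrated as a linear regression on cluster indicator features. A toy sketch with single-residue clusters only is given below; the sequences and energies are fabricated for illustration, and real applications add pair clusters and monitor accuracy by cross-validation as described above.

    ```python
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def one_body_features(seqs):
        """Indicator features for single-residue clusters: one column per
        (position, amino acid). Pair clusters would add (pos_i, pos_j, aa_i, aa_j)
        columns in exactly the same way."""
        L = len(seqs[0])
        X = np.zeros((len(seqs), L * len(AA)))
        for r, s in enumerate(seqs):
            for p, aa in enumerate(s):
                X[r, p * len(AA) + AA.index(aa)] = 1.0
        return X

    # Hypothetical training set: sequences with known (e.g. computed) energies.
    seqs = ["ACDK", "ACDE", "KCDE", "KCDK", "ACEK", "KCEE"]
    E = np.array([-1.2, -0.7, -0.3, 0.1, -0.9, 0.4])

    X = one_body_features(seqs)
    coef, *_ = np.linalg.lstsq(X, E, rcond=None)    # cluster-expansion coefficients
    print(np.abs(X @ coef - E).max())                # training error of the expansion
    ```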

  5. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
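
    A minimal sketch of the recommended trapezoidal variant for a one-parameter model u' = -θu: smooth the noisy trajectory with a spline, plug the smoothed states into the trapezoidal discretization, and solve the resulting linear regression for θ. Toy data only; the article's penalized-spline tuning and asymptotic theory are not reproduced.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 101)
    u_true = np.exp(-0.5 * t)                       # true theta = 0.5
    y = u_true + 0.01 * rng.standard_normal(t.size)

    # Step 1: spline-smoothed state estimates.
    u_hat = UnivariateSpline(t, y, s=len(t) * 0.01**2)(t)

    # Step 2: trapezoidal rule, u_{i+1} - u_i = -(h/2) * theta * (u_i + u_{i+1}),
    # is linear in theta, so least squares has a closed form.
    h = t[1] - t[0]
    resp = u_hat[1:] - u_hat[:-1]
    pred = -(h / 2) * (u_hat[1:] + u_hat[:-1])
    theta = (pred @ resp) / (pred @ pred)
    print(theta)                                     # close to 0.5
    ```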

  6. Prostate tissue characterization/classification in 144 patient population using wavelet and higher order spectra features from transrectal ultrasound images.

    PubMed

    Pareek, Gyan; Acharya, U Rajendra; Sree, S Vinitha; Swapna, G; Yantri, Ratna; Martis, Roshan Joy; Saba, Luca; Krishnamurthi, Ganapathy; Mallarini, Giorgio; El-Baz, Ayman; Al Ekish, Shadi; Beland, Michael; Suri, Jasjit S

    2013-12-01

    In this work, we have proposed an on-line computer-aided diagnostic system called "UroImage" that classifies a Transrectal Ultrasound (TRUS) image into cancerous or non-cancerous with the help of non-linear Higher Order Spectra (HOS) features and Discrete Wavelet Transform (DWT) coefficients. The UroImage system consists of an on-line system where five significant features (one DWT-based feature and four HOS-based features) are extracted from the test image. These on-line features are transformed by the classifier parameters obtained using the training dataset to determine the class. We trained and tested six classifiers. The dataset used for evaluation had 144 TRUS images which were split into training and testing sets. Three-fold and ten-fold cross-validation protocols were adopted for training and estimating the accuracy of the classifiers. The ground truth used for training was obtained using the biopsy results. Among the six classifiers, using 10-fold cross-validation technique, Support Vector Machine and Fuzzy Sugeno classifiers presented the best classification accuracy of 97.9% with equally high values for sensitivity, specificity and positive predictive value. Our proposed automated system, which achieved more than 95% values for all the performance measures, can be an adjunct tool to provide an initial diagnosis for the identification of patients with prostate cancer. The technique, however, is limited by the limitations of 2D ultrasound guided biopsy, and we intend to improve our technique by using 3D TRUS images in the future.

  7. Comparison of application of various crossovers in solving inhomogeneous minimax problem modified by Goldberg model

    NASA Astrophysics Data System (ADS)

    Kobak, B. V.; Zhukovskiy, A. G.; Kuzin, A. P.

    2018-05-01

    This paper considers one of the classical NP-complete problems, an inhomogeneous minimax problem. When solving such a large-scale problem, obtaining an exact solution is difficult, so we instead seek a near-optimal solution in acceptable time. Among the wide range of genetic algorithm models, we choose the modified Goldberg model, which the authors have previously used successfully in solving NP-complete problems. The classical Goldberg model uses a single-point crossover and a single-point mutation, which somewhat decreases the accuracy of the obtained results. In this article, we propose using a full two-point crossover together with the various mutations researched previously. In addition, we study the crossover probability required to obtain more accurate results. Results of the computational experiment showed that the higher the probability of crossover, the higher the quality of both the average results and the best solutions. It was also found that the larger the number of individuals and the number of repetitions, the closer both the average results and the best solutions are to the optimum. The paper shows how the use of a full two-point crossover increases the accuracy of solving an inhomogeneous minimax problem; the time to obtain the solution increases but remains polynomial.
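
    A full two-point crossover of the kind proposed takes only a few lines. The sketch below operates on integer chromosomes that assign each job to a machine, a plausible encoding for the inhomogeneous minimax problem assumed here purely for illustration.

    ```python
    import random

    def two_point_crossover(p1, p2, pc=0.9):
        """Full two-point crossover: with probability pc, swap the segment
        between two random cut points; otherwise return copies of the parents."""
        if random.random() > pc or len(p1) < 3:
            return p1[:], p2[:]
        i, j = sorted(random.sample(range(1, len(p1)), 2))
        c1 = p1[:i] + p2[i:j] + p1[j:]
        c2 = p2[:i] + p1[i:j] + p2[j:]
        return c1, c2

    # Chromosomes: machine index per job (minimize the largest completion time).
    random.seed(3)
    print(two_point_crossover([0, 1, 2, 0, 1, 2], [2, 2, 1, 1, 0, 0]))
    ```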

  8. Efficient high-order structure-preserving methods for the generalized Rosenau-type equation with power law nonlinearity

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxiang; Liang, Hua; Zhang, Chun

    2018-06-01

    Based on the multi-symplectic Hamiltonian formula of the generalized Rosenau-type equation, a multi-symplectic scheme and an energy-preserving scheme are proposed. To improve the accuracy of the solution, we apply the composition technique to the obtained schemes to develop high-order schemes which are also multi-symplectic and energy-preserving respectively. Discrete fast Fourier transform makes a significant improvement to the computational efficiency of schemes. Numerical results verify that all the proposed schemes have satisfactory performance in providing accurate solution and preserving the discrete mass and energy invariants. Numerical results also show that although each basic time step is divided into several composition steps, the computational efficiency of the composition schemes is much higher than that of the non-composite schemes.
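
    The composition technique mentioned here is, in its standard fourth-order form, the symmetric triple jump: if $\Phi_{\Delta t}$ is a self-adjoint second-order one-step map (such as the multi-symplectic or energy-preserving base schemes above), then

    $$\Psi_{\Delta t} = \Phi_{\gamma_1 \Delta t} \circ \Phi_{\gamma_2 \Delta t} \circ \Phi_{\gamma_1 \Delta t}, \qquad \gamma_1 = \frac{1}{2 - 2^{1/3}}, \quad \gamma_2 = 1 - 2\gamma_1,$$

    is fourth-order accurate and inherits the structure-preserving properties of $\Phi$. This is the standard construction; the paper's exact composition coefficients are not stated in the abstract.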

  9. Effect of varying displays and room illuminance on caries diagnostic accuracy in digital dental radiographs.

    PubMed

    Pakkala, T; Kuusela, L; Ekholm, M; Wenzel, A; Haiter-Neto, F; Kortesniemi, M

    2012-01-01

    In clinical practice, digital radiographs taken for caries diagnostics are viewed on varying types of displays and usually in relatively high ambient lighting (room illuminance) conditions. Our purpose was to assess the effect of room illuminance and varying display types on caries diagnostic accuracy in digital dental radiographs. Previous studies have shown that the diagnostic accuracy of caries detection is significantly better in reduced lighting conditions. Our hypothesis was that higher display luminance could compensate for this in higher ambient lighting conditions. Extracted human teeth with approximal surfaces clinically ranging from sound to demineralized were radiographed and evaluated by 3 observers who detected carious lesions on 3 different types of displays in 3 different room illuminance settings ranging from low illumination, i.e. what is recommended for diagnostic viewing, to higher illumination levels corresponding to those found in an average dental office. Sectioning and microscopy of the teeth validated the presence or absence of a carious lesion. Sensitivity, specificity and accuracy were calculated for each modality and observer. Differences were estimated by analyzing the binary data assuming the added effects of observer and modality in a generalized linear model. The observers obtained higher sensitivities in lower illuminance settings than in higher illuminance settings. However, this was related to a reduction in specificity, which meant that there was no significant difference in overall accuracy. Contrary to our hypothesis, there were no significant differences between the accuracy of different display types. Therefore, different displays and room illuminance levels did not affect the overall accuracy of radiographic caries detection. Copyright © 2012 S. Karger AG, Basel.

  10. a Cell Vertex Algorithm for the Incompressible Navier-Stokes Equations on Non-Orthogonal Grids

    NASA Astrophysics Data System (ADS)

    Jessee, J. P.; Fiveland, W. A.

    1996-08-01

    The steady, incompressible Navier-Stokes (N-S) equations are discretized using a cell vertex, finite volume method. Quadrilateral and hexahedral meshes are used to represent two- and three-dimensional geometries respectively. The dependent variables include the Cartesian components of velocity and pressure. Advective fluxes are calculated using bounded, high-resolution schemes with a deferred correction procedure to maintain a compact stencil. This treatment ensures bounded, non-oscillatory solutions while maintaining low numerical diffusion. The mass and momentum equations are solved with the projection method on a non-staggered grid. The coupling of the pressure and velocity fields is achieved using the Rhie and Chow interpolation scheme modified to provide solutions independent of time steps or relaxation factors. An algebraic multigrid solver is used for the solution of the implicit, linearized equations. A number of test cases are analysed and presented. The standard benchmark cases include a lid-driven cavity, flow through a gradual expansion and laminar flow in a three-dimensional curved duct. Predictions are compared with data, results of other workers and with predictions from a structured, cell-centred, control volume algorithm whenever applicable. Sensitivity of results to the advection differencing scheme is investigated by applying a number of higher-order flux limiters: the MINMOD, MUSCL, OSHER, CLAM and SMART schemes. As expected, studies indicate that higher-order schemes largely mitigate the diffusion effects of first-order schemes but also show no clear preference among the higher-order schemes themselves with respect to accuracy. The effect of the deferred correction procedure on global convergence is discussed.

  11. Improved Collaborative Filtering Algorithm via Information Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang

    In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using an opinion spreading process, the similarity between any two users can be obtained. The algorithm has remarkably higher accuracy than standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to the user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personalization. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both lower computational complexity and higher algorithmic accuracy.
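
    The following sketch illustrates a spreading-activation similarity with the popularity-suppressing parameter β; the exact normalization used in the paper may differ, so treat the weighting below as an assumption for illustration.

        import numpy as np

        def sa_cf_similarity(A, beta=1.0):
            # A: binary user-object matrix (n_users x n_objects).
            # Each shared object contributes 1 / k(o)**beta to the user-user
            # similarity, so increasing beta suppresses popular objects.
            k_obj = A.sum(axis=0).astype(float)
            k_obj[k_obj == 0] = 1.0            # guard against empty columns
            s = (A / k_obj**beta) @ A.T
            np.fill_diagonal(s, 0.0)
            return s

        def recommend(A, user, beta=1.0, top_n=10, n_items=5):
            # Score unseen objects using only the top-N most similar neighbors,
            # the cheaper and more accurate variant proposed in the paper.
            s = sa_cf_similarity(A, beta)
            neighbors = np.argsort(s[user])[::-1][:top_n]
            scores = s[user, neighbors] @ A[neighbors].astype(float)
            scores[A[user] > 0] = -np.inf      # mask already-collected objects
            return np.argsort(scores)[::-1][:n_items]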

  12. The equilibrium-diffusion limit for radiation hydrodynamics

    DOE PAGES

    Ferguson, J. M.; Morel, J. E.; Lowrie, R.

    2017-07-27

    The equilibrium-diffusion approximation (EDA) is used to describe certain radiation-hydrodynamic (RH) environments. When this is done the RH equations reduce to a simplified set of equations. The EDA can be derived by asymptotically analyzing the full set of RH equations in the equilibrium-diffusion limit. Here, we derive the EDA this way and show that it and the associated set of simplified equations are both first-order accurate, with transport corrections occurring at second order. Having established the EDA's first-order accuracy, we then analyze the grey nonequilibrium-diffusion approximation and the grey Eddington approximation and show that they both preserve this first-order accuracy. Further, these approximations preserve the EDA's first-order accuracy when made in either the comoving frame (CMF) or the lab frame (LF). While analyzing the Eddington approximation, we also found that the CMF and LF radiation-source equations are equivalent when neglecting O(β^2) terms and compared in the LF; the radiation pressures, of course, are not equivalent. It is expected that simplified physical models and numerical discretizations of the RH equations that do not preserve this first-order accuracy will not retain the correct equilibrium-diffusion solutions. As a practical example, we show that nonequilibrium-diffusion radiative-shock solutions devolve to equilibrium-diffusion solutions when the asymptotic parameter is small.

  13. Accuracy of perturbative master equations.

    PubMed

    Fleming, C H; Cummings, N I

    2011-03-01

    We consider open quantum systems with dynamics described by master equations that have perturbative expansions in the system-environment interaction. We show that, contrary to intuition, full-time solutions of order-2n accuracy require an order-(2n+2) master equation. We give two examples of such inaccuracies in the solutions to an order-2n master equation: order-2n inaccuracies in the steady state of the system and order-2n positivity violations. We show how these arise in a specific example for which exact solutions are available. This result has a wide-ranging impact on the validity of coupling (or friction) sensitive results derived from second-order convolutionless, Nakajima-Zwanzig, Redfield, and Born-Markov master equations.

  14. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as it is in a single-point formulation.
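
    For concreteness, the two node-value interpolations contrasted above can be written, for flow in the positive direction, as

        \phi_{i+1/2} = \tfrac{1}{2}(\phi_i + \phi_{i+1}) - \tfrac{1}{8}(\phi_{i-1} - 2\phi_i + \phi_{i+1})   (QUICK, finite volume)
        \phi_{i+1/2} = \tfrac{1}{2}(\phi_i + \phi_{i+1}) - \tfrac{1}{6}(\phi_{i-1} - 2\phi_i + \phi_{i+1})   (SPUDS, single point)

    Both are third-order in their respective senses; only the curvature factor differs, which is precisely the report's point.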

  15. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-08-31

    The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. Our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.
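
    The mechanism behind the fourth-order temporal accuracy can be sketched with the standard central-difference expansion; here L stands schematically for the spatial elastic-wave operator (the paper's anisotropic operator differs in detail):

        \frac{u^{n+1} - 2u^n + u^{n-1}}{\Delta t^2} = \partial_t^2 u + \frac{\Delta t^2}{12}\,\partial_t^4 u + O(\Delta t^4),
        \qquad \partial_t^2 u = L u \;\Rightarrow\; \partial_t^4 u = L^2 u,

    so discretizing the (\Delta t^2/12) L^2 u correction with spatial differences removes the leading temporal truncation error at purely spatial cost.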

  17. A projection hybrid high order finite volume/finite element method for incompressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.

    2018-01-01

    In this paper the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered by means of the k-ε model. For the transport-diffusion stage, new schemes of high order of accuracy are developed. The CVC Kolgan-type scheme and the ADER methodology are extended to 3D. The latter is modified in order to profit from the dual mesh employed by the projection algorithm, and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method is carried out for the advection-diffusion-reaction equation. Within the projection stage the pressure correction is computed by a piecewise linear finite element method. Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and at assessing the performance of the method on several realistic test problems.

  18. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but which also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending such schemes with extrapolation methods to obtain a higher order of accuracy preserves their qualitative properties with respect to dissipation, dispersion and stability. It is illustrated that combining various existing optimized schemes with Richardson extrapolation is not optimal with respect to minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are derived, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
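
    The extrapolation mechanics can be sketched as follows; classical explicit RK4 stands in for the paper's optimized diagonally implicit schemes, since only the order p of the base method matters for the combination formula.

        import numpy as np

        def rk4_step(f, t, y, h):
            # One step of classical explicit RK4 (order p = 4); any one-step
            # method of known order p could be substituted here.
            k1 = f(t, y)
            k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
            k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
            k4 = f(t + h, y + h * k3)
            return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        def richardson_step(f, t, y, h, p=4):
            # One big step of size h versus two half steps of size h/2; the
            # weighted combination cancels the leading O(h^{p+1}) local error
            # term, raising the order of the step by one.
            y_big = rk4_step(f, t, y, h)
            y_half = rk4_step(f, t + 0.5 * h, rk4_step(f, t, y, 0.5 * h), 0.5 * h)
            return (2.0**p * y_half - y_big) / (2.0**p - 1.0)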

  19. Electronic Structures of Anti-Ferromagnetic Tetraradicals: Ab Initio and Semi-Empirical Studies.

    PubMed

    Zhang, Dawei; Liu, Chungen

    2016-04-12

    The energy relationships and electronic structures of the lowest-lying spin states in several anti-ferromagnetic tetraradical model systems are studied with high-level ab initio and semi-empirical methods. The Full-CI method (FCI), the complete active space second-order perturbation theory (CASPT2), and the n-electron valence state perturbation theory (NEVPT2) are employed to obtain reference results. By comparing the energy relationships predicted from the Heisenberg and Hubbard models with ab initio benchmarks, the accuracy of the widely used Heisenberg model for anti-ferromagnetic spin-coupling in low-spin polyradicals is cautiously tested in this work. It is found that the strength of electron correlation (|U/t|) concerning anti-ferromagnetically coupled radical centers could range widely from strong to moderate correlation regimes and could become another degree of freedom besides the spin multiplicity. Accordingly, the Heisenberg-type model works well in the regime of strong correlation, which reproduces well the energy relationships along with the wave functions of all the spin states. In moderately spin-correlated tetraradicals, the results of the prototype Heisenberg model deviate severely from those of multi-reference electron correlation ab initio methods, while the extended Heisenberg model, containing four-body terms, can introduce reasonable corrections and maintains its accuracy in this condition. In the weak correlation regime, both the prototype Heisenberg model and its extended forms containing higher-order correction terms will encounter difficulties. Meanwhile, the Hubbard model shows balanced accuracy from strong to weak correlation cases and can reproduce qualitatively correct electronic structures, which makes it more suitable for the study of anti-ferromagnetic coupling in polyradical systems.
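
    For reference, the two model Hamiltonians being compared take the standard schematic forms (the paper's extended Heisenberg model adds four-body terms to the first):

        H_{\mathrm{Heisenberg}} = \sum_{\langle i,j \rangle} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j,
        \qquad
        H_{\mathrm{Hubbard}} = -t \sum_{\langle i,j \rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_i n_{i\uparrow} n_{i\downarrow},

    with |U/t| measuring the correlation strength that separates the regimes discussed above.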

  20. Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations

    NASA Astrophysics Data System (ADS)

    Wyszkowska, Patrycja

    2017-12-01

    The determination of the accuracy of functions of measured or adjusted values can be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, a first-order Taylor series expansion is usually used, but that solution is affected by the truncation error of the expansion. The aim of this study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of this simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances derived from Cartesian coordinates, and height differences in trigonometric and geometric levelling. The simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even when the functions are non-linear, provided that the accuracy of the observations is not too low. With present-day geodetic instruments this is generally not a problem.
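
    A minimal sketch of the comparison described above, for an illustrative non-linear function (the planar distance computed from Cartesian coordinates); the coordinate values and standard deviations are placeholders, not the study's data.

        import numpy as np

        def distance(p):
            # Planar distance between two points, p = (x1, y1, x2, y2):
            # a typical non-linear function of adjusted coordinates.
            return np.hypot(p[2] - p[0], p[3] - p[1])

        p0 = np.array([100.0, 200.0, 350.0, 420.0])  # illustrative coordinates [m]
        sigma = np.full(4, 0.01)                     # 10 mm std devs, uncorrelated

        # First-order (Gaussian) propagation: sigma_f^2 = J C J^T,
        # with the Jacobian J obtained by central differences.
        eps = 1e-6
        J = np.array([(distance(p0 + eps * e) - distance(p0 - eps * e)) / (2 * eps)
                      for e in np.eye(4)])
        sigma_linear = np.sqrt(J @ np.diag(sigma**2) @ J)

        # Monte Carlo propagation: sample the observations, look at the spread.
        rng = np.random.default_rng(42)
        samples = p0 + sigma * rng.standard_normal((100_000, 4))
        sigma_mc = distance(samples.T).std()

        print(f"linear: {sigma_linear:.5f} m   Monte Carlo: {sigma_mc:.5f} m")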

  1. Double sided grating fabrication for high energy X-ray phase contrast imaging

    DOE PAGES

    Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick; ...

    2018-04-19

    State of the art grating fabrication currently limits the maximum source energy that can be used in lab based x-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm, demonstrating a methodology that can be applied to any grating fabrication approach, extending the attainable aspect ratios and enabling higher energy lab based XPCI systems.

  3. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species.

    PubMed

    Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko

    2017-09-07

    The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) through comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.

  5. A quasi steady state method for solving transient Darcy flow in complex 3D fractured networks accounting for matrix to fracture flow

    NASA Astrophysics Data System (ADS)

    Nœtinger, B.

    2015-02-01

    Modeling natural Discrete Fracture Networks (DFN) receives more and more attention in the applied geosciences, from the oil and gas industry to geothermal recovery and aquifer management. The fractures may be either natural, or artificial in the case of well stimulation. Accounting for the flow inside the fracture network, and for the transfers between the matrix and the fractures, with the same level of accuracy is an important issue for calibrating the well architecture and for setting up optimal resource recovery strategies. Recently, we proposed an original method allowing transient pressure diffusion to be modeled in the fracture network alone [1]. The matrix was assumed to be impervious. A systematic approximation scheme was built that models the initial DFN by a set of N unknowns located at each identified intersection between fractures; the higher N, the higher the accuracy of the model. The main assumption was a quasi-steady-state hypothesis, which states that the characteristic diffusion time over one single fracture is negligible compared with the characteristic time of the macroscopic problem, e.g. a change of boundary conditions. In that context, the lowest order approximation N = 1 has the form of a transient problem in a resistor/capacitor network, a so-called pipe network, whose topology is the same as that of the network of geometrical intersections between fractures. In this paper, we generalize this approach in order to account for fluxes from the matrix to the fractures. The quasi-steady-state hypothesis at the fracture level is retained. We then show that, in the case of well separated time scales between matrix and fractures, the preceding model needs only to be slightly modified in order to incorporate these fluxes. Knowledge of the so-called matrix-to-fracture transfer function allows the mass matrix to be modified so that it becomes a time-convolution operator. This is reminiscent of existing space-averaged transient dual-porosity models.

  7. Millimeter accuracy satellites for two color ranging

    NASA Technical Reports Server (NTRS)

    Degnan, John J.

    1993-01-01

    The principal technical challenge in designing a millimeter accuracy satellite to support two color observations at high altitudes is to provide a high optical cross-section simultaneously with minimal pulse spreading. In order to address this issue, we provide a brief review of some fundamental properties of optical retroreflectors when used in spacecraft target arrays, develop a simple model for a spherical geodetic satellite, and use the model to determine some basic design criteria for a new generation of geodetic satellites capable of supporting millimeter accuracy two color laser ranging. We find that increasing the satellite diameter provides a larger surface area for mounting additional cubes, thereby leading to higher cross-sections, and makes the satellite surface a better match for the incoming planar phasefront of the laser beam. Restricting the retroreflector field of view (e.g. by recessing it in its holder) limits the target response to the fraction of the satellite surface which best matches the optical phasefront, thereby controlling the amount of pulse spreading. In surveying the arrays carried by existing satellites, we find that the European STARLETTE and ERS-1 satellites appear to be the best candidates for supporting near term two color experiments in space.

  8. Studying the Representation Accuracy of the Earth's Gravity Field in the Polar Regions Based on the Global Geopotential Models

    NASA Astrophysics Data System (ADS)

    Koneshov, V. N.; Nepoklonov, V. B.

    2018-05-01

    The development of studies on estimating the accuracy of modern global models of the Earth's gravity field, expressed in spherical harmonics of the geopotential, in the problematic regions of the world is discussed. A comparative analysis of the results of reconstructing quasi-geoid heights and gravity anomalies from the different models is carried out for two polar regions selected within a radius of 1000 km from the North and South poles. The analysis covers nine recently developed models, including six high-resolution models and three lower-order models, among them the Russian GAOP2012 model. It is shown that the modern models determine the quasi-geoid heights and gravity anomalies in the polar regions with errors ranging from 5-10 cm to a few dozen cm and from 3-5 mGal to a few dozen mGal, respectively, depending on the resolution. The accuracy of the models in the Arctic is several times higher than in the Antarctic. This is associated with the peculiarities of the gravity anomalies in each particular region and with the fact that the polar part of the Antarctic has been less thoroughly explored by gravimetric methods than the polar Arctic.

  9. Detection of dechallenge in spontaneous reporting systems: a comparison of Bayes methods.

    PubMed

    Banu, A Bazila; Alias Balamurugan, S Appavu; Thirumalaikolundusubramanian, Ponniah

    2014-01-01

    Dechallenge is a response observed as the reduction or disappearance of adverse drug reactions (ADR) on withdrawal of a drug from a patient. Currently available algorithms to detect dechallenge have limitations, so there is a need to compare newly available methods. To detect dechallenge in Spontaneous Reporting Systems, the data-mining algorithms Naive Bayes and Improved Naive Bayes were applied, and their performance was compared in terms of accuracy and error. Analyzing factors of dechallenge such as outcome and disease category will help medical practitioners and pharmaceutical industries determine the reasons for dechallenge in order to take essential steps toward drug safety. Adverse drug reactions from the years 2011 and 2012 were downloaded from the United States Food and Drug Administration's database. The outcome of the classification algorithms showed that the Improved Naive Bayes algorithm outperformed Naive Bayes in detecting dechallenge, with an accuracy of 90.11% and an error of 9.8%. Detecting dechallenge for unknown samples is essential for proper prescription. To overcome the shortcomings of the Naive Bayes algorithm, the Improved Naive Bayes algorithm can be used to detect dechallenge with higher accuracy and minimal error.
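
    The paper does not spell out its Improved Naive Bayes variant, so the sketch below shows only the baseline Naive Bayes step on categorical ADR-style features; the data are synthetic placeholders, not the FDA records.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import CategoricalNB
        from sklearn.metrics import accuracy_score

        # Synthetic stand-ins for encoded ADR report fields
        # (e.g. drug id, reaction code, outcome, disease category).
        rng = np.random.default_rng(0)
        X = rng.integers(0, 10, size=(1000, 4))
        y = rng.integers(0, 2, size=1000)       # 1 = dechallenge observed

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = CategoricalNB().fit(X_tr, y_tr)
        print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))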

  10. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

    Large reflector antennas are widely used in radar, satellite communication, radio astronomy, and so on. Rapid developments in these fields have created demand for better performance and higher surface accuracy, yet low accuracy and low efficiency are common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far-field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field is derived, and a linear system is constructed relating the panel adjustment vector to the far-field pattern. Using Singular Value Decomposition (SVD), the adjustment values for all panel adjusters are obtained by solving the linear equations. An experiment is conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test agree, which shows that the presented method is feasible. Moreover, the discussion of validation shows that the method can be used for many reflector shapes. The proposed research provides guidance for adjusting surface panels efficiently and accurately.
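
    A minimal sketch of the SVD solution step described above, assuming the sensitivity matrix A (from the PO linearization) and the far-field residual vector dE have already been assembled; the truncation tolerance is an illustrative choice.

        import numpy as np

        def panel_adjustments(A, dE, rcond=1e-3):
            # A:  sensitivity matrix mapping adjuster displacements to sampled
            #     far-field perturbations (from the PO-based linearization).
            # dE: measured-minus-ideal far-field samples.
            # Least-squares solution via SVD, truncating small singular values
            # to keep the inversion well conditioned.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            keep = s > rcond * s[0]
            return Vt[keep].T @ ((U[:, keep].T @ dE) / s[keep])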

  11. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, new challenges must be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, by lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately by high volume manufacturing tests that monitor OPO using densely measured OVL data.

  12. Improving Surveying Accuracy and Efficiency in Connecticut: An Accuracy Assessment of GEOID03 and GEOID09

    DOT National Transportation Integrated Search

    2010-03-01

    Comparing published NAVD 88 Helmert orthometric heights of First-Order bench marks against GPS-determined orthometric heights showed that GEOID03 and GEOID09 perform at their reported accuracy in Connecticut. GPS-determined orthometric heights were d...

  13. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, trajectory correction has been obtained using two kinds of course correction modules, one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  15. A deformable particle-in-cell method for advective transport in geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Samuel, Henri

    2018-06-01

    This paper presents an improvement of the particle-in-cell method commonly used in geodynamic modeling for solving pure advection of sharply varying fields. Standard particle-in-cell approaches use particle kernels to transfer the information carried by the Lagrangian particles to/from the Eulerian grid. These kernels are generally one-dimensional and non-evolutive, which leads to under- and over-sampling of the spatial domain by the particles. This reduces the accuracy of the solution, and may require the use of a prohibitive number of particles in order to maintain the solution accuracy at an acceptable level. The new approach proposed here relies on deformable kernels that account for the strain history in the vicinity of particles. It results in a significant improvement of the spatial sampling by the particles, leading to a much higher accuracy of the numerical solution at a reasonable extra computational cost. Various 2D tests were conducted to compare the performance of the deformable particle-in-cell method with the standard particle-in-cell approach. These consistently show that, at comparable accuracy, the deformable particle-in-cell method is four to six times more efficient than standard particle-in-cell approaches. The method could be adapted to 3D space and generalized to cases including motionless transport.

  16. Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Niemann, K. Olaf; Liu, Jing; Shi, Yifang; Wang, Tiejun

    2018-02-01

    Separation of foliar and woody materials using remotely sensed data is crucial for the accurate estimation of leaf area index (LAI) and woody biomass across forest stands. In this paper, we present a new method to accurately separate foliar and woody materials using terrestrial LiDAR point clouds obtained from ten test sites in a mixed forest in Bavarian Forest National Park, Germany. Firstly, we applied and compared an adaptive radius near-neighbor search algorithm with a fixed radius near-neighbor search method in order to obtain both radiometric and geometric features derived from terrestrial LiDAR point clouds. Secondly, we used a random forest machine learning algorithm to classify foliar and woody materials and examined the impact of understory and slope on the classification accuracy. An average overall accuracy of 84.4% (Kappa = 0.75) was achieved across all experimental plots. The adaptive radius near-neighbor search method outperformed the fixed radius near-neighbor search method. The classification accuracy was significantly higher when the combination of both radiometric and geometric features was utilized. The analysis showed that increasing slope and understory coverage had a significant negative effect on the overall classification accuracy. Our results suggest that the utilization of the adaptive radius near-neighbor search method coupling both radiometric and geometric features has the potential to accurately discriminate foliar and woody materials from terrestrial LiDAR data in a mixed natural forest.
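
    A minimal sketch of the classification step, assuming per-point feature vectors combining radiometric and geometric descriptors have already been extracted; the feature matrix and labels below are synthetic placeholders, not the Bavarian Forest data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-ins for per-point features: radiometric (intensity)
        # plus geometric descriptors from the adaptive-radius neighborhood.
        rng = np.random.default_rng(1)
        X = rng.random((5000, 6))
        y = rng.integers(0, 2, size=5000)   # 0 = wood, 1 = foliage

        clf = RandomForestClassifier(n_estimators=200, random_state=1)
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())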

  17. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.

    1990-01-01

    A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.

  18. Workshop on Higher-Order Spectral Analysis Held at Vail, Colorado on 28- 30 June 1989

    DTIC Science & Technology

    1989-11-28


  19. Energy invariant for shallow-water waves and the Korteweg-de Vries equation: Doubts about the invariance of energy

    NASA Astrophysics Data System (ADS)

    Karczewska, Anna; Rozmej, Piotr; Infeld, Eryk

    2015-11-01

    It is well known that the Korteweg-de Vries (KdV) equation has an infinite set of conserved quantities. The first three are often considered to represent mass, momentum, and energy. Here we try to answer the question of how this comes about and also how these KdV quantities relate to those of the Euler shallow-water equations. Here Luke's Lagrangian is helpful. We also consider higher-order extensions of KdV. Though in general not integrable, in some sense they are almost so within the accuracy of the expansion.
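
    Writing KdV in the common normalization u_t + 6uu_x + u_{xxx} = 0 (an assumption about the form intended), the first three invariants referred to above are

        I_1 = \int u \, dx, \qquad I_2 = \int u^2 \, dx, \qquad I_3 = \int \left( u^3 - \tfrac{1}{2} u_x^2 \right) dx,

    conventionally read as mass, momentum, and energy; the paper questions how far the energy reading survives in the shallow-water context.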

  20. Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: Semiclassical treatment

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-02-01

    I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating second- and higher-order moments.

  1. Evolution: the dialogue between life and death

    NASA Astrophysics Data System (ADS)

    Holliday, Robin

    1997-12-01

    Organisms have the ability to harness energy from the environment to create order and to reproduce. From early error-prone systems natural selection acted to produce present day organisms with high accuracy in the synthesis of macromolecules. The environment imposes strict limits on reproduction, so evolution is always accompanied by the discarding of a large proportion of the less fit cells, or organisms. Sexual reproduction depends on an immortal germline and a soma which may be immortal or mortal. Higher animals living in hazardous environments have evolved aging and death of the soma for the benefit of the ongoing germline.

  2. Solution methods for one-dimensional viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, John M.; Simitses, George J.

    1987-01-01

    A recently developed differential methodology for the solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of the two procedures is also included. Finally, the influences of some 'higher order' effects, such as straining along the centroidal axis, are discussed.

  3. Commissioning and field tests of a van-mounted system for the detection of radioactive sources and Special Nuclear Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cester, D.; Lunardon, M.; Stevanato, L.

    2015-07-01

    The MODES SNM project aimed to carry out technical research in order to develop a prototype for a mobile, modular detection system for radioactive sources and Special Nuclear Materials (SNM). Its main goal was to deliver a tested prototype of a modular mobile system capable of passively detecting weak or shielded radioactive sources with accuracy higher than that of currently available systems. By the end of the project all the objectives had been successfully achieved. Results from the laboratory commissioning and the field tests are presented. (authors)

  4. Fragmentation functions at next-to-next-to-leading order accuracy

    DOE PAGES

    Anderle, Daniele P.; Stratmann, Marco; Ringer, Felix

    2015-12-01

    We present a first analysis of parton-to-pion fragmentation functions at next-to-next-to-leading order accuracy in QCD based on single-inclusive pion production in electron-positron annihilation. Special emphasis is put on the technical details necessary to perform the QCD scale evolution and cross section calculation in Mellin moment space. Lastly, we demonstrate how the description of the data and the theoretical uncertainties are improved when next-to-next-to-leading order QCD corrections are included.
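
    Schematically, and suppressing flavor mixing, the Mellin moment-space machinery referred to above rests on

        D(N, \mu^2) = \int_0^1 dz\, z^{N-1} D(z, \mu^2),
        \qquad
        \frac{\partial D(N, \mu^2)}{\partial \ln \mu^2} = P(N, \alpha_s(\mu^2))\, D(N, \mu^2),

    which turns the convolution-type evolution equations in z into ordinary differential equations in N, solvable order by order in the strong coupling.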

  5. Implementation of higher-order vertical finite elements in ISSM v4.13 for improved ice sheet flow modeling over paleoclimate timescales

    NASA Astrophysics Data System (ADS)

    Cuzzone, Joshua K.; Morlighem, Mathieu; Larour, Eric; Schlegel, Nicole; Seroussi, Helene

    2018-05-01

    Paleoclimate proxies are being used in conjunction with ice sheet modeling experiments to determine how the Greenland ice sheet responded to past changes, particularly during the last deglaciation. Although these comparisons have been a critical component in our understanding of the Greenland ice sheet's sensitivity to past warming, they often rely on modeling experiments that favor minimizing computational expense over increased model physics. Over paleoclimate timescales, simulating the thermal structure of the ice sheet has large implications for the modeled ice viscosity, which can feed back onto the basal sliding and ice flow. To accurately capture the thermal field, models often require a high number of vertical layers. This is not the case for the stress balance computation, however, where a high vertical resolution is not necessary. Consequently, since the stress balance and thermal equations are generally solved on the same mesh, more time is spent on the stress balance computation than is otherwise necessary. For these reasons, running a higher-order ice sheet model (e.g., Blatter-Pattyn) over timescales equivalent to the paleoclimate record has not been possible without incurring a large computational expense. To mitigate this issue, we propose a method that can be implemented within ice sheet models, whereby the vertical interpolation along the z axis relies on higher-order polynomials rather than the traditional linear interpolation. This method is tested within the Ice Sheet System Model (ISSM) using quadratic and cubic finite elements for the vertical interpolation, on an idealized case and a realistic Greenland configuration. A transient experiment for the ice thickness evolution of a single-dome ice sheet demonstrates improved accuracy using the higher-order vertical interpolation compared to models using linear vertical interpolation, despite having fewer degrees of freedom. The method is also shown to improve a model's ability to capture sharp thermal gradients in an ice sheet, particularly close to the bed, when compared to models using linear vertical interpolation. This is corroborated in a thermal steady-state simulation of the Greenland ice sheet using a higher-order model. In general, we find that using a higher-order vertical interpolation decreases the need for a high number of vertical layers while dramatically reducing model runtime for transient simulations. Results indicate that runtimes for a transient ice sheet relaxation are upwards of 5 to 7 times faster with a higher-order vertical interpolation than with a model using linear vertical interpolation, which requires a higher number of vertical layers to achieve a similar result in simulated ice volume, basal temperature, and ice divide thickness. These findings suggest that the method will allow higher-order models to be used in studies investigating ice sheet behavior over paleoclimate timescales at a fraction of the computational cost that would otherwise be needed for a model using linear vertical interpolation.
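
    As an illustration of the idea (not the ISSM source), the quadratic Lagrange basis on a normalized vertical coordinate \zeta \in [0, 1], with nodes at 0, 1/2 and 1, reads

        N_1(\zeta) = 2\left(\zeta - \tfrac{1}{2}\right)(\zeta - 1), \qquad
        N_2(\zeta) = 4\zeta(1 - \zeta), \qquad
        N_3(\zeta) = 2\zeta\left(\zeta - \tfrac{1}{2}\right),

    so a single quadratic layer can represent a curved vertical temperature profile that would otherwise require several linear layers; cubic elements follow the same pattern with four nodes.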

  6. Measurement of the PPN parameter γ by testing the geometry of near-Earth space

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang

    2016-06-01

    The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^-9 in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated simply as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. The noise requirements may therefore need to be more stringent in the design in order to achieve the target accuracy, as demonstrated in this work. Accordingly, we give limits on the power spectral density of both noise sources required to reach the accuracy of 10^-9.

  7. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, by mapping its inherent fine-grained parallelism onto the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  8. Method for automatic detection of wheezing in lung sounds.

    PubMed

    Riella, R J; Nohama, P; Maia, J M

    2009-07-01

    The present report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. This method is based on the extraction and processing of spectral information from the respiratory cycle and the use of these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed, in order to normalize its spectral information, and its spectrogram is then computed. After this procedure, the spectrogram image is processed by a two-dimensional convolution filter and a half-threshold in order to increase the contrast and isolate its highest-amplitude components, respectively. Then, in order to generate more compact data for automatic recognition, the spectral projection of the processed spectrogram is computed and stored as an array. The highest-magnitude values of the array and their respective spectral values are then located and used as inputs to a multi-layer perceptron artificial neural network, which yields an automatic indication of the presence of wheezes. For validation of the methodology, lung sounds recorded from three different repositories were used. The results show that the proposed technique achieves 84.82% accuracy in the detection of wheezing for an isolated respiratory cycle and 92.86% accuracy when detection is carried out using groups of respiratory cycles obtained from the same person. Also, the system presents the original recorded sound and the post-processed spectrogram image for users to draw their own conclusions from the data.
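
    A minimal sketch of the processing chain described above (spectrogram, 2D convolution, half-threshold, spectral projection); the kernel, window sizes and the number of retained peaks are illustrative assumptions, as the paper does not fix them here.

        import numpy as np
        from scipy.signal import spectrogram, convolve2d

        def wheeze_features(cycle, fs, threshold_frac=0.5, n_peaks=8):
            # Spectrogram of one (pre-normalized) respiratory cycle.
            f, t, S = spectrogram(cycle, fs=fs, nperseg=256, noverlap=128)

            # 2D convolution to raise contrast (a simple sharpening kernel).
            kernel = np.array([[-1., -1., -1.],
                               [-1.,  9., -1.],
                               [-1., -1., -1.]])
            S = convolve2d(S, kernel, mode="same")

            # Half-threshold: keep only the highest-amplitude components.
            S[S < threshold_frac * S.max()] = 0.0

            # Spectral projection: collapse the time axis, then locate the
            # dominant peaks that feed the neural-network classifier.
            projection = S.sum(axis=1)
            peaks = np.argsort(projection)[::-1][:n_peaks]
            return f[peaks], projection[peaks]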

  9. Electromagnetic Contact-Force Sensing Electrophysiological Catheters: How Accurate is the Technology?

    PubMed

    Bourier, Felix; Hessling, Gabriele; Ammar-Busch, Sonia; Kottmaier, Marc; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Semmler, Verena; Lennerz, Carsten; Schneider, Christine; Kolb, Christof; Deisenhofer, Isabel; Reents, Tilko

    2016-03-01

    Contact-force (CF) sensing catheters are increasingly used in clinical electrophysiological practice due to their efficacy and safety profile. As data about the accuracy of this technology are scarce, we sought to quantify accuracy based on in vitro experiments. A custom-made force sensor was constructed that allowed exact force reference measurements registered via a flexible membrane. A Smarttouch Surround Flow (ST SF) ablation catheter (Biosense Webster, Diamond Bar, CA, USA) was brought into contact with the membrane of the force sensor in order to compare the ST SF force measurements to the force sensor reference measurements. ST SF force sensing technology is based on registering deflection between the distal and proximal catheter tip. The experiment was repeated for n = 10 ST SF catheters, which showed no significant difference in accuracy levels. A series of measurements (n = 1200) was carried out for different angles of force acting on the catheter tip (0°/perpendicular contact, 30°, 60°, 90°/parallel contact). The mean absolute differences between reference and ST SF measurements were 1.7 ± 1.8 g (0°), 1.6 ± 1.2 g (30°), 1.4 ± 1.3 g (60°), and 6.6 ± 5.9 g (90°). Measurement accuracy was significantly higher in non-parallel contact than in parallel contact (P < 0.01). Catheter force measurements using the ST SF catheters show a high level of accuracy in terms of both differences from reference measurements and reproducibility. The reduced accuracy in measurements of 90° acting forces (parallel contact) might be clinically important when creating, for example, linear lesions. © 2015 Wiley Periodicals, Inc.

  10. Higher Order Thermal Lattice Boltzmann Model

    NASA Astrophysics Data System (ADS)

    Sorathiya, Shahajhan; Ansumali, Santosh

    2013-03-01

    Lattice Boltzmann method (LBM) modelling of thermal, compressible and micro flows requires an accurate velocity space discretization. The sub-optimality of Gauss-Hermite quadrature in this regard is well known. Most thermal LBMs in the past have suffered from instability, due to the lack of a proper H-theorem, and from limited accuracy. Motivated by these issues, the present work builds on two earlier approaches and imposes a higher-order (eighth) moment to obtain the correct thermal physics. We show that this can be done by adding just 6 more velocities to the D3Q27 model, yielding a "multi-speed on-lattice thermal LBM" with 33 velocities in 3D and an O(u^4)- and O(T^4)-accurate equilibrium distribution f^eq, with a consistent H-theorem and inherent numerical stability. Simulations of Rayleigh-Bénard convection, as well as of velocity and temperature slip in micro flows, match analytical results. A lid-driven cavity setup is used to study grid convergence. Finally, a novel data structure is developed for HPC. The authors express their gratitude for computational resources and financial support provided by the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bangalore, India.

  11. Water injection into vapor- and liquid-dominated reservoirs: Modeling of heat transfer and mass transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruess, K.; Oldenburg, C.; Moridis, G.

    1997-12-31

    This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.

  12. Corn and soybean Landsat MSS classification performance as a function of scene characteristics

    NASA Technical Reports Server (NTRS)

    Batista, G. T.; Hixson, M. M.; Bauer, M. E.

    1982-01-01

    In order to fully utilize remote sensing to inventory crop production, it is important to identify the factors that affect the accuracy of Landsat classifications. The objective of this study was to investigate the effect of scene characteristics involving crop, soil, and weather variables on the accuracy of Landsat classifications of corn and soybeans. Segments sampling the U.S. Corn Belt were classified using a Gaussian maximum likelihood classifier on multitemporally registered data from two key acquisition periods. Field size had a strong effect on classification accuracy with small fields tending to have low accuracies even when the effect of mixed pixels was eliminated. Other scene characteristics accounting for variability in classification accuracy included proportions of corn and soybeans, crop diversity index, proportion of all field crops, soil drainage, slope, soil order, long-term average soybean yield, maximum yield, relative position of the segment in the Corn Belt, weather, and crop development stage.

  13. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, the present MESA algorithm can be implemented in one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, an overall fractional error of less than 10^-6 must be achieved in order to faithfully simulate entropy, vortical, and acoustical waves.

  14. Solving ODE Initial Value Problems With Implicit Taylor Series Methods

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2000-01-01

    In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t_n to t_(n+1), we expand a series about the intermediate point t_(n+mu) := t_n + mu*h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y' = lambda*y, Re(lambda) < 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODEs with a singular point.
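
    To make the construction concrete, the sketch below (our notation) carries out the degree k = 1 case: both endpoint values are expanded about the intermediate point and subtracted, and choosing mu = 1/2 cancels the h^2 term, raising the base scheme from first to second order, consistent with the k-odd rule above.

```latex
% Degree-1 sketch: expand about t_{n+\mu} = t_n + \mu h, then subtract.
\begin{aligned}
y(t_{n+1}) &= y(t_{n+\mu}) + (1-\mu)h\,y'(t_{n+\mu})
              + \tfrac{(1-\mu)^2 h^2}{2}\,y''(t_{n+\mu}) + \cdots\\
y(t_{n})   &= y(t_{n+\mu}) - \mu h\,y'(t_{n+\mu})
              + \tfrac{\mu^2 h^2}{2}\,y''(t_{n+\mu}) - \cdots\\
y(t_{n+1}) - y(t_{n}) &= h\,y'(t_{n+\mu})
              + \tfrac{(1-2\mu)h^2}{2}\,y''(t_{n+\mu}) + \mathcal{O}(h^3)
\end{aligned}
```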

  15. Validation of geometric accuracy of Global Land Survey (GLS) 2000 data

    USGS Publications Warehouse

    Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.

    2015-01-01

    The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set of accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of the accuracy of the GLS 2000 data set, and of its co-registration with the Geocover™ 2000 data set, is presented here. Since global data sets with higher nominal accuracy than GLS 2000 are not readily available, the data were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of higher differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently off, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were the Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high accuracy check points obtained using triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in Geocover™ 2000 data have been rectified in GLS 2000 data, and that the accuracy of GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
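
    The 25 m criterion above reduces to a simple computation once check-point offsets are in hand; the sketch below shows the horizontal RMSE formula applied to made-up offsets (the arrays are illustrative, not GLS data).

```python
# Horizontal RMSE of check-point offsets; the offsets below are hypothetical.
import numpy as np

def horizontal_rmse(dx, dy):
    """Root mean square of horizontal check-point offsets (metres)."""
    return np.sqrt(np.mean(np.asarray(dx) ** 2 + np.asarray(dy) ** 2))

dx = np.array([10.0, -8.5, 12.3, -4.1])   # easting offsets (m)
dy = np.array([-6.2, 9.8, -3.4, 7.7])     # northing offsets (m)
rmse = horizontal_rmse(dx, dy)
print(rmse, rmse < 25.0)                  # meets the 25 m criterion?
```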

  16. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations. Part 1; Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2009-01-01

    Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Results from the first class indicate the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to the complexity of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or by modifying the scheme stencil to reflect the direction of strong coupling.
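
    For readers unfamiliar with the gradient reconstructions being compared, the sketch below shows the unweighted least-squares variant in its simplest cell-based form; the geometry and values are invented, and the weighted and mapped variants discussed above modify only the system being solved.

```python
# Unweighted least-squares gradient reconstruction at a cell (toy 2-D case).
import numpy as np

def lsq_gradient(xc, uc, xn, un):
    """Gradient at centre xc from neighbour centres xn and values un."""
    d = xn - xc                      # displacement vectors, shape (m, 2)
    du = un - uc                     # value differences, shape (m,)
    g, *_ = np.linalg.lstsq(d, du, rcond=None)
    return g                         # approximate (du/dx, du/dy)

# Neighbours sampled from the linear field u = 1 + 2x - 0.1y:
xc, uc = np.array([0.0, 0.0]), 1.0
xn = np.array([[1.0, 0.1], [-0.9, 0.2], [0.1, 1.1], [0.0, -1.0]])
un = np.array([2.99, -0.82, 1.09, 1.10])
print(lsq_gradient(xc, uc, xn, un))  # recovers (2.0, -0.1)
```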

  17. Towards Investigating Global Warming Impact on Human Health Using Derivatives of Photoplethysmogram Signals.

    PubMed

    Elgendi, Mohamed; Norton, Ian; Brearley, Matt; Fletcher, Richard R; Abbott, Derek; Lovell, Nigel H; Schuurmans, Dale

    2015-10-14

    Recent clinical studies show that the contour of the photoplethysmogram (PPG) wave contains valuable information for characterizing cardiovascular activity. However, analyzing the PPG wave contour is difficult; therefore, researchers have applied first or higher order derivatives to emphasize and conveniently quantify subtle changes in the filtered PPG contour. Our hypothesis is that analyzing the whole PPG recording rather than each PPG wave contour or on a beat-by-beat basis can detect heat-stressed subjects and that, consequently, we will be able to investigate the impact of global warming on human health. Here, we explore the most suitable derivative order for heat stress assessment based on the energy and entropy of the whole PPG recording. The results of our study indicate that the use of the entropy of the seventh derivative of the filtered PPG signal shows promising results in detecting heat stress using 20-second recordings, with an overall accuracy of 71.6%. Moreover, the combination of the entropy of the seventh derivative of the filtered PPG signal with the root mean square of successive differences, or RMSSD (a traditional heart rate variability index of heat stress), improved the detection of heat stress to 88.9% accuracy.
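
    A hedged sketch of the key quantity follows: the entropy of the seventh derivative of a band-pass-filtered PPG window. The filter band, derivative implementation, bin count, and entropy definition are our assumptions, since the abstract does not fix them.

```python
# Entropy of the 7th derivative of a filtered PPG segment (all parameters
# below are assumptions for illustration, not the study's settings).
import numpy as np
from scipy.signal import butter, filtfilt

def seventh_derivative_entropy(ppg, fs, bins=64):
    b, a = butter(4, [0.5, 8.0], btype="bandpass", fs=fs)  # assumed band
    x = filtfilt(b, a, ppg)
    for _ in range(7):                 # 7th derivative by repeated gradients
        x = np.gradient(x, 1.0 / fs)
    p, _ = np.histogram(x, bins=bins, density=True)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))     # Shannon entropy of amplitude dist.

fs = 100                               # hypothetical sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)           # a 20-second window, as in the study
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
print(seventh_derivative_entropy(ppg, fs))
```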

  18. Automated diagnosis of dry eye using infrared thermography images

    NASA Astrophysics Data System (ADS)

    Acharya, U. Rajendra; Tan, Jen Hong; Koh, Joel E. W.; Sudarshan, Vidya K.; Yeo, Sharon; Too, Cheah Loon; Chua, Chua Kuang; Ng, E. Y. K.; Tong, Louis

    2015-07-01

    Dry Eye (DE) is a condition of either decreased tear production or increased tear film evaporation. Prolonged DE damages the cornea, causing corneal scarring, thinning and perforation. No single uniform diagnostic test is available to date; combinations of diagnostic tests must be performed to diagnose DE. The current diagnostic methods are subjective, uncomfortable and invasive. Hence, in this paper we have developed an efficient, fast and non-invasive technique for the automated identification of normal and DE classes using infrared thermography images. The features are extracted using a nonlinear method called Higher Order Spectra (HOS). Features are ranked using a t-test ranking strategy. These ranked features are fed to various classifiers, namely K-Nearest Neighbor (KNN), Naive Bayesian Classifier (NBC), Decision Tree (DT), Probabilistic Neural Network (PNN), and Support Vector Machine (SVM), to select the best classifier using the minimum number of features. Our proposed system is able to identify the DE and normal classes automatically with a classification accuracy of 99.8%, sensitivity of 99.8%, and specificity of 99.8% for the left eye using the PNN and KNN classifiers. We also report a classification accuracy of 99.8%, sensitivity of 99.9%, and specificity of 99.4% for the right eye using an SVM classifier with a polynomial kernel of order 2.
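
    The feature-selection step described above (t-test ranking, then classification on a growing feature subset) can be sketched as follows; the data are random stand-ins, not thermography features.

```python
# t-test feature ranking followed by classification (synthetic data).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import KNeighborsClassifier

def rank_features(X, y):
    """Rank feature columns by |t| between the two classes."""
    t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.argsort(-np.abs(t))          # best-separating features first

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
y = np.repeat([0, 1], 20)
X[y == 1, 3] += 2.0                        # plant one informative feature
order = rank_features(X, y)
clf = KNeighborsClassifier(n_neighbors=3).fit(X[:, order[:2]], y)
print(order[:2], clf.score(X[:, order[:2]], y))
```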

  19. The effect of search term on the quality and accuracy of online information regarding distal radius fractures.

    PubMed

    Dy, Christopher J; Taylor, Samuel A; Patel, Ronak M; Kitay, Alison; Roberts, Timothy R; Daluiski, Aaron

    2012-09-01

    Recent emphasis on shared decision making and patient-centered research has increased the importance of patient education and health literacy. The internet is rapidly growing as a source of self-education for patients. However, concern exists over the quality, accuracy, and readability of the information. Our objective was to determine whether the quality, accuracy, and readability of information online about distal radius fractures vary with the search term. This was a prospective evaluation of 3 search engines using 3 different search terms of varying sophistication ("distal radius fracture," "wrist fracture," and "broken wrist"). We evaluated 70 unique Web sites for quality, accuracy, and readability. We used comparative statistics to determine whether the search term affected the quality, accuracy, and readability of the Web sites found. Three orthopedic surgeons independently gauged quality and accuracy of information using a set of predetermined scoring criteria. We evaluated the readability of the Web sites using the Flesch-Kincaid score for reading grade level. There were significant differences in the quality, accuracy, and readability of information found, depending on the search term. We found that higher quality and accuracy resulted from the search term "distal radius fracture," particularly compared with Web sites resulting from the term "broken wrist." The reading level was higher than recommended in 65 of the 70 Web sites and was significantly higher when searching with "distal radius fracture" than "wrist fracture" or "broken wrist." There was no correlation between Web site reading level and quality or accuracy. The readability of information about distal radius fractures in most Web sites was higher than the recommended reading level for the general public. The quality and accuracy of the information found significantly varied with the sophistication of the search term used. Physicians, professional societies, and search engines should consider efforts to improve internet access to high-quality information at an understandable level. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
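
    The reading-grade measure used here is the standard Flesch-Kincaid grade-level formula, 0.39 (words/sentences) + 11.8 (syllables/words) - 15.59; the sketch below implements it with a deliberately crude syllable heuristic (the formula is standard, the heuristic is ours).

```python
# Flesch-Kincaid grade level with a rough vowel-group syllable counter.
import re

def count_syllables(word):
    """Rough syllable count: vowel runs, minus a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(fk_grade("The distal radius fracture was reduced. The cast stays on."))
```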

  20. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with a random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal order were not statistically different from those using random removal when averaged over the entire facility. No statistical difference was observed between optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
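
    As a toy analogue of the removal-order idea, the sketch below greedily removes the sensor whose value is best predicted from the remaining ones. The study used kriged hazard maps; a simple inverse-distance interpolator stands in here, so this is illustrative only and not the authors' method.

```python
# Greedy removal-order sketch with inverse-distance weighting (IDW) as a
# stand-in for kriging; locations and concentrations are synthetic.
import numpy as np

def idw(x_known, z_known, x_query, p=2):
    d = np.linalg.norm(x_query - x_known, axis=1)
    if np.any(d < 1e-9):
        return z_known[np.argmin(d)]
    w = 1.0 / d**p
    return np.sum(w * z_known) / np.sum(w)

def removal_order(X, z):
    """Repeatedly drop the sensor whose omission is best predicted."""
    active = list(range(len(X)))
    order = []
    while len(active) > 1:
        errs = []
        for i in active:
            rest = [j for j in active if j != i]
            errs.append(abs(idw(X[rest], z[rest], X[i]) - z[i]))
        best = active[int(np.argmin(errs))]
        order.append(best)
        active.remove(best)
    return order + active

rng = np.random.default_rng(1)
X = rng.uniform(0, 50, size=(12, 2))                   # 12 sensor locations (m)
z = np.sin(X[:, 0] / 10) + 0.1 * rng.normal(size=12)   # toy concentrations
print(removal_order(X, z))
```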

  1. A fourth order accurate finite difference scheme for the computation of elastic waves

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first-order system of equations for the velocities and stresses. The differencing is fourth-order accurate on the spatial derivatives and second-order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth-order scheme requires two-thirds to one-half the resolution of a typical second-order scheme to give comparable accuracy.
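
    For reference, generic fourth-order central stencils of the kind used for such spatial differencing are shown below; these are the textbook operators on a uniform grid, not necessarily the exact ones in the paper.

```latex
% Standard fourth-order central differences on a uniform grid:
\frac{\partial u}{\partial x}\Big|_i \approx
  \frac{-u_{i+2} + 8u_{i+1} - 8u_{i-1} + u_{i-2}}{12\,\Delta x},
\qquad
\frac{\partial^2 u}{\partial x^2}\Big|_i \approx
  \frac{-u_{i+2} + 16u_{i+1} - 30u_i + 16u_{i-1} - u_{i-2}}{12\,\Delta x^2}.
```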

  2. High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.

  3. Learning epistatic interactions from sequence-activity data to predict enantioselectivity

    NASA Astrophysics Data System (ADS)

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael

    2017-12-01

    Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
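
    The linear-versus-quadratic comparison can be sketched as below with support vector regression under repeated cross-validation; the data are synthetic stand-ins with one planted epistatic (pairwise) term, not the AnEH variant set.

```python
# Linear vs. degree-2 polynomial SVM regression under cross-validation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(136, 8)).astype(float)    # binary mutation encoding
y = X @ rng.normal(size=8) + 1.5 * X[:, 0] * X[:, 1]   # additive + epistatic term

for model in (SVR(kernel="linear"), SVR(kernel="poly", degree=2)):
    preds, trues = [], []
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
        model.fit(X[tr], y[tr])
        preds.extend(model.predict(X[te]))
        trues.extend(y[te])
    print(model.kernel, round(pearsonr(trues, preds)[0], 2))
```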

  4. Learning epistatic interactions from sequence-activity data to predict enantioselectivity

    NASA Astrophysics Data System (ADS)

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael

    2017-12-01

    Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.

  5. Learning epistatic interactions from sequence-activity data to predict enantioselectivity.

    PubMed

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K; Bodén, Mikael

    2017-12-01

    Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.

  6. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient for the calculation of higher order radiation effects, and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with good accuracy, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
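
    A minimal sketch of the quasi-Monte Carlo ingredient follows, using a scrambled Sobol sequence on a toy one-dimensional Feynman-parameter integrand; sector decomposition, contour deformation, and the CUDA/GPU layer are omitted, and the integrand's normalization is illustrative only.

```python
# Quasi-Monte Carlo estimate of a toy one-loop bubble in Feynman
# parametrization, using scipy's Sobol sampler.
import numpy as np
from scipy.stats import qmc

def integrand(x, m2=1.0, s=-0.5):
    # Euclidean-region bubble shape: denominator stays positive for s < 0.
    return 1.0 / (m2 - s * x * (1.0 - x))

sampler = qmc.Sobol(d=1, scramble=True, seed=7)
x = sampler.random_base2(m=14)[:, 0]   # 2**14 quasi-random points in [0, 1)
print(np.mean(integrand(x)))           # QMC estimate of the integral
```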

  7. Time integration algorithms for the two-dimensional Euler equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Slack, David C.; Whitaker, D. L.; Walters, Robert W.

    1994-01-01

    Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher-order accurate solutions using several mesh sizes; higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
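
    As one concrete example of the multistage explicit updates compared above, the sketch below implements a four-stage scheme of the low-storage Jameson type; the coefficients and the residual function are assumptions for illustration, not taken from the paper.

```python
# Four-stage low-storage Runge-Kutta update (Jameson-type coefficients).
import numpy as np

def rk4_stage_update(u, dt, residual, alphas=(0.25, 1/3, 0.5, 1.0)):
    """Multistage update u^{(k)} = u^n + alpha_k * dt * R(u^{(k-1)})."""
    u0 = u.copy()
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# Toy residual: linear decay du/dt = -u.
u = np.array([1.0, 2.0])
print(rk4_stage_update(u, 0.1, lambda v: -v))
```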

  8. Numerical solution of the wave equation with variable wave speed on nonconforming domains by high-order difference potentials

    NASA Astrophysics Data System (ADS)

    Britt, S.; Tsynkov, S.; Turkel, E.

    2018-02-01

    We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broda, Jill Terese

    The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the solution of a simplified version of this equation when automated is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this method may not be the most efficient. The one-dimensional, two-energy group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set used combines a second-order approximation with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets show that the use of a different order spatial flux shape approximation results in considerable loss in accuracy for the pressurized water reactor modeled. However, the loss in accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of the use of the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.

  10. Testing higher-order Lagrangian perturbation theory against numerical simulations. 2: Hierarchical models

    NASA Technical Reports Server (NTRS)

    Melott, A. L.; Buchert, T.; Weib, A. G.

    1995-01-01

    We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of scales. The Lagrangian theory of gravitational instability of Friedmann-Lemaitre cosmogonies is compared with numerical simulations. We study the dynamics of hierarchical models as a second step. In the first step we analyzed the performance of the Lagrangian schemes for pancake models, the difference being that in the latter models the initial power spectrum is truncated. This work probed the quasi-linear and weakly non-linear regimes. We here explore whether the results found for pancake models carry over to hierarchical models which are evolved deeply into the non-linear regime. We smooth the initial data by using a variety of filter types and filter scales in order to determine the optimal performance of the analytical models, as has been done for the 'Zel'dovich-approximation' - hereafter TZA - in previous work. We find that for spectra with negative power-index the second-order scheme performs considerably better than TZA in terms of statistics which probe the dynamics, and slightly better in terms of low-order statistics like the power-spectrum. However, in contrast to the results found for pancake models, where the higher-order schemes get worse than TZA at late non-linear stages and on small scales, we here find that the second-order model is as robust as TZA, retaining the improvement at later stages and on smaller scales. In view of these results we expect that the second-order truncated Lagrangian model is especially useful for the modelling of standard dark matter models such as Hot-, Cold-, and Mixed-Dark-Matter.

  11. Numerical experiments on the accuracy of ENO and modified ENO schemes

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1990-01-01

    Further numerical experiments are made to assess an accuracy degeneracy phenomenon. A modified essentially non-oscillatory (ENO) scheme is proposed, which recovers the correct order of accuracy for all the test problems with smooth initial conditions and gives results comparable to those of the original ENO schemes for discontinuous problems.

  12. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  13. High-precision calculations in strongly coupled quantum field theory with next-to-leading-order renormalized Hamiltonian Truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-10-01

    Hamiltonian Truncation (a.k.a. Truncated Spectrum Approach) is an efficient numerical technique to solve strongly coupled QFTs in d = 2 spacetime dimensions. Further theoretical developments are needed to increase its accuracy and the range of applicability. With this goal in mind, here we present a new variant of Hamiltonian Truncation which exhibits smaller dependence on the UV cutoff than other existing implementations, and yields more accurate spectra. The key idea for achieving this consists in integrating out exactly a certain class of high energy states, which corresponds to performing renormalization at the cubic order in the interaction strength. We test the new method on the strongly coupled two-dimensional quartic scalar theory. Our work will also be useful for the future goal of extending Hamiltonian Truncation to higher dimensions d ≥ 3.

  14. Effective potentials in nonlinear polycrystals and quadrature formulae

    NASA Astrophysics Data System (ADS)

    Michel, Jean-Claude; Suquet, Pierre

    2017-08-01

    This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471, 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.

  15. f and g series solutions to a post-Newtonian two-body problem with parameters β and γ

    NASA Astrophysics Data System (ADS)

    Qin, Song-He; Liu, Jing-Xi; Zhong, Ze-Hao; Xie, Yi

    2016-01-01

    Classical Newtonian f and g series for a Keplerian two-body problem are extended for the case of a post-Newtonian two-body problem with parameters β and γ. These two parameters are introduced to parameterize the post-Newtonian approximation of alternative theories of gravity, and they are both equal to 1 in general relativity. Up to the order of 30, we obtain all of the coefficients of the series in their exact forms without any cutoff for significant figures. The f and g series for the post-Newtonian two-body problem are also compared with a Runge-Kutta order 7 integrator. Although the f and g series have no superiority in terms of accuracy or efficiency at the order of 7, the difference in performance between the two methods is small. However, the f and g series have the advantage of flexibility for going to higher orders. Some examples of relativistic advance of periastron are given, and the effect of gravitational radiation on the scheme of f and g series is evaluated.
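
    For orientation, the classical Newtonian f and g series that the paper generalizes begin as follows (these are the standard Keplerian leading terms; the post-Newtonian coefficients involving β and γ are the paper's contribution and are not reproduced here):

```latex
% Classical Newtonian f and g series (leading terms), with
% \tau = t - t_0,\ r_0 = |\mathbf{r}_0|,\ \sigma_0 = \mathbf{r}_0 \cdot \mathbf{v}_0:
\mathbf{r}(t) = f\,\mathbf{r}_0 + g\,\mathbf{v}_0, \qquad
f = 1 - \frac{\mu\,\tau^{2}}{2\,r_0^{3}}
      + \frac{\mu\,\sigma_0\,\tau^{3}}{2\,r_0^{5}} + \cdots, \qquad
g = \tau - \frac{\mu\,\tau^{3}}{6\,r_0^{3}} + \cdots
```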

  16. Effective potentials in nonlinear polycrystals and quadrature formulae.

    PubMed

    Michel, Jean-Claude; Suquet, Pierre

    2017-08-01

    This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471 , 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.

  17. Influence of wave modelling on the prediction of fatigue for offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Veldkamp, H. F.; van der Tempel, J.

    2005-01-01

    Currently it is standard practice to use Airy linear wave theory combined with Morison's formula for the calculation of fatigue loads for offshore wind turbines. However, offshore wind turbines are typically placed in relatively shallow water depths of 5-25 m, where linear wave theory has limited accuracy and where ideally waves generated with the Navier-Stokes approach should be used. This article examines the differences in predicted fatigue for some representative offshore wind turbines when first-order, second-order and fully non-linear waves are used. The offshore wind turbines near Blyth are located in an area where non-linear wave effects are common. Measurements of these waves from the OWTES project are used to compare the different wave models with the real world in spectral form. Some attention is paid to whether the shape of a higher-order wave height spectrum (modified JONSWAP) corresponds to reality for other places in the North Sea, and to which values for the drag and inertia coefficients should be used.

  18. Design, analysis, and testing of high frequency passively damped struts

    NASA Technical Reports Server (NTRS)

    Yiu, Y. C.; Davis, L. Porter; Napolitano, Kevin; Ninneman, R. Rory

    1993-01-01

    Objectives of the research are: (1) to develop design requirements for damped struts to stabilize control system in the high frequency cross-over and spill-over range; (2) to design, fabricate and test viscously damped strut and viscoelastically damped strut; (3) to verify accuracy of design and analysis methodology of damped struts; and (4) to design and build test apparatus, and develop data reduction algorithm to measure strut complex stiffness. In order to meet the stringent performance requirements of the SPICE experiment, the active control system is used to suppress the dynamic responses of the low order structural modes. However, the control system also inadvertently drives some of the higher order modes unstable in the cross-over and spill-over frequency range. Passive damping is a reliable and effective way to provide damping to stabilize the control system. It also improves the robustness of the control system. Damping is designed into the SPICE testbed as an integral part of the control-structure technology.

  19. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.

  20. ON states as resource units for universal quantum computation with photonic architectures

    NASA Astrophysics Data System (ADS)

    Sabapathy, Krishna Kumar; Weedbrook, Christian

    2018-06-01

    Universal quantum computation using photonic systems requires gates whose Hamiltonians are of order greater than quadratic in the quadrature operators. We first review previous proposals to implement such gates, where specific non-Gaussian states are used as resources in conjunction with entangling gates such as the continuous-variable versions of controlled-phase and controlled-not gates. We then propose ON states, which are superpositions of the vacuum and the Nth Fock state, for use as non-Gaussian resource states. We show that ON states can be used to implement the cubic and higher-order quadrature phase gates to first order in gate strength. There are several advantages to this method, such as a reduced number of superpositions in the resource state preparation and greater control over the final gate. We also introduce useful figures of merit to characterize gate performance. Utilizing a supply of on-demand resource states, one can potentially scale up implementation to greater accuracy by repeated application of the basic circuit.
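
    Written out explicitly, an ON state is commonly defined as the normalized two-term superposition below, with N the Fock level and a a complex amplitude; the notation is assumed from the description above, not quoted from the paper.

```latex
% ON state: superposition of the vacuum and the N-th Fock state,
% with complex amplitude a (normalization written out):
|ON\rangle = \frac{1}{\sqrt{1+|a|^{2}}}\left(|0\rangle + a\,|N\rangle\right)
```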

  1. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  2. Treatment of pairing correlations based on the equations of motion for zero-coupled pair operators

    NASA Astrophysics Data System (ADS)

    Andreozzi, F.; Covello, A.; Gargano, A.; Ye, Liu Jian; Porrino, A.

    1985-07-01

    The pairing problem is treated by means of the equations of motion for zero-coupled pair operators. Exact equations for the seniority-v states of N particles are derived. These equations can be solved by a step-by-step procedure which consists of progressively adding pairs of particles to a core. The theory can be applied at several levels of approximation depending on the number of core states which are taken into account. Some numerical applications to the treatment of v=0, v=1, and v=2 states in the Ni isotopes are performed. The accuracy of various approximations is tested by comparison with exact results. For the seniority-one and seniority-two problems it turns out that the results obtained from the first-order theory are very accurate, while those of higher order calculations are practically exact. Concerning the seniority-zero problem, a fifth-order calculation reproduces quite well the three lowest states.

  3. Characterization of turbulent processes by the Raman lidar system BASIL during the HD(CP)2 observational prototype experiment - HOPE

    NASA Astrophysics Data System (ADS)

    Di Girolamo, Paolo; Summa, Donato; Stelitano, Dario; Cacciani, Marco; Scoccione, Andrea; Behrendt, Andreas; Wulfmeyer, Volker

    2017-02-01

    Measurements carried out by the Raman lidar system BASIL are reported to demonstrate the capability of this instrument to characterize turbulent processes within the Convective Boundary Layer (CBL). In order to resolve the vertical profiles of turbulent variables, high resolution water vapour and temperature measurements, with a temporal resolution of 10 sec and a vertical resolution of 90 and 30 m, respectively, are considered. Measurements of higher-order moments of the turbulent fluctuations of water vapour mixing ratio and temperature are obtained based on the application of spectral and auto-covariance analyses to the water vapour mixing ratio and temperature time series. The algorithms are applied to a case study (IOP 5, 20 April 2013) from the HD(CP)2 Observational Prototype Experiment (HOPE), held in Central Germany in the spring 2013. The noise errors are demonstrated to be small enough to allow the derivation of up to fourth-order moments for both water vapour mixing ratio and temperature fluctuations with sufficient accuracy.
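
    The moment estimates described above reduce, at their core, to central moments of the detrended fluctuation time series; the sketch below shows that core on synthetic data and deliberately omits the autocovariance-based noise correction applied in such lidar studies.

```python
# Central moments of fluctuations q' = q - <q> for a toy mixing-ratio series.
import numpy as np

def turbulence_moments(q):
    """Variance, skewness and kurtosis of the fluctuations."""
    qp = q - q.mean()
    m2 = np.mean(qp**2)               # second-order moment (variance)
    m3 = np.mean(qp**3) / m2**1.5     # skewness (third-order, normalized)
    m4 = np.mean(qp**4) / m2**2       # kurtosis (fourth-order, normalized)
    return m2, m3, m4

rng = np.random.default_rng(2)
q = 8.0 + 0.4 * rng.normal(size=360)  # 1 h of 10 s samples (g/kg), synthetic
print(turbulence_moments(q))
```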

  4. Practical Aerodynamic Design Optimization Based on the Navier-Stokes Equations and a Discrete Adjoint Method

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard

    1999-01-01

    Compressible and incompressible versions of a three-dimensional unstructured mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve this consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. It has been found that none of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that an incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations, this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes. An efficient surface parameterization based on a free-form deformation technique has been utilized and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing. In order for large scale optimization to become routine, the benefits of parallel architectures should be exploited. Although the flow solver has been parallelized using compiler directives, the parallel efficiency is under 50 percent. Clearly, parallel versions of the codes will have an immediate impact on the ability to design realistic configurations on fine meshes, and this effort is currently underway.

  5. Mind-reading accuracy in intimate relationships: assessing the roles of the relationship, the target, and the judge.

    PubMed

    Thomas, Geoff; Fletcher, Garth J O

    2003-12-01

    Using a video-review procedure, multiple perceivers carried out mind-reading tasks of multiple targets at different levels of acquaintanceship (50 dating couples, friends of the dating partners, and strangers). As predicted, the authors found that mind-reading accuracy was (a) higher as a function of increased acquaintanceship, (b) relatively unaffected by target effects, (c) influenced by individual differences in perceivers' ability, and (d) higher for female than male perceivers. In addition, superior mind-reading accuracy (for dating couples and friends) was related to higher relationship satisfaction, closeness, and more prior disclosure about the problems discussed, but only under moderating conditions related to sex and relationship length. The authors conclude that the nature of the relationship between the perceiver and the target occupies a pivotal role in determining mind-reading accuracy.

  6. Comments on Lambeck and Coleman - 'The earth's shape and gravity field: A report of progress from 1958 to 1982'

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Klosko, S. M.; Wagner, C. A.

    1986-01-01

    The accuracy and validation of global gravity models based on satellite data are discussed, responding to the statistical analysis of Lambeck and Coleman (1983) (LC). Included are an evaluation of the LC error spectra, a summary of independent-observation calibrations of the error estimates of the Goddard Earth Models (GEM) 9 and L2 (Lerch et al., 1977, 1979, 1982, 1983, and 1985), a comparison of GEM-L2 with GRIM-3B (Reigber et al., 1983), a comparison of recent models with LAGEOS laser ranging, and a summary of resonant-orbit model tests. It is concluded that the accuracy of GEMs 9, 10, and L2 is much higher than claimed by LC, that the GEMs are in good agreement with independent observations and with GRIM-3B, and that the GEM calibrations were adequate. In a reply by LC, a number of specific questions regarding the error estimates are addressed, and it is pointed out that the intermodel discrepancies of the greatest geophysical interest are those in the higher-order coefficients, not discussed in the present comment. It is argued that the differences among the geoid heights of even the most recent models are large enough to call for considerable improvements.

  7. Application of matched asymptotic expansions to lunar and interplanetary trajectories. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Lancaster, J. E.

    1973-01-01

    Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical solution to the N-body problem. The earlier first-order solutions, derived by the method of matched asymptotic expansions, have been extended to second order for the purpose of obtaining increased accuracy. The derivation of the second-order solution is summarized by showing the essential steps, some in functional form. The general asymptotic solution has been used as a basis for formulating a number of analytical two-point boundary value solutions. These include earth-to-moon, one- and two-impulse moon-to-earth, and interplanetary solutions. The results show that the accuracies of the asymptotic solutions range from an order of magnitude better than conic approximations to that of numerical integration itself. Also, since no iterations are required, the asymptotic boundary value solutions are obtained in a fraction of the time required for comparable numerically integrated solutions. The subject of minimizing the second-order error is discussed, and recommendations are made for further work directed toward achieving a uniform accuracy in all applications.

  8. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind-scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. Finally, the upwind scheme is shown to be robust and provide high-order accuracy.

  9. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    DOE PAGES

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    2017-09-28

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. Finally, the upwind scheme is shown to be robust and to provide high-order accuracy.

  10. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    NASA Astrophysics Data System (ADS)

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    2018-01-01

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and to provide high-order accuracy.

  11. High-density marker imputation accuracy in sixteen French cattle breeds.

    PubMed

    Hozé, Chris; Fouilloux, Marie-Noëlle; Venot, Eric; Guillaume, François; Dassonneville, Romain; Fritz, Sébastien; Ducrocq, Vincent; Phocas, Florence; Boichard, Didier; Croiseau, Pascal

    2013-09-03

    Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777,609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Five thousand one hundred and fifty-three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, which is probably due to genome assembly errors, and we recommend discarding them in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds. More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high density. No improvement was observed when multi-breed imputation was performed. In all breeds, imputation accuracy was higher than 97%, which indicates that imputation to the high-density chip was accurate. Imputation accuracy depends mainly on the size of the reference population and the relationship between reference and target populations.
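
    The validation logic above (mask high-density genotypes, impute, count mis-imputed alleles) reduces to a simple error-rate computation once imputed genotypes are available. The sketch below is a hedged illustration with 0/1/2 allele-count coding and synthetic data; the actual imputation step would be done externally (e.g., with Beagle), and all names here are illustrative.

    ```python
    import numpy as np

    def allele_error_rate(true_g, imputed_g, masked):
        """Fraction of wrongly imputed alleles over the masked genotypes.
        With 0/1/2 coding, |g_true - g_imp| counts mismatched alleles."""
        diff = np.abs(true_g[masked] - imputed_g[masked])
        return diff.sum() / (2.0 * masked.sum())

    rng = np.random.default_rng(0)
    true_g = rng.integers(0, 3, size=(500, 10_000))     # animals x SNPs
    masked = rng.random(true_g.shape) < 0.9             # HD-only SNPs hidden
    imputed_g = true_g.copy()
    flip = masked & (rng.random(true_g.shape) < 0.01)   # perturb ~1% of genotypes
    imputed_g[flip] = np.clip(imputed_g[flip] + 1, 0, 2)

    print(f"allele error rate: {allele_error_rate(true_g, imputed_g, masked):.4%}")
    ```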

  12. High-density marker imputation accuracy in sixteen French cattle breeds

    PubMed Central

    2013-01-01

    Background Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777 609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Methods Five thousand one hundred and fifty-three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Results Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, which is probably due to genome assembly errors, and we recommend discarding them in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds. More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high density. No improvement was observed when multi-breed imputation was performed. Conclusion In all breeds, imputation accuracy was higher than 97%, which indicates that imputation to the high-density chip was accurate. Imputation accuracy depends mainly on the size of the reference population and the relationship between reference and target populations. PMID:24004563

  13. Hybrid RANS-LES using high order numerical methods

    NASA Astrophysics Data System (ADS)

    Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael

    2017-11-01

    Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separated flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  14. A Reduced-Order Model for Efficient Simulation of Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2003-01-01

    A new reduced-order model of multidimensional synthetic jet actuators that combines the accuracy and conservation properties of full numerical simulation methods with the efficiency of simplified zero-order models is proposed. The multidimensional actuator is simulated by solving the time-dependent compressible quasi-1-D Euler equations, while the diaphragm is modeled as a moving boundary. The governing equations are approximated with a fourth-order finite difference scheme on a moving mesh such that one of the mesh boundaries coincides with the diaphragm. The reduced-order model of the actuator has several advantages. In contrast to the 3-D models, this approach provides conservation of mass, momentum, and energy. Furthermore, the new method is computationally much more efficient than the multidimensional Navier-Stokes simulation of the actuator cavity flow, while providing practically the same accuracy in the exterior flowfield. The most distinctive feature of the present model is its ability to predict the resonance characteristics of synthetic jet actuators; this is not practical when using the 3-D models because of the computational cost involved. Numerical results demonstrating the accuracy of the new reduced-order model and its limitations are presented.
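
    For reference, the compressible quasi-1D Euler system that such an actuator model solves has the standard duct-flow form below for cross-sectional area A(x); this is quoted for orientation only, and the paper's moving-mesh formulation adds mesh-velocity terms that are omitted here.

    ```latex
    % Quasi-1D Euler equations for a duct of area A(x); E is total energy per
    % unit volume. The diaphragm enters as a moving boundary of the domain.
    \begin{aligned}
    \partial_t(\rho A) + \partial_x(\rho u A) &= 0,\\
    \partial_t(\rho u A) + \partial_x\bigl((\rho u^{2} + p)A\bigr) &= p\,\partial_x A,\\
    \partial_t(E A) + \partial_x\bigl((E + p)\,u A\bigr) &= 0.
    \end{aligned}
    ```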

  15. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1991-01-01

    The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
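
    The filtering idea is easy to state in code: difference centrally everywhere, test a local smoothness indicator, and fall back to a robust stencil only at flagged points. The Python sketch below is a hedged toy version for scalar advection on a periodic grid; it uses first-order upwinding as a stand-in for the full ENO apparatus that the actual method invokes, and the detector threshold `tol` is an illustrative assumption.

    ```python
    import numpy as np

    def filtered_derivative(u, h, a=1.0, tol=2.0):
        """du/dx for u_t + a*u_x = 0: 4th-order central differences,
        with a local fallback where oscillations are suspected."""
        um2, um1 = np.roll(u, 2), np.roll(u, 1)    # periodic domain assumed
        up1, up2 = np.roll(u, -1), np.roll(u, -2)

        central4 = (-up2 + 8*up1 - 8*um1 + um2) / (12.0 * h)
        upwind1 = (u - um1) / h if a > 0 else (up1 - u) / h  # ENO stand-in

        d2 = np.abs(up1 - 2*u + um1)               # undivided 2nd differences
        flagged = d2 > tol * (np.median(d2) + 1e-14)
        return np.where(flagged, upwind1, central4)
    ```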

  16. Speech recognition for embedded automatic positioner for laparoscope

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin

    2014-07-01

    In this paper, a novel speech recognition methodology based on the Hidden Markov Model (HMM) is proposed for an embedded Automatic Positioner for Laparoscope (APL) built around a fixed-point ARM processor. The APL system is designed to assist the doctor in laparoscopic surgery by giving the doctor vocal control of the laparoscope. Real-time response to voice commands demands an efficient speech recognition algorithm for the APL. To reduce computational cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied in the method presented. First, relying mainly on arithmetic optimizations, a fixed-point front end for speech feature analysis is built to suit the ARM processor. Then, a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition stage. The experimental results show that the method keeps the recognition time under 0.5 s while the accuracy stays above 99%, demonstrating its ability to achieve real-time vocal control of the APL.

  17. Remembering and diagnosing clients: does experience matter?

    PubMed

    Witteman, Cilia L M; Tollenaar, Marieke S

    2012-01-01

    Experienced mental health clinicians often do not outperform novices in diagnostic decision making. In this paper we look for an explanation of this phenomenon by testing differences in memory processes. In two studies we aimed to look at differences in accuracy of diagnoses in relation to free recall of client information between mental health clinicians with different levels of experience. Clinicians were presented with two cases, and were asked afterwards, either directly (Study 1) or after 1 week (Study 2), to give the appropriate diagnoses and to write down what they remembered of the cases. We found in Study 1 that the accuracy of the diagnoses was the same for all levels of experience, as was the amount of details recalled. Very experienced clinicians did remember more higher-order concepts, that is, abstractions from the presented information. In Study 2 we found that the very experienced clinicians were less accurate in their diagnoses and remembered fewer details than the novices. In response to these findings we further discuss their implications for psychodiagnostic practice.

  18. Crowdsourcing-Assisted Radio Environment Database for V2V Communication.

    PubMed

    Katagiri, Keita; Sato, Koya; Fujii, Takeo

    2018-04-12

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
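
    The database construction step described above amounts to accumulating location-tagged RSSI samples into per-cell averages. The sketch below is a hedged Python illustration: transmitter and receiver coordinates are quantized to grid cells and the mean RSSI is stored per (tx-cell, rx-cell) pair; the grid size, cell size, and the choice to average directly in dB are all illustrative assumptions.

    ```python
    import numpy as np

    def build_power_map(tx_xy, rx_xy, rssi_dbm, cell=50.0, n_cells=20):
        """Mean RSSI per (tx cell, rx cell); NaN where nothing was measured."""
        def to_cell(xy):
            return np.clip(np.floor(xy / cell).astype(int), 0, n_cells - 1)

        ti, tj = to_cell(tx_xy[:, 0]), to_cell(tx_xy[:, 1])
        ri, rj = to_cell(rx_xy[:, 0]), to_cell(rx_xy[:, 1])

        total = np.zeros((n_cells,) * 4)       # indices: tx_i, tx_j, rx_i, rx_j
        count = np.zeros_like(total)
        np.add.at(total, (ti, tj, ri, rj), rssi_dbm)
        np.add.at(count, (ti, tj, ri, rj), 1.0)
        with np.errstate(invalid="ignore"):
            return total / count               # NaN marks unobserved geometries
    ```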

  19. Determination of heat capacity of ionic liquid based nanofluids using group method of data handling technique

    NASA Astrophysics Data System (ADS)

    Sadi, Maryam

    2018-01-01

    In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, with the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to estimate the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit higher accuracy when compared with the available theoretical correlations.
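
    The headline metric above is the mean absolute percentage error; a minimal helper, shown for clarity (variable names are illustrative):

    ```python
    import numpy as np

    def mape(y_true, y_pred):
        """Mean absolute percentage error, in percent."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

    # e.g. mape(cp_measured, cp_predicted) would give 1.38 on the training
    # set and 1.66 on the testing set for the model described above.
    ```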

  20. The generalized Sellmeier equation for air

    PubMed Central

    Voronin, A. A.; Zheltikov, A. M.

    2017-01-01

    We present a compact, uniform generalized Sellmeier-equation (GSE) description of air refraction and its dispersion that remains highly accurate within an ultrabroad spectral range from the ultraviolet to the long-wavelength infrared. While the standard Sellmeier equation (SSE) for atmospheric air is not intended for the description of air refractivity in the mid-infrared and long-wavelength infrared, failing beyond roughly 2.5 μm, our generalization of this equation is shown to agree remarkably well with full-scale air-refractivity calculations involving over half a million atmospheric absorption lines, providing a highly accurate description of air refractivity in the range of wavelengths from 0.3 to 13 μm. With its validity range being substantially broader than the applicability range of the SSE and its accuracy being at least an order of magnitude higher than the accuracy that the SSE can provide even within its validity range, the GSE-based approach offers a powerful analytical tool for the rapidly progressing mid- and long-wavelength-infrared optics of the atmosphere. PMID:28836624
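
    For orientation, the standard Sellmeier dispersion form that the GSE generalizes is shown below; the paper's generalized coefficients and extended functional form are not reproduced here.

    ```latex
    % Standard Sellmeier formula: refractivity as a sum of resonance terms
    % with strengths B_i and resonance wavelengths sqrt(C_i).
    n^{2}(\lambda) - 1 = \sum_{i} \frac{B_{i}\,\lambda^{2}}{\lambda^{2} - C_{i}}
    ```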

  1. Performance Measurement Of Mel Frequency Cepstral Coefficient (MFCC) Method In Learning System Of Al-Qur’an Based On Nagham Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Afrillia, Yesy; Mawengkang, Herman; Ramli, Marwan; Fadlisyah; Putra Fhonna, Rizky

    2017-12-01

    Most research to date has applied signal and speech processing to recognize makhraj patterns and tajwid reading of the Al-Quran by exploring the mel frequency cepstral coefficient (MFCC). To our knowledge, however, no research has been conducted to recognize the chanting of Al-Quran verses, known as nagham Al-Quran, using MFCC. The characteristics of the nagham Al-Quran pattern are much more complex than those of makhraj and tajwid patterns: in nagham the sound wave has more variation, which implies a much higher noise level and a longer sound duration. The test data in this research were collected by real-time recording. The evaluation of the system's performance on the nagham Al-Quran pattern is based on true and false detection parameters, with an accuracy of 80%. To improve this accuracy, it is necessary to modify the MFCC or to supply the learning process with more, and more varied, data.
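
    An MFCC front end of the kind discussed above can be prototyped in a few lines. The sketch below uses librosa for illustration; the file name, sampling rate and frame parameters are assumptions, and the study's own implementation may differ.

    ```python
    import librosa

    # Hypothetical recording of Al-Quran recitation; 16 kHz mono assumed.
    y, sr = librosa.load("nagham_sample.wav", sr=16000)

    # 13 MFCCs per frame with 25 ms windows and 10 ms hops.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)
    print(mfcc.shape)   # (13, n_frames): one feature vector per frame
    ```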

  2. Crowdsourcing-Assisted Radio Environment Database for V2V Communication †

    PubMed Central

    Katagiri, Keita; Fujii, Takeo

    2018-01-01

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation. PMID:29649174

  3. A geometrical defect detection method for non-silicon MEMS part based on HU moment invariants of skeleton image

    NASA Astrophysics Data System (ADS)

    Cheng, Xu; Jin, Xin; Zhang, Zhijing; Lu, Jun

    2014-01-01

    In order to improve the accuracy of geometrical defect detection, this paper presents a method based on HU moment invariants of skeleton images. The method has four steps: first, grayscale images of non-silicon MEMS parts are collected and converted into binary images; second, skeletons of the binary images are extracted using the medial-axis-transform method; third, HU moment invariants of the skeleton images are calculated; finally, the differences in HU moment invariants between measured parts and qualified parts are used to determine whether geometrical defects are present. To demonstrate the applicability of this method, experiments were carried out on both skeleton images and grayscale images. The results show that, for the same part defects, HU moment invariants of skeleton images are more sensitive than those of grayscale images, and the detection accuracy is higher. Therefore, this method can determine more accurately whether non-silicon MEMS parts are qualified, and can be applied to non-silicon MEMS part detection systems.
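
    The comparison step reduces to computing Hu moments of skeletonized binary images and differencing them against a qualified reference part. The sketch below is a hedged OpenCV illustration (cv2.ximgproc.thinning requires the opencv-contrib package); the file names and the defect threshold are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def hu_of_skeleton(binary_img):
        """Log-scaled Hu moment invariants of a skeletonized binary image."""
        skeleton = cv2.ximgproc.thinning(binary_img)   # medial-axis-like skeleton
        hu = cv2.HuMoments(cv2.moments(skeleton)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    part = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)        # hypothetical file
    ref = cv2.imread("qualified.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file
    _, part_bin = cv2.threshold(part, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    _, ref_bin = cv2.threshold(ref, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    score = np.abs(hu_of_skeleton(part_bin) - hu_of_skeleton(ref_bin)).sum()
    print("defective" if score > 0.5 else "qualified")   # threshold illustrative
    ```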

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon seasons (November - February) of 1975 until 2008. This study used the combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge network.

  5. Comparative study of landslides susceptibility mapping methods: Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN)

    NASA Astrophysics Data System (ADS)

    Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan

    2018-02-01

    As different approaches produce different results, it is crucial to determine which methods are accurate in order to analyse the event. The aim of this research is to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining zones susceptible to landslide hazard. The study is based on data obtained from various sources, such as the local authority Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggest that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably owing to its ability to learn from the environment, thus portraying a more realistic and accurate result.
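
    The Rank Reciprocal weighting used in the MCDM branch assigns each criterion a weight proportional to the reciprocal of its rank; a minimal sketch is given below (the number and identity of the ranked landslide factors are assumptions):

    ```python
    def rank_reciprocal_weights(n):
        """Weights w_r = (1/r) / sum_j (1/j) for criteria ranked 1..n."""
        inv = [1.0 / r for r in range(1, n + 1)]
        s = sum(inv)
        return [w / s for w in inv]

    print(rank_reciprocal_weights(5))
    # ~[0.438, 0.219, 0.146, 0.109, 0.088] for five ranked factors
    ```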

  6. TEC variations over North-western Balkan Peninsula before and during the seismic activity of 24th May 2009

    NASA Astrophysics Data System (ADS)

    Contadakis, M. E.; Arabelos, D. N.; Vergos, G.

    2012-04-01

    In this paper, the Total Electron Content (TEC) data of 8 Global Positioning System (GPS) stations of the EUREF network, 4 close to and 4 remote from the EQ epicentre, provided by IONOLAB (Turkey), were analysed using wavelet analysis and Discrete Fourier Analysis in order to investigate the TEC variations over the North-western Balkan Peninsula before and during the seismic activity of 24th of May, 2009. The main conclusions of this analysis are the following: (a) TEC oscillations in a broad range of frequencies occur randomly over a broad area of several hundred km from the earthquake, and (b) high-frequency oscillations (f ≥ 0.0003 Hz, periods T ≤ 60 min) seem to point to the location of the earthquake with questionable accuracy, but the fractal characteristics of the frequency distribution point to the locus of the earthquake with rather higher accuracy. We conclude that the LAIC mechanism, through acoustic or gravity waves, could explain this phenomenology.

  7. Assessing Complex Learning Objectives through Analytics

    NASA Astrophysics Data System (ADS)

    Horodyskyj, L.; Mead, C.; Buxner, S.; Semken, S. C.; Anbar, A. D.

    2016-12-01

    A significant obstacle to improving the quality of education is the lack of easy-to-use assessments of higher-order thinking. Most existing assessments focus on recall and understanding questions, which demonstrate lower-order thinking. Traditionally, higher-order thinking is assessed with practical tests and written responses, which are time-consuming to analyze and are not easily scalable. Computer-based learning environments offer the possibility of assessing such learning outcomes based on analysis of students' actions within an adaptive learning environment. Our fully online introductory science course, Habitable Worlds, uses an intelligent tutoring system that collects and responds to a range of behavioral data, including actions within the keystone project. This central project is a summative, game-like experience in which students synthesize and apply what they have learned throughout the course to identify and characterize a habitable planet from among hundreds of stars. Student performance is graded based on completion and accuracy, but two additional properties can be utilized to gauge higher-order thinking: (1) how efficient a student is with the virtual currency within the project and (2) how many of the optional milestones a student reached. In the project, students can use the currency to check their work and "unlock" convenience features. High-achieving students spend close to the minimum amount required to reach these goals, indicating a high level of concept mastery and efficient methodology. Average students spend more, indicating effort, but lower mastery. Low-achieving students were more likely to spend very little, which indicates low effort. Differences in these metrics were statistically significant between all three of these populations. We interpret this as evidence that high-achieving students develop and apply efficient problem-solving skills as compared to lower-achieving students who use more brute-force approaches.

  8. Predictive capacity of anthropometric indicators for dyslipidemia screening in children and adolescents.

    PubMed

    Quadros, Teresa Maria Bianchini; Gordia, Alex Pinheiro; Silva, Rosane Carla Rosendo; Silva, Luciana Rodrigues

    2015-01-01

    To analyze the predictive capacity of anthropometric indicators and their cut-off values for dyslipidemia screening in children and adolescents. This was a cross-sectional study involving 1139 children and adolescents, of both sexes, aged 6-18 years. Body weight, height, waist circumference, subscapular, and triceps skinfold thickness were measured. The body mass index and waist-to-height ratio were calculated. Children and adolescents exhibiting at least one of the following lipid alterations were defined as having dyslipidemia: elevated total cholesterol, low high-density lipoprotein, elevated low-density lipoprotein, and high triglyceride concentration. A receiver operating characteristic curve was constructed and the area under the curve, sensitivity, and specificity were calculated for the parameters analyzed. The prevalence of dyslipidemia was 62.1%. The waist-to-height ratio, waist circumference, subscapular, body mass index, and triceps skinfold thickness, in this order, presented the largest number of significant accuracies, ranging from 0.59 to 0.78. The associations of the anthropometric indicators with dyslipidemia were stronger among adolescents than among children. Significant differences between accuracies of the anthropometric indicators were only observed by the end of adolescence; the accuracy of waist-to-height ratio was higher than that of subscapular (p=0.048) for females, and the accuracy of waist circumference was higher than that of subscapular (p=0.029) and body mass index (p=0.012) for males. In general, the cut-off values of the anthropometric predictors of dyslipidemia increased with age, except for waist-to-height ratio. Sensitivity and specificity varied substantially between anthropometric indicators, ranging from 75.6 to 53.5 and from 75.0 to 50.0, respectively. The anthropometric indicators studied had little utility as screening tools for dyslipidemia, especially in children. Copyright © 2015 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
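
    The receiver operating characteristic analysis described above is straightforward to reproduce. The sketch below is a hedged illustration on synthetic data: the area under the curve plus the sensitivity/specificity pair at the Youden-index cut-off, for one indicator (waist-to-height ratio) against dyslipidemia status.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)
    dyslipidemia = rng.integers(0, 2, 300)                         # synthetic labels
    whtr = 0.45 + 0.05 * dyslipidemia + rng.normal(0, 0.05, 300)   # synthetic WHtR

    auc = roc_auc_score(dyslipidemia, whtr)
    fpr, tpr, cuts = roc_curve(dyslipidemia, whtr)
    j = np.argmax(tpr - fpr)                                       # Youden index
    print(f"AUC={auc:.2f}  cut-off={cuts[j]:.3f}  "
          f"sensitivity={tpr[j]:.2f}  specificity={1 - fpr[j]:.2f}")
    ```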

  9. An Implicit Measure of Associations with Mental Illness versus Physical Illness: Response Latency Decomposition and Stimuli Differential Functioning in Relation to IAT Order of Associative Conditions and Accuracy

    PubMed Central

    Mannarini, Stefania; Boffo, Marilisa

    2014-01-01

    The present study aimed at the definition of a latent measurement dimension underlying an implicit measure of automatic associations between the concept of mental illness and the psychosocial and biogenetic causal explanatory attributes. To this end, an Implicit Association Test (IAT) assessing the association between the Mental Illness and Physical Illness target categories to the Psychological and Biologic attribute categories, representative of the causal explanation domains, was developed. The IAT presented 22 stimuli (words and pictures) to be categorized into the four categories. After 360 university students completed the IAT, a Many-Facet Rasch Measurement (MFRM) modelling approach was applied. The model specified a person latency parameter and a stimulus latency parameter. Two additional parameters were introduced to denote the order of presentation of the task associative conditions and the general response accuracy. Beyond the overall definition of the latent measurement dimension, the MFRM was also applied to disentangle the effect of the task block order and the general response accuracy on the stimuli response latency. Further, the MFRM allowed detecting any differential functioning of each stimulus in relation to both block ordering and accuracy. The results evidenced: a) the existence of a latency measurement dimension underlying the Mental Illness versus Physical Illness - Implicit Association Test; b) significant effects of block order and accuracy on the overall latency; c) a differential functioning of specific stimuli. The results of the present study can contribute to a better understanding of the functioning of an implicit measure of semantic associations with mental illness and give a first blueprint for the examination of relevant issues in the development of an IAT. PMID:25000406

  10. An implicit measure of associations with mental illness versus physical illness: response latency decomposition and stimuli differential functioning in relation to IAT order of associative conditions and accuracy.

    PubMed

    Mannarini, Stefania; Boffo, Marilisa

    2014-01-01

    The present study aimed at the definition of a latent measurement dimension underlying an implicit measure of automatic associations between the concept of mental illness and the psychosocial and biogenetic causal explanatory attributes. To this end, an Implicit Association Test (IAT) assessing the association between the Mental Illness and Physical Illness target categories to the Psychological and Biologic attribute categories, representative of the causal explanation domains, was developed. The IAT presented 22 stimuli (words and pictures) to be categorized into the four categories. After 360 university students completed the IAT, a Many-Facet Rasch Measurement (MFRM) modelling approach was applied. The model specified a person latency parameter and a stimulus latency parameter. Two additional parameters were introduced to denote the order of presentation of the task associative conditions and the general response accuracy. Beyond the overall definition of the latent measurement dimension, the MFRM was also applied to disentangle the effect of the task block order and the general response accuracy on the stimuli response latency. Further, the MFRM allowed detecting any differential functioning of each stimulus in relation to both block ordering and accuracy. The results evidenced: a) the existence of a latency measurement dimension underlying the Mental Illness versus Physical Illness - Implicit Association Test; b) significant effects of block order and accuracy on the overall latency; c) a differential functioning of specific stimuli. The results of the present study can contribute to a better understanding of the functioning of an implicit measure of semantic associations with mental illness and give a first blueprint for the examination of relevant issues in the development of an IAT.

  11. Accuracy Rates of Ancestry Estimation by Forensic Anthropologists Using Identified Forensic Cases.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2017-07-01

    A common task in forensic anthropology involves the estimation of the ancestry of a decedent by comparing their skeletal morphology and measurements to skeletons of individuals from known geographic groups. However, the accuracy rates of ancestry estimation methods in actual forensic casework have rarely been studied. This article uses 99 forensic cases with identified skeletal remains to develop accuracy rates for ancestry estimations conducted by forensic anthropologists. The overall rate of correct ancestry estimation from these cases is 90.9%, which is comparable to most research-derived rates and those reported by individual practitioners. Statistical tests showed no significant difference in accuracy rates depending on examiner education level or on the estimated or identified ancestry. More recent cases showed a significantly higher accuracy rate. The incorporation of metric analyses into the ancestry estimate in these cases led to a higher accuracy rate. © 2017 American Academy of Forensic Sciences.

  12. Validation of a High-Order Prefactored Compact Scheme on Nonlinear Flows with Complex Geometries

    NASA Technical Reports Server (NTRS)

    Hixon, Ray; Mankbadi, Reda R.; Povinelli, L. A. (Technical Monitor)

    2000-01-01

    Three benchmark problems are solved using a sixth-order prefactored compact scheme employing an explicit 10th-order filter with optimized fourth-order Runge-Kutta time stepping. The problems solved are the following: (1) propagation of sound waves through a transonic nozzle; (2) shock-sound interaction; and (3) single airfoil gust response. In the first two problems, the spatial accuracy of the scheme is tested on a stretched grid, and the effectiveness of boundary conditions is shown. The solution stability and accuracy near a shock discontinuity is shown as well. Also, 1-D nonlinear characteristic boundary conditions will be evaluated. In the third problem, a nonlinear Euler solver will be used that solves the equations in generalized curvilinear coordinates using the chain rule transformation. This work, continuing earlier work on flat-plate cascades and Joukowski airfoils, will focus mainly on the effect of the grid and boundary conditions on the accuracy of the solution. The grids were generated using a commercially available grid generator, GridPro/az3000.

  13. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirement. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares-optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions, and thus can provide more accurate wavefields.
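
    For orientation, the classical fourth-order compact scheme below is representative of the implicit spatial discretizations discussed; the paper's Taylor-based and least-squares-optimised coefficients generalize this form. Solving the resulting tridiagonal system couples all nodal derivatives, which is what makes the spatial scheme implicit.

    ```latex
    % Fourth-order compact (implicit) first-derivative approximation on a
    % uniform grid of spacing h.
    \frac{1}{4}\,f'_{i-1} + f'_{i} + \frac{1}{4}\,f'_{i+1}
      = \frac{3}{4h}\left(f_{i+1} - f_{i-1}\right)
    ```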

  14. Navigation strategy and filter design for solar electric missions

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Hagar, H., Jr.

    1972-01-01

    Methods which have been proposed to improve the navigation accuracy of low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMU), which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications, dynamic model compensation (DMC), is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro-platform alignment of 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
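
    For reference, the standard first-order Gauss-Markov model for an unknown acceleration component ε(t) is given below, with correlation time τ and white noise w(t); the second-order case stacks two such states.

    ```latex
    \dot{\varepsilon}(t) = -\frac{\varepsilon(t)}{\tau} + w(t),
    \qquad
    E\!\left[\varepsilon(t)\,\varepsilon(t+\Delta t)\right]
      = \sigma^{2} e^{-\lvert \Delta t\rvert/\tau}
    ```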

  15. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.

  16. Order statistics applied to the most massive and most distant galaxy clusters

    NASA Astrophysics Data System (ADS)

    Waizmann, J.-C.; Ettori, S.; Bartelmann, M.

    2013-06-01

    In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to the extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) is sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift is particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that, for the combination of the largest and the second largest observation, they are most likely to be realized with similar values, with a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the meta-catalogue of X-ray detected clusters of galaxies, and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
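
    The classical result underlying this framework: for n independent observations with cumulative distribution function F(x), the r-th smallest observation X_(r) is distributed as below (the largest corresponds to r = n). The steepening of the distributions with increasing order noted above follows directly from this form.

    ```latex
    F_{(r)}(x) = \Pr\!\left[X_{(r)} \le x\right]
               = \sum_{j=r}^{n} \binom{n}{j}\, F(x)^{j}\,\bigl(1 - F(x)\bigr)^{\,n-j}
    ```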

  17. The theoretical accuracy of Runge-Kutta time discretizations for the initial boundary value problem: A careful study of the boundary error

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun

    1993-01-01

    The conventional method of imposing time dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle, (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.

  18. Cell-centered high-order hyperbolic finite volume method for diffusion equation on unstructured grids

    NASA Astrophysics Data System (ADS)

    Lee, Euntaek; Ahn, Hyung Taek; Luo, Hong

    2018-02-01

    We apply a hyperbolic cell-centered finite volume method to solve a steady diffusion equation on unstructured meshes. This method, originally proposed by Nishikawa using a node-centered finite volume method, reformulates the elliptic nature of viscous fluxes into a set of augmented equations that makes the entire system hyperbolic. We introduce an efficient and accurate solution strategy for the cell-centered finite volume method. To obtain high-order accuracy for both solution and gradient variables, we use a successive order solution reconstruction: constant, linear, and quadratic (k-exact) reconstruction with an efficient reconstruction stencil, a so-called wrapping stencil. By virtue of the cell-centered scheme, the source term evaluation is greatly simplified regardless of the solution order. For uniform schemes, we obtain the same order of accuracy, i.e., first, second, and third orders, for both the solution and its gradient variables. For hybrid schemes, recycling the gradient variable information for solution variable reconstruction makes one order of additional accuracy, i.e., second, third, and fourth orders, possible for the solution variable with less computational work than needed for uniform schemes. In general, the hyperbolic method can be an effective solution technique for diffusion problems, but instability is also observed for the discontinuous diffusion coefficient cases, which calls for further investigation of the monotonicity-preserving hyperbolic diffusion method.
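
    The hyperbolic reformulation referred to above (due to Nishikawa) replaces the diffusion equation u_t = ν u_xx by a first-order relaxation system; the 1D form is shown below for orientation. The gradient variable p relaxes to u_x over the relaxation time T_r, so the steady state recovers the original diffusion problem.

    ```latex
    \partial_t u = \nu\,\partial_x p,
    \qquad
    \partial_t p = \frac{\partial_x u - p}{T_r}
    ```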

  19. Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard

    PubMed Central

    Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu

    2011-01-01

    Our research is motivated by 2 methodological problems in assessing diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown—imperfect gold standard bias and ordinal scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status had an ordered multiple class. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy and alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example on assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155

  20. An accuracy assessment of positions obtained using survey- and recreational-grade Global Positioning System receivers across a range of forest conditions within the Tanana Valley of interior Alaska

    Treesearch

    Hans-Erik Andersen; Tobey Clarkin; Ken Winterberger; Jacob Strunk

    2009-01-01

    The accuracy of recreational- and survey-grade global positioning system (GPS) receivers was evaluated across a range of forest conditions in the Tanana Valley of interior Alaska. High-accuracy check points, established using high-order instruments and closed-traverse surveying methods, were then used to evaluate the accuracy of positions acquired in different forest...

  1. Discussion on accuracy degree evaluation of accident velocity reconstruction model

    NASA Astrophysics Data System (ADS)

    Zou, Tiefang; Dai, Yingbiao; Cai, Ming; Liu, Jike

    In order to investigate the applicability of accident velocity reconstruction models in different cases, a method for evaluating the accuracy degree of such models is given. Based on the theoretical and the calculated pre-crash velocities, an accuracy degree evaluation formula is obtained. In a numerical simulation case, the accuracy degrees and applicability of two accident velocity reconstruction models are analyzed; the results show that the method is feasible in practice.

  2. Model-order reduction of lumped parameter systems via fractional calculus

    NASA Astrophysics Data System (ADS)

    Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio

    2018-04-01

    This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
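
    One common operator in such fractional-order models is the Caputo derivative, shown below for 0 < α < 1 for orientation; the reduced-order models discussed above allow the order of the operator to be complex and frequency-dependent.

    ```latex
    {}^{C}\!D_{t}^{\alpha} f(t)
      = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(s)}{(t-s)^{\alpha}}\, ds
    ```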

  3. Spot diameters for scanning photorefractive keratectomy: a comparative study

    NASA Astrophysics Data System (ADS)

    Manns, Fabrice; Parel, Jean-Marie A.

    1998-06-01

    Purpose: The purpose of this study was to compare with computer simulations the duration, smoothness and accuracy of scanning photo-refractive keratectomy with spot diameters ranging from 0.2 to 1 mm. Methods: We calculated the number of pulses per diopter of flattening for spot sizes varying from 0.2 to 1 mm. We also computed the corneal shape after the correction of 4 diopters of myopia and 4 diopters of astigmatism with a 6 mm ablation zone and a spot size of 0.4 mm with 600 mJ/cm2 peak radiant exposure and 0.8 mm with 300 mJ/cm2 peak radiant exposure. The accuracy and smoothness of the ablations were compared. Results: The repetition rate required to produce corrections of myopia with a 6 mm ablation zone in a duration of 5 s per diopter is on the order of 1 kHz for spot sizes smaller than 0.5 mm, and of 100 Hz for spot sizes larger than 0.5 mm. The accuracy and smoothness after the correction of myopia and astigmatism with small and large spot sizes were not significantly different. Conclusions: This study seems to indicate that there is no theoretical advantage to using either smaller spots with higher radiant exposures or larger spots with lower radiant exposures. However, at fixed radiant exposure, treatments with smaller spots require a longer duration of surgery but provide better accuracy for the correction of astigmatism.

  4. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

    The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence estimation assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.

  5. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  6. [Risk symptoms of psychosis in the young].

    PubMed

    Laajasalo, Taina; Huttunen, Matti; Lindgren, Maija; Manninen, Marko; Mustonen, Ulla; Suvisaari, Jaana; Therman, Sebastian

    2010-01-01

    Early intervention may postpone or even prevent the onset of psychosis and relieve symptom-related anxiety. Support and follow-up require up-to-date knowledge of the nature of psychosis risk symptoms and of how symptomatic individuals are cared for within the healthcare system. Healthcare professionals should be aware of the limitations of current research in order to gauge the magnitude of psychosis risk correctly. Although a person assigned to the risk group by current methods has a more than tenfold risk compared with the rest of the population, improving prognostic accuracy remains the central research issue.

  7. A boundary element method for steady incompressible thermoviscous flow

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.

    1991-01-01

    A boundary element formulation is presented for moderate Reynolds number, steady, incompressible, thermoviscous flows. The governing integral equations are written exclusively in terms of velocities and temperatures, thus eliminating the need for the computation of any gradients. Furthermore, with the introduction of reference velocities and temperatures, volume modeling can often be confined to only a small portion of the problem domain, typically near obstacles or walls. The numerical implementation includes higher order elements, adaptive integration and multiregion capability. Both the integral formulation and implementation are discussed in detail. Several examples illustrate the high level of accuracy that is obtainable with the current method.

  8. Numerical comparison of Riemann solvers for astrophysical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Klingenberg, Christian; Schmidt, Wolfram; Waagan, Knut

    2007-11-01

    This work compares a new positive and entropy-stable approximate Riemann solver by Francois Bouchut with a state-of-the-art algorithm for astrophysical fluid dynamics. We implemented the new Riemann solver in an astrophysical PPM code, the Prometheus code, and also made a version with a different, more theoretically grounded higher-order algorithm than PPM. We present shock tube tests, two-dimensional instability tests, and forced turbulence simulations in three dimensions. We find subtle differences between the codes in the shock tube tests and in the statistics of the turbulence simulations. The new Riemann solver increases the computational speed without significant loss of accuracy.
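
    For readers unfamiliar with the class of solvers being compared, the sketch below shows the simple HLL flux for the 1D Euler equations. Bouchut's solver is a more sophisticated relaxation scheme with positivity and entropy-stability guarantees, so this is only a representative of the genre, not the solver tested in the paper.

    ```python
    import numpy as np

    def hll_flux(uL, uR, gamma=1.4):
        # HLL approximate Riemann flux for u = (rho, rho*v, E).
        def prim(u):
            rho, mom, E = u
            v = mom / rho
            p = (gamma - 1.0) * (E - 0.5 * rho * v * v)
            return rho, v, p

        def flux(u):
            rho, v, p = prim(u)
            return np.array([rho * v, rho * v * v + p, (u[2] + p) * v])

        rhoL, vL, pL = prim(uL); cL = np.sqrt(gamma * pL / rhoL)
        rhoR, vR, pR = prim(uR); cR = np.sqrt(gamma * pR / rhoR)
        sL = min(vL - cL, vR - cR)   # fastest left-going wave speed estimate
        sR = max(vL + cL, vR + cR)   # fastest right-going wave speed estimate
        if sL >= 0.0:
            return flux(uL)
        if sR <= 0.0:
            return flux(uR)
        return (sR * flux(uL) - sL * flux(uR) + sL * sR * (uR - uL)) / (sR - sL)

    uL = np.array([1.0, 0.0, 2.5])     # Sod left state (rho=1, v=0, p=1)
    uR = np.array([0.125, 0.0, 0.25])  # Sod right state (rho=0.125, v=0, p=0.1)
    print(hll_flux(uL, uR))
    ```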

  9. Numerical method for solving the nonlinear four-point boundary value problems

    NASA Astrophysics Data System (ADS)

    Lin, Yingzhen; Lin, Jinnan

    2010-12-01

    In this paper, a new reproducing kernel space is constructed in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed as a series, and the approximate solution of the nonlinear problem is given by an iterative formula. Compared with known investigations, the advantages of our method are that the representation of the exact solution is obtained in a new reproducing kernel Hilbert space and that the accuracy of the numerical computation is higher. We also present a convergence theorem, complexity analysis, and error estimates. The performance of the new method is illustrated with several numerical examples.

  10. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  11. Millimeter and Submillimeter Wave Spectroscopy of Higher Energy Conformers of 1,2-PROPANEDIOL

    NASA Astrophysics Data System (ADS)

    Zakharenko, Olena; Bossa, Jean-Baptiste; Lewen, Frank; Schlemmer, Stephan; Müller, Holger S. P.

    2017-06-01

    We have performed a study of the millimeter/submillimeter wave spectrum of four higher energy conformers of 1,2-propanediol, continuing a previous study of the three lowest energy conformers (J.-B. Bossa, M. H. Ordu, H. S. P. Müller, F. Lewen, S. Schlemmer, A&A 570 (2014) A12). The present analysis of rotational transitions, carried out in the frequency range 38-400 GHz, represents a significant extension of the previous microwave work. The new data were combined with previously measured microwave transitions and fitted using Watson's S-reduced Hamiltonian. The final fits were within experimental accuracy and included spectroscopic parameters up to sixth order in angular momentum for the ground states of the four higher energy conformers beyond those previously studied: g'Ga, gG'g', aGg', and g'Gg. The present analysis provides reliable frequency predictions for the astrophysical detection of 1,2-propanediol by radio telescope arrays at millimeter wavelengths.

  12. Machine learning approach for automated screening of malaria parasite using light microscopic images.

    PubMed

    Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan

    2013-02-01

    The aim of this paper is to address the development of computer-assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. Microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection, and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker-controlled watershed transformation, and a total of ninety-six features describing the shape, size, and texture of erythrocytes are extracted for infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating the six classes. A feature selection-cum-classification scheme is devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, 84%, for malaria classification using the 19 most significant features, while the SVM achieves 83.5% with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework is compared for malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.
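
    A schematic of the F-statistic feature-selection-plus-SVM arm of the scheme, with synthetic stand-in data; the real features, labels, and tuning are those of the paper, not these placeholders:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for 96 shape/size/texture features over 6 stage classes.
    X, y = make_classification(n_samples=600, n_features=96, n_informative=19,
                               n_classes=6, n_clusters_per_class=1, random_state=0)

    pipeline = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=9),  # keep the k most discriminative features
        SVC(kernel="rbf"),
    )
    print("CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
    ```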

  13. Assessing Videogrammetry for Static Aeroelastic Testing of a Wind-Tunnel Model

    NASA Technical Reports Server (NTRS)

    Spain, Charles V.; Heeg, Jennifer; Ivanco, Thomas G.; Barrows, Danny A.; Florance, James R.; Burner, Alpheus W.; DeMoss, Joshua; Lively, Peter S.

    2004-01-01

    The Videogrammetric Model Deformation (VMD) technique, developed at NASA Langley Research Center, was recently used to measure displacements and local surface angle changes on a static aeroelastic wind-tunnel model. The results were assessed for consistency, accuracy, and usefulness. Vertical displacement measurements and surface angular deflections (derived from vertical displacements) taken at no-wind/no-load conditions were analyzed. For accuracy assessment, angular measurements were compared to those from a highly accurate accelerometer. Shewhart's variables control charts were used to assess consistency and uncertainty. Some bad data points were discovered, and the measurement results at certain targets were more consistent than at others; physical explanations for this lack of consistency have not been determined. Overall, however, the measurements were sufficiently accurate to be very useful in monitoring wind-tunnel model aeroelastic deformation and determining flexible stability and control derivatives. After a structural model component failed during a highly loaded condition, analysis of VMD data clearly indicated progressive structural deterioration as the wind-tunnel condition where failure occurred was approached. As a result, subsequent testing successfully incorporated near-real-time monitoring of VMD data to ensure structural integrity. The potential for higher levels of consistency and accuracy through the use of statistical quality control practices is discussed, and such practices are recommended for future applications.
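
    Shewhart individuals charts of the kind used here reduce to a center line at the mean and control limits at ±2.66 times the average moving range; a minimal sketch with hypothetical target readings:

    ```python
    import numpy as np

    def shewhart_individuals_limits(x):
        # Individuals (X) chart: limits from the average moving range.
        # 2.66 = 3 / d2 with d2 = 1.128 for moving ranges of size 2.
        x = np.asarray(x, dtype=float)
        mr = np.abs(np.diff(x)).mean()   # average moving range
        center = x.mean()
        return center - 2.66 * mr, center, center + 2.66 * mr

    # Hypothetical repeated no-wind target displacement measurements (mm).
    lo, center, hi = shewhart_individuals_limits(
        [0.02, 0.01, 0.03, 0.02, 0.05, 0.01, 0.02])
    print(f"LCL={lo:.3f}, CL={center:.3f}, UCL={hi:.3f}")
    ```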

  14. Spectral reflectance inversion with high accuracy on green target

    NASA Astrophysics Data System (ADS)

    Jiang, Le; Yuan, Jinping; Li, Yong; Bai, Tingzhu; Liu, Shuoqiong; Jin, Jianzhou; Shen, Jiyun

    2016-09-01

    Using Landsat-7 ETM+ remote sensing data, the inversion of the spectral reflectance of green wheat in the visible and near-infrared wavebands in Yingke, China is studied. To solve the problem of low inversion accuracy, a custom atmospheric-conditions method based on the MODerate resolution atmospheric TRANsmission model (MODTRAN) is put forward, in which real atmospheric parameters are taken into account. The atmospheric radiative transfer theory used to calculate the atmospheric parameters is introduced first, and then the inversion process for spectral reflectance is illustrated in detail. Finally, the inversion result is compared with that of the simulated-atmospheric-conditions method widely used by previous researchers. The comparison shows that the inversion accuracy of the proposed method is higher in all inversion bands; the inverted spectral reflectance curve is more similar to the measured reflectance curve of wheat and better reflects the spectral reflectance characteristics of green plants, which differ markedly from those of green artificial targets. Thus, whether a green target is a plant or an artificial target can be judged by reflectance inversion based on remote sensing imagery. This research is helpful for detecting green artificial targets hidden in greenery, which has great significance for the precise strike of green-camouflaged weapons in the military field.
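
    The inversion step itself is the standard single-layer atmospheric-correction relation; in the custom method, the path radiance, transmittances, and diffuse irradiance come from a MODTRAN run driven by the real atmospheric parameters. A hedged sketch with illustrative variable names (not the paper's notation):

    ```python
    import math

    def invert_reflectance(L_sensor, L_path, E_sun, cos_sun_zenith,
                           tau_down, tau_up, E_down):
        # Surface reflectance from at-sensor radiance:
        # rho = pi * (L_sensor - L_path) / (tau_up * (E_sun*cos(theta)*tau_down + E_down))
        ground_irradiance = E_sun * cos_sun_zenith * tau_down + E_down
        return math.pi * (L_sensor - L_path) / (tau_up * ground_irradiance)

    # Illustrative numbers only (radiances in W/m^2/sr/um, irradiances in W/m^2/um).
    print(invert_reflectance(L_sensor=80.0, L_path=12.0, E_sun=1850.0,
                             cos_sun_zenith=0.9, tau_down=0.80, tau_up=0.85,
                             E_down=50.0))
    ```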

  15. IEEE 802.15.4 ZigBee-Based Time-of-Arrival Estimation for Wireless Sensor Networks.

    PubMed

    Cheon, Jeonghyeon; Hwang, Hyunsu; Kim, Dongsun; Jung, Yunho

    2016-02-05

    Precise time-of-arrival (TOA) estimation is one of the most important techniques in RF-based positioning systems that use wireless sensor networks (WSNs). Because the accuracy of TOA estimation improves with the RF signal bandwidth, using a broad bandwidth is the most fundamental approach to achieving higher accuracy. Hence, ultra-wideband (UWB) systems with a bandwidth of 500 MHz are commonly used. However, wireless systems with broad bandwidth suffer from high complexity and high power consumption, which makes them difficult to employ in many WSN applications. In this paper, we present a precise TOA estimation algorithm using an IEEE 802.15.4 ZigBee system with a narrow bandwidth of 2 MHz. To overcome the lack of bandwidth, the proposed algorithm estimates the fractional TOA within the sampling interval. Simulation results show that the proposed TOA estimation algorithm provides an accuracy of 0.5 m at a signal-to-noise ratio (SNR) of 8 dB and achieves an SNR gain of 5 dB compared with the existing algorithm. In addition, experimental results indicate that the proposed algorithm provides accurate TOA estimation in a real indoor environment.
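
    One generic way to estimate a TOA finer than the sample grid is to interpolate the cross-correlation peak; the sketch below uses parabolic (three-point) interpolation and is a stand-in for, not a reproduction of, the paper's fractional-TOA algorithm.

    ```python
    import numpy as np

    def fractional_toa(rx, template, fs):
        # Cross-correlate and locate the integer-sample peak (assumed not at
        # the array edge), then refine with a parabolic fit through the peak
        # and its two neighbours to obtain a sub-sample lag.
        corr = np.correlate(rx, template, mode="full")
        k = int(np.argmax(corr))
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex offset in samples
        lag = k - (len(template) - 1) + delta
        return lag / fs  # TOA in seconds

    fs = 4e6                                        # hypothetical 4 MS/s sampling
    template = np.random.default_rng(0).standard_normal(64)
    rx = np.concatenate([np.zeros(37), template])   # true delay: 37 samples
    print(fractional_toa(rx, template, fs))         # ~9.25e-6 s (37 / 4e6)
    ```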

  16. Development of Standard Reference Materials to support assessment of iodine status for nutritional and public health purposes.

    PubMed

    Long, Stephen E; Catron, Brittany L; Boggs, Ashley Sp; Tai, Susan Sc; Wise, Stephen A

    2016-09-01

    The use of urinary iodine as an indicator of iodine status relies in part on the accuracy of the analytical measurement of iodine in urine. Likewise, the use of dietary iodine intake as an indicator of iodine status relies in part on the accuracy of the analytical measurement of iodine in dietary sources, including foods and dietary supplements. Similarly, the use of specific serum biomarkers of thyroid function to screen for both iodine deficiency and iodine excess relies in part on the accuracy of the analytical measurement of those biomarkers. The National Institute of Standards and Technology has been working with the NIH Office of Dietary Supplements for several years to develop higher-order reference measurement procedures and Standard Reference Materials to support the validation of new routine analytical methods for iodine in foods and dietary supplements, for urinary iodine, and for several serum biomarkers of thyroid function including thyroid-stimulating hormone, thyroglobulin, total and free thyroxine, and total and free triiodothyronine. These materials and methods have the potential to improve the assessment of iodine status and thyroid function in observational studies and clinical trials, thereby promoting public health efforts related to iodine nutrition. © 2016 American Society for Nutrition.

  17. High-order cyclo-difference techniques: An alternative to finite differences

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Otto, John C.

    1993-01-01

    The summation-by-parts energy norm is used to establish a new class of high-order finite-difference techniques referred to here as 'cyclo-difference' techniques. These techniques are constructed cyclically from stable subelements, and require no numerical boundary conditions; when coupled with the simultaneous approximation term (SAT) boundary treatment, they are time asymptotically stable for an arbitrary hyperbolic system. These techniques are similar to spectral element techniques and are ideally suited for parallel implementation, but do not require special collocation points or orthogonal basis functions. The principal focus is on methods of sixth-order formal accuracy or less; however, these methods could be extended in principle to any arbitrary order of accuracy.
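
    The summation-by-parts property underpinning both the cyclo-difference and SAT constructions can be checked numerically; the following sketch builds the classical second-order SBP first-derivative operator and verifies Q + Q^T = B:

    ```python
    import numpy as np

    # Second-order SBP first-derivative operator on n points, spacing h.
    n, h = 8, 1.0
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])   # diagonal norm matrix
    D = np.zeros((n, n))
    D[0, :2] = [-1.0, 1.0]                             # one-sided boundary rows
    D[-1, -2:] = [-1.0, 1.0]
    for i in range(1, n - 1):                          # central interior stencil
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    D /= h

    Q = H @ D
    B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
    assert np.allclose(Q + Q.T, B)   # SBP: H*D + (H*D)^T = diag(-1, 0, ..., 0, 1)
    print("SBP property verified")
    ```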

  18. On the accuracy of Whitham's method. [for steady ideal gas flow past cones

    NASA Technical Reports Server (NTRS)

    Zahalak, G. I.; Myers, M. K.

    1974-01-01

    The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.

  19. Boundary Closures for Fourth-order Energy Stable Weighted Essentially Non-Oscillatory Finite Difference Schemes

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.; Yamaleev, Nail K.; Frankel, Steven H.

    2009-01-01

    A general strategy exists for constructing Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference schemes up to eighth order on periodic domains. These ESWENO schemes satisfy an energy-norm stability proof for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, boundary closures are developed for the fourth-order ESWENO scheme that maintain, wherever possible, the WENO stencil-biasing properties while satisfying the summation-by-parts (SBP) operator convention, thereby ensuring stability in an L2 norm. Second- and third-order boundary closures are developed that achieve stability in diagonal and block norms, respectively. The global accuracy is three for the second-order closures and four for the third-order closures. A novel set of non-uniform flux interpolation points is necessary near the boundaries to simultaneously achieve (1) accuracy, (2) the SBP convention, and (3) WENO stencil-biasing mechanics.

  20. Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods, such as finite-difference and finite-volume methods, for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction is reduced to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be predetermined once and for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to degree of precision five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter will be determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions. The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.
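
    A one-dimensional caricature of the universal-weights idea: partition one reference cell into two control volumes and invert, once, the small matrix mapping polynomial coefficients to CV averages; the same inverse then serves every geometrically similar cell.

    ```python
    import numpy as np

    # Reconstruct the unique linear polynomial p(x) = a + b*x on the reference
    # cell [0, 1] from its averages over two CVs, [0, 0.5] and [0.5, 1].
    nodes = np.array([0.0, 0.5, 1.0])              # CV boundaries
    # The average of p(x) = a + b*x over [xl, xr] is a + b*(xl + xr)/2.
    A = np.array([[1.0, (nodes[0] + nodes[1]) / 2],
                  [1.0, (nodes[1] + nodes[2]) / 2]])
    W = np.linalg.inv(A)                           # universal reconstruction weights

    cv_averages = np.array([1.0, 2.0])             # sample data: averages on the CVs
    a, b = W @ cv_averages
    print(a, b)                                    # -> 0.5 2.0, i.e. p(x) = 0.5 + 2x
    ```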
