Sample records for order difference methods

  1. The Adams formulas for numerical integration of differential equations from 1st to 20th order

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, J. C.

    1976-01-01

    The Adams-Bashforth predictor coefficients and the Adams-Moulton corrector coefficients for the integration of differential equations are presented for methods of 1st to 20th order. The order of the method as presented refers to the highest order difference formula used in Newton's backward difference interpolation formula, on which the Adams method is based. The Adams method is a polynomial approximation method derived from Newton's backward difference interpolation formula. The Newton formula is derived and expanded to 20th order. The Adams predictor and corrector formulas are derived and expressed in terms of differences of the derivatives, as well as in terms of the derivatives themselves. All coefficients are given to 18 significant digits. For the difference formula only, the ratio coefficients are given to 10th order.
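
For orientation, a brief sketch of the backward-difference form of the Adams formulas referred to above, in standard textbook notation (the report's own derivation and 18-digit coefficient tables are not reproduced here):

$$y_{n+1} = y_n + h\sum_{k=0}^{m-1}\gamma_k\,\nabla^k f_n,\qquad \gamma_k = (-1)^k\int_0^1\binom{-s}{k}\,ds \quad(\gamma_0=1,\ \gamma_1=\tfrac12,\ \gamma_2=\tfrac{5}{12},\ \gamma_3=\tfrac38,\ \dots)$$

for the Adams-Bashforth predictor, and

$$y_{n+1} = y_n + h\sum_{k=0}^{m}\gamma_k^{*}\,\nabla^k f_{n+1},\qquad \gamma_k^{*} = (-1)^k\int_{-1}^{0}\binom{-s}{k}\,ds \quad(\gamma_0^{*}=1,\ \gamma_1^{*}=-\tfrac12,\ \gamma_2^{*}=-\tfrac1{12},\ \gamma_3^{*}=-\tfrac1{24},\ \dots)$$

for the Adams-Moulton corrector, where $\nabla$ is the backward-difference operator and $f_n = f(t_n, y_n)$.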

  2. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background: Computational genomics of Alzheimer's disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), and their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods: Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results: Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion: Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by GA and ACO methods. PMID:23369541

  3. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

    The vibration of a storey building can be modelled as a system of second order ordinary differential equations. If the number of floors of the building is large, then the result is a large scale system of second order ordinary differential equations. Such a large scale system is difficult to solve, and even when it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
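
As a minimal illustration of the comparison described above (not the authors' code), the sketch below integrates a single mass-spring-damper oscillator, rewritten as a first-order system, with both the Euler and Heun methods; the parameter values are arbitrary placeholders.

```python
import numpy as np

# Single mass-spring-damper: m*x'' + c*x' + k*x = 0, rewritten as a
# first-order system y' = f(y) with y = [x, x'].
m, c, k = 1.0, 0.1, 4.0          # placeholder parameters
f = lambda y: np.array([y[1], -(c * y[1] + k * y[0]) / m])

def euler(y0, dt, n):
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(n):
        y = y + dt * f(y)                    # first-order accurate
        out.append(y.copy())
    return np.array(out)

def heun(y0, dt, n):
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(n):
        yp = y + dt * f(y)                   # predictor (Euler step)
        y = y + 0.5 * dt * (f(y) + f(yp))    # trapezoidal corrector, second-order accurate
        out.append(y.copy())
    return np.array(out)

dt, n = 0.01, 2000
print(euler([1.0, 0.0], dt, n)[-1], heun([1.0, 0.0], dt, n)[-1])
```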

  4. High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.

    2013-01-01

    Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.

  5. High order multi-grid methods to solve the Poisson equation

    NASA Technical Reports Server (NTRS)

    Schaffer, S.

    1981-01-01

    High order multigrid methods based on finite difference discretization of the model problem are examined. The following are described: (1) a fixed high order FMG-FAS multigrid algorithm; (2) the high order methods themselves; and (3) results for four problems using each method with the same underlying fixed FMG-FAS algorithm.

  6. ICASE Semiannual Report, October 1, 1992 through March 31, 1993

    DTIC Science & Technology

    1993-06-01

    NUMERICAL MATHEMATICS Saul Abarbanel Further results have been obtained regarding long time integration of high order compact finite difference schemes...overall accuracy. These problems are common to all numerical methods: finite differences, finite elements and spectral methods. It should be noted that...fourth order finite difference scheme. * In the same case, the D6 wavelets provide a sixth order finite difference, noncompact formula. * The wavelets

  7. The Complex-Step-Finite-Difference method

    NASA Astrophysics Data System (ADS)

    Abreu, Rafael; Stich, Daniel; Morales, Jose

    2015-07-01

    We introduce the Complex-Step-Finite-Difference method (CSFDM) as a generalization of the well-known Finite-Difference method (FDM) for solving the acoustic and elastic wave equations. We have found a direct relationship between modelling the second-order wave equation by the FDM and the first-order wave equation by the CSFDM in 1-D, 2-D and 3-D acoustic media. We present the numerical methodology in order to apply the introduced CSFDM and show an example for wave propagation in simple homogeneous and heterogeneous models. The CSFDM may be implemented as an extension into pre-existing numerical techniques in order to obtain fourth- or sixth-order accurate results with compact three time-level stencils. We compare advantages of imposing various types of initial motion conditions of the CSFDM and demonstrate its higher-order accuracy under the same computational cost and dispersion-dissipation properties. The introduced method can be naturally extended to solve different partial differential equations arising in other fields of science and engineering.
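
The CSFDM itself is not reproduced here, but the complex-step idea underlying it can be illustrated with a minimal sketch: the complex-step formula $f'(x)\approx \mathrm{Im}\,f(x+ih)/h$ avoids the subtractive cancellation of a real finite difference, so the step $h$ can be taken extremely small. The example function below is an arbitrary placeholder, not a wave-equation solver.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """First derivative via the complex-step formula f'(x) ~ Im f(x + i*h) / h.

    There is no subtraction of nearly equal numbers, so h can be made
    extremely small without loss of accuracy."""
    return np.imag(f(x + 1j * h)) / h

def central_difference(f, x, h=1e-5):
    """Standard second-order real central difference, for comparison."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: np.exp(x) * np.sin(x)
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))
print(abs(complex_step_derivative(f, 1.0) - exact))  # ~ machine precision
print(abs(central_difference(f, 1.0) - exact))       # ~ 1e-11, limited by cancellation
```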

  8. Fourth order difference methods for hyperbolic IBVP's

    NASA Technical Reports Server (NTRS)

    Gustafsson, Bertil; Olsson, Pelle

    1994-01-01

    Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the nonlinear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
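
For reference, generic explicit and compact (Padé-type) fourth-order approximations of a first derivative on a uniform grid take the following standard forms (the specific boundary-closed operators analysed by Gustafsson and Olsson are not reproduced here):

$$u'_i \approx \frac{-u_{i+2} + 8u_{i+1} - 8u_{i-1} + u_{i-2}}{12\,\Delta x},\qquad \tfrac14\,u'_{i-1} + u'_i + \tfrac14\,u'_{i+1} = \frac{3}{4}\,\frac{u_{i+1} - u_{i-1}}{\Delta x}.$$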

  9. Emotion recognition based on multiple order features using fractional Fourier transform

    NASA Astrophysics Data System (ADS)

    Ren, Bo; Liu, Deyin; Qi, Lin

    2017-07-01

    In order to deal with the insufficiency of recent algorithms based on the two-dimensional fractional Fourier transform (2D-FrFT), this paper proposes an emotion recognition method based on multiple order features. Most existing methods utilize the features of a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT make different contributions to feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT in the directions of the x-axis and y-axis, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML Emotion database and the Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.

  10. Periodic solutions of second-order nonlinear difference equations containing a small parameter. IV - Multi-discrete time method

    NASA Technical Reports Server (NTRS)

    Mickens, Ronald E.

    1987-01-01

    It is shown that a discrete multi-time method can be constructed to obtain approximations to the periodic solutions of a special class of second-order nonlinear difference equations containing a small parameter. Three examples illustrating the method are presented.

  11. Method for Expressing Clinical and Statistical Significance of Ocular and Corneal Wavefront Error Aberrations

    PubMed Central

    Smolek, Michael K.

    2011-01-01

    Purpose The significance of ocular or corneal aberrations may be subject to misinterpretation whenever eyes with different pupil sizes or the application of different Zernike expansion orders are compared. A method is shown that uses simple mathematical interpolation techniques based on normal data to rapidly determine the clinical significance of aberrations, without concern for pupil and expansion order. Methods Corneal topography (Tomey, Inc.; Nagoya, Japan) from 30 normal corneas was collected and the corneal wavefront error analyzed by Zernike polynomial decomposition into specific aberration types for pupil diameters of 3, 5, 7, and 10 mm and Zernike expansion orders of 6, 8, 10 and 12. Using this 4×4 matrix of pupil sizes and fitting orders, best-fitting 3-dimensional functions were determined for the mean and standard deviation of the RMS error for specific aberrations. The functions were encoded into software to determine the significance of data acquired from non-normal cases. Results The best-fitting functions for 6 types of aberrations were determined: defocus, astigmatism, prism, coma, spherical aberration, and all higher-order aberrations. A clinical screening method of color-coding the significance of aberrations in normal, postoperative LASIK, and keratoconus cases having different pupil sizes and different expansion orders is demonstrated. Conclusions A method to calibrate wavefront aberrometry devices by using a standard sample of normal cases was devised. This method could be potentially useful in clinical studies involving patients with uncontrolled pupil sizes or in studies that compare data from aberrometers that use different Zernike fitting-order algorithms. PMID:22157570

  12. A comparative study of interface reconstruction methods for multi-material ALE simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucharik, Milan; Garimalla, Rao; Schofield, Samuel

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.

  13. Energy stable and high-order-accurate finite difference methods on staggered grids

    NASA Astrophysics Data System (ADS)

    O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan

    2017-10-01

    For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.

  14. Algorithm of resonance orders for the objects

    NASA Astrophysics Data System (ADS)

    Zhang, YongGang; Zhang, JianXue

    2018-03-01

    In mechanical engineering, resonance of an object often occurs when the frequency of an external incident wave is close to the object's natural frequency, and the resonance reaches its maximum when the incident frequency equals the natural frequency. Experiments have found that the resonance intensity of an object varies, and that different objects present different ladder-like (stepped) resonance characteristics. Based on these order-dependent resonance characteristics, a method for calculating the resonance orders of objects is put forward in this paper. Applied to the seventh-order resonance characteristics perceived by people for light and sound waves, the method gives a result error of less than 1%, showing that it has high accuracy and usability. The calculation method reveals that object resonance presents only four order types: first-order, third-order, fifth-order, and seventh-order characteristics.

  15. A robust method of computing finite difference coefficients based on Vandermonde matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin

    2018-05-01

    When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD method is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating the finite difference coefficients is close to being singular. In this case, when the FD coefficients are computed with the matrix inverse operator of MATLAB, inaccuracies can be produced. In order to overcome this problem, we suggest an algorithm based on the Vandermonde matrix. After a specified mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix. The FD coefficients of the high-order FD method can then be computed by the Vandermonde matrix algorithm, which avoids inverting the near-singular matrix. Dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the algorithm based on the Vandermonde matrix has better accuracy than the matrix inverse operator of MATLAB.
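
A hedged sketch of the underlying idea (not the authors' algorithm): the FD weights are the solution of a Vandermonde system, and that same structure can be exploited through derivatives of Lagrange basis polynomials instead of a generic matrix inverse. The stencil and derivative order below are arbitrary examples.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def fd_weights(nodes, x0, m):
    """Finite difference weights for the m-th derivative at x0 on `nodes`.

    Solving the Vandermonde system sum_k c_k (x_k - x0)^j = m! * delta_{j,m}
    with a generic inverse can be inaccurate for wide, high-order stencils.
    Here the weights are obtained as c_k = d^m/dx^m L_k(x) at x0, where L_k is
    the k-th Lagrange basis polynomial; this encodes the same (inverse-)
    Vandermonde structure without forming or inverting the matrix explicitly.
    """
    nodes = np.asarray(nodes, dtype=float)
    weights = np.empty(len(nodes))
    for k, xk in enumerate(nodes):
        others = np.delete(nodes, k)
        # L_k(x) = prod_{j != k} (x - x_j) / (x_k - x_j), in monomial form
        coeffs = P.polyfromroots(others) / np.prod(xk - others)
        weights[k] = P.polyval(x0, P.polyder(coeffs, m))
    return weights

# Example: standard 5-point stencil, first derivative at the centre.
print(fd_weights([-2, -1, 0, 1, 2], 0.0, 1))   # -> [ 1/12, -2/3, 0, 2/3, -1/12 ]
```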

  16. High order spectral difference lattice Boltzmann method for incompressible hydrodynamics

    NASA Astrophysics Data System (ADS)

    Li, Weidong

    2017-09-01

    This work presents a lattice Boltzmann equation (LBE) based high order spectral difference method for incompressible flows. In the present method, the spectral difference (SD) method is adopted to discretize the convection and collision terms of the LBE to obtain high order (≥3) accuracy. Because the SD scheme represents the solution as cell-local polynomials and the solution polynomials have a good tensor-product property, the present spectral difference lattice Boltzmann method (SD-LBM) can be implemented on arbitrary unstructured quadrilateral meshes for effective and efficient treatment of complex geometries. Because only first order PDEs are involved in the LBE, no special techniques, such as the hybridizable discontinuous Galerkin method (HDG) or the local discontinuous Galerkin method (LDG), are needed to discretize a diffusion term, which simplifies the algorithm and implementation of the high order spectral difference method for simulating viscous flows. The proposed SD-LBM is validated with four incompressible flow benchmarks in two dimensions: (a) the Poiseuille flow driven by a constant body force; (b) the lid-driven cavity flow without singularity at the two top corners (Burggraf flow); (c) the unsteady Taylor-Green vortex flow; and (d) the Blasius boundary-layer flow past a flat plate. Computational results are compared with analytical solutions of these cases, and convergence studies are also given. The designed accuracy of the proposed SD-LBM is clearly verified.

  17. High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Nordstrom, Jan; Carpenter, Mark H.

    1999-01-01

    Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.

  18. Variable High Order Multiblock Overlapping Grid Methods for Mixed Steady and Unsteady Multiscale Viscous Flows

    NASA Technical Reports Server (NTRS)

    Sjogreen, Bjoern; Yee, H. C.

    2007-01-01

    Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets in other parts of the flow field, are difficult to capture accurately and efficiently employing the same numerical scheme, even under a multiblock grid or adaptive grid refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low order and high order shock-capturing schemes for the subject flows, a multiblock overlapping grid with different orders of accuracy on different blocks is proposed. Test cases to illustrate the performance of the new solver are included.

  19. [Heart rate variability study based on a novel RdR RR Intervals Scatter Plot].

    PubMed

    Lu, Hongwei; Lu, Xiuyun; Wang, Chunfang; Hua, Youyuan; Tian, Jiajia; Liu, Shihai

    2014-08-01

    On the basis of the Poincare scatter plot and the first order difference scatter plot, a novel heart rate variability (HRV) analysis method based on a scatter plot of RR intervals and first order differences of RR intervals (namely, RdR) was proposed. The abscissa of the RdR scatter plot (the x-axis) is the RR interval, and the ordinate (the y-axis) is the difference between successive RR intervals. The RdR scatter plot therefore combines the information of RR intervals and of the differences between successive RR intervals, which captures more HRV information. By RdR scatter plot analysis of some records of the MIT-BIH arrhythmia database, we found that the scatter plots of uncoupled premature ventricular contraction (PVC), coupled ventricular bigeminy, and ventricular trigeminy PVC had specific graphic characteristics. The RdR scatter plot method has higher detection performance than the Poincare scatter plot method, and is simpler and more intuitive than the first order difference method.
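
A minimal sketch of the RdR construction described above, using synthetic RR-interval data rather than an MIT-BIH record:

```python
import numpy as np
import matplotlib.pyplot as plt

def rdr_scatter(rr_ms):
    """Plot an RdR scatter plot: x-axis is the RR interval RR(n),
    y-axis is the first-order difference RR(n+1) - RR(n)."""
    rr = np.asarray(rr_ms, dtype=float)
    x = rr[:-1]                 # RR intervals
    y = np.diff(rr)             # differences between successive RR intervals
    plt.scatter(x, y, s=4)
    plt.xlabel("RR interval (ms)")
    plt.ylabel("RR(n+1) - RR(n) (ms)")
    plt.title("RdR scatter plot")
    plt.show()

# Synthetic example (placeholder data, not an MIT-BIH record):
rr = 800 + 40 * np.random.randn(500)
rdr_scatter(rr)
```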

  20. High-order cyclo-difference techniques: An alternative to finite differences

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Otto, John C.

    1993-01-01

    The summation-by-parts energy norm is used to establish a new class of high-order finite-difference techniques referred to here as 'cyclo-difference' techniques. These techniques are constructed cyclically from stable subelements, and require no numerical boundary conditions; when coupled with the simultaneous approximation term (SAT) boundary treatment, they are time asymptotically stable for an arbitrary hyperbolic system. These techniques are similar to spectral element techniques and are ideally suited for parallel implementation, but do not require special collocation points or orthogonal basis functions. The principal focus is on methods of sixth-order formal accuracy or less; however, these methods could be extended in principle to any arbitrary order of accuracy.

  1. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for spatial derivatives) and collocation techniques (for the time component) to numerically solve the two dimensional heat equation. We employ a second-order and a fourth-order scheme, respectively, for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
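
For reference, the generic second- and fourth-order central approximations of a second spatial derivative, which are the standard forms behind such schemes (the paper's exact operators and collocation details are not reproduced here), are

$$u_{xx}(x_i) \approx \frac{u_{i-1} - 2u_i + u_{i+1}}{h^2}, \qquad u_{xx}(x_i) \approx \frac{-u_{i-2} + 16u_{i-1} - 30u_i + 16u_{i+1} - u_{i+2}}{12h^2},$$

applied in each spatial direction of the heat equation $u_t = u_{xx} + u_{yy}$.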

  2. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, M. R.; Georgiadis, N. J.; DeBonis, J. R.

    2015-01-01

    The objective of this work is to compare a high-order solver with a low-order solver for performing large-eddy simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.

  3. The arbitrary order mixed mimetic finite difference method for the diffusion equation

    DOE PAGES

    Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco

    2016-05-01

    Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determines the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and is acting on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent and optimal error estimates for flux and scalar variable are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.

  4. Valuing Health Using Time Trade-Off and Discrete Choice Experiment Methods: Does Dimension Order Impact on Health State Values?

    PubMed

    Mulhern, Brendan; Shah, Koonal; Janssen, Mathieu F Bas; Longworth, Louise; Ibbotson, Rachel

    2016-01-01

    Health states defined by multiattribute instruments such as the EuroQol five-dimensional questionnaire with five response levels (EQ-5D-5L) can be valued using time trade-off (TTO) or discrete choice experiment (DCE) methods. A key feature of the tasks is the order in which the health state dimensions are presented. Respondents may use various heuristics to complete the tasks, and therefore the order of the dimensions may impact on the importance assigned to particular states. The objective of this study was to assess the impact of different EQ-5D-5L dimension orders on health state values. Preferences for EQ-5D-5L health states were elicited from a broadly representative sample of members of the UK general public. Respondents valued EQ-5D-5L health states using TTO and DCE methods across one of three dimension orderings via face-to-face computer-assisted personal interviews. Differences in mean values and the size of the health dimension coefficients across the arms were compared using difference testing and regression analyses. Descriptive analysis suggested some differences between the mean TTO health state values across the different dimension orderings, but these were not systematic. Regression analysis suggested that the magnitude of the dimension coefficients differs across the different dimension orderings (for both TTO and DCE), but there was no clear pattern. There is some evidence that the order in which the dimensions are presented impacts on the coefficients, which may impact on the health state values provided. The order of dimensions is a key consideration in the design of health state valuation studies.

  5. A Two Colorable Fourth Order Compact Difference Scheme and Parallel Iterative Solution of the 3D Convection Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Zhang, Jun; Ge, Lixin; Kouatchou, Jules

    2000-01-01

    A new fourth order compact difference scheme for the three dimensional convection diffusion equation with variable coefficients is presented. The novelty of this new difference scheme is that it only requires 15 grid points and that it can be decoupled with two colors. The entire computational grid can be updated in two parallel subsweeps with the Gauss-Seidel type iterative method. This is compared with the known 19 point fourth order compact difference scheme, which requires four colors to decouple the computational grid. Numerical results, with multigrid methods implemented on a shared memory parallel computer, are presented to compare the 15 point and the 19 point fourth order compact schemes.

  6. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.

  7. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, Mina R.; Georgiadis, Nicholas J.; DeBonis, James R.

    2015-01-01

    The objective of this work is to compare a high-order solver with a low-order solver for performing Large-Eddy Simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.

  8. Approach for discrimination and quantification of electroactive species: kinetics difference revealed by higher harmonics of Fourier transformed sinusoidal voltammetry.

    PubMed

    Fang, Yishan; Huang, Xinjian; Wang, Lishi

    2015-01-06

    Discrimination and quantification of electroactive species are traditionally realized by a potential difference, which is mainly determined by thermodynamics. However, the resolution of this approach is limited to tens of millivolts. In this paper, we describe an application of Fourier transformed sinusoidal voltammetry (FT-SV) that provides a new approach for the discrimination and quantitative evaluation of electroactive species, especially thermodynamically similar ones. Numerical simulation indicates that the electron transfer kinetics difference between electroactive species can be revealed by the phase angle of the higher order harmonics of FT-SV, and that the difference can be amplified order by order. Thus, even a very subtle kinetics difference can be amplified to be distinguishable at a certain order of harmonics. This method was verified with structurally similar ferrocene derivatives, which were chosen as the model systems. Although these molecules have very close redox potentials (<10 mV), discrimination and selective detection were achieved at as high as the thirteenth harmonic. The results demonstrate the feasibility and reliability of the method. It is also implied that the combination of the traditional thermodynamic method and this kinetics method can form a two-dimensionally resolved detection method, with the potential to extend the resolution of voltammetric techniques to a new level.

  9. [Selected enhancement of different order stokes lines of SRS by using fluorescence of mixed dye solution].

    PubMed

    Zuo, Hao-yi; Gao, Jie; Yang, Jing-guo

    2007-03-01

    A new method to enhance the intensity of different orders of Stokes lines of stimulated Raman scattering (SRS) by using mixed dye fluorescence is reported. The Stokes lines from the second order to the fifth order of CCl4 were enhanced by the fluorescence of mixed R6G and RB solutions in proportions of 20:2, 20:13 and 20:40 (R6G:RB), respectively. It is considered that the Stokes lines from the second order to the fifth order are near the fluorescence peaks of the three mixed solutions and far from the absorption peaks of R6G and RB, so the enhancement effect dominates the absorption effect; as a result, these Stokes lines are enhanced. On the contrary, the first-order Stokes line is near the absorption peak of RB and far from the fluorescence peaks of the mixed solutions, which leads to the weakening of this Stokes line. It is also reported that the first-order, second-order and third-order Stokes lines of benzene were enhanced by the fluorescence of mixed solutions of R6G and DCM in different proportions. The potential application of this method is forecasted.

  10. Assessing Equating Results on Different Equating Criteria

    ERIC Educational Resources Information Center

    Tong, Ye; Kolen, Michael

    2005-01-01

    The performance of three equating methods--the presmoothed equipercentile method, the item response theory (IRT) true score method, and the IRT observed score method--was examined based on three equating criteria: the same distributions property, the first-order equity property, and the second-order equity property. The magnitude of the…

  11. Mass, height of burst, and source–receiver distance constraints on the acoustic coda phase delay method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Sarah; Bowman, Daniel; Rodgers, Arthur

    Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, height of bursts, and masses and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.

  12. Mass, height of burst, and source–receiver distance constraints on the acoustic coda phase delay method

    DOE PAGES

    Albert, Sarah; Bowman, Daniel; Rodgers, Arthur; ...

    2018-04-23

    Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, height of bursts, and masses and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.

  13. Evaluation of a visual layering methodology for colour coding control room displays.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2002-07-01

    Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays that had been produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method contained colours chosen for their high perceptual discriminability; and the visual layers method contained colours developed from psychological and cartographic principles which grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly for presentation order and for the method x order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.

  14. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.

  15. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.

  16. Investigating Invariant Item Ordering in Personality and Clinical Scales: Some Empirical Findings and a Discussion

    ERIC Educational Resources Information Center

    Meijer, Rob R.; Egberink, Iris J. L.

    2012-01-01

    In recent studies, different methods were proposed to investigate invariant item ordering (IIO), but practical IIO research is an unexploited field in questionnaire construction and evaluation. In the present study, the authors explored the usefulness of different IIO methods to analyze personality scales and clinical scales. From the authors'…

  17. Relaxation and Preconditioning for High Order Discontinuous Galerkin Methods with Applications to Aeroacoustics and High Speed Flows

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    2004-01-01

    This project investigated the development of discontinuous Galerkin finite element methods, for general geometries and triangulations, for solving convection dominated problems, with applications to aeroacoustics. Other related issues in high order WENO finite difference and finite volume methods have also been investigated. These are two classes of high order, high resolution methods suitable for convection dominated simulations with possibly discontinuous or sharp gradient solutions. In [18], we first review these two classes of methods, pointing out their similarities and differences in algorithm formulation, theoretical properties, implementation issues, applicability, and relative advantages. We then present some quantitative comparisons of the third order finite volume WENO methods and discontinuous Galerkin methods for a series of test problems to assess their relative merits in accuracy and CPU timing. In [3], we review the development of the Runge-Kutta discontinuous Galerkin (RKDG) methods for nonlinear convection-dominated problems. These robust and accurate methods have made their way into the mainstream of computational fluid dynamics and are quickly finding use in a wide variety of applications. They combine a special class of Runge-Kutta time discretizations, which allows the method to be nonlinearly stable regardless of its accuracy, with a finite element space discretization by discontinuous approximations that incorporates the ideas of numerical fluxes and slope limiters coined during the remarkable development of the high-resolution finite difference and finite volume schemes. The resulting RKDG methods are stable, high-order accurate, and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions. We review the theoretical and algorithmic aspects of these methods and show several applications including nonlinear conservation laws, the compressible and incompressible Navier-Stokes equations, and Hamilton-Jacobi-like equations.

  18. Phylogenetic Analysis of Genome Rearrangements among Five Mammalian Orders

    PubMed Central

    Luo, Haiwei; Arndt, William; Zhang, Yiwei; Shi, Guanqun; Alekseyev, Max; Tang, Jijun; Hughes, Austin L.; Friedman, Robert

    2015-01-01

    Evolutionary relationships among placental mammalian orders have been controversial. Whole genome sequencing and new computational methods offer opportunities to resolve the relationships among 10 genomes belonging to the mammalian orders Primates, Rodentia, Carnivora, Perissodactyla and Artiodactyla. By application of the double cut and join distance metric, where gene order is the phylogenetic character, we computed genomic distances among the sampled mammalian genomes. With a marsupial outgroup, the gene order tree supported a topology in which Rodentia fell outside the cluster of Primates, Carnivora, Perissodactyla, and Artiodactyla. Results of breakpoint reuse rate and synteny block length analyses were consistent with the prediction of the random breakage model, which provided a diagnostic test to support the use of gene order as an appropriate phylogenetic character in this study. We also discuss the influence of rate differences among lineages and other factors that may contribute to different resolutions of mammalian ordinal relationships by different methods of phylogenetic reconstruction. PMID:22929217

  19. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.

    2004-01-01

    A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion on the Discontinuous Spectral Difference (SD) Method, locations of the unknowns and flux points and numerical results are also presented.

  20. A family of four stages embedded explicit six-step methods with eliminated phase-lag and its derivatives for the numerical solution of the second order problems

    NASA Astrophysics Data System (ADS)

    Simos, T. E.

    2017-11-01

    A family of four stages, high algebraic order, embedded explicit six-step methods for the numerical solution of second order initial or boundary-value problems with periodic and/or oscillating solutions is studied in this paper. The free parameters of the newly proposed methods are calculated by solving the linear system of equations which is produced by requesting the vanishing of the phase-lag of the methods and the vanishing of the phase-lag's derivatives of the schemes. For the newly obtained methods we investigate: • the local truncation error (LTE) of the methods; • the asymptotic form of the LTE obtained using the radial Schrödinger equation as the model problem; • the comparison of the asymptotic forms of the LTEs for several methods of the same family, which leads to conclusions on the efficiency of each method of the family; • the stability and the interval of periodicity of the obtained methods of the new family of embedded finite difference pairs; • the application of the newly obtained family of embedded finite difference pairs to the numerical solution of several second order problems, such as the radial Schrödinger equation, astronomical problems, etc. These applications lead to conclusions on the efficiency of the methods of the new family of embedded finite difference pairs.

  1. Numerical simulation using vorticity-vector potential formulation

    NASA Technical Reports Server (NTRS)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. To solve turbulent shear flows directly or using a subgrid scale model, it is indispensable to resolve the small scale fluid motions as well as the large scale motions. From this point of view, the pseudo-spectral method has been used so far as the computational method. However, the finite difference and finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. There exist several problems in applying the finite difference method to direct and large eddy simulations, and accuracy is one of the most important. This point was already addressed by the present author in direct simulations of the instability of the plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multi-grid Poisson solver is combined with the higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive variables formulation. One of the major difficulties of this method is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. However, the velocity field satisfies the equation of continuity automatically in the vorticity-vector potential formulation. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and a 4th-order accurate difference method as the computational method. We present the computational method and apply it to computations of flows in a square cavity at large Reynolds number in order to investigate its effectiveness.

  2. A New High-Order Spectral Difference Method for Simulating Viscous Flows on Unstructured Grids with Mixed Elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mao; Qiu, Zihua; Liang, Chunlei

    In the present study, a new spectral difference (SD) method is developed for viscous flows on meshes with a mixture of triangular and quadrilateral elements. The standard SD method for triangular elements, which employs Lagrangian interpolating functions for fluxes, is not stable when the designed accuracy of spatial discretization is third-order or higher. Unlike the standard SD method, the method examined here uses vector interpolating functions in the Raviart-Thomas (RT) spaces to construct continuous flux functions on reference elements. Studies have been performed for the 2D wave equation and Euler equations. Our present results demonstrated that the SDRT method is stable and high-order accurate for a number of test problems by using triangular-, quadrilateral-, and mixed-element meshes.

  3. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-08-31

    The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. In conclusion, our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.

  4. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Huang, Lianjie

    The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. In conclusion, our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.

  5. A numerical solution of a singular boundary value problem arising in boundary layer theory.

    PubMed

    Hu, Jiancheng

    2016-01-01

    In this paper, a second-order nonlinear singular boundary value problem is presented, which is equivalent to the well-known Falkner-Skan equation. The one-dimensional third-order boundary value problem on the interval [Formula: see text] is equivalently transformed into a second-order boundary value problem on the finite interval [Formula: see text]. The finite difference method is used to solve the singular boundary value problem and requires significantly less computational effort than other numerical methods. The numerical solutions obtained by the finite difference method agree with those obtained by previous authors.
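
    As a minimal illustration of the finite difference approach (and not of the Falkner-Skan transformation itself), the sketch below solves a linear second-order boundary value problem with central differences; the model problem, grid size, and boundary values are assumptions for the example.

    ```python
    # Minimal finite-difference sketch for u''(x) = f(x) on [0, 1] with u(0) = a, u(1) = b.
    # This is a linear model problem only, not the singular Falkner-Skan problem above.
    import numpy as np

    def solve_bvp_fd(f, a, b, n=100):
        x = np.linspace(0.0, 1.0, n + 1)
        h = x[1] - x[0]
        # Tridiagonal system for the interior unknowns u_1 .. u_{n-1}
        A = (np.diag(-2.0 * np.ones(n - 1))
             + np.diag(np.ones(n - 2), 1)
             + np.diag(np.ones(n - 2), -1)) / h**2
        rhs = f(x[1:-1])
        rhs[0] -= a / h**2          # move the known boundary values to the right-hand side
        rhs[-1] -= b / h**2
        u = np.empty(n + 1)
        u[0], u[-1] = a, b
        u[1:-1] = np.linalg.solve(A, rhs)
        return x, u

    # Test: u'' = -pi^2 sin(pi x) with u(0) = u(1) = 0 has exact solution u = sin(pi x)
    x, u = solve_bvp_fd(lambda x: -np.pi**2 * np.sin(np.pi * x), 0.0, 0.0)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```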

  6. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    PubMed

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
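
    The comparison between the linearized and non-linear fits can be reproduced on synthetic data. The sketch below, with illustrative (not experimental) values of qe and k, fits the pseudo-second-order model q(t) = k*qe^2*t / (1 + k*qe*t) both through the type 1 linearization t/q = 1/(k*qe^2) + t/qe and directly with non-linear least squares.

    ```python
    # Hedged sketch: type-1 linearized fit vs non-linear least squares for the
    # pseudo-second-order model. Synthetic data, not the cadmium/tree-fern measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def pso(t, qe, k):
        return k * qe**2 * t / (1.0 + k * qe * t)

    rng = np.random.default_rng(0)
    t = np.linspace(1.0, 120.0, 15)
    q = pso(t, qe=8.0, k=0.01) * (1.0 + 0.03 * rng.standard_normal(t.size))

    # Type-1 linearization: t/q = 1/(k*qe^2) + t/qe  ->  fit t/q against t.
    slope, intercept = np.polyfit(t, t / q, 1)
    qe_lin, k_lin = 1.0 / slope, slope**2 / intercept

    # Non-linear regression on the original (untransformed) data.
    (qe_nl, k_nl), _ = curve_fit(pso, t, q, p0=[q.max(), 0.01])

    print(f"linear fit:     qe={qe_lin:.3f}, k={k_lin:.4f}")
    print(f"non-linear fit: qe={qe_nl:.3f}, k={k_nl:.4f}")
    ```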

  7. The determination of third order linear models from a seventh order nonlinear jet engine model

    NASA Technical Reports Server (NTRS)

    Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex

    1989-01-01

    Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
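
    A minimal sketch of the second approach, direct identification by recursive least-squares from I/O data, is given below for a toy second-order discrete system; the model structure, noise level, and data length are assumptions, not the seventh-order turbojet model.

    ```python
    # Hedged sketch of recursive least-squares (RLS) identification of a low-order
    # linear (ARX) model from input/output data of a toy "truth" system.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 500
    u = rng.standard_normal(N)                   # input sequence
    y = np.zeros(N)
    for k in range(2, N):                        # toy truth: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1]
        y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

    n_par = 3                                    # parameters to identify: a1, a2, b1
    theta = np.zeros(n_par)
    P = 1e4 * np.eye(n_par)                      # large initial covariance
    lam = 1.0                                    # forgetting factor (1 = ordinary RLS)

    for k in range(2, N):
        phi = np.array([y[k - 1], y[k - 2], u[k - 1]])   # regressor
        err = y[k] - phi @ theta                         # prediction error
        K = P @ phi / (lam + phi @ P @ phi)              # gain
        theta += K * err
        P = (P - np.outer(K, phi @ P)) / lam

    print("estimated [a1, a2, b1]:", np.round(theta, 3))   # expect roughly [1.5, -0.7, 0.5]
    ```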

  8. Boundary and Interface Conditions for High Order Finite Difference Methods Applied to the Euler and Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Nordstrom, Jan; Carpenter, Mark H.

    1998-01-01

    Boundary and interface conditions for high order finite difference methods applied to the constant coefficient Euler and Navier-Stokes equations are derived. The boundary conditions lead to strict and strong stability. The interface conditions are stable and conservative even if the finite difference operators and mesh sizes vary from domain to domain. Numerical experiments show that the new conditions also lead to good results for the corresponding nonlinear problems.

  9. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    Time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients depend on the Courant number, which incurs significant extra cost for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to retain more information about the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation with time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that, to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) methods, the 4th-order staggered grid (SG) method, and the 8th-order optimal finite difference (OFD) method, respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. Its efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).

  10. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with second-order accurate backward difference schemes for the unsteady Navier-Stokes equations, are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.

  11. Technical report series on global modeling and data assimilation. Volume 2: Direct solution of the implicit formulation of fourth order horizontal diffusion for gridpoint models on the sphere

    NASA Technical Reports Server (NTRS)

    Li, Yong; Moorthi, S.; Bates, J. Ray; Suarez, Max J.

    1994-01-01

    High order horizontal diffusion of the form KΔ^(2m) is widely used in spectral models as a means of preventing energy accumulation at the shortest resolved scales. In the spectral context, an implicit formulation of such diffusion is trivial to implement. The present note describes an efficient method of implementing implicit high order diffusion in global finite difference models. The method expresses the high order diffusion equation as a sequence of equations involving Δ^2. The solution is obtained by combining fast Fourier transforms in longitude with a finite difference solver for the second order ordinary differential equation in latitude. The implicit diffusion routine is suitable for use in any finite difference global model that uses a regular latitude/longitude grid. The absence of a restriction on the timestep makes it particularly suitable for use in semi-Lagrangian models. The scale selectivity of the high order diffusion gives it an advantage over the uncentering method that has been used to control computational noise in two-time-level semi-Lagrangian models.

  12. A third-order computational method for numerical fluxes to guarantee nonnegative difference coefficients for advection-diffusion equations in a semi-conservative form

    NASA Astrophysics Data System (ADS)

    Sakai, K.; Watabe, D.; Minamidani, T.; Zhang, G. S.

    2012-10-01

    According to the Godunov theorem for numerical calculation of advection equations, within the family of polynomial schemes there exist no higher-order schemes with constant positive difference coefficients whose accuracy exceeds first order. We propose a third-order computational scheme for numerical fluxes that guarantees non-negative difference coefficients of the resulting finite difference equations for advection-diffusion equations in a semi-conservative form, in which there exist two kinds of numerical fluxes at a cell surface that are not always coincident in non-uniform velocity fields. The present scheme is optimized so as to minimize truncation errors for the numerical fluxes while fulfilling the positivity condition on the difference coefficients, which vary depending on the local Courant number and diffusion number. The key feature of the optimized scheme is that it maintains third-order accuracy everywhere without any numerical flux limiter. We extend the present method to multi-dimensional equations. Numerical experiments for advection-diffusion equations yielded non-oscillatory solutions.

  13. High Order Numerical Methods for the Investigation of the Two Dimensional Richtmyer-Meshkov Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Don, W-S; Gottlieb, D; Shu, C-W

    2001-11-26

    For flows that contain significant structure, high order schemes offer large advantages over low order schemes. Fundamentally, the reason comes from the truncation error of the differencing operators. A careful examination of the truncation error shows that, for a fixed computational cost, the error can be made much smaller by increasing the numerical order than by increasing the number of grid points. One can readily derive the following expression, which holds for systems dominated by hyperbolic effects and advanced explicitly in time: flops = const * p^2 * k^((d+1)(p+1)/p) / E^((d+1)/p), where flops denotes floating point operations, p denotes numerical order, d denotes spatial dimension, E denotes the truncation error of the difference operator, and k denotes the Fourier wavenumber. For flows that contain structure, such as turbulent flows or any calculation where, say, vortices are present, there will be significant energy at high values of k. Thus the rate of growth of the flops is very different for different values of p, and the constant in front of the expression is also very different. With a low order scheme, one quickly reaches the limit of the computer; with a high order scheme, one can resolve far more modes before that limit is reached. Here we examine the application of spectral methods and the Weighted Essentially Non-Oscillatory (WENO) scheme to the Richtmyer-Meshkov instability. We show the intricate structure that these high order schemes can capture, and we show that the two methods, though very different, converge to the same numerical solution, indicating that the numerical solution is very likely physically correct.
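
    The scaling argument can be read off numerically. The snippet below evaluates the quoted cost expression for several orders p under assumed values of the wavenumber k, error level E, and constant.

    ```python
    # Quick numerical reading of the abstract's cost estimate
    # flops ~ const * p^2 * k^((d+1)(p+1)/p) / E^((d+1)/p):
    # for fixed error E and wavenumber k, the cost grows far more slowly at higher order p.
    # The values of k, E, and the constant are illustrative assumptions.
    def flops_estimate(p, k=64.0, E=1e-4, d=3, const=1.0):
        return const * p**2 * k**((d + 1) * (p + 1) / p) / E**((d + 1) / p)

    for p in (2, 4, 8, 16):
        print(f"p = {p:2d}: relative cost ~ {flops_estimate(p):.3e}")
    ```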

  14. Modelling of Surfaces. Part 2: Metallic Alloy Surfaces Using the BFS Method

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John; Kobistek, Robert J.

    1994-01-01

    Using BFS, a new semiempirical method for alloys, we study the surface structure of fcc ordered binary alloys. We concentrate on the calculation of surface energies and surface relaxations for the L1(sub 0) and L1(sub 2) ordered structures. Different terminations of the low-index faces are studied. Also, we present results for the interlayer relaxations for planes close to the surface, revealing different relaxations for atoms of different species producing a rippled surface layer.

  15. A new multigrid formulation for high order finite difference methods on summation-by-parts form

    NASA Astrophysics Data System (ADS)

    Ruggiu, Andrea A.; Weinerfelt, Per; Nordström, Jan

    2018-04-01

    Multigrid schemes for high order finite difference methods on summation-by-parts form are studied by comparing the effect of different interpolation operators. By using the standard linear prolongation and restriction operators, the Galerkin condition leads to inaccurate coarse grid discretizations. In this paper, an alternative class of interpolation operators that bypass this issue and preserve the summation-by-parts property on each grid level is considered. Clear improvements of the convergence rate for relevant model problems are achieved.

  16. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.

  17. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  18. Stability of streamwise vortices

    NASA Technical Reports Server (NTRS)

    Khorrami, M. K.; Grosch, C. E.; Ash, R. L.

    1987-01-01

    A brief overview of some theoretical and computational studies of the stability of streamwise vortices is given. The local induction model and classical hydrodynamic vortex stability theories are discussed in some detail. The importance of the three-dimensionality of the mean velocity profile to the results of stability calculations is discussed briefly. The mean velocity profile is provided by employing the similarity solution of Donaldson and Sullivan. The global method of Bridges and Morris was chosen for the spatial stability calculations for the nonlinear eigenvalue problem. In order to test the numerical method, a second order accurate central difference scheme was used to obtain the coefficient matrices. It was shown that a second order finite difference method lacks the required accuracy for global eigenvalue calculations. Finally the problem was formulated using spectral methods and a truncated Chebyshev series.

  19. Lagrangian particle method for compressible fluid dynamics

    NASA Astrophysics Data System (ADS)

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang

    2018-06-01

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
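
    Contribution (a) can be sketched in one dimension: the derivative at a particle is obtained from a weighted least-squares polynomial fit over scattered neighbours. The particle positions, weight function, and quadratic fit order below are illustrative assumptions, not the authors' exact formulation.

    ```python
    # Hedged 1D sketch of a weighted least-squares (WLS) polynomial fit used to
    # approximate a differential operator at a particle from its irregular neighbours.
    import numpy as np

    def wls_derivative(xc, x_nbr, f_nbr, h):
        """Estimate f'(xc) from scattered neighbour values via a weighted quadratic fit."""
        dx = x_nbr - xc
        A = np.column_stack([np.ones_like(dx), dx, dx**2])   # basis [1, dx, dx^2]
        sw = np.sqrt(np.exp(-(dx / h) ** 2))                  # Gaussian weights favour close neighbours
        coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * f_nbr, rcond=None)
        return coef[1]                                        # coefficient of dx is f'(xc)

    rng = np.random.default_rng(2)
    xc = 0.3
    x_nbr = xc + 0.05 * (rng.random(8) - 0.5)   # irregularly placed (Lagrangian) neighbours
    approx = wls_derivative(xc, x_nbr, np.sin(x_nbr), h=0.05)
    print("approx f'(xc):", approx, "  exact:", np.cos(xc))
    ```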

  20. A second order radiative transfer equation and its solution by meshless method with application to strongly inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Zhao, J. M.; Tan, J. Y.; Liu, L. H.

    2013-01-01

    A new second order form of the radiative transfer equation (named MSORTE) is proposed, which overcomes the singularity problem of a previously proposed second order radiative transfer equation [J.E. Morel, B.T. Adams, T. Noh, J.M. McGhee, T.M. Evans, T.J. Urbatsch, Spatial discretizations for self-adjoint forms of the radiative transfer equations, J. Comput. Phys. 214 (1) (2006) 12-40 (where it was termed SAAI); J.M. Zhao, L.H. Liu, Second order radiative transfer equation and its properties of numerical solution using finite element method, Numer. Heat Transfer B 51 (2007) 391-409] in dealing with inhomogeneous media where some locations have a very small or zero extinction coefficient. The MSORTE contains a naturally introduced diffusion (second order) term which provides better numerical properties than the classic first order radiative transfer equation (RTE). The stability and convergence characteristics of the MSORTE discretized by a central difference scheme are analyzed theoretically, and the second order form is proved to have better numerical stability than the RTE when discretized by central-difference-type methods. A collocation meshless method is developed based on the MSORTE to solve radiative transfer in inhomogeneous media. Several critical test cases are used to verify the performance of the presented method. The collocation meshless method based on the MSORTE is demonstrated to be capable of stably and accurately solving radiative transfer in strongly inhomogeneous media, media with void regions, and even media with discontinuous extinction coefficients.

  1. Blind third-order dispersion estimation based on fractional Fourier transformation for coherent optical communication

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun

    2018-02-01

    In this paper, we propose a blind third-order dispersion estimation method based on the fractional Fourier transformation (FrFT) for optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, the method estimates the dispersion slope and from it the third-order dispersion. Simulation results demonstrate that the estimation error is less than 2% in 28 GBaud dual polarization quadrature phase-shift keying (DP-QPSK) and 28 GBaud dual polarization 16 quadrature amplitude modulation (DP-16QAM) systems. The simulations also show that the proposed method is robust against nonlinearity and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, a search with coarse and fine step granularity is used to find the optimal FrFT order. The FrFT-based method can be used to monitor the third-order dispersion in optical fiber systems.
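
    A hedged sketch of the post-processing step is given below: given CD values at a few wavelengths (synthetic numbers here, not FrFT-based estimates), the dispersion slope S = dD/dλ is fitted and converted to the third-order dispersion using the standard relation β3 = (λ²/(2πc))²(S + 2D/λ).

    ```python
    # Hedged sketch of converting CD measurements at several wavelengths into a
    # dispersion slope and a third-order dispersion value. Values are illustrative.
    import numpy as np

    c = 2.99792458e8                                   # speed of light, m/s
    lam = np.array([1545e-9, 1550e-9, 1555e-9])        # measurement wavelengths (m)
    D = np.array([16.6e-6, 17.0e-6, 17.4e-6])          # CD in s/m^2 (~16.6-17.4 ps/nm/km)

    S, D0 = np.polyfit(lam - lam[1], D, 1)             # slope S (s/m^3) and D at 1550 nm
    lam0 = lam[1]
    beta3 = (lam0**2 / (2.0 * np.pi * c))**2 * (S + 2.0 * D0 / lam0)
    print(f"dispersion slope S = {S:.3e} s/m^3, beta3 = {beta3:.3e} s^3/m")
    ```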

  2. A numerical solution for a variable-order reaction-diffusion model by using fractional derivatives with non-local and non-singular kernel

    NASA Astrophysics Data System (ADS)

    Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Torres, L.; Escobar-Jiménez, R. F.

    2018-02-01

    A reaction-diffusion system can be represented by the Gray-Scott model. The reaction-diffusion dynamics are described by a pair of time- and space-dependent partial differential equations (PDEs). In this paper, a generalization of the Gray-Scott model using variable-order fractional differential equations is proposed. The variable orders were set as smooth functions bounded in (0, 1], and the Liouville-Caputo and Atangana-Baleanu-Caputo fractional derivatives were used to express the time differentiation. To find a numerical solution of the proposed model, the finite difference method was applied together with the Adams method. The simulation results show the chaotic behavior of the proposed model when different variable orders are applied.

  3. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by an implicit spatial scheme and a high-order temporal scheme, to reduce the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme achieves nearly the same temporal accuracy but requires fewer floating-point operations and thus lower computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. In addition to the existing Taylor-series-based implicit spatial FD coefficients, we derive coefficients based on least-squares optimisation. Dispersion analysis and modelling examples demonstrate that our proposed method effectively decreases both the temporal and spatial dispersions and thus provides more accurate wavefields.

  4. A method for reducing the order of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Masri, S. F.; Miller, R. K.; Sassi, H.; Caughey, T. K.

    1984-06-01

    An approximate method that uses conventional condensation techniques for linear systems together with the nonparametric identification of the reduced-order model generalized nonlinear restoring forces is presented for reducing the order of discrete multidegree-of-freedom dynamic systems that possess arbitrary nonlinear characteristics. The utility of the proposed method is demonstrated by considering a redundant three-dimensional finite-element model half of whose elements incorporate hysteretic properties. A nonlinear reduced-order model, of one-third the order of the original model, is developed on the basis of wideband stationary random excitation and the validity of the reduced-order model is subsequently demonstrated by its ability to predict with adequate accuracy the transient response of the original nonlinear model under a different nonstationary random excitation.

  5. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.

  6. Reduced Order Methods for Prediction of Thermal-Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, A.; Rizzi, S. A.

    2004-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing random nonlinear vibrations in the presence of thermal loading. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.

  7. A Nonlinear Reduced Order Method for Prediction of Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2006-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing geometrically nonlinear random vibrations. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.
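
    The final step of this workflow, linear damage accumulation from counted stress ranges, is simple enough to sketch. The Basquin-type S-N curve parameters and the counted cycles below are illustrative assumptions, not values from the study.

    ```python
    # Hedged sketch of linear (Palmgren-Miner) damage accumulation from
    # rainflow-counted stress ranges, using an assumed Basquin-type S-N curve.
    def cycles_to_failure(stress_range, A=1.0e12, m=3.0):
        """Assumed S-N curve: N = A * S^(-m), with S in MPa."""
        return A * stress_range**(-m)

    # (stress range in MPa, counted cycles) pairs, e.g. from a rainflow count
    counted = [(40.0, 1.0e5), (80.0, 2.0e4), (120.0, 5.0e3)]

    damage = sum(n / cycles_to_failure(s) for s, n in counted)
    print(f"accumulated damage D = {damage:.3f}  (failure predicted when D >= 1)")
    print(f"estimated life: {1.0 / damage:.1f} repetitions of the analysed time history")
    ```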

  8. [Preliminary study on correlation between diversity of soluble proteins and producing area of Cordyceps sinensis].

    PubMed

    Ren, Yan; Qiu, Yi; Wan, De-Guang; Lu, Xian-Ming; Guo, Jin-Lin

    2013-05-01

    The content and type of soluble proteins in Cordyceps sinensis from different producing areas and processed with different methods were analyzed with the Bradford method and 2-DE technology, in order to identify significant differences in soluble proteins among C. sinensis samples processed with different methods and from different producing areas. The preliminary study indicated that the content and diversity of soluble proteins were related to producing area and processing method to some extent.

  9. High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation

    NASA Astrophysics Data System (ADS)

    Anderson, R.; Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Quezada de Luna, M.; Rieben, R.; Tomov, V.

    2017-04-01

    In this work we present a FCT-like Maximum-Principle Preserving (MPP) method to solve the transport equation. We use high-order polynomial spaces; in particular, we consider up to 5th order spaces in two and three dimensions and 23rd order spaces in one dimension. The method combines the concepts of positive basis functions for discontinuous Galerkin finite element spatial discretization, locally defined solution bounds, element-based flux correction, and non-linear local mass redistribution. We consider a simple 1D problem with non-smooth initial data to explain and understand the behavior of different parts of the method. Convergence tests in space indicate that high-order accuracy is achieved. Numerical results from several benchmarks in two and three dimensions are also reported.

  10. Pseudo-second order models for the adsorption of safranin onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth

    2007-04-02

    Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second order models of Ho, Sobkowsk and Czerwinski, Blanchard et al. and Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo-second order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho had similar ideas on the pseudo-second order model but with different assumptions. The best fit of experimental data by Ho's pseudo-second order expression under both linear and non-linear regression showed that Ho's pseudo-second order model was a better kinetic expression than the other pseudo-second order kinetic expressions.

  11. A New Discretization Method of Order Four for the Numerical Solution of One-Space Dimensional Second-Order Quasi-Linear Hyperbolic Equation

    ERIC Educational Resources Information Center

    Mohanty, R. K.; Arora, Urvashi

    2002-01-01

    Three level-implicit finite difference methods of order four are discussed for the numerical solution of the mildly quasi-linear second-order hyperbolic equation A(x, t, u)u[subscript xx] + 2B(x, t, u)u[subscript xt] + C(x, t, u)u[subscript tt] = f(x, t, u, u[subscript x], u[subscript t]), 0 less than x less than 1, t greater than 0 subject to…

  12. Lagrangian particle method for compressible fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin -Chiang

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of the approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented, as well as examples of complex multiphase flows.

  13. Lagrangian particle method for compressible fluid dynamics

    DOE PAGES

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin -Chiang

    2018-02-09

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of the approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented, as well as examples of complex multiphase flows.

  14. Unconditionally stable finite-difference time-domain methods for modeling the Sagnac effect

    NASA Astrophysics Data System (ADS)

    Novitski, Roman; Scheuer, Jacob; Steinberg, Ben Z.

    2013-02-01

    We present two unconditionally stable finite-difference time-domain (FDTD) methods for modeling the Sagnac effect in rotating optical microsensors. The methods are based on the implicit Crank-Nicolson scheme, adapted to hold in the rotating system reference frame—the rotating Crank-Nicolson (RCN) methods. The first method (RCN-2) is second order accurate in space whereas the second method (RCN-4) is fourth order accurate. Both methods are second order accurate in time. We show that the RCN-4 scheme is more accurate and has better dispersion isotropy. The numerical results show good correspondence with the expression for the classical Sagnac resonant frequency splitting when using group refractive indices of the resonant modes of a microresonator. Also we show that the numerical results are consistent with the perturbation theory for the rotating degenerate microcavities. We apply our method to simulate the effect of rotation on an entire Coupled Resonator Optical Waveguide (CROW) consisting of a set of coupled microresonators. Preliminary results validate the formation of a rotation-induced gap at the center of a transfer function of a CROW.

  15. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variable, while finite element methods emphasize the discretization of the dependent variable (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.

  16. Protein detection through different platforms of immuno-loop-mediated isothermal amplification

    NASA Astrophysics Data System (ADS)

    Pourhassan-Moghaddam, Mohammad; Rahmati-Yamchi, Mohammad; Akbarzadeh, Abolfazl; Daraee, Hadis; Nejati-Koshki, Kazem; Hanifehpour, Younes; Joo, Sang Woo

    2013-11-01

    Different immunoassay-based methods have been devised to detect protein targets. These methods have some challenges that make them inefficient for assaying proteins present at ultra-low amounts. ELISA, iPCR, iRCA, and iNASBA are the common immunoassay-based methods of protein detection, each of which has specific and shared technical challenges, making it necessary to introduce a novel method that avoids these problems in the detection of target proteins. Here we propose a new method, termed `immuno-loop-mediated isothermal amplification' or `iLAMP'. This new method is free from the problems of the previous methods and has significant advantages over them. In this paper we also offer various configurations to improve the applicability of this method to real-world sample analyses. Important potential applications of this method are stated as well.

  17. Evaluating Equity at the Local Level Using Bootstrap Tests. Research Report 2016-4

    ERIC Educational Resources Information Center

    Kim, YoungKoung; DeCarlo, Lawrence T.

    2016-01-01

    Because of concerns about test security, different test forms are typically used across different testing occasions. As a result, equating is necessary in order to get scores from the different test forms that can be used interchangeably. In order to assure the quality of equating, multiple equating methods are often examined. Various equity…

  18. High order filtering methods for approximating hyberbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1990-01-01

    In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. A filtering method which is developed uses simple central differencing of arbitrarily high order accuracy, except when a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy, but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems and a significant speed up of generally a factor of almost three over the full ENO method.

  19. Investigation of second-order hyperpolarizability of some organic compounds

    NASA Astrophysics Data System (ADS)

    Tajalli, H.; Zirak, P.; Ahmadi, S.

    2003-04-01

    In this work, we measured the second order hyperpolarizability of some organic materials with the EFISH method, calculated the second order hyperpolarizability of 13 organic compounds with the Mopac6 software, and investigated the different factors that affect the magnitude of the second order hyperpolarizability and ways to increase it.

  20. A comparison of different methods to implement higher order derivatives of density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, Hubertus J.J.

    Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher order derivatives of the density functionals are required. These derivatives might be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work in which all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic-algebra-generated code and our automatic differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
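
    The contrast between the approaches can be sketched on a toy functional. Below, the LDA exchange energy density e_x(ρ) = -(3/4)(3/π)^(1/3) ρ^(4/3) is differentiated symbolically with sympy and compared with a hand-coded derivative; this is an illustration only, not the functionals or code-generation framework of the report.

    ```python
    # Hedged sketch: hand-coded ("manual") vs symbolically generated derivatives of a
    # toy functional, the LDA exchange energy density e_x(rho) = -Cx * rho^(4/3).
    import sympy as sp

    rho = sp.Symbol("rho", positive=True)
    Cx = sp.Rational(3, 4) * (3 / sp.pi) ** sp.Rational(1, 3)
    e_x = -Cx * rho ** sp.Rational(4, 3)

    # Symbolic first and second derivatives, turned into numeric callables.
    v_x = sp.lambdify(rho, sp.diff(e_x, rho), "math")
    f_x = sp.lambdify(rho, sp.diff(e_x, rho, 2), "math")

    # Hand-coded first derivative for comparison: d e_x / d rho = -(4/3) Cx rho^(1/3)
    def v_x_manual(r):
        return -(4.0 / 3.0) * float(Cx) * r ** (1.0 / 3.0)

    r = 0.3
    print("symbolic v_x :", v_x(r))
    print("manual   v_x :", v_x_manual(r))
    print("symbolic f_x :", f_x(r))
    ```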

  1. A lattice Boltzmann model for the Burgers-Fisher equation.

    PubMed

    Zhang, Jianying; Yan, Guangwu

    2010-06-01

    A lattice Boltzmann model is developed for the one- and two-dimensional Burgers-Fisher equation based on the method of the higher-order moment of equilibrium distribution functions and a series of partial differential equations on different time scales. In order to obtain the two-dimensional Burgers-Fisher equation, the vector sigma(j) is used. In order to overcome the drawbacks of "error rebound," a new assumption for the additional distribution is presented, in which two additional terms, of first and second order respectively, are used. Comparisons with results obtained by other methods reveal that the numerical solutions obtained by the proposed method converge to the exact solutions. The model under the new assumption gives better results than the one under the second-order assumption. (c) 2010 American Institute of Physics.

  2. Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.

    PubMed

    Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M

    2017-05-15

    We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first and second moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy with SH arrays that have a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for application in low light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher order aberrations, and that SABRE-M can achieve performance equal to SABRE on an SH grid of halved sampling.

  3. On optimizing the treatment of exchange perturbations.

    NASA Technical Reports Server (NTRS)

    Hirschfelder, J. O.; Chipman, D. M.

    1972-01-01

    Most theories of exchange perturbations would give the exact energy and wave function if carried out to an infinite order. However, the different methods give different values for the second-order energy, and different values for E(1), the expectation value of the Hamiltonian corresponding to the zeroth- plus first-order wave function. In the presented paper, it is shown that the zeroth- plus first-order wave function obtained by optimizing the basic equation which is used in most exchange perturbation treatments is the exact wave function for the perturbation system and E(1) is the exact energy.

  4. High-order centered difference methods with sharp shock resolution

    NASA Technical Reports Server (NTRS)

    Gustafsson, Bertil; Olsson, Pelle

    1994-01-01

    In this paper we consider high-order centered finite difference approximations of hyperbolic conservation laws. We propose different ways of adding artificial viscosity to obtain sharp shock resolution. For the Riemann problem we give simple explicit formulas for obtaining stationary one and two-point shocks. This can be done for any order of accuracy. It is shown that the addition of artificial viscosity is equivalent to ensuring the Lax k-shock condition. We also show numerical experiments that verify the theoretical results.
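
    One elementary manifestation of this equivalence can be checked numerically: for linear advection with positive wave speed, adding a second-difference artificial viscosity scaled by dx/2 to the centered first-derivative approximation reproduces first-order upwind differencing exactly. The grid and test profile below are illustrative, and the check concerns this simplest case rather than the paper's high-order constructions.

    ```python
    # Small numerical check (not the paper's construction): centered differencing
    # plus a scaled second-difference artificial viscosity equals upwind differencing.
    import numpy as np

    nx = 64
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.exp(-100.0 * (x - 0.5) ** 2)              # arbitrary smooth, periodic test profile

    central = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    second_diff = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    upwind = (u - np.roll(u, 1)) / dx

    # centered derivative minus (dx/2) * u_xx equals the upwind difference exactly
    print(np.allclose(central - 0.5 * dx * second_diff, upwind))   # True
    ```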

  5. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1991-01-01

    The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.

  6. Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines

    NASA Astrophysics Data System (ADS)

    Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž

    2017-05-01

    This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach to determining the transition band frequencies and the optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated frequencies. These pass-band and stop-band frequencies are then used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. The developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed method were superior to those designed using existing methods for all analyzed cases.
    Highlights:
    • Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders
    • Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform
    • Spectral analyses showed deficiencies of existing methods in determining the FIR filter order
    • A new method of determining the FIR filter order for processing pressure traces was proposed
    • The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
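
    A hedged sketch of the filtering step alone is shown below: an equiripple (Parks-McClellan) FIR low-pass designed with scipy.signal.remez and applied with zero-phase filtering. The sampling rate, band edges, and filter length are illustrative assumptions, not values produced by the paper's automated workflow.

    ```python
    # Hedged sketch: equiripple FIR low-pass design and zero-phase filtering of a
    # synthetic "pressure trace". All numerical values are illustrative assumptions.
    import numpy as np
    from scipy import signal

    fs = 100_000.0                         # assumed sampling rate (Hz)
    f_pass, f_stop = 3_000.0, 5_000.0      # assumed transition band from the DFT/STFT step
    numtaps = 201                          # assumed filter length (order + 1)

    taps = signal.remez(numtaps, [0.0, f_pass, f_stop, fs / 2.0], [1.0, 0.0], fs=fs)

    # Synthetic trace: low-frequency combustion content plus high-frequency oscillation
    t = np.arange(0, 0.05, 1.0 / fs)
    p = np.sin(2 * np.pi * 500 * t) + 0.2 * np.sin(2 * np.pi * 12_000 * t)

    p_filt = signal.filtfilt(taps, [1.0], p)   # zero-phase filtering avoids phase lag
    print("residual high-frequency amplitude:",
          np.abs(p_filt - np.sin(2 * np.pi * 500 * t)).max())
    ```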

  7. Second order upwind Lagrangian particle method for Euler equations

    DOE PAGES

    Samulyak, Roman; Chen, Hsin -Chiang; Yu, Kwangmin

    2016-06-01

    A new second order upwind Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of the approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) an upwind second-order particle-based algorithm with limiter, providing accuracy and long term stability, and (c) accurate resolution of states at free interfaces. Numerical verification tests demonstrating the convergence order for fixed domain and free surface problems are presented.

  8. Second order upwind Lagrangian particle method for Euler equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samulyak, Roman; Chen, Hsin -Chiang; Yu, Kwangmin

    A new second order upwind Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of the approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) an upwind second-order particle-based algorithm with limiter, providing accuracy and long term stability, and (c) accurate resolution of states at free interfaces. Numerical verification tests demonstrating the convergence order for fixed domain and free surface problems are presented.

  9. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
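
    For reference, the baseline Crank-Nicolson scheme mentioned above can be sketched in one spatial dimension (the high-order compact collocation scheme itself is not reproduced here); the diffusivity, grid, and time step are illustrative assumptions.

    ```python
    # Hedged 1D sketch of the baseline Crank-Nicolson scheme for u_t = alpha * u_xx
    # with homogeneous Dirichlet boundary conditions.
    import numpy as np

    alpha, nx, dt, nsteps = 1.0, 50, 1e-4, 200
    x = np.linspace(0.0, 1.0, nx + 1)
    dx = x[1] - x[0]
    r = alpha * dt / (2.0 * dx**2)

    # Interior second-difference operator; scheme: (I - r*L) u^{n+1} = (I + r*L) u^n
    L = (np.diag(-2.0 * np.ones(nx - 1))
         + np.diag(np.ones(nx - 2), 1)
         + np.diag(np.ones(nx - 2), -1))
    A = np.eye(nx - 1) - r * L
    B = np.eye(nx - 1) + r * L

    u = np.sin(np.pi * x)                    # initial condition, u = 0 at both ends
    for _ in range(nsteps):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

    exact = np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * nsteps * dt)
    print("max error:", np.abs(u - exact).max())
    ```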

  10. [C, N and P stoichiometric characteristics of different root orders for three dominant tree species in subalpine forests of western Sichuan, China].

    PubMed

    Tang, Shi-shan; Yang, Wan-qin; Xiong, Li; Yin, Rui; Wang, Hai-peng; Zhang, Yan; Xu, Zhen-feng

    2015-02-01

    Fine root order was classified according to Pregitzer's method. This study measured carbon (C), nitrogen (N) and phosphorus (P) concentrations of the 1-5 root orders (diameter < 2 mm) in three dominant subalpine tree species (Betula albosinensis, Abies faxoniana and Picea asperata) of western Sichuan. Their stoichiometric ratios of different root orders were also calculated. The results showed that C concentration, C/N and C/P increased, but N and P concentrations decreased from the first to fifth order of fine root for all tree species. No significant changes in N/P among root orders were detected in each species. There were significant differences in C, N, P concentrations and their stoichiometric ratios among the tree species. The species-associated differences were dependent on root order. There were significant correlations between C, N, P concentrations and their stoichiometric ratios in the three tree species.

  11. Stirling engine design manual, 2nd edition

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1983-01-01

    This manual is intended to serve as an introduction to Stirling cycle heat engines, as a key to the available literature on Stirling engines and to identify nonproprietary Stirling engine design methodologies. Two different fully described Stirling engines are discussed. Engine design methods are categorized as first order, second order, and third order with increased order number indicating increased complexity. FORTRAN programs are listed for both an isothermal second order design program and an adiabatic second order design program. Third order methods are explained and enumerated. In this second edition of the manual the references are updated. A revised personal and corporate author index is given and an expanded directory lists over 80 individuals and companies active in Stirling engines.

  12. Transient signal isotope analysis: validation of the method for isotope signal synchronization with the determination of amplifier first-order time constants.

    PubMed

    Gourgiotis, Alkiviadis; Manhès, Gérard; Louvat, Pascale; Moureau, Julien; Gaillardet, Jérôme

    2015-09-30

    During transient signal acquisition by Multi-Collection Inductively Coupled Plasma Mass Spectrometry (MC-ICPMS), an isotope ratio increase or decrease (hereafter, isotopic drift) is often observed, which is related to the different time responses of the amplifiers involved in multi-collection. This isotopic drift affects the quality of the isotopic data and, in a recent study, a method of internal amplifier signal synchronization for isotopic drift correction was proposed. In this work, the determination of the amplifier time constants was investigated in order to validate that method. Two different MC-ICPMS instruments, the Neptune and the Neptune Plus, were used, and both the lead transient signals and the signal decay curves of the amplifiers were examined. Our results show that the first part of the amplifier signal decay curve is characterized by a pure exponential decay. This part of the signal decay was used for the effective calculation of the amplifier first-order time constants. The small differences between these time constants were compared with the time lag values obtained from the method of isotope signal synchronization and were found to be in good agreement. This work proposes a way of determining amplifier first-order time constants. We show that the isotopic drift is directly related to the amplifier first-order time constants, and the method of internal amplifier signal synchronization for isotope ratio drift correction is thereby validated. Copyright © 2015 John Wiley & Sons, Ltd.
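
    The extraction of a first-order time constant from the pure-exponential part of a decay curve can be sketched as a simple non-linear fit; the synthetic decay below is illustrative, not measured amplifier data.

    ```python
    # Hedged sketch: estimate a first-order amplifier time constant tau by fitting
    # V(t) = V0 * exp(-t / tau) to the exponential part of a decay curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, v0, tau):
        return v0 * np.exp(-t / tau)

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 5.0, 200)                       # seconds
    v = decay(t, 10.0, 1.2) + 0.02 * rng.standard_normal(t.size)   # synthetic data

    (v0_fit, tau_fit), _ = curve_fit(decay, t, v, p0=[v.max(), 1.0])
    print(f"fitted time constant tau = {tau_fit:.3f} s")
    ```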

  13. Discrete conservation laws and the convergence of long time simulations of the mkdv equation

    NASA Astrophysics Data System (ADS)

    Gorria, C.; Alejo, M. A.; Vega, L.

    2013-02-01

    Pseudospectral collocation methods and finite difference methods have been used for approximating an important family of soliton-like solutions of the mKdV equation. These solutions present a structural instability which makes it difficult to approximate their evolution over long time intervals with sufficient accuracy. Standard numerical methods do not guarantee convergence to the proper solution of the initial value problem and often fail by approaching solutions associated with different initial conditions. In this setting, numerical schemes that preserve the discrete invariants related to some conservation laws of this equation produce better results than methods which only ensure a high consistency order. Pseudospectral spatial discretization appears to be the most robust of the numerical methods, but finite difference schemes are useful for analyzing the role played by the conservation of the invariants in the convergence.

  14. High order spectral volume and spectral difference methods on unstructured grids

    NASA Astrophysics Data System (ADS)

    Kannan, Ravishekar

    The spectral volume (SV) and the spectral difference (SD) methods were developed by Wang and Liu and their collaborators for conservation laws on unstructured grids. They were introduced to achieve high-order accuracy in an efficient manner. Recently, these methods were extended to three-dimensional systems and to the Navier Stokes equations. The simplicity and robustness of these methods have made them competitive against other higher order methods such as the discontinuous Galerkin and residual distribution methods. Although explicit TVD Runge-Kutta schemes for the temporal advancement are easy to implement, they suffer from small time step limited by the Courant-Friedrichs-Lewy (CFL) condition. When the polynomial order is high or when the grid is stretched due to complex geometries or boundary layers, the convergence rate of explicit schemes slows down rapidly. Solution strategies to remedy this problem include implicit methods and multigrid methods. A novel implicit lower-upper symmetric Gauss-Seidel (LU-SGS) relaxation method is employed as an iterative smoother. It is compared to the explicit TVD Runge-Kutta smoothers. For some p-multigrid calculations, combining implicit and explicit smoothers for different p-levels is also studied. The multigrid method considered is nonlinear and uses Full Approximation Scheme (FAS). An overall speed-up factor of up to 150 is obtained using a three-level p-multigrid LU-SGS approach in comparison with the single level explicit method for the Euler equations for the 3rd order SD method. A study of viscous flux formulations was carried out for the SV method. Three formulations were used to discretize the viscous fluxes: local discontinuous Galerkin (LDG), a penalty method and the 2nd method of Bassi and Rebay. Fourier analysis revealed some interesting advantages for the penalty method. These were implemented in the Navier Stokes solver. An implicit and p-multigrid method was also implemented for the above. An overall speed-up factor of up to 1500 is obtained using a three-level p-multigrid LU-SGS approach in comparison with the single level explicit method for the Navier-Stokes equations. The SV method was also extended to turbulent flows. The RANS based SA model was used to close the Reynolds stresses. The numerical results are very promising and indicate that the approaches have great potentials for 3D flow problems.

  15. Second order finite-difference ghost-point multigrid methods for elliptic problems with discontinuous coefficients on an arbitrary interface

    NASA Astrophysics Data System (ADS)

    Coco, Armando; Russo, Giovanni

    2018-05-01

    In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and is second order accurate in the solution and the gradient itself. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, that represents the numerical solution on that grid point, and a ghost value, that represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficients. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative and therefore suitable stencils must be chosen to discretize interface conditions in order to achieve second order accuracy in the solution and its gradient. A proper treatment of the interface conditions will allow the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jumps in the coefficients: the order of accuracy, monotonicity of the errors and a good convergence factor are maintained by the scheme.

  16. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against the other models along with the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on time requirements for the evaluation of each model, where the Meta-Model requires the least amount of time for computation by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.

  17. Verification of a non-hydrostatic dynamical core using the horizontal spectral element method and vertical finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-11-01

    The non-hydrostatic (NH) compressible Euler equations for dry atmosphere were solved in a simplified two-dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. By using horizontal SEM, which decomposes the physical domain into smaller pieces with a small communication stencil, a high level of scalability can be achieved. By using vertical FDM, an easy method for coupling the dynamics and existing physics packages can be provided. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind-biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative and integral terms. For temporal integration, a time-split, third-order Runge-Kutta (RK3) integration technique was applied. The Euler equations that were used here are in flux form based on the hydrostatic pressure vertical coordinate. The equations are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate was implemented in this model. We validated the model by conducting the widely used standard tests: linear hydrostatic mountain wave, tracer advection, and gravity wave over the Schär-type mountain, as well as density current, inertia-gravity wave, and rising thermal bubble. The results from these tests demonstrated that the model using the horizontal SEM and the vertical FDM is accurate and robust provided sufficient diffusion is applied. The results with various horizontal resolutions also showed convergence of second-order accuracy due to the accuracy of the time integration scheme and that of the vertical direction, although high-order basis functions were used in the horizontal. By using the 2-D slice model, we effectively showed that the combined spatial discretization method of the spectral element and finite difference methods in the horizontal and vertical directions, respectively, offers a viable method for development of an NH dynamical core.

  18. Using multi-attribute decision-making approaches in the selection of a hospital management system.

    PubMed

    Arasteh, Mohammad Ali; Shamshirband, Shahaboddin; Yee, Por Lip

    2018-01-01

    Selecting the most appropriate organizational software is always a real challenge for managers, especially IT directors. The term "enterprise software selection" refers to purchasing, creating, or ordering software that, first, is best adapted to the requirements of the organization and, second, has a suitable price and technical support. Specifying selection criteria and ranking them is the primary prerequisite for this action. This article provides a method to evaluate, rank, and compare the available enterprise software in order to choose the apt one. The method consists of a three-stage process. First, the method identifies and assesses the organizational requirements. Second, it selects the best approach among three possibilities: in-house production, buying software, or ordering special software for native use. Third, the method evaluates, compares, and ranks the alternative software. The third stage uses different methods of multi-attribute decision making (MADM) and compares the resulting rankings. Based on different characteristics of the problem, several methods were tested, namely, the Analytic Hierarchy Process (AHP), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Elimination and Choice Expressing Reality (ELECTRE), and an easy weighting method. Finally, we propose the most practical method for such problems.
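
    Of the MADM techniques listed above, TOPSIS is the easiest to condense into a few lines; the sketch below ranks hypothetical software alternatives against made-up criteria and weights, and is only an illustration of the ranking step, not the article's full three-stage procedure.

      import numpy as np

      def topsis(X, w, benefit):
          # Rank alternatives (rows of X) by closeness to the ideal solution.
          # w: criteria weights summing to 1; benefit[j] is True if criterion j is "larger is better".
          R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
          V = R * w                                    # weighted normalized matrix
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_plus  = np.linalg.norm(V - ideal, axis=1)
          d_minus = np.linalg.norm(V - anti, axis=1)
          return d_minus / (d_plus + d_minus)          # closeness coefficient, higher is better

      # Hypothetical scores of three packages on (cost, support, functional fit); cost is a cost criterion.
      X = np.array([[120.0, 7.0, 8.0],
                    [ 90.0, 6.0, 7.0],
                    [150.0, 9.0, 9.0]])
      w = np.array([0.4, 0.3, 0.3])
      closeness = topsis(X, w, benefit=np.array([False, True, True]))
      print(np.argsort(closeness)[::-1])               # alternative indices, best first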

  19. Numerical solution of the wave equation with variable wave speed on nonconforming domains by high-order difference potentials

    NASA Astrophysics Data System (ADS)

    Britt, S.; Tsynkov, S.; Turkel, E.

    2018-02-01

    We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.

  20. A Vortex Particle-Mesh method for subsonic compressible flows

    NASA Astrophysics Data System (ADS)

    Parmentier, Philippe; Winckelmans, Grégoire; Chatelain, Philippe

    2018-02-01

    This paper presents the implementation and validation of a remeshed Vortex Particle-Mesh (VPM) method capable of simulating complex compressible and viscous flows. It is supplemented with a radiation boundary condition in order for the method to accommodate the radiating quantities of the flow. The efficiency of the methodology relies on the use of an underlying grid; it allows the use of a FFT-based Poisson solver to calculate the velocity field, and the use of high-order isotropic finite differences to evaluate the non-advective terms in the Lagrangian form of the conservation equations. The Möhring analogy is then also used to further obtain the far-field sound produced by two co-rotating Gaussian vortices. It is demonstrated that the method is in excellent quantitative agreement with reference results that were obtained using a high-order Eulerian method and using a high-order remeshed Vortex Particle (VP) method.

  1. Application of multiattribute decision-making methods for the determination of relative significance factor of impact categories.

    PubMed

    Noh, Jaesung; Lee, Kun Mo

    2003-05-01

    A relative significance factor (f(i)) of an impact category is the external weight of the impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytical hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i). Thus, all three methods can be applied to the determination of f(i). The rank-order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although it has an inherent weakness relative to the fuzzy method in expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank-order centroid method is considered a practical method for the determination of f(i) because it is easier and simpler to use compared to the AHP and the fuzzy method.
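
    For reference, the rank-order centroid weights mentioned above have a closed form, w_i = (1/n) * sum_{k=i}^{n} 1/k for the criterion ranked i out of n; the short sketch below evaluates it. The choice of four categories is only an example.

      def rank_order_centroid(n):
          # Rank-order centroid weights for n criteria ranked 1 (most important) to n.
          return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

      # Example: four impact categories already placed in rank order.
      weights = rank_order_centroid(4)
      print(weights)        # [0.5208..., 0.2708..., 0.1458..., 0.0625]
      print(sum(weights))   # 1.0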

  2. Fully decoupled monolithic projection method for natural convection problems

    NASA Astrophysics Data System (ADS)

    Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il

    2017-04-01

    To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term, incurring only second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally applied to the global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation is required to be solved, and preserves the second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts a three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.
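
    The time/space building blocks named above (Crank-Nicolson in time, second-order central differences in space) are easiest to see on a scalar model problem; the sketch below applies them to 1-D diffusion with Dirichlet boundaries. This is only an analogue under simplifying assumptions, not the paper's monolithic projection method for the coupled velocity-pressure-temperature system.

      import numpy as np

      # Crank-Nicolson + 2nd-order central differences for u_t = nu * u_xx, u(0) = u(1) = 0.
      N, nu, dt, steps = 64, 1.0, 1e-3, 200
      x = np.linspace(0.0, 1.0, N + 1)
      dx = x[1] - x[0]
      u = np.sin(np.pi * x)                           # initial condition

      # Tridiagonal second-difference operator on the interior points.
      L = (np.diag(-2.0 * np.ones(N - 1)) +
           np.diag(np.ones(N - 2), 1) +
           np.diag(np.ones(N - 2), -1)) / dx**2
      I = np.eye(N - 1)
      A = I - 0.5 * dt * nu * L                       # implicit half of Crank-Nicolson
      B = I + 0.5 * dt * nu * L                       # explicit half

      for _ in range(steps):
          u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

      exact = np.exp(-nu * np.pi**2 * dt * steps) * np.sin(np.pi * x)
      print(np.max(np.abs(u - exact)))                # small; scheme is 2nd order in dt and dx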

  3. Given a one-step numerical scheme, on which ordinary differential equations is it exact?

    NASA Astrophysics Data System (ADS)

    Villatoro, Francisco R.

    2009-01-01

    A necessary condition for a (non-autonomous) ordinary differential equation to be exactly solved by a one-step, finite difference method is that the principal term of its local truncation error be null. A procedure to determine some ordinary differential equations exactly solved by a given numerical scheme is developed. Examples of differential equations exactly solved by the explicit Euler, implicit Euler, trapezoidal rule, second-order Taylor, third-order Taylor, van Niekerk's second-order rational, and van Niekerk's third-order rational methods are presented.
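
    In the spirit of the paper's question, the following small experiment (our own example, not one from the article) checks exactness numerically: for u' = 1 + 2t with u(0) = 0 the trapezoidal rule reproduces the exact solution u(t) = t + t^2 to round-off level, because its principal truncation term vanishes for this right-hand side, while the explicit Euler method does not.

      import numpy as np

      f = lambda t: 1.0 + 2.0 * t       # u' = f(t), exact solution u(t) = t + t**2, u(0) = 0
      exact = lambda t: t + t**2

      h, T = 0.1, 1.0
      ts = np.arange(0.0, T, h)

      u_euler, u_trap = 0.0, 0.0
      for t in ts:
          u_euler += h * f(t)                          # explicit Euler step
          u_trap  += 0.5 * h * (f(t) + f(t + h))       # trapezoidal-rule step

      print(abs(u_euler - exact(T)))    # O(h): Euler is not exact for this equation
      print(abs(u_trap - exact(T)))     # round-off level: trapezoidal rule solves it exactly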

  4. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth order boundary value problems. The proposed method is based on a Legendre wavelet in which the Legendre polynomial is used. The mechanism of the method is to use collocation points that convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to the results of other methods. The proposed method is computationally more effective and leads to more accurate results as compared to other methods from the literature.

  5. Boundary conditions in Chebyshev and Legendre methods

    NASA Technical Reports Server (NTRS)

    Canuto, C.

    1984-01-01

    Two different ways of treating non-Dirichlet boundary conditions in Chebyshev and Legendre collocation methods are discussed for second order differential problems. An error analysis is provided. The effect of preconditioning the corresponding spectral operators by finite difference matrices is also investigated.

  6. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of the SNR value based on the NLLSR method is compared with the three existing methods of nearest neighbourhood, first-order interpolation and the combination of both nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method is able to produce better estimation accuracy as compared to the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method produces an SNR error difference of less than approximately 1% compared to the other three existing methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  7. Equivalence of internal and external mixture schemes of single scattering properties in vector radiative transfer

    PubMed Central

    Mukherjee, Lipi; Zhai, Peng-Wang; Hu, Yongxiang; Winker, David M.

    2018-01-01

    Polarized radiation fields in a turbid medium are influenced by single-scattering properties of scatterers. It is common that media contain two or more types of scatterers, which makes it essential to properly mix single-scattering properties of different types of scatterers in the vector radiative transfer theory. The vector radiative transfer solvers can be divided into two basic categories: the stochastic and deterministic methods. The stochastic method is basically the Monte Carlo method, which can handle scatterers with different scattering properties explicitly. This mixture scheme is called the external mixture scheme in this paper. The deterministic methods, however, can only deal with a single set of scattering properties in the smallest discretized spatial volume. The single-scattering properties of different types of scatterers have to be averaged before they are input to deterministic solvers. This second scheme is called the internal mixture scheme. The equivalence of these two different mixture schemes of scattering properties has not been demonstrated so far. In this paper, polarized radiation fields for several scattering media are solved using the Monte Carlo and successive order of scattering (SOS) methods and scattering media contain two types of scatterers: Rayleigh scatterers (molecules) and Mie scatterers (aerosols). The Monte Carlo and SOS methods employ external and internal mixture schemes of scatterers, respectively. It is found that the percentage differences between radiances solved by these two methods with different mixture schemes are of the order of 0.1%. The differences of Q/I, U/I, and V/I are of the order of 10^-5 to 10^-4, where I, Q, U, and V are the Stokes parameters. Therefore, the equivalence between these two mixture schemes is confirmed to the accuracy level of the radiative transfer numerical benchmarks. This result provides important guidelines for many radiative transfer applications that involve the mixture of different scattering and absorptive particles. PMID:29047543

  8. Generalized energy and potential enstrophy conserving finite difference schemes for the shallow water equations

    NASA Technical Reports Server (NTRS)

    Abramopoulos, Frank

    1988-01-01

    The conditions under which finite difference schemes for the shallow water equations can conserve both total energy and potential enstrophy are considered. A method of deriving such schemes using operator formalism is developed. Several such schemes are derived for the A-, B- and C-grids. The derived schemes include second-order schemes and pseudo-fourth-order schemes. The simplest B-grid pseudo-fourth-order schemes are presented.

  9. A high-order positivity-preserving single-stage single-step method for the ideal magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Christlieb, Andrew J.; Feng, Xiao; Seal, David C.; Tang, Qi

    2016-07-01

    We propose a high-order finite difference weighted ENO (WENO) method for the ideal magnetohydrodynamics (MHD) equations. The proposed method is single-stage (i.e., it has no internal stages to store), single-step (i.e., it has no time history that needs to be stored), maintains a discrete divergence-free condition on the magnetic field, and has the capacity to preserve the positivity of the density and pressure. To accomplish this, we use a Taylor discretization of the Picard integral formulation (PIF) of the finite difference WENO method proposed in Christlieb et al. (2015) [23], where the focus is on a high-order discretization of the fluxes (as opposed to the conserved variables). We use the version where fluxes are expanded to third-order accuracy in time, and for the fluid variables space is discretized using the classical fifth-order finite difference WENO discretization. We use constrained transport in order to obtain divergence-free magnetic fields, which means that we simultaneously evolve the magnetohydrodynamic equations (which include an evolution equation for the magnetic field) and the magnetic potential equation alongside each other, and set the magnetic field to be the (discrete) curl of the magnetic potential after each time step. In this work, we compute these derivatives to fourth-order accuracy. In order to retain a single-stage, single-step method, we develop a novel Lax-Wendroff discretization for the evolution of the magnetic potential, where we start with technology used for Hamilton-Jacobi equations in order to construct a non-oscillatory magnetic field. The end result is an algorithm that is similar to our previous work Christlieb et al. (2014) [8], but this time the time stepping is replaced by a Taylor method with the addition of a positivity-preserving limiter. Finally, positivity preservation is realized by introducing a parameterized flux limiter that considers a linear combination of high- and low-order numerical fluxes. The choice of the free parameter is then given in such a way that the fluxes are limited towards the low-order solver until positivity is attained. Given the lack of additional degrees of freedom in the system, this positivity limiter lacks energy conservation where the limiter turns on. However, this ingredient can be dropped for problems where the pressure does not become negative. We present two and three dimensional numerical results for several standard test problems including a smooth Alfvén wave (to verify formal order of accuracy), shock tube problems (to test the shock-capturing ability of the scheme), Orszag-Tang, and cloud shock interactions. These results assert the robustness and verify the high order of accuracy of the proposed scheme.
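
    For orientation, the fifth-order finite difference WENO reconstruction that the scheme above builds on is the classical Jiang-Shu procedure; the sketch below reconstructs a left-biased interface value for a scalar stencil. It is only that standard ingredient, not the paper's Picard-integral, constrained-transport, or positivity-limiting machinery, and the sample data are arbitrary.

      import numpy as np

      def weno5_left(vm2, vm1, v0, vp1, vp2, eps=1e-6):
          # Classical Jiang-Shu 5th-order WENO reconstruction of the value at the
          # right face of cell i (left-biased), from the stencil {i-2, ..., i+2}.
          p0 = (2.0 * vm2 - 7.0 * vm1 + 11.0 * v0) / 6.0       # candidate 3rd-order reconstructions
          p1 = (-vm1 + 5.0 * v0 + 2.0 * vp1) / 6.0
          p2 = (2.0 * v0 + 5.0 * vp1 - vp2) / 6.0
          b0 = 13.0 / 12.0 * (vm2 - 2.0 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4.0 * vm1 + 3.0 * v0) ** 2
          b1 = 13.0 / 12.0 * (vm1 - 2.0 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
          b2 = 13.0 / 12.0 * (v0 - 2.0 * vp1 + vp2) ** 2 + 0.25 * (3.0 * v0 - 4.0 * vp1 + vp2) ** 2
          a = np.array([0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2])
          w = a / a.sum()                                      # nonlinear weights from ideal (0.1, 0.6, 0.3)
          return w[0] * p0 + w[1] * p1 + w[2] * p2

      # On smooth data the weights approach the ideal ones (5th-order accuracy);
      # near a jump they automatically bias toward the smoothest sub-stencil.
      print(weno5_left(*np.sin(0.1 * np.arange(5))))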

  10. Least squares reverse time migration of controlled order multiples

    NASA Astrophysics Data System (ADS)

    Liu, Y.

    2016-12-01

    Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders to minimize the difference between Born-modeling-predicted multiples and specific-order multiples from observational data in order to attenuate the crosstalk. This method is denoted as the least-squares reverse time migration of controlled order multiples (LSRTM-CM). Our numerical examples demonstrated that the LSRTM-CM can significantly improve image quality compared with the reverse time migration of multiples and the least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Nature Science Foundation of China (Grant Nos. 41430321 and 41374138).

  11. An improved finite-difference analysis of uncoupled vibrations of tapered cantilever beams

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1983-01-01

    An improved finite difference procedure for determining the natural frequencies and mode shapes of tapered cantilever beams undergoing uncoupled vibrations is presented. Boundary conditions are derived in the form of simple recursive relations involving the second order central differences. Results obtained by using the conventional first order central differences and the present second order central differences are compared, and it is observed that the present second order scheme is more efficient than the conventional approach. An important advantage offered by the present approach is that the results converge to exact values rapidly, and thus extrapolation of the results is not necessary. Consequently, the basic handicap of the classical finite difference method of solution, which requires Richardson's extrapolation procedure, is eliminated. Furthermore, for the cases considered herein, the present approach produces consistent lower bound solutions.
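
    To illustrate the basic step of turning second-order central differences into natural frequencies, the sketch below assembles the standard second-difference matrix for a uniform fixed-fixed string, whose exact eigenvalues (k*pi)^2 are known; this is a deliberately simpler surrogate for the tapered cantilever beam problem above and does not use the paper's recursive boundary relations.

      import numpy as np

      # Eigenvalues of -u'' = lambda * u on (0, 1) with u(0) = u(1) = 0,
      # discretized by 2nd-order central differences; exact values are (k*pi)**2.
      N = 200                                    # interior grid points
      h = 1.0 / (N + 1)
      A = (np.diag(2.0 * np.ones(N)) -
           np.diag(np.ones(N - 1), 1) -
           np.diag(np.ones(N - 1), -1)) / h**2

      lam = np.sort(np.linalg.eigvalsh(A))[:4]
      exact = (np.arange(1, 5) * np.pi) ** 2
      print(lam)
      print(exact)                               # the error in each eigenvalue shrinks as O(h**2)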

  12. A high-order staggered finite-element vertical discretization for non-hydrostatic atmospheric models

    DOE PAGES

    Guerra, Jorge E.; Ullrich, Paul A.

    2016-06-01

    Atmospheric modeling systems require economical methods to solve the non-hydrostatic Euler equations. Two major differences between hydrostatic models and a full non-hydrostatic description lie in the vertical velocity tendency and the numerical stiffness associated with sound waves. In this work we introduce a new arbitrary-order vertical discretization entitled the staggered nodal finite-element method (SNFEM). Our method uses a generalized discrete derivative that consistently combines the discontinuous Galerkin and spectral element methods on a staggered grid. Our combined method leverages the accurate wave propagation and conservation properties of spectral elements with staggered methods that eliminate stationary (2Δx) modes. Furthermore, high-order accuracy also eliminates the need for a reference state to maintain hydrostatic balance. In this work we demonstrate the use of high vertical order as a means of improving simulation quality at relatively coarse resolution. We choose a test case suite that spans the range of atmospheric flows from predominantly hydrostatic to nonlinear in the large-eddy regime. Lastly, our results show that there is a distinct benefit in using the high-order vertical coordinate at low resolutions with the same robust properties as the low-order alternative.

  13. A high-order staggered finite-element vertical discretization for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerra, Jorge E.; Ullrich, Paul A.

    Atmospheric modeling systems require economical methods to solve the non-hydrostatic Euler equations. Two major differences between hydrostatic models and a full non-hydrostatic description lie in the vertical velocity tendency and the numerical stiffness associated with sound waves. In this work we introduce a new arbitrary-order vertical discretization entitled the staggered nodal finite-element method (SNFEM). Our method uses a generalized discrete derivative that consistently combines the discontinuous Galerkin and spectral element methods on a staggered grid. Our combined method leverages the accurate wave propagation and conservation properties of spectral elements with staggered methods that eliminate stationary (2Δx) modes. Furthermore, high-order accuracy also eliminates the need for a reference state to maintain hydrostatic balance. In this work we demonstrate the use of high vertical order as a means of improving simulation quality at relatively coarse resolution. We choose a test case suite that spans the range of atmospheric flows from predominantly hydrostatic to nonlinear in the large-eddy regime. Lastly, our results show that there is a distinct benefit in using the high-order vertical coordinate at low resolutions with the same robust properties as the low-order alternative.

  14. ULTRA-SHARP solution of the Smith-Hutton problem

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1992-01-01

    Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First order methods showed significant artificial diffusion, whereas higher order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternate strategy of using non-artificially diffusive high order methods, while maintaining strictly monotonic transitions through the use of simple flux limited constraints. Limited third order upwinding is usually found to be the most cost effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.

  15. Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2001-01-01

    Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z(exp [I]) goes to infinity, and retain high stability efficiency in the absence of stiffness, z(exp [I]) goes to zero. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
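
    The ARK2 schemes above are third- to fifth-order ESDIRK/ERK pairs; as a minimal illustration of the underlying implicit-explicit splitting only, the sketch below takes first-order IMEX Euler steps on an ODE with a stiff linear part (treated implicitly) and a nonstiff forcing (treated explicitly). The equation, eigenvalue, and step size are made up for the example.

      import numpy as np

      lam = -1.0e4                               # stiff linear eigenvalue
      g = lambda t, u: np.cos(t)                 # nonstiff forcing term

      def imex_euler_step(u, t, h):
          # First-order IMEX Euler: u_{n+1} = u_n + h*g(t_n, u_n) + h*lam*u_{n+1}.
          return (u + h * g(t, u)) / (1.0 - h * lam)

      u, t, h = 1.0, 0.0, 0.01                   # h far exceeds the explicit stability limit ~2/|lam|
      for _ in range(1000):
          u = imex_euler_step(u, t, h)
          t += h
      print(t, u)                                # stays bounded thanks to the implicit stiff treatment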

  16. Development and validation of different methods manipulating zero order and first order spectra for determination of the partially overlapped mixture benazepril and amlodipine: A comparative study

    NASA Astrophysics Data System (ADS)

    Hemdan, A.

    2016-07-01

    Three simple, selective, and accurate spectrophotometric methods have been developed and then validated for the analysis of Benazepril (BENZ) and Amlodipine (AML) in bulk powder and pharmaceutical dosage form. The first method is the absorption factor (AF) for zero order and amplitude factor (P-F) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 238 nm or from their first order spectra at 253 nm. The second method is the constant multiplication coupled with constant subtraction (CM-CS) for zero order and successive derivative subtraction-constant multiplication (SDS-CM) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 240 nm and 238 nm, respectively, or from their first order spectra at 214 nm and 253 nm for Benazepril and Amlodipine respectively. The third method is the novel constant multiplication coupled with derivative zero crossing (CM-DZC) which is a stability indicating assay method for determination of Benazepril and Amlodipine in presence of the main degradation product of Benazepril which is Benazeprilate (BENZT). The three methods were validated as per the ICH guidelines and the standard curves were found to be linear in the range of 5-60 μg/mL for Benazepril and 5-30 for Amlodipine, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.

  17. Development and validation of different methods manipulating zero order and first order spectra for determination of the partially overlapped mixture benazepril and amlodipine: A comparative study.

    PubMed

    Hemdan, A

    2016-07-05

    Three simple, selective, and accurate spectrophotometric methods have been developed and then validated for the analysis of Benazepril (BENZ) and Amlodipine (AML) in bulk powder and pharmaceutical dosage form. The first method is the absorption factor (AF) for zero order and amplitude factor (P-F) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 238nm or from their first order spectra at 253nm. The second method is the constant multiplication coupled with constant subtraction (CM-CS) for zero order and successive derivative subtraction-constant multiplication (SDS-CM) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 240nm and 238nm, respectively, or from their first order spectra at 214nm and 253nm for Benazepril and Amlodipine respectively. The third method is the novel constant multiplication coupled with derivative zero crossing (CM-DZC) which is a stability indicating assay method for determination of Benazepril and Amlodipine in presence of the main degradation product of Benazepril which is Benazeprilate (BENZT). The three methods were validated as per the ICH guidelines and the standard curves were found to be linear in the range of 5-60μg/mL for Benazepril and 5-30 for Amlodipine, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Slat Noise Predictions Using Higher-Order Finite-Difference Methods on Overset Grids

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Kiris, Cetin

    2016-01-01

    Computational aeroacoustic simulations using the structured overset grid approach and higher-order finite difference methods within the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for slat noise predictions. The simulations are part of a collaborative study comparing noise generation mechanisms between a conventional slat and a Krueger leading edge flap. Simulation results are compared with experimental data acquired during an aeroacoustic test in the NASA Langley Quiet Flow Facility. Details of the structured overset grid, numerical discretization, and turbulence model are provided.

  19. Divergence Free High Order Filter Methods for Multiscale Non-ideal MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    Low-dissipative high order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (∇ · B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields of these filter schemes has been achieved.

  20. Group analysis for natural convection from a vertical plate

    NASA Astrophysics Data System (ADS)

    Rashed, A. S.; Kassem, M. M.

    2008-12-01

    The steady laminar natural convection of a fluid having a chemical reaction of order n past a semi-infinite vertical plate is considered. The solution of the problem by means of the one-parameter group method reduces the number of independent variables by one, leading to a system of nonlinear ordinary differential equations. Two different similarity transformations are found. In each case the set of differential equations is solved numerically using the Runge-Kutta and shooting methods. For each transformation, different Schmidt numbers and chemical reaction orders are tested.

  1. High-Order Methods for Computational Fluid Dynamics: A Brief Review of Compact Differential Formulations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.; Wang, Z. J.; Vincent, P. E.

    2013-01-01

    Popular high-order schemes with compact stencils for Computational Fluid Dynamics (CFD) include Discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV) methods. The recently proposed Flux Reconstruction (FR) approach or Correction Procedure using Reconstruction (CPR) is based on a differential formulation and provides a unifying framework for these high-order schemes. Here we present a brief review of recent developments for the FR/CPR schemes as well as some pacing items.

  2. Potential Energy Surface of the Chromium Dimer Re-re-revisited with Multiconfigurational Perturbation Theory.

    PubMed

    Vancoillie, Steven; Malmqvist, Per Åke; Veryazov, Valera

    2016-04-12

    The chromium dimer has long been a benchmark molecule to evaluate the performance of different computational methods ranging from density functional theory to wave function methods. Among the latter, multiconfigurational perturbation theory was shown to be able to reproduce the potential energy surface of the chromium dimer accurately. However, for modest active space sizes, it was later shown that different definitions of the zeroth-order Hamiltonian have a large impact on the results. In this work, we revisit the system for the third time with multiconfigurational perturbation theory, now in order to increase the active space of the reference wave function. This reduces the impact of the choice of zeroth-order Hamiltonian and improves the shape of the potential energy surface significantly. We conclude by comparing our results for the dissociation energy and vibrational spectrum to those obtained from several highly accurate multiconfigurational methods and experiment. For a meaningful comparison, we used the extrapolation to the complete basis set for all methods involved.

  3. Linear and non-linear regression analysis for the sorption kinetics of methylene blue onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-10-11

    Batch kinetic experiments were carried out for the sorption of methylene blue onto activated carbon. The experimental kinetics were fitted to the pseudo first-order and pseudo second-order kinetics by a linear and a non-linear method. The five different types of the Ho pseudo second-order expression are discussed. A comparison of the linear least-squares method and a trial-and-error non-linear method for estimating the pseudo second-order rate kinetic parameters was made. The sorption process was found to follow both a pseudo first-order and a pseudo second-order kinetic model. The present investigation showed that it is inappropriate to use the type 1 pseudo second-order expression proposed by Ho and the expression proposed by Blanchard et al. for predicting the kinetic rate constants and the initial sorption rate for the studied system. Three possible alternate linear expressions (type 2 to type 4) to better predict the initial sorption rate and kinetic rate constants for the studied system (methylene blue/activated carbon) were proposed. The linear method was found to only check the hypothesis rather than verify the kinetic model. The non-linear regression method was found to be the more appropriate method to determine the rate kinetic parameters.
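
    The contrast between the linear and non-linear routes can be reproduced in a few lines: the sketch below fits synthetic (made-up) uptake data both with the type 1 linearization of the pseudo second-order model, t/q_t = 1/(k*q_e^2) + t/q_e, and with a direct non-linear least-squares fit of q_t = k*q_e^2*t/(1 + k*q_e*t); the parameter values and noise level are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def pso(t, qe, k):
          # Pseudo second-order kinetic model: q_t = k*qe**2*t / (1 + k*qe*t).
          return k * qe**2 * t / (1.0 + k * qe * t)

      # Synthetic "experimental" data (qe = 50 mg/g, k = 0.002 g/mg/min) with mild noise.
      rng = np.random.default_rng(0)
      t = np.array([5.0, 10, 20, 30, 60, 90, 120, 180])
      q = pso(t, 50.0, 0.002) * (1.0 + 0.02 * rng.standard_normal(t.size))

      # Type 1 linearization: t/q = 1/(k*qe**2) + t/qe  ->  slope = 1/qe, intercept = 1/(k*qe**2).
      slope, intercept = np.polyfit(t, t / q, 1)
      qe_lin = 1.0 / slope
      k_lin = 1.0 / (intercept * qe_lin**2)

      # Direct non-linear least-squares fit of the untransformed model.
      (qe_nl, k_nl), _ = curve_fit(pso, t, q, p0=(40.0, 0.001))

      print("type 1 linear fit:", qe_lin, k_lin)
      print("non-linear fit   :", qe_nl, k_nl)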

  4. A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.

    1989-01-01

    A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.

  5. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.
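
    The two discrete-ordinates ingredients mentioned above, an angular quadrature set and a sweep along each ordinate, can be shown on a deliberately tiny problem: a 1-D, purely absorbing slab with a beam incident from the left. This sketch bears no resemblance to the Denovo CT model (no scattering, no source, no Legendre expansion); the cross section, slab width, and quadrature order are arbitrary.

      import numpy as np

      sigma_t, width, ncells = 1.0, 5.0, 100               # total cross section, slab width, cells
      dx = width / ncells
      mu, w = np.polynomial.legendre.leggauss(8)           # S8-like Gauss-Legendre quadrature set
      fwd = mu > 0                                         # keep only forward-directed ordinates

      psi = np.ones(fwd.sum())                             # incident angular flux for each mu > 0
      phi = np.zeros(ncells)                               # scalar flux
      for i in range(ncells):
          psi = psi * np.exp(-sigma_t * dx / mu[fwd])      # exact attenuation across one cell
          phi[i] = np.sum(w[fwd] * psi)                    # angular quadrature for the scalar flux

      print(phi[:5])                                       # decays with depth, as expected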

  6. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  7. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use the lifted interpolating wavelets not only as a detector of singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes for imposing the divergence-free condition on the magnetic field in an MHD equation, the approximations to derivatives of ψ require the neighboring points. Moreover, the fifth order WENO interpolation requires a large stencil to reconstruct a high order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed-stencil approximation without computing the non-linear WENO weights is used in the smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution of the corresponding fine grid.
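
    The "wavelet as detector" idea can be shown in a toy form: on one level, a detail coefficient is the mismatch between the value at an odd-indexed point and a centered cubic (4-point) interpolation from its even-indexed neighbors, and points with large details are flagged for refinement. This is only a sketch of that single ingredient under our own choice of test profile and threshold, not the paper's full lifted-wavelet adaptation.

      import numpy as np

      def detail_coefficients(v):
          # Interpolating-wavelet details on one level: value at each odd point minus the
          # centered cubic prediction (-1, 9, 9, -1)/16 from the surrounding even points.
          even = v[::2]
          pred = (-even[:-3] + 9.0 * even[1:-2] + 9.0 * even[2:-1] - even[3:]) / 16.0
          return v[3:-2:2] - pred                # odd points aligned with the centered prediction

      # Smooth profile plus a kink: details are tiny where the data are smooth and
      # orders of magnitude larger near the kink, which is how points get flagged.
      x = np.linspace(-1.0, 1.0, 257)
      v = np.sin(2.0 * np.pi * x) + np.where(x > 0.3, 2.0 * (x - 0.3), 0.0)
      d = detail_coefficients(v)
      flag = np.abs(d) > 1e-3
      print(flag.sum(), "flagged detail coefficients out of", d.size)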

  8. Modified Taylor series method for solving nonlinear differential equations with mixed boundary conditions defined on finite intervals.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus

    2014-01-01

    In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of the Taylor series method with mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
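
    The shooting-constant idea referred to above can be separated from the Taylor series machinery: the sketch below solves a simple two-point BVP of our own choosing, u'' = 1.5*u^2 with u(0) = 4 and u(1) = 1 (exact solution u = 4/(1+x)^2), by treating the unknown slope u'(0) as a shooting constant and finding it with a root solver; the integrator here is a standard Runge-Kutta routine rather than the MTSM itself.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def rhs(x, y):                       # y = [u, u']
          return [y[1], 1.5 * y[0] ** 2]

      def shoot(s):
          # Integrate the IVP with guessed slope u'(0) = s and return the mismatch u(1) - 1.
          sol = solve_ivp(rhs, (0.0, 1.0), [4.0, s], rtol=1e-10, atol=1e-10)
          return sol.y[0, -1] - 1.0

      s_star = brentq(shoot, -15.0, -5.0)  # solve for the shooting constant
      print(s_star)                        # close to the exact slope u'(0) = -8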

  9. Single and double acquisition strategies for compensation of artifacts from eddy current and transient oscillation in balanced steady-state free precession.

    PubMed

    Lee, Hyun-Soo; Choi, Seung Hong; Park, Sung-Hong

    2017-07-01

    To develop single and double acquisition methods to compensate for artifacts from eddy currents and transient oscillations in balanced steady-state free precession (bSSFP) with centric phase-encoding (PE) order for magnetization-prepared bSSFP imaging. A single and four different double acquisition methods were developed and evaluated with Bloch equation simulations, phantom/in vivo experiments, and quantitative analyses. For the single acquisition method, multiple PE groups, each of which was composed of N linearly changing PE lines, were ordered in a pseudocentric manner for optimal contrast and minimal signal fluctuations. Double acquisition methods used complex averaging of two images that had opposite artifact patterns from different acquisition orders or from different numbers of dummy scans. Simulation results showed high sensitivity of eddy-current and transient-oscillation artifacts to off-resonance frequency and PE schemes. The artifacts were reduced with the PE-grouping with N values from 3 to 8, similar to or better than the conventional pairing scheme of N = 2. The proposed double acquisition methods removed the remaining artifacts significantly. The proposed methods conserved detailed structures in magnetization transfer imaging well, compared with the conventional methods. The proposed single and double acquisition methods can be useful for artifact-free magnetization-prepared bSSFP imaging with desired contrast and minimized dummy scans. Magn Reson Med 78:254-263, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  10. Characterization of bone collagen organization defects in murine hypophosphatasia using a Zernike model of optical aberrations

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Pendleton, Emily G.; Leitmann, Bobby; Barrow, Ruth; Mortensen, Luke J.

    2018-02-01

    Bone growth and strength are severely impacted by hypophosphatasia (HPP), a genetic disease that affects the mineralization of the bone. We hypothesize that it impacts the overall organization, density, and porosity of collagen fibers. A lower density of fibers and higher porosity cause less absorption and scattering of light, and therefore a different regime of the transport mean free path. To find a cure for this disease, a metric for the evaluation of bone is required. Here we present an evaluation method based on our Phase Accumulation Ray Tracing (PART) method. This method uses second harmonic generation (SHG) in bone collagen fibers to model bone indices of refraction, which are used to calculate the phase retardation along the propagation path of light in bone. The calculated phase is then expanded using Zernike polynomials up to 15th order to make a quantitative analysis of tissue anomalies. Because the Zernike modes are a complete set of orthogonal polynomials, we can compare low and high order modes in HPP mice with those of healthy wild-type mice to identify the differences between their geometry and structure. Larger coefficients of low order modes indicate more uniform fiber density and less porosity, whereas the opposite is indicated by larger coefficients of higher order modes. Our analyses show significant differences between Zernike modes in different types of bone, evidenced by Principal Components Analysis (PCA).

  11. An arbitrary-order Runge–Kutta discontinuous Galerkin approach to reinitialization for banded conservative level sets

    DOE PAGES

    Jibben, Zechariah Joel; Herrmann, Marcus

    2017-08-24

    Here, we present a Runge-Kutta discontinuous Galerkin method for solving conservative reinitialization in the context of the conservative level set method. This represents an extension of the method recently proposed by Owkes and Desjardins [21], by solving the level set equations on the refined level set grid and projecting all spatially-dependent variables into the full basis used by the discontinuous Galerkin discretization. By doing so, we achieve the full k+1 order convergence rate in the L1 norm of the level set field predicted for RKDG methods given kth degree basis functions when the level set profile thickness is held constant with grid refinement. Shape and volume errors for the 0.5-contour of the level set, on the other hand, are found to converge between first and second order. We show a variety of test results, including the method of manufactured solutions, reinitialization of a circle and sphere, Zalesak's disk, and deforming columns and spheres, all showing substantial improvements over the high-order finite difference traditional level set method studied for example by Herrmann. We also demonstrate the need for kth order accurate normal vectors, as lower order normals are found to degrade the convergence rate of the method.

  12. Order-disorder effects in structure and color relation of photonic-crystal-type nanostructures in butterfly wing scales.

    PubMed

    Márk, Géza I; Vértesy, Zofia; Kertész, Krisztián; Bálint, Zsolt; Biró, László P

    2009-11-01

    In order to study local and global order in butterfly wing scales possessing structural colors, we have developed a direct space algorithm, based on averaging the local environment of the repetitive units building up the structure. The method provides the statistical distribution of the local environments, including the histogram of the nearest-neighbor distance and the number of nearest neighbors. We have analyzed how the different kinds of randomness present in the direct space structure influence the reciprocal space structure. It was found that the Fourier method is useful in the case of a structure randomly deviating from an ordered lattice. The direct space averaging method remains applicable even for structures lacking long-range order. Based on the first Born approximation, a link is established between the reciprocal space image and the optical reflectance spectrum. Results calculated within this framework agree well with measured reflectance spectra because of the small width and moderate refractive index contrast of butterfly scales. By the analysis of the wing scales of Cyanophrys remus and Albulina metallica butterflies, we tested the methods for structures having long-range order, medium-range order, and short-range order.
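
    One of the direct-space statistics mentioned above, the histogram of nearest-neighbor distances, is easy to compute for an arbitrary point set with a k-d tree; the sketch below does so for a slightly perturbed square lattice standing in for the scale nanostructure positions. The lattice, disorder level, and bin count are illustrative, and the local-environment averaging of the paper is not reproduced here.

      import numpy as np
      from scipy.spatial import cKDTree

      # A slightly disordered square lattice as a stand-in for repetitive structural units.
      rng = np.random.default_rng(1)
      nx, a, sigma = 20, 1.0, 0.05
      grid = np.stack(np.meshgrid(np.arange(nx), np.arange(nx)), axis=-1).reshape(-1, 2) * a
      points = grid + sigma * rng.standard_normal(grid.shape)

      # Nearest-neighbor distances: query k=2 because the closest point to each point is itself.
      tree = cKDTree(points)
      dists, _ = tree.query(points, k=2)
      nn = dists[:, 1]

      hist, edges = np.histogram(nn, bins=30)
      print(nn.mean(), nn.std())            # peaked near the lattice constant; spread grows with disorder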

  13. Order-disorder effects in structure and color relation of photonic-crystal-type nanostructures in butterfly wing scales

    NASA Astrophysics Data System (ADS)

    Márk, Géza I.; Vértesy, Zofia; Kertész, Krisztián; Bálint, Zsolt; Biró, László P.

    2009-11-01

    In order to study local and global order in butterfly wing scales possessing structural colors, we have developed a direct space algorithm, based on averaging the local environment of the repetitive units building up the structure. The method provides the statistical distribution of the local environments, including the histogram of the nearest-neighbor distance and the number of nearest neighbors. We have analyzed how the different kinds of randomness present in the direct space structure influence the reciprocal space structure. It was found that the Fourier method is useful in the case of a structure randomly deviating from an ordered lattice. The direct space averaging method remains applicable even for structures lacking long-range order. Based on the first Born approximation, a link is established between the reciprocal space image and the optical reflectance spectrum. Results calculated within this framework agree well with measured reflectance spectra because of the small width and moderate refractive index contrast of butterfly scales. By the analysis of the wing scales of Cyanophrys remus and Albulina metallica butterflies, we tested the methods for structures having long-range order, medium-range order, and short-range order.

  14. Stripe order in the underdoped region of the two-dimensional Hubbard model

    NASA Astrophysics Data System (ADS)

    Zheng, Bo-Xiao; Chung, Chia-Min; Corboz, Philippe; Ehlers, Georg; Qin, Ming-Pu; Noack, Reinhard M.; Shi, Hao; White, Steven R.; Zhang, Shiwei; Chan, Garnet Kin-Lic

    2017-12-01

    Competing inhomogeneous orders are a central feature of correlated electron materials, including the high-temperature superconductors. The two-dimensional Hubbard model serves as the canonical microscopic physical model for such systems. Multiple orders have been proposed in the underdoped part of the phase diagram, which corresponds to a regime of maximum numerical difficulty. By combining the latest numerical methods in exhaustive simulations, we uncover the ordering in the underdoped ground state. We find a stripe order that has a highly compressible wavelength on an energy scale of a few kelvin, with wavelength fluctuations coupled to pairing order. The favored filled stripe order is different from that seen in real materials. Our results demonstrate the power of modern numerical methods to solve microscopic models, even in challenging settings.

  15. Investigation of Attitudinal Differences among Individuals of Different Employment Status

    DTIC Science & Technology

    2010-10-28

    be included in order to statistically control for common method variance (see Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Results Hypotheses 1...social identity theory. Social Psychology Quarterly, 58, 255-269. Podsakoff, P. M., MacKenzie, S. B., Lee, J., & Podsakoff, N. P. (2003). Common method

  16. Ultrahigh-order Maxwell solver with extreme scalability for electromagnetic PIC simulations of plasmas

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri; Vay, Jean-Luc

    2018-07-01

    The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly-scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasma simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiation, we have shown that the inaccuracies of standard FD-based PIC methods prevent modeling at sufficient accuracy on present supercomputers. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.
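
    As a minimal illustration of why FFT-based solvers reach such high order, the sketch below (a generic toy, not the authors' local-FFT Maxwell solver) computes a spectral derivative of a periodic field; for smooth fields the error decays faster than any fixed power of the grid spacing.

      import numpy as np

      def spectral_derivative(f, length):
          """Spectral (FFT-based) derivative of a periodic field sampled on a uniform grid.

          For smooth periodic data the truncation error decays faster than any fixed
          power of the grid spacing, which is the 'ultrahigh order' accuracy that
          pseudo-spectral Maxwell solvers exploit.
          """
          n = f.size
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
          return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

      # quick check on f(x) = sin(x): the result matches cos(x) to machine precision
      x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
      err = np.max(np.abs(spectral_derivative(np.sin(x), 2.0 * np.pi) - np.cos(x)))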

  17. A comparison of matrix methods for calculating eigenvalues in acoustically lined ducts

    NASA Technical Reports Server (NTRS)

    Watson, W.; Lansing, D. L.

    1976-01-01

    Three approximate methods - finite differences, weighted residuals, and finite elements - were used to solve the eigenvalue problem which arises in finding the acoustic modes and propagation constants in an absorptively lined two-dimensional duct without airflow. The matrix equations derived for each of these methods were solved for the eigenvalues corresponding to various values of wall impedance. Two matrix orders, 20 x 20 and 40 x 40, were used. The cases considered included values of wall admittance for which exact eigenvalues were known and for which several nearly equal roots were present. Ten of the lower order eigenvalues obtained from the three approximate methods were compared with solutions calculated from the exact characteristic equation in order to make an assessment of the relative accuracy and reliability of the three methods. The best results were given by the finite element method using a cubic polynomial. Excellent accuracy was consistently obtained, even for nearly equal eigenvalues, by using a 20 x 20 order matrix.

  18. A meshless method for solving two-dimensional variable-order time fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Tayebi, A.; Shekari, Y.; Heydari, M. H.

    2017-07-01

    Several physical phenomena, such as the transformation of pollutants, energy, particles and many others, can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and the finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, the finite difference technique with a θ-weighted scheme and in the space domain, the MLS approximation are employed to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely on a set of scattered nodes. The proposed method is validated in solving three different examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all such cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property, the so-called positive scheme, is observed for the proposed method when solving concentration transport phenomena.
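
    To make the time discretization concrete, here is a minimal sketch of one θ-weighted step for a semi-discrete system du/dt = L u, with L standing in for whatever spatial operator the MLS approximation produces; the names and the dense linear algebra are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def theta_step(L, u_old, dt, theta=0.5):
          """One theta-weighted time step for du/dt = L u:
              (I - theta*dt*L) u_new = (I + (1 - theta)*dt*L) u_old
          theta = 0 gives explicit Euler, 0.5 Crank-Nicolson, 1 implicit Euler.
          """
          n = u_old.size
          I = np.eye(n)
          rhs = (I + (1.0 - theta) * dt * L) @ u_old
          return np.linalg.solve(I - theta * dt * L, rhs)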

  19. Curing kinetics of 4,4‧-Methylenebis epoxy and m-Xylylenediamine

    NASA Astrophysics Data System (ADS)

    Li, Z. R.; Li, X. D.; Guo, X. Y.

    2017-11-01

    In this paper, the curing kinetics of 4,4‧-Methylenebis epoxy resin (TGDDM) and m-Xylylenediamine (m-XDA) was investigated by non-isothermal differential scanning calorimetry (DSC) at various heating rates. Selected non-isothermal methods for analyzing curing kinetics were compared. The activation energy (E) and the correlation coefficient (R) were obtained by different isoconversional methods, and the reaction order (n) was then obtained from the activation energy of each isoconversional method by the Crane equation. The results show that the apparent activation energies are 65.23 kJ/mol, 52.20 kJ/mol and 66.10 kJ/mol using the Kissinger, Friedman and F-W-O methods, respectively, and the corresponding reaction orders are 0.911, 0.729 and 0.923.
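
    For reference, the Kissinger method reduces to a straight-line fit of ln(β/Tp²) against 1/Tp, whose slope gives the apparent activation energy; the sketch below uses hypothetical heating rates and peak temperatures, not the data of this study.

      import numpy as np

      R = 8.314                                        # gas constant, J/(mol K)
      beta = np.array([5.0, 10.0, 15.0, 20.0])         # heating rates in K/min (hypothetical)
      Tp = np.array([438.0, 448.0, 455.0, 461.0])      # exothermic peak temperatures in K (hypothetical)

      # Kissinger: ln(beta / Tp^2) = const - E / (R * Tp), so the slope of a linear fit
      # of ln(beta/Tp^2) versus 1/Tp equals -E/R.
      slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
      E = -slope * R                                   # apparent activation energy, J/mol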

  20. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
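
    A minimal sketch of the rank-one (Broyden) secant update at the heart of this approach, assuming a scalar limit state function g(x); the variable names are illustrative and this is not the authors' BFORM code.

      import numpy as np

      def broyden_gradient_update(grad_old, x_old, x_new, g_old, g_new):
          """Secant (rank-one Broyden) update of the gradient of a scalar limit state
          function g, reusing values of g already computed along the FORM iteration
          instead of performing a fresh finite-difference gradient evaluation."""
          dx = x_new - x_old
          dg = g_new - g_old
          return grad_old + (dg - grad_old @ dx) / (dx @ dx) * dx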

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibben, Zechariah Joel; Herrmann, Marcus

    Here, we present a Runge-Kutta discontinuous Galerkin method for solving conservative reinitialization in the context of the conservative level set method. This represents an extension of the method recently proposed by Owkes and Desjardins [21], by solving the level set equations on the refined level set grid and projecting all spatially-dependent variables into the full basis used by the discontinuous Galerkin discretization. By doing so, we achieve the full k+1 order convergence rate in the L1 norm of the level set field predicted for RKDG methods given kth degree basis functions when the level set profile thickness is held constant with grid refinement. Shape and volume errors for the 0.5-contour of the level set, on the other hand, are found to converge between first and second order. We show a variety of test results, including the method of manufactured solutions, reinitialization of a circle and sphere, Zalesak's disk, and deforming columns and spheres, all showing substantial improvements over the high-order finite difference traditional level set method studied for example by Herrmann. We also demonstrate the need for kth order accurate normal vectors, as lower order normals are found to degrade the convergence rate of the method.
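
    For context, conservative reinitialization is usually posed as a pseudo-time evolution of the level set field ψ of the form (a standard Olsson-Kreiss-type statement; the exact variant solved here may differ)

      ∂ψ/∂τ + ∇·( ψ(1 - ψ) n ) = ∇·( ε (∇ψ·n) n ),

    where n is the interface normal held fixed during reinitialization and ε sets the profile thickness; compression toward the interface balances diffusion along the normal, so the profile is sharpened while the integral of ψ, and hence the liquid volume, is conserved.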

  2. Pseudo second order kinetics and pseudo isotherms for malachite green onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2006-08-25

    Pseudo second order kinetic expressions of Ho, Sobkowsk and Czerwinski, Blanchard et al. and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo second order models were the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo second order model but with different assumptions. The best fit of experimental data in Ho's pseudo second order expression by the linear and non-linear regression methods showed that the Ho pseudo second order model was a better kinetic expression when compared to other pseudo second order kinetic expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from the Ho pseudo second order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
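
    As a minimal sketch of the non-linear approach, Ho's pseudo second order model q_t = k*q_e^2*t/(1 + k*q_e*t) can be fitted directly with a least squares routine; the kinetic data below are hypothetical, not the malachite green measurements of this study.

      import numpy as np
      from scipy.optimize import curve_fit

      def ho_pseudo_second_order(t, qe, k):
          """Ho's pseudo-second-order uptake: q_t = k*qe^2*t / (1 + k*qe*t)."""
          return k * qe**2 * t / (1.0 + k * qe * t)

      t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 120.0])    # contact time, min (hypothetical)
      qt = np.array([12.1, 19.8, 28.4, 35.2, 38.0, 41.5])   # uptake, mg/g (hypothetical)

      # non-linear regression for qe and k (initial guess: qe near the plateau, small k)
      (qe_fit, k_fit), _ = curve_fit(ho_pseudo_second_order, t, qt, p0=(qt.max(), 0.01))

    The linearised form t/q_t = 1/(k*q_e^2) + t/q_e can be fitted with a simple straight line for comparison, which is exactly the linear-versus-non-linear contrast examined in the abstract.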

  3. Extremely low order time-fractional differential equation and application in combustion process

    NASA Astrophysics Data System (ADS)

    Xu, Qinwu; Xu, Yufeng

    2018-11-01

    Fractional blow-up model, especially which is of very low order of fractional derivative, plays a significant role in combustion process. The order of time-fractional derivative in diffusion model essentially distinguishes the super-diffusion and sub-diffusion processes when it is relatively high or low accordingly. In this paper, the blow-up phenomenon and condition of its appearance are theoretically proved. The blow-up moment is estimated by using differential inequalities. To numerically study the behavior around blow-up point, a mixed numerical method based on adaptive finite difference on temporal direction and highly effective discontinuous Galerkin method on spatial direction is proposed. The time of blow-up is calculated accurately. In simulation, we analyze the dynamics of fractional blow-up model under different orders of fractional derivative. It is found that the lower the order, the earlier the blow-up comes, by fixing the other parameters in the model. Our results confirm the physical truth that a combustor for explosion cannot be too small.

  4. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  5. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
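
    For readers unfamiliar with summation-by-parts operators, the sketch below builds the classical second-order SBP first-derivative operator D = H^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1); the sixth-order-interior operators actually used in the paper have the same structure but with wider stencils and more elaborate boundary closures.

      import numpy as np

      def sbp_first_derivative(n, h):
          """Classical second-order summation-by-parts first-derivative operator
          D = H^{-1} Q on a uniform grid of n points with spacing h.
          H is a diagonal, positive quadrature (norm) matrix and Q satisfies
          Q + Q.T = diag(-1, 0, ..., 0, 1), which is what makes discrete energy
          estimates mimic the continuous integration-by-parts identity."""
          H = h * np.eye(n)
          H[0, 0] = H[-1, -1] = 0.5 * h
          Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
          Q[0, 0], Q[-1, -1] = -0.5, 0.5
          return np.linalg.solve(H, Q)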

  6. The Benard problem: A comparison of finite difference and spectral collocation eigenvalue solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (the number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.
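
    A compact way to set up a Chebyshev collocation scheme is through the standard differentiation matrix on the Gauss-Lobatto points (the well-known "cheb" construction); this generic sketch is not the paper's Benard-problem code.

      import numpy as np

      def cheb(N):
          """Chebyshev-Gauss-Lobatto points x and the (N+1)x(N+1) differentiation
          matrix D such that D @ f(x) approximates f'(x) with spectral accuracy."""
          if N == 0:
              return np.zeros((1, 1)), np.array([1.0])
          x = np.cos(np.pi * np.arange(N + 1) / N)
          c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
          X = np.tile(x, (N + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
          D -= np.diag(D.sum(axis=1))
          return D, x

      # the second-derivative operator needed for a stability eigenproblem is simply D @ D
      D, x = cheb(15)
      D2 = D @ D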

  7. Difference magnitude is not measured by discrimination steps for order of point patterns.

    PubMed

    Protonotarios, Emmanouil D; Johnston, Alan; Griffin, Lewis D

    2016-07-01

    We have shown in previous work that the perception of order in point patterns is consistent with an interval scale structure (Protonotarios, Baum, Johnston, Hunter, & Griffin, 2014). The psychophysical scaling method used relies on the confusion between stimuli with similar levels of order, and the resulting discrimination scale is expressed in just-noticeable differences (jnds). As with other perceptual dimensions, an interesting question is whether suprathreshold (perceptual) differences are consistent with distances between stimuli on the discrimination scale. To test that, we collected discrimination data, and data based on comparison of perceptual differences. The stimuli were jittered square lattices of dots, covering the range from total disorder (Poisson) to perfect order (square lattice), roughly equally spaced on the discrimination scale. Observers picked the most ordered pattern from a pair, and the pair of patterns with the greatest difference in order from two pairs. Although the judgments of perceptual difference were found to be consistent with an interval scale, like the discrimination judgments, no common interval scale that could predict both sets of data was possible. In particular, the midpattern of the perceptual scale is 11 jnds away from the ordered end, and 5 jnds from the disordered end of the discrimination scale.

  8. Do Birth Order, Family Size and Gender Affect Arithmetic Achievement in Elementary School?

    ERIC Educational Resources Information Center

    Desoete, Annemie

    2008-01-01

    Introduction: For decades birth order and gender differences have attracted research attention. Method: Birth order, family size and gender, and the relationship with arithmetic achievement is studied among 1152 elementary school children (540 girls, 612 boys) in Flanders. Children were matched on socioeconomic status of the parents and…

  9. Impact localization on composite laminates using fiber Bragg grating sensors and a novel technique based on strain amplitude

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Li, Shuxin; Hu, Haixiao; Zhong, Yucheng; Li, Kun

    2018-01-01

    Carbon fiber reinforced composite materials have been widely used in aerospace and other high-tech fields because of their excellent performance. However, barely visible impact damage can be introduced by low velocity impact, which can pose considerable risk. In this paper, a new method is proposed to predict the position of a low velocity impact. The dynamic strain signal caused by the impact is acquired with fiber Bragg grating (FBG) sensors. The amplitudes of the first K natural frequencies are extracted by the Fast Fourier Transform (FFT), the amplitude data are normalized, and a K-order vector matrix model is established. The K-order sum of squared deviations is proposed as the basis for predicting the impact position. Two different validation tests were performed: the experimental models were made of different layers, the FBG sensors were attached by both embedding and surface pasting, and experiments were conducted with impacts of different energy levels. The results show that the proposed method is feasible.

  10. A study on the behaviour of high-order flux reconstruction method with different low-dissipation numerical fluxes for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Boxi, Lin; Chao, Yan; Shusheng, Chen

    2017-10-01

    This work focuses on the numerical dissipation features of the high-order flux reconstruction (FR) method combined with different numerical fluxes in turbulent flows. The well-known Roe and AUSM+ numerical fluxes, together with their corresponding low-dissipation enhanced versions (LMRoe, SLAU2) and higher resolution variants (HR-LMRoe, HR-SLAU2), are incorporated into the FR framework, and the dissipation interplay of these combinations is investigated in implicit large eddy simulation. The numerical dissipation stemming from these convective numerical fluxes is quantified by simulating the inviscid Gresho vortex, the transitional Taylor-Green vortex and homogeneous decaying isotropic turbulence. The results suggest that the low-dissipation enhanced versions are preferable to their original forms in both high-order and low-order cases, while the use of HR-SLAU2 brings only marginal improvements and HR-LMRoe leads to degraded solutions at high order. At high order, the effects of the numerical fluxes are reduced, and their numerical viscosity may not be dissipative enough to provide physically consistent turbulence when the flow is under-resolved.

  11. On a fourth order accurate implicit finite difference scheme for hyperbolic conservation laws. I - Nonstiff strongly dynamic problems

    NASA Technical Reports Server (NTRS)

    Harten, A.; Tal-Ezer, H.

    1981-01-01

    An implicit finite difference method of fourth order accuracy in space and time is introduced for the numerical solution of one-dimensional systems of hyperbolic conservation laws. The basic form of the method is a two-level scheme which is unconditionally stable and nondissipative. The scheme uses only three mesh points at level t and three mesh points at level t + delta t. The dissipative version of the basic method given is conditionally stable under the CFL (Courant-Friedrichs-Lewy) condition. This version is particularly useful for the numerical solution of problems with strong but nonstiff dynamic features, where the CFL restriction is reasonable on accuracy grounds. Numerical results are provided to illustrate properties of the proposed method.

  12. Glucose dispersion measurement using white-light LCI

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Bagherzadeh, Morteza; Hitzenberger, Christoph K.; Pircher, Michael; Zawadzki, Robert; Fercher, Adolf F.

    2003-07-01

    We measured the second order dispersion of glucose solutions using a Michelson low coherence interferometer (LCI). Three different glucose concentrations, 20 mg/dl (hypoglycemia), 100 mg/dl (normal level), and 500 mg/dl (hyperglycemia), were investigated over the wavelength range 0.5 μm to 0.85 μm, and the investigation shows that different concentrations are associated with different second-order dispersions. The second-order dispersions for wavelengths from 0.55 μm to 0.8 μm are determined by Fourier analysis of the interferogram. This approach can be applied to measure the second-order dispersion for distinguishing different glucose concentrations. It can be considered a potentially noninvasive method to determine the glucose concentration in the human eye. A brief discussion is presented in this poster as well.

  13. A discontinuous control volume finite element method for multi-phase flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Xie, Z.; Osman, H.; Pain, C. C.; Jackson, M. D.

    2018-01-01

    We present a new, high-order, control-volume-finite-element (CVFE) method for multiphase porous media flow with discontinuous 1st-order representation for pressure and discontinuous 2nd-order representation for velocity. The method has been implemented using unstructured tetrahedral meshes to discretize space. The method locally and globally conserves mass. However, unlike conventional CVFE formulations, the method presented here does not require the use of control volumes (CVs) that span the boundaries between domains with differing material properties. We demonstrate that the approach accurately preserves discontinuous saturation changes caused by permeability variations across such boundaries, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at significantly lower computational cost than using conventional CVFE methods. We resolve a long-standing problem associated with the use of classical CVFE methods to model flow in highly heterogeneous porous media.

  14. Localization of tumors in various organs, using edge detection algorithms

    NASA Astrophysics Data System (ADS)

    López Vélez, Felipe

    2015-09-01

    The edge of an image is a set of points organized in a curved line, at each of which the brightness of the image changes abruptly or has discontinuities. To find these edges, five different mathematical methods are used and then compared with one another, with the aim of finding which of the methods best detects the edges of any given image. In this paper these five methods are used for medical purposes in order to find which one is capable of finding the edges of a scanned image more accurately than the others. The problem consists in analyzing two biomedical images, one representing a brain tumor and the other a liver tumor. These images are analyzed with the help of the five methods described and the results are compared in order to determine the best method to use. The edge detection algorithms chosen are the Bessel algorithm, Morse algorithm, Hermite algorithm, Weibull algorithm and Sobel algorithm. After applying each of the methods to both images, it is impossible to single out one most accurate method for tumor detection, because the best method changes from case to case: for the brain tumor image the Morse method was the best at finding the edges, but for the liver tumor image it was the Hermite method. Further observation shows that Hermite and Morse have, for these two cases, the lowest standard deviations, which makes these two the most accurate methods for finding edges in the analysis of biomedical images.

  15. Examining Differential Item Functions of Different Item Ordered Test Forms According to Item Difficulty Levels

    ERIC Educational Resources Information Center

    Çokluk, Ömay; Gül, Emrah; Dogan-Gül, Çilem

    2016-01-01

    The study aims to examine whether differential item function is displayed in three different test forms that have item orders of random and sequential versions (easy-to-hard and hard-to-easy), based on Classical Test Theory (CTT) and Item Response Theory (IRT) methods and bearing item difficulty levels in mind. In the correlational research, the…

  16. A comparison of HAS & NICE guidelines for the economic evaluation of health technologies in the context of their respective national health care systems and cultural environments

    PubMed Central

    Massetti, Marc; Aballéa, Samuel; Videau, Yann; Rémuzat, Cécile; Roïz, Julie; Toumi, Mondher

    2015-01-01

    Background Health technology assessment (HTA) has been reinforced in France, notably with the introduction of economic evaluation in the pricing process for the most innovative and expensive treatments. Similarly to the National Institute for Clinical Excellence (NICE) in England, the National Authority for Health (HAS), which is responsible for economic evaluation of new health technologies in France, has published recommendations on the methods of economic evaluation. Since economic assessment represents a major element of HTA in England, exploring the differences between these methodological guidelines might help to comprehend both the shape and the role economic assessment is intended to have in the French health care system. Methods Methodological guidelines for economic evaluation in France and England have been compared topic-by-topic in order to bring out key differences in the recommended methods for economic evaluation. Results The analysis of both guidelines has revealed multiple similarities between France and England, although a number of differences were also noted regarding the elected methodology of analysis, the comparison of studies’ outcomes with cost-effectiveness thresholds, the study population to consider, the quality of life valuation methods, the perspective on costs, the types of resources considered and their valuation, the discount rates to apply in order to reflect the present value of interventions, etc. To account for these differences, modifications will be required in order to adapt economic models from one country to the other. Conclusions Changes in HTA assessment methods occur in response to different challenges determined by the different philosophical and cultural considerations surrounding health and welfare as well as the political considerations regarding the role of public policies and the importance of their evaluation. PMID:27123190

  17. High-Order Finite-Difference Schemes for Numerical Simulation of Hypersonic Boundary-Layer Transition

    NASA Astrophysics Data System (ADS)

    Zhong, Xiaolin

    1998-08-01

    Direct numerical simulation (DNS) has become a powerful tool in studying fundamental phenomena of laminar-turbulent transition of high-speed boundary layers. Previous DNS studies of supersonic and hypersonic boundary layer transition have been limited to perfect-gas flow over flat-plate boundary layers without shock waves. For hypersonic boundary layers over realistic blunt bodies, DNS studies of transition need to consider the effects of bow shocks, entropy layers, surface curvature, and finite-rate chemistry. It is necessary that numerical methods for such studies are robust and high-order accurate both in resolving wide ranges of flow time and length scales and in resolving the interaction between the bow shocks and flow disturbance waves. This paper presents a new high-order shock-fitting finite-difference method for the DNS of the stability and transition of hypersonic boundary layers over blunt bodies with strong bow shocks and with (or without) thermo-chemical nonequilibrium. The proposed method includes a set of new upwind high-order finite-difference schemes which are stable and are less dissipative than a straightforward upwind scheme using an upwind-bias grid stencil, a high-order shock-fitting formulation, and third-order semi-implicit Runge-Kutta schemes for temporal discretization of stiff reacting flow equations. The accuracy and stability of the new schemes are validated by numerical experiments of the linear wave equation and nonlinear Navier-Stokes equations. The algorithm is then applied to the DNS of the receptivity of hypersonic boundary layers over a parabolic leading edge to freestream acoustic disturbances.

  18. Numerical investigation of sixth order Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Kolkovska, N.; Vucheva, V.

    2017-10-01

    We propose a family of conservative finite difference schemes for the Boussinesq equation with sixth order dispersion terms. The schemes are of second order of approximation. The method is conditionally stable with a mild restriction τ = O(h) on the step sizes. Numerical tests are performed for quadratic and cubic nonlinearities. The numerical experiments show second order of convergence of the discrete solution to the exact one.

  19. Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2004-01-01

    The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives and a fourth-order Runge-Kutta method are denoted.

  20. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2017-09-01

    The paper deals with the high-order discontinuous Galerkin (DG) method with an approximation order that exceeds 20 and reaches 100 and even 1000 in the one-dimensional case. To achieve such a high-order solution, the DG method has to be combined with a finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of the linear combination of basis functions. In this sort of analysis reference elements are needed, so transformations of the reference element into the real one are required, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the resulting matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten the calculation time by over a hundred times.
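
    The sparsity claimed above comes from the orthogonality of the basis; the short check below (a generic sketch, not the paper's code) builds the Galerkin mass matrix for a degree-20 Legendre basis with Gauss-Legendre quadrature and confirms that it is diagonal.

      import numpy as np
      from numpy.polynomial import legendre

      p = 20                                            # approximation order
      x, w = legendre.leggauss(p + 1)                   # exact for polynomials up to degree 2p + 1
      V = np.stack([legendre.Legendre.basis(k)(x) for k in range(p + 1)], axis=1)
      M = V.T @ (w[:, None] * V)                        # mass matrix on [-1, 1]
      off_diag = np.max(np.abs(M - np.diag(np.diag(M))))   # ~ machine precision

    The diagonal entries are 2/(2k+1), so inverting the mass matrix is trivial even when thousands of modes per element are used.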

  1. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2018-07-01

    The paper deals with the high-order discontinuous Galerkin (DG) method with an approximation order that exceeds 20 and reaches 100 and even 1000 in the one-dimensional case. To achieve such a high-order solution, the DG method has to be combined with a finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of the linear combination of basis functions. In this sort of analysis reference elements are needed, so transformations of the reference element into the real one are required, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the resulting matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten the calculation time by over a hundred times.

  2. Nonlocal electron transport: direct and Green's function solution and comparison of our model with the SNB model

    NASA Astrophysics Data System (ADS)

    Colombant, Denis; Manheimer, Wallace; Schmitt, Andrew J.

    2013-10-01

    At least two models, ours and SNB (Schurtz-Nicolai-Busquet), and two methods of solution, direct numerical solution (DS) and Green's function (GF), are being used in multi-dimensional radiation hydrodynamics codes. We present results of a laser target implosion using both methods of solution. Although our model and SNB differ in some physical content, direct comparisons have been non-existent up to now. However, a paper by Marocchino et al. has recently presented the results of two nanosecond-time-scale test problems, showing that the preheat calculated by the two models differs by about three orders of magnitude. We have rerun these problems and we find much less difference between the two than they do. One can show analytically that the results should be quite similar and are about an order of magnitude less than the maximum, and two orders of magnitude more than the minimum, preheating reported in that work. We have been able to trace the somewhat different results back to the different physical assumptions made in each model. Work supported by DoE-NNSA and ONR.

  3. Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2004-01-01

    Adaptive low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way of minimizing the divergence of the magnetic field (div B) numerical error, in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence free preservation of the magnetic fields has been achieved by these filter schemes.

  4. Wave propagation in anisotropic elastic materials and curvilinear coordinates using a summation-by-parts finite difference method

    DOE PAGES

    Petersson, N. Anders; Sjogreen, Bjorn

    2015-07-20

    We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials, previously described in the paper by Sjögreen and Petersson (2012) [11]. The method we proposed discretizes the anisotropic elastic wave equation in second order formulation, using a node centered finite difference method that satisfies the principle of summation by parts. The summation by parts technique results in a provably stable numerical method that is energy conserving. Also, we generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic material, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.

  5. Importance of curvature evaluation scale for predictive simulations of dynamic gas-liquid interfaces

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Cauble, Eric; Senecal, Jacob; Currie, Robert A.

    2018-07-01

    The effect of the scale used to compute the interfacial curvature on the prediction of dynamic gas-liquid interfaces is investigated. A new interface curvature calculation methodology referred to herein as the Adjustable Curvature Evaluation Scale (ACES) is proposed. ACES leverages a weighted least squares regression to fit a polynomial through points computed on the volume-of-fluid representation of the gas-liquid interface. The interface curvature is evaluated from this polynomial. Varying the least squares weight with distance from the location where the curvature is being computed adjusts the scale on which the curvature is evaluated. ACES is verified using canonical static test cases and compared against second- and fourth-order height function methods. Simulations of dynamic interfaces, including a standing wave and oscillating droplet, are performed to assess the impact of the curvature evaluation scale for predicting interface motions. ACES and the height function methods are combined with two different unsplit geometric volume-of-fluid (VoF) schemes that define the interface on meshes with different levels of refinement. We find that the results depend significantly on the curvature evaluation scale. Particularly, the ACES scheme with a properly chosen weight function is accurate, but fails when the scale is too small or large. Surprisingly, the second-order height function method is more accurate than the fourth-order variant for the dynamic tests even though the fourth-order method performs better for static interfaces. Comparing the curvature evaluation scale of the second- and fourth-order height function methods, we find the second-order method is closer to the optimum scale identified with ACES. This result suggests that the curvature scale is driving the accuracy of the dynamics. This work highlights the importance of studying numerical methods with realistic (dynamic) test cases and that the interactions of the various discretizations are as important as the accuracy of one part of the discretization.
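
    A minimal 2-D sketch of the idea behind ACES, under the assumption that nearby interface points have been extracted and locally parameterized as y(x): a Gaussian weight of adjustable width sets the scale of a weighted least squares parabola fit, whose coefficients give the curvature. The function and weight form are illustrative, not the published ACES formulas.

      import numpy as np

      def wls_curvature(points, x0, scale):
          """Weighted least squares fit of y = a + b*x + c*x^2 through interface
          points near x0, with Gaussian weights of width 'scale', returning the
          curvature of the fitted curve at x = x0. A larger 'scale' evaluates the
          curvature on a larger (smoother) scale."""
          x = points[:, 0] - x0
          y = points[:, 1]
          sw = np.sqrt(np.exp(-(x / scale) ** 2))          # square roots of the weights
          A = np.vstack([np.ones_like(x), x, x * x]).T
          coeffs, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
          a, b, c = coeffs
          return 2.0 * c / (1.0 + b * b) ** 1.5            # curvature of y(x) at x = x0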

  6. Type-2 fuzzy set extension of DEMATEL method combined with perceptual computing for decision making

    NASA Astrophysics Data System (ADS)

    Hosseini, Mitra Bokaei; Tarokh, Mohammad Jafar

    2013-05-01

    Most decision making methods used to evaluate a system or to identify its weak and strong points are based on fuzzy sets and evaluate the criteria with words that are modeled with fuzzy sets. The ambiguity and vagueness of the words and the different perceptions of a word are not considered in these methods. For this reason, decision making methods that consider the perceptions of decision makers are desirable. Perceptual computing is a subjective judgment method based on the premise that words mean different things to different people. This method models words with interval type-2 fuzzy sets that capture the uncertainty of the words. Also, there are interrelations and dependencies between decision making criteria in the real world; therefore, decision making methods that cannot consider these relations are not feasible in some situations. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method considers the interrelations between decision making criteria. The current study combines DEMATEL and perceptual computing in order to improve decision making methods. For this reason, the fuzzy DEMATEL method is extended to type-2 fuzzy sets in order to obtain the weights of dependent criteria based on words. The application of the proposed method is presented for knowledge management evaluation criteria.

  7. Numerical investigation of implementation of air-earth boundary by acoustic-elastic boundary approach

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2007-01-01

    The need for incorporating the traction-free condition at the air-earth boundary for finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations for the air-earth boundary were undertaken using the (2,2) (the finite-difference operators are second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, the scheme of modifying material properties based on transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The method proposed achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy; 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersive analysis of simulated seismographs from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. © 2007 Society of Exploration Geophysicists.

  8. Simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps and traditional Chinese medicine Radix angelicae pubescentis using excitation-emission matrix fluorescence coupled with second-order calibration method

    NASA Astrophysics Data System (ADS)

    Wang, Li; Wu, Hai-Long; Yin, Xiao-Li; Hu, Yong; Gu, Hui-Wen; Yu, Ru-Qin

    2017-01-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method is presented for simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps (SL) and traditional Chinese medicine Radix angelicae pubescentis (RAP). Using the strategy of combining EEM fluorescence data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm, the simultaneous quantification of umbelliferone and scopoletin in the two different complex systems was achieved successfully, even in the presence of potential interferents. The pretreatment is simple due to the "second-order advantage" and the use of "mathematical separation" instead of awkward "physical or chemical separation". Satisfactory results have been achieved with the limits of detection (LODs) of umbelliferone and scopoletin being 0.06 ng/mL and 0.16 ng/mL, respectively. The average spike recoveries of umbelliferone and scopoletin are 98.8 ± 4.3% and 102.5 ± 3.3%, respectively. Besides, the HPLC-DAD method was used to further validate the presented strategy, and a t-test indicates that the prediction results of the two methods have no significant differences. Satisfactory experimental results imply that our method is fast, low-cost and sensitive when compared with the HPLC-DAD method.

  9. Ordered delinquency: the "effects" of birth order on delinquency.

    PubMed

    Cundiff, Patrick R

    2013-08-01

    Juvenile delinquency has long been associated with birth order in popular culture. While images of the middle child acting out for attention or the rebellious youngest child readily spring to mind, little research has attempted to explain why. Drawing from Adlerian birth order theory and Sulloway's born-to-rebel hypothesis, I examine the relationship between birth order and a variety of delinquent outcomes during adolescence. Following some recent research on birth order and intelligence, I use new methods that allow for the examination of between-individual and within-family differences to better address the potential spurious relationship. My findings suggest that contrary to popular belief, the relationship between birth order and delinquency is spurious. Specifically, I find that birth order effects on delinquency are spurious and largely products of the analytic methods used in previous tests of the relationship. The implications of this finding are discussed.

  10. Ordered Delinquency: The “Effects” of Birth Order On Delinquency

    PubMed Central

    Cundiff, Patrick R.

    2014-01-01

    Juvenile delinquency has long been associated with birth order in popular culture. While images of the middle child acting out for attention or the rebellious youngest child readily spring to mind, little research has attempted to explain why. Drawing from Adlerian birth order theory and Sulloway's born to rebel hypothesis I examine the relationship between birth order and a variety of delinquent outcomes during adolescence. Following some recent research on birth order and intelligence, I use new methods that allow for the examination of both between-individual and within-family differences to better address the potential spurious relationship. My findings suggest that contrary to popular belief the relationship between birth order and delinquency is spurious. Specifically, I find that birth order effects on delinquency are spurious and largely products of the analytic methods used in previous tests of the relationship. The implications of this finding are discussed. PMID:23719623

  11. Simulation of thermal transpiration flow using a high-order moment method

    NASA Astrophysics Data System (ADS)

    Sheng, Qiang; Tang, Gui-Hua; Gu, Xiao-Jun; Emerson, David R.; Zhang, Yong-Hao

    2014-04-01

    Nonequilibrium thermal transpiration flow is numerically analyzed by an extended thermodynamic approach, a high-order moment method. The captured velocity profiles of temperature-driven flow in a parallel microchannel and in a micro-chamber are compared with available kinetic data or direct simulation Monte Carlo (DSMC) results. The advantages of the high-order moment method are shown as a combination of more accuracy than the Navier-Stokes-Fourier (NSF) equations and less computation cost than the DSMC method. In addition, the high-order moment method is also employed to simulate the thermal transpiration flow in complex geometries in two types of Knudsen pumps. One is based on micro-mechanized channels, where the effect of different wall temperature distributions on thermal transpiration flow is studied. The other relies on porous structures, where the variation of flow rate with a changing porosity or pore surface area ratio is investigated. These simulations can help to optimize the design of a real Knudsen pump.

  12. A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows

    DOE PAGES

    Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...

    2015-03-11

    High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall-time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.

  13. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  14. Periodic solutions of second-order nonlinear difference equations containing a small parameter. II - Equivalent linearization

    NASA Technical Reports Server (NTRS)

    Mickens, R. E.

    1985-01-01

    The classical method of equivalent linearization is extended to a particular class of nonlinear difference equations. It is shown that the method can be used to obtain an approximation of the periodic solutions of these equations. In particular, the parameters of the limit cycle and the limit points can be determined. Three examples illustrating the method are presented.

  15. Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence, combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping. We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensor. Here, the method is briefly described along with selected numerical experiments.

  16. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    NASA Astrophysics Data System (ADS)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where the curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC polynomials exhibit exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results, and with increasing SEM order the modeling accuracy improves significantly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).

  17. A method to stabilize linear systems using eigenvalue gradient information

    NASA Technical Reports Server (NTRS)

    Wieseman, C. D.

    1985-01-01

    Formal optimization methods and eigenvalue gradient information are used to develop a stabilizing control law for a closed loop linear system that is initially unstable. The method was originally formulated by using direct, constrained optimization methods with the constraints being the real parts of the eigenvalues. However, because of problems in trying to achieve stabilizing control laws, the problem was reformulated to be solved differently. The method described uses the Davidon-Fletcher-Powell minimization technique to solve an indirect, constrained minimization problem in which the performance index is the Kreisselmeier-Steinhauser function of the real parts of all the eigenvalues. The method is applied successfully to solve two different problems: the determination of a fourth-order control law that stabilizes a single-input single-output active flutter suppression system and the determination of a second-order control law for a multi-input multi-output lateral-directional flight control system. Various sets of design variables and initial starting points were chosen to show the robustness of the method.

  18. A seismic analysis for masonry constructions: The different schematization methods of masonry walls

    NASA Astrophysics Data System (ADS)

    Olivito, Renato. S.; Codispoti, Rosamaria; Scuro, Carmelo

    2017-11-01

    Seismic analysis of masonry structures is usually performed with structural calculation software based on the equivalent frame method or the macro-element method. In these approaches, the masonry walls are divided into vertical elements and horizontal elements (the so-called spandrel elements), interconnected by rigid nodes. The aim of this work is to make a critical comparison between different schematization methods of masonry walls, underlining the structural importance of the spandrel elements. In order to implement the methods, two different structural calculation software packages were used and an existing masonry building was examined.

  19. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  20. Multi-Dimensional High Order Essentially Non-Oscillatory Finite Difference Methods in Generalized Coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1998-01-01

    This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with the spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.

  1. Comparative analysis of a nontraditional general chemistry textbook and selected traditional textbooks used in Texas community colleges

    NASA Astrophysics Data System (ADS)

    Salvato, Steven Walter

    The purpose of this study was to analyze questions within the chapters of a nontraditional general chemistry textbook and the four general chemistry textbooks most widely used by Texas community colleges in order to determine if the questions require higher- or lower-order thinking according to Bloom's taxonomy. The study employed quantitative methods. Bloom's taxonomy (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) was utilized as the main instrument in the study. Additional tools were used to help classify the questions into the proper category of the taxonomy (McBeath, 1992; Metfessel, Michael, & Kirsner, 1969). The top four general chemistry textbooks used in Texas community colleges and Chemistry: A Project of the American Chemical Society (Bell et al., 2005) were analyzed during the fall semester of 2010 in order to categorize the questions within the chapters into one of the six levels of Bloom's taxonomy. Two coders were used to assess reliability. The data were analyzed using descriptive and inferential methods. The descriptive method involved calculation of the frequencies and percentages of coded questions from the books as belonging to the six categories of the taxonomy. Questions were dichotomized into higher- and lower-order thinking questions. The inferential methods involved chi-square tests of association to determine if there were statistically significant differences among the four traditional college general chemistry textbooks in the proportions of higher- and lower-order questions and if there were statistically significant differences between the nontraditional chemistry textbook and the four traditional general chemistry textbooks. Findings indicated statistically significant differences among the four textbooks frequently used in Texas community colleges in the number of higher- and lower-level questions. Statistically significant differences were also found among the four textbooks and the nontraditional textbook. After the analysis of the data, conclusions were drawn, implications for practice were delineated, and recommendations for future research were given.

  2. A high-order 3D spectral difference solver for simulating flows about rotating geometries

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liang, Chunlei

    2017-11-01

    Fluid flows around rotating geometries are ubiquitous. For example, a spinning ping pong ball can quickly change its trajectory in an air flow; a marine propeller can provide an enormous amount of thrust to a ship. It has been a long-standing challenge to accurately simulate these flows. In this work, we present a high-order and efficient 3D flow solver based on an unstructured spectral difference (SD) method and a novel sliding-mesh method. In the SD method, solution and fluxes are reconstructed using tensor products of 1D polynomials and the equations are solved in differential form, which leads to high-order accuracy and high efficiency. In the sliding-mesh method, a computational domain is decomposed into non-overlapping subdomains. Each subdomain can enclose a geometry and can rotate relative to its neighbor, resulting in nonconforming sliding interfaces. A curved dynamic mortar approach is designed for communication on these interfaces. In this approach, solutions and fluxes are projected from cell faces to mortars to compute common values, which are then projected back to ensure continuity and conservation. Through theoretical analysis and numerical tests, it is shown that this solver is conservative, free-stream preservative, and high-order accurate in both space and time.

  3. Background Adjusted Alignment-Free Dissimilarity Measures Improve the Detection of Horizontal Gene Transfer.

    PubMed

    Tang, Kujin; Lu, Yang Young; Sun, Fengzhu

    2018-01-01

    Horizontal gene transfer (HGT) plays an important role in the evolution of microbial organisms including bacteria. Alignment-free methods based on single genome compositional information have been used to detect HGT. Currently, Manhattan and Euclidean distances based on tetranucleotide frequencies are the most commonly used alignment-free dissimilarity measures to detect HGT. By testing on simulated bacterial sequences and real data sets with known horizontally transferred genomic regions, we found that more advanced alignment-free dissimilarity measures such as CVTree and [Formula: see text] that take into account the background Markov sequences can solve HGT detection problems with significantly improved performance. We also studied the influence of different factors such as the evolutionary distance between host and donor sequences, the size of the sliding window, and the host genome composition on the performance of alignment-free methods to detect HGT. Our study showed that alignment-free methods can predict HGT accurately when host and donor genomes are at different order levels. Among all methods, CVTree with word length 3, [Formula: see text] with word length 3 and Markov order 1, and [Formula: see text] with word length 4 and Markov order 1 outperform the others in terms of their highest F1-score and their robustness under the influence of different factors.
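
    As a minimal sketch of the baseline dissimilarity measures discussed above, the following Python snippet computes tetranucleotide frequency vectors and the Manhattan and Euclidean distances between a candidate window and a host sequence; the sequences, window handling and function names are illustrative placeholders, and the background-adjusted measures (CVTree, the Markov-based statistics) are not implemented here.

```python
from itertools import product
import numpy as np

def kmer_freqs(seq, k=4):
    """Return the normalized k-mer (tetranucleotide for k=4) frequency vector of a DNA sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        if word in index:                 # skip words containing ambiguous bases
            counts[index[word]] += 1
    return counts / max(counts.sum(), 1)

def manhattan(p, q):
    return np.abs(p - q).sum()

def euclidean(p, q):
    return np.sqrt(((p - q) ** 2).sum())

# Illustrative use: compare a sliding-window candidate region against the host background.
host = "ACGT" * 2500                      # placeholder host sequence
window = "GGGCGCGTTA" * 100               # placeholder candidate HGT window
f_host, f_win = kmer_freqs(host), kmer_freqs(window)
print(manhattan(f_host, f_win), euclidean(f_host, f_win))
```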

  4. Seismic modeling with radial basis function-generated finite differences (RBF-FD) – a simplified treatment of interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  5. Seismic modeling with radial basis function-generated finite differences (RBF-FD) - a simplified treatment of interfaces

    NASA Astrophysics Data System (ADS)

    Martin, Bradley; Fornberg, Bengt

    2017-04-01

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
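
    For readers unfamiliar with RBF-FD, the following is a minimal sketch of how RBF-FD weights are computed on a mesh-free stencil for a 1-D second derivative, using Gaussian basis functions and no polynomial augmentation or interface treatment; it illustrates the general weight-generation idea only, not the interface-aware stencils described in the abstract.

```python
import numpy as np

def rbf_fd_weights_d2(x_nodes, x_center, eps=0.5):
    """
    RBF-FD weights for the 1-D second derivative at x_center, using Gaussian
    RBFs phi(r) = exp(-(eps*r)^2) on the given stencil nodes. Sketch only:
    production RBF-FD codes normally add polynomial augmentation for accuracy
    and stability, and (as in the paper) interface information.
    """
    x = np.asarray(x_nodes, dtype=float)
    r = x[:, None] - x[None, :]
    A = np.exp(-(eps * r) ** 2)                       # RBF interpolation matrix
    d = x_center - x                                  # offsets to stencil nodes
    # d^2/dx^2 of exp(-eps^2 (x - xi)^2), evaluated at x_center:
    rhs = (4.0 * eps**4 * d**2 - 2.0 * eps**2) * np.exp(-(eps * d) ** 2)
    return np.linalg.solve(A, rhs)

# Sanity check on a scattered (mesh-free) stencil: applying the weights to x^2
# should give roughly its second derivative, 2 (accuracy depends on eps/stencil).
nodes = np.array([-0.9, -0.35, 0.1, 0.42, 1.0])
w = rbf_fd_weights_d2(nodes, x_center=0.1)
print(w @ nodes**2)
```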

  6. Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3

    NASA Technical Reports Server (NTRS)

    Chakravarthy, Sukumar R.

    1990-01-01

    An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.

  7. Non-symmetric forms of non-linear vibrations of flexible cylindrical panels and plates under longitudinal load and additive white noise

    NASA Astrophysics Data System (ADS)

    Krysko, V. A.; Awrejcewicz, J.; Krylova, E. Yu; Papkova, I. V.; Krysko, A. V.

    2018-06-01

    Parametric non-linear vibrations of flexible cylindrical panels subjected to additive white noise are studied. The governing Marguerre equations are investigated using the finite difference method (FDM) of second-order accuracy and the Runge-Kutta method. The considered mechanical structural member is treated as a system with a large (in the limit, infinite) number of degrees of freedom (DoF). The dependence of chaotic vibrations on the number of DoFs is investigated. Reliability of the results is guaranteed by comparing the results obtained using two qualitatively different methods to reduce the problem of PDEs (partial differential equations) to ODEs (ordinary differential equations), i.e. the Faedo-Galerkin method in higher approximations and the 4th and 6th order FDM. The Cauchy problem obtained by the FDM is eventually solved using the 4th-order Runge-Kutta method. The numerical experiment yielded, for a certain set of parameters, non-symmetric vibration modes/forms with and without white noise. In particular, it has been illustrated and discussed that the action of white noise on chaotic vibrations implies quasi-periodicity, whereas the previously non-symmetric vibration modes are closer to symmetric ones.

  8. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  9. Reconstructing baryon oscillations: A Lagrangian theory perspective

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Cohn, J. D.

    2009-03-01

    Recently Eisenstein and collaborators introduced a method to “reconstruct” the linear power spectrum from a nonlinearly evolved galaxy distribution in order to improve precision in measurements of baryon acoustic oscillations. We reformulate this method within the Lagrangian picture of structure formation, to better understand what such a method does, and what the resulting power spectra are. We show that reconstruction does not reproduce the linear density field, at second order. We however show that it does reduce the damping of the oscillations due to nonlinear structure formation, explaining the improvements seen in simulations. Our results suggest that the reconstructed power spectrum is potentially better modeled as the sum of three different power spectra, each dominating over different wavelength ranges and with different nonlinear damping terms. Finally, we also show that reconstruction reduces the mode-coupling term in the power spectrum, explaining why miscalibrations of the acoustic scale are reduced when one considers the reconstructed power spectrum.

  10. Improved finite-difference computation of the van der Waals force: One-dimensional case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinto, Fabrizio

    2009-10-15

    We present an improved demonstration of the calculation of Casimir forces in one-dimensional systems based on the recently proposed numerical imaginary frequency Green's function computation approach. The dispersion force on two thick lossy dielectric slabs separated by an empty gap and placed within a perfectly conducting cavity is obtained from the Green's function of the modified Helmholtz equation by means of an ordinary finite-difference method. In order to demonstrate the possibility to develop algorithms to explore complex geometries in two and three dimensions to higher order in the mesh spacing, we generalize existing classical electromagnetism algebraic methods to generate the difference equations for dielectric boundaries not coinciding with any grid points. Diagnostic tests are presented to monitor the accuracy of our implementation of the method and follow-up applications in higher dimensions are introduced.

  11. A total variation diminishing finite difference algorithm for sonic boom propagation models

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1993-01-01

    It is difficult to accurately model the rise phases of sonic boom waveforms with traditional finite difference algorithms because of finite difference phase dispersion. This paper introduces the concept of a total variation diminishing (TVD) finite difference method as a tool for accurately modeling the rise phases of sonic booms. A standard second order finite difference algorithm and its TVD modified counterpart are both applied to the one-way propagation of a square pulse. The TVD method clearly outperforms the non-TVD method, showing great potential as a new computational tool in the analysis of sonic boom propagation.
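
    As an illustration of the TVD idea, the sketch below advects a square pulse with a generic minmod flux-limited Lax-Wendroff scheme (a standard TVD construction, not necessarily the exact algorithm of the paper); the grid, Courant number and pulse shape are illustrative.

```python
import numpy as np

def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def advect_tvd(u, nu, steps):
    """
    Flux-limited (minmod) Lax-Wendroff scheme for u_t + c u_x = 0, c > 0,
    on a periodic grid; nu = c*dt/dx is the Courant number. With the limiter
    switched off (phi = 1) this reduces to standard Lax-Wendroff, which
    oscillates at the pulse edges; the limited scheme does not.
    """
    u = u.copy()
    for _ in range(steps):
        du = np.roll(u, -1) - u                              # u_{i+1} - u_i
        r = np.roll(du, 1) / np.where(np.abs(du) > 1e-14, du, 1e-14)
        phi = minmod(r)                                      # limiter
        flux = u + 0.5 * (1.0 - nu) * phi * du               # numerical flux / c
        u = u - nu * (flux - np.roll(flux, 1))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)               # square pulse
print(advect_tvd(u0, nu=0.5, steps=200).max())               # stays <= 1: no spurious overshoot
```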

  12. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high order finite difference approximations, and semi-analytical calculation of boundary conditions are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.

  13. Output Feedback Distributed Containment Control for High-Order Nonlinear Multiagent Systems.

    PubMed

    Li, Yafeng; Hua, Changchun; Wu, Shuangshuang; Guan, Xinping

    2017-01-31

    In this paper, we study the problem of output feedback distributed containment control for a class of high-order nonlinear multiagent systems under a fixed undirected graph and a fixed directed graph, respectively. Only the output signals of the systems can be measured. A novel reduced-order dynamic gain observer is constructed to estimate the unmeasured state variables of the system under a less conservative condition on the nonlinear terms than the traditional Lipschitz one. Via the backstepping method, output feedback distributed nonlinear controllers for the followers are designed. By means of the novel first virtual controllers, we separate the estimated state variables of different agents from each other. Consequently, the designed controllers are independent of the neighbors' estimated state variables except for their output information, and the dynamics of each agent can be greatly different, which gives the design method a wider class of applications. Finally, a numerical simulation is presented to illustrate the effectiveness of the proposed method.

  14. Analytical and numerical treatment of the heat conduction equation obtained via time-fractional distributed-order heat conduction law

    NASA Astrophysics Data System (ADS)

    Želi, Velibor; Zorica, Dušan

    2018-02-01

    A generalization of the heat conduction equation is obtained by considering the system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of the energy balance equation and the constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using Adams-Bashforth and Grünwald-Letnikov schemes to approximate derivatives in the temporal domain and a leapfrog scheme for the spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.

  15. Exercise order affects the total training volume and the ratings of perceived exertion in response to a super-set resistance training session

    PubMed Central

    Balsamo, Sandor; Tibana, Ramires Alsamir; Nascimento, Dahan da Cunha; de Farias, Gleyverton Landim; Petruccelli, Zeno; de Santana, Frederico dos Santos; Martins, Otávio Vanni; de Aguiar, Fernando; Pereira, Guilherme Borges; de Souza, Jéssica Cardoso; Prestes, Jonato

    2012-01-01

    The super-set is a widely used resistance training method consisting of exercises for agonist and antagonist muscles with limited or no rest interval between them – for example, bench press followed by bent-over rows. In this sense, the aim of the present study was to compare the effects of different super-set exercise sequences on the total training volume. A secondary aim was to evaluate the ratings of perceived exertion and fatigue index in response to different exercise order. On separate testing days, twelve resistance-trained men, aged 23.0 ± 4.3 years, height 174.8 ± 6.75 cm, body mass 77.8 ± 13.27 kg, body fat 12.0% ± 4.7%, were submitted to a super-set method by using two different exercise orders: quadriceps (leg extension) + hamstrings (leg curl) (QH) or hamstrings (leg curl) + quadriceps (leg extension) (HQ). Sessions consisted of three sets with a ten-repetition maximum load with 90 seconds rest between sets. Results revealed that the total training volume was higher for the HQ exercise order (P = 0.02) with lower perceived exertion than the inverse order (P = 0.04). These results suggest that HQ exercise order involving lower limbs may benefit practitioners interested in reaching a higher total training volume with lower ratings of perceived exertion compared with the leg extension plus leg curl order. PMID:22371654

  16. A method for the computational modeling of the physics of heart murmurs

    NASA Astrophysics Data System (ADS)

    Seo, Jung Hee; Bakhshaee, Hani; Garreau, Guillaume; Zhu, Chi; Andreou, Andreas; Thompson, William R.; Mittal, Rajat

    2017-05-01

    A computational method for direct simulation of the generation and propagation of blood-flow-induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The current method employs a two-step, one-way coupled approach for the sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the equations governing the generation and propagation of the three-dimensional elastic waves associated with the murmur are resolved with a high-order, immersed-boundary-based finite-difference method in the time domain. The proposed method is applied to a model problem of an aortic stenosis murmur, and the simulation results are verified and validated by comparing with known solutions as well as experimental measurements. The murmur propagation in a realistic model of a human thorax is also simulated using the computational method. The roles of hemodynamics and elastic wave propagation in the murmur are discussed based on the simulation results.

  17. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, which was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. The promising results endorse the proposed method for future real-time applications.
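
    For context, the snippet below fits the bi-exponential pulse model to a synthetic noisy pulse with plain nonlinear least squares over all samples; this is only an illustrative baseline with assumed parameter values, not the three-sample integral-equation estimator proposed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, tau_rise, tau_decay):
    """Bi-exponential pulse model: A * (exp(-t/tau_decay) - exp(-t/tau_rise))."""
    return A * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

# Synthetic pulse with additive noise (illustrative parameter values only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)                   # arbitrary time units
clean = biexp(t, A=1.0, tau_rise=0.2, tau_decay=2.0)
noisy = clean + rng.normal(0.0, 0.01, t.size)     # roughly SNR ~ 100

# Baseline estimator: full nonlinear least squares over all samples.
popt, _ = curve_fit(biexp, t, noisy, p0=[0.8, 0.3, 1.5])
print(popt)                                       # recovered (A, tau_rise, tau_decay)
```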

  18. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique is based on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
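
    As a building block of the collocation step described above, the following sketch constructs the Chebyshev-Gauss-Lobatto points and the standard first-derivative collocation matrix (after Trefethen); it is a generic ingredient, not the authors' NSFD-coupled implementation.

```python
import numpy as np

def cheb(N):
    """
    Chebyshev-Gauss-Lobatto points and the first-derivative collocation matrix
    (standard construction, e.g. Trefethen, "Spectral Methods in MATLAB").
    """
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal = negative row sums
    return D, x

# Check: differentiating a smooth function is spectrally accurate.
D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))      # round-off level error
```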

  19. Verification of a non-hydrostatic dynamical core using horizontally spectral element vertically finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-06-01

    The non-hydrostatic (NH) compressible Euler equations of the dry atmosphere are solved in a simplified two-dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind-biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, which are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally spectral element, vertically finite difference model is accurate and robust. By using the 2-D slice model, we effectively show that the combined spatial discretization method, with the spectral element and finite difference methods in the horizontal and vertical directions, respectively, offers a viable approach for the development of a NH dynamical core.

  20. Correlation among auto-refractor, wavefront aberration, and subjective manual refraction

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ren, Qiushi

    2005-01-01

    Three optometry methods, namely auto-refractor, wavefront aberrometer and subjective manual refraction, were studied and compared in measuring the low order aberrations of 117 normal eyes of 60 people. Paired t-tests and linear regression were used to study the relationships among these three methods when measuring myopia with astigmatism. In order to make the analysis clearer, we divided the 117 normal eyes into different groups according to their subjective manual refraction and repeated the statistical analysis. Correlations among the three methods were significant for sphere, cylinder and axis in all groups, with the sphere correlation coefficients the largest (R>0.98, P<0.01) and the cylinder ones the smallest (0.900.01). The auto-refractor differed significantly from the other two methods when measuring cylinder (P<0.01). The results after grouping differed slightly from the analysis of the total sample. Although the three methods differed significantly from each other in certain parameters, the amplitude of these differences was not large, which indicates that the agreement among auto-refractor, wavefront aberrometer and subjective refraction is good. However, we suggest that wavefront aberration measurement could be a good starting point for optometry, while subjective refraction is still necessary for refinement.

  1. High Order Discontinuous Galerkin Methods for Convection Dominated Problems with Application to Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    2000-01-01

    This project is about the investigation of the development of the discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection dominated problems, with applications to aeroacoustics. On the analysis side, we have studied the efficient and stable discontinuous Galerkin framework for small second derivative terms, for example in the Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are considered as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulation, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One and two dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured meshes, including the choice of linear and nonlinear weights, what to do with negative weights, etc. Numerical examples are shown to demonstrate the accuracy and robustness of the methods for shock calculations. Jointly with P. Montarnal, we have used a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gases. The main idea is an energy decomposition of the form epsilon = epsilon_1 + epsilon_2, where epsilon_1 is associated with a simpler pressure law (a gamma-law in this paper) and the nonlinear deviation epsilon_2 is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the epsilon_1 gamma-law. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.

  2. Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    1997-01-01

    An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + a U_x = alpha U_xx. Accuracy is also verified on the nonlinear problem, U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
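
    The kind of interior-operator stability analysis mentioned above can be illustrated with a small von Neumann check: the sketch below evaluates the classical fourth-order Runge-Kutta amplification factor over the Fourier symbol of second-order central differencing of the convection-diffusion equation; the coefficients and time steps are illustrative, and the paper's schemes use higher-order and compact operators.

```python
import numpy as np

def rk4_amplification(z):
    """Stability polynomial of the classical fourth-order Runge-Kutta scheme."""
    return 1.0 + z + z**2 / 2.0 + z**3 / 6.0 + z**4 / 24.0

def max_amplification(a, alpha, dx, dt, n_modes=512):
    """
    Largest |R(dt*lambda(theta))| over Fourier modes for second-order central
    differencing of u_t + a u_x = alpha u_xx (the model equation above).
    """
    theta = np.linspace(0.0, np.pi, n_modes)
    lam = (-1j * a * np.sin(theta) / dx                     # symbol of central a*u_x
           + alpha * (2.0 * np.cos(theta) - 2.0) / dx**2)   # symbol of central u_xx
    return np.max(np.abs(rk4_amplification(dt * lam)))

# Example: check candidate time steps for von Neumann stability.
a, alpha, dx = 1.0, 1.0e-3, 1.0e-2
for dt in (0.01, 0.02, 0.04):
    print(dt, max_amplification(a, alpha, dx, dt) <= 1.0 + 1e-12)
# The first two time steps satisfy the stability condition; the third does not.
```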

  3. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support when capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, but all are based on the cumulated capacity demand at each machine. In a simulation study, the methods' impact on service level and tardiness is compared to a constant provided capacity for a single and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on the processing time and customer required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity. PMID:27226649

  4. On High-Order Upwind Methods for Advection

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2017-01-01

    In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection, the first three are piecewise linear, and the last two, piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells, and can be considered as a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the on-going "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counter intuitive since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number N of degrees of freedom per cell, in spite of the different piecewise polynomial degrees, share the same sets of eigenvalues and thus have the same stability and accuracy. Moreover, these schemes are accurate to order 2N-1, which is higher than the expected order of N.

  5. Improvement to microphysical schemes in WRF Model based on observed data, part I: size distribution function

    NASA Astrophysics Data System (ADS)

    Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.

    2015-12-01

    In this study, we have evaluated the performance of size distribution functions (SDFs) with two and three moments in fitting the observed size distribution of rain droplets at three different heights. The goal is to improve the microphysics schemes in meso-scale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs considered were the M-P distribution, i.e. a Gamma SDF with a fixed shape parameter (FSP); Gamma SDFs whose shape parameters were diagnosed following Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08); a Gamma SDF obtained by solving for the shape parameter (SSP); and a Lognormal SDF. Based on the preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying a FSP was 10^-2 when fitting the 0-order moment of the observed rain droplet distribution, and it grew to 10^-1 and 10^0, respectively, for the 1-4 order moments and the 5-6 order moments. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods could improve the fitting accuracy for the 0-6 order moments, especially the ensemble method coupling the SSP and DSPS08 methods, which gave average relative errors of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments. The relative error of fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. The fitting accuracy was strongly sensitive to the choice of moment group: when the ensemble method coupling SSP and DSPS08 was used, a better fit to the 1-3-5 moments of the SDF was possible compared to fitting the 0-3-6 moment group.
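
    As a simple illustration of moment fitting for the fixed-shape (M-P, i.e. exponential) case, the sketch below recovers the two free parameters from the 0th and 3rd observed moments and reports relative errors in other moments; the droplet bins and concentrations are hypothetical placeholders, not the Yellow Mountain observations, and the diagnosed/solved shape-parameter variants are not implemented.

```python
import numpy as np
from math import factorial

def fit_mp_from_moments(m0, m3):
    """
    Method-of-moments fit of the M-P (exponential) SDF N(D) = N0*exp(-lam*D)
    from the 0th and 3rd observed moments; analytic moments are
    M_k = N0 * k! / lam**(k+1). (Two-moment, fixed-shape case only.)
    """
    lam = (6.0 * m0 / m3) ** (1.0 / 3.0)
    n0 = m0 * lam
    return n0, lam

def mp_moment(n0, lam, k):
    return n0 * factorial(k) / lam ** (k + 1)

# Hypothetical binned droplet data (diameters in mm, concentrations per bin).
diam = np.array([0.3, 0.5, 0.8, 1.2, 1.7, 2.4])
conc = np.array([800., 520., 300., 140., 50., 12.])
obs = {k: np.sum(conc * diam**k) for k in range(7)}    # observed moments

n0, lam = fit_mp_from_moments(obs[0], obs[3])
for k in (1, 4, 6):                                    # relative error in unfitted moments
    print(k, abs(mp_moment(n0, lam, k) - obs[k]) / obs[k])
```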

  6. Adaptive conversion of a high-order mode beam into a near-diffraction-limited beam.

    PubMed

    Zhao, Haichuan; Wang, Xiaolin; Ma, Haotong; Zhou, Pu; Ma, Yanxing; Xu, Xiaojun; Zhao, Yijun

    2011-08-01

    We present a new method for efficiently transforming a high-order mode beam into a nearly Gaussian beam with much higher beam quality. The method is based on modulating the phases of the different lobes by a stochastic parallel gradient descent (SPGD) algorithm and on coherent addition after phase flattening. We demonstrate the method by transforming an LP11 mode into a nearly Gaussian beam. The experimental results reveal that the power in the diffraction-limited bucket in the far field is increased by more than a factor of 1.5.
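
    A generic stochastic parallel gradient descent loop of the kind referred to above can be sketched as follows; the metric function, gain and perturbation amplitude are assumptions for a toy two-lobe example, not the experimental control loop of the paper.

```python
import numpy as np

def spgd(metric, n_phases, gain=0.5, sigma=0.1, iters=2000, seed=0):
    """
    Stochastic parallel gradient descent on a set of lobe phases.
    metric(phases) returns the quantity to maximize (e.g. power in the bucket).
    """
    rng = np.random.default_rng(seed)
    u = np.zeros(n_phases)
    for _ in range(iters):
        delta = sigma * rng.choice([-1.0, 1.0], size=n_phases)  # Bernoulli perturbation
        dJ = metric(u + delta) - metric(u - delta)               # two-sided metric change
        u += gain * dJ * delta                                   # gradient-like update
    return u

# Toy metric: coherent combination of two unit-amplitude lobes with a pi phase
# offset (as in an LP11 mode); maximum when the applied phases cancel the offset.
offset = np.array([0.0, np.pi])
metric = lambda u: np.abs(np.exp(1j * (u + offset)).sum()) ** 2
u_opt = spgd(metric, n_phases=2)
print(metric(u_opt))   # approaches 4, i.e. fully coherent addition of the two fields
```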

  7. Superconvergent second order Cartesian method for solving free boundary problem for invadopodia formation

    NASA Astrophysics Data System (ADS)

    Gallinato, Olivier; Poignard, Clair

    2017-06-01

    In this paper, we present a superconvergent second order Cartesian method to solve a free boundary problem with two harmonic phases coupled through the moving interface. The model, recently proposed by the authors and colleagues, describes the formation of cell protrusions. The moving interface is described by a level set function and is advected at the velocity given by the gradient of the inner phase. The finite difference method proposed in this paper consists of a new stabilized ghost fluid method and second order discretizations for the Laplace operator with the boundary conditions (Dirichlet, Neumann or Robin conditions). Interestingly, the method to solve the harmonic subproblems is superconvergent on two levels, in the sense that the first and second order derivatives of the numerical solutions are obtained with second order accuracy, similarly to the solution itself. We exhibit numerical criteria on the data accuracy to obtain such properties, and numerical simulations corroborate these criteria. In addition to these properties, we propose an appropriate extension of the velocity of the level set to avoid any loss of consistency and to obtain second order accuracy for the complete free boundary problem. Interestingly, we highlight the transmission of the superconvergent properties for the static subproblems and their preservation by the dynamical scheme. Our method is also well suited for quasistatic Hele-Shaw-like or Muskat-like problems.

  8. The development and optimisation of 3D black-blood R2* mapping of the carotid artery wall.

    PubMed

    Yuan, Jianmin; Graves, Martin J; Patterson, Andrew J; Priest, Andrew N; Ruetten, Pascal P R; Usman, Ammara; Gillard, Jonathan H

    2017-12-01

    To develop and optimise a 3D black-blood R2* mapping sequence for imaging the carotid artery wall, using optimal blood suppression and k-space view ordering. Two different blood suppression preparation methods were used: Delay Alternating with Nutation for Tailored Excitation (DANTE) and improved Motion Sensitive Driven Equilibrium (iMSDE) were each combined with a three-dimensional (3D) multi-echo Fast Spoiled GRadient echo (ME-FSPGR) readout. Three different k-space view-order designs, Radial Fan-beam Encoding Ordering (RFEO), Distance-Determined Encoding Ordering (DDEO) and Centric Phase Encoding Order (CPEO), were investigated. The sequences were evaluated through Bloch simulation and in a cohort of twenty volunteers. The vessel wall Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and R2*, and the sternocleidomastoid muscle R2* were measured and compared. Different numbers of acquisitions-per-shot (APS) were evaluated to further optimise the effectiveness of blood suppression. In the sternocleidomastoid muscle of the volunteers, all sequences gave R2* measurements comparable to a conventional, i.e. non-blood-suppressed, sequence. Both the Bloch simulations and the volunteer data showed that DANTE has a higher signal intensity and results in a higher image SNR than iMSDE. Blood suppression efficiency was not significantly different when using different k-space view orders. A smaller APS achieved better blood suppression. The use of blood-suppression preparation methods does not affect the measurement of R2*. A DANTE-prepared ME-FSPGR sequence with a small number of acquisitions-per-shot can provide high quality black-blood R2* measurements of the carotid vessel wall.
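
    For orientation, R2* in such multi-echo acquisitions is commonly estimated from the mono-exponential decay of the echo magnitudes; the sketch below uses a simple log-linear least-squares fit with illustrative echo times and signal values, and is not a description of the study's specific fitting procedure.

```python
import numpy as np

def r2star_loglinear(te_ms, signal):
    """
    Voxel-wise R2* estimate from multi-echo gradient-echo magnitudes, assuming
    the mono-exponential model S(TE) = S0 * exp(-R2* * TE). Log-linear least
    squares; echo times in ms give R2* in 1/ms.
    """
    slope, _ = np.polyfit(te_ms, np.log(signal), 1)
    return -slope

# Illustrative echo times and signals (placeholder values, not study data).
te = np.array([2.0, 6.0, 10.0, 14.0, 18.0])           # ms
s = 100.0 * np.exp(-0.05 * te)                        # true R2* = 0.05 /ms = 50 /s
print(r2star_loglinear(te, s) * 1000.0)               # ~50 s^-1
```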

  9. Arbitrary Order Mixed Mimetic Finite Differences Method with Nodal Degrees of Freedom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iaroshenko, Oleksandr; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-09-01

    In this work we consider a modification to an arbitrary order mixed mimetic finite difference method (MFD) for a diffusion equation on general polygonal meshes [1]. The modification is based on moving some degrees of freedom (DoF) for a flux variable from edges to vertices. We showed that for a non-degenerate element this transformation is locally equivalent, i.e. there is a one-to-one map between the new and the old DoF. Globally, on the other hand, this transformation leads to a reduction of the total number of degrees of freedom (by up to 40%) and additional continuity of the discrete flux.

  10. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which has a form similar to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  11. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper intends to apply Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on finding several different fitting or interpolation methods to improve the data precision, in order to meet, as far as possible, the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated. Thus, a precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and a neural network method are used in this paper to process the Google Earth elevation data, and internal conformity, external conformity and the cross-correlation coefficient are used as evaluation indexes of the data processing effect. Results: The V4 interpolation method has no fitting residual at the fitting points; its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to the cubic polynomial surface fitting method, but performs better in cases of larger elevation differences. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be used as the main method, with the neural network method as an auxiliary in cases of larger elevation differences. Conclusions: The cubic polynomial surface fitting method can markedly improve the precision of Google Earth elevation data. After the precision improvement, the data error in hilly terrain areas meets the requirements of the specifications, so the data can be used in the feasibility study stage of road survey and design.
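
    A minimal sketch of the cubic polynomial surface fitting step is given below: a least-squares surface is fitted to the elevation differences at known public points and then evaluated at query points, where (per the paper) the fitted difference is subtracted from the Google Earth elevation; the control-point coordinates and differences are placeholders.

```python
import numpy as np

def poly_surface_design(x, y, degree=3):
    """Design matrix with all monomials x**i * y**j, i + j <= degree."""
    cols = [x**i * y**j for i in range(degree + 1)
                         for j in range(degree + 1 - i)]
    return np.column_stack(cols)

def fit_elevation_correction(x, y, dh, degree=3):
    """
    Least-squares polynomial surface fitted to the elevation differences dh
    at public control points; the returned function evaluates the fitted
    difference at arbitrary query coordinates.
    """
    A = poly_surface_design(x, y, degree)
    coef, *_ = np.linalg.lstsq(A, dh, rcond=None)
    return lambda xq, yq: poly_surface_design(xq, yq, degree) @ coef

# Hypothetical control points (coordinates in m, differences in m).
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 1000, 40), rng.uniform(0, 1000, 40)
dh = 2.0 + 0.003 * x - 0.001 * y + rng.normal(0, 0.2, 40)

correct = fit_elevation_correction(x, y, dh)
print(correct(np.array([500.0]), np.array([500.0])))   # fitted difference at a query point
```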

  12. Time-Domain Evaluation of Fractional Order Controllers’ Direct Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ma, Chengbin; Hori, Yoichi

    Fractional Order Control (FOC), in which the controlled systems and/or controllers are described by fractional order differential equations, has been applied to various control problems. Though it is not difficult to understand FOC's theoretical superiority, its realization remains somewhat problematic. Since fractional order systems are infinite-dimensional, a proper approximation by a finite difference equation is needed to realize the designed fractional order controllers. In this paper, the existing direct discretization methods are evaluated by their convergence and by time-domain comparison with a baseline case. The proposed sampling-time scaling property is used to calculate the baseline case with full memory length. This novel discretization method is based on the classical trapezoidal rule but with a scaled sampling time. Comparative studies show that its good performance and simple algorithm make the Short Memory Principle method the most practical choice. FOC research is still at an early stage, but its applications in modeling and its robustness against non-linearities reveal promising aspects. Parallel to the development of FOC theories, applying FOC to various control problems is also crucially important and one of the top-priority issues.
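
    To make the memory-length issue concrete, the sketch below implements the Grünwald-Letnikov approximation of a fractional derivative with an optional truncated (short-memory) sum; it illustrates the Short Memory Principle generically and is not the sampling-time-scaled trapezoidal method proposed in the paper.

```python
import numpy as np

def gl_derivative(f, h, alpha, memory=None):
    """
    Grunwald-Letnikov approximation of the order-alpha derivative of uniformly
    sampled values f (step h). If `memory` (number of past samples) is given,
    the convolution is truncated - the Short Memory Principle discussed above.
    """
    n = len(f)
    L = n if memory is None else min(memory, n)
    w = np.empty(L)
    w[0] = 1.0
    for j in range(1, L):                        # recursion for (-1)^j * binom(alpha, j)
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.zeros(n)
    for k in range(n):
        m = min(k + 1, L)
        d[k] = np.dot(w[:m], f[k::-1][:m]) / h**alpha
    return d

# Check the full-memory sum against the known result D^0.5 of t, i.e. 2*sqrt(t/pi).
h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
print(gl_derivative(t, h, 0.5)[-1], 2.0 * np.sqrt(t[-1] / np.pi))
# A truncated run, gl_derivative(t, h, 0.5, memory=300), trades a truncation
# bias for fixed memory and O(memory) work per step.
```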

  13. Dynamics and Instabilities of the Shastry-Sutherland Model

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Batista, Cristian D.

    2018-06-01

    We study the excitation spectrum in the dimer phase of the Shastry-Sutherland model by using an unbiased variational method that works in the thermodynamic limit. The method outputs dynamical correlation functions in all possible channels. This output is exploited to identify the order parameters with the highest susceptibility (single or multitriplon condensation in a specific channel) upon approaching a quantum phase transition in the magnetic field versus the J'/J phase diagram. We find four different instabilities: antiferro spin nematic, plaquette spin nematic, stripe magnetic order, and plaquette order, two of which have been reported in previous studies.

  14. On reinitializing level set functions

    NASA Astrophysics Data System (ADS)

    Min, Chohong

    2010-04-01

    In this paper, we consider reinitializing level set functions through the equation ϕ_t + sgn(ϕ_0)(‖∇ϕ‖ − 1) = 0 [16]. The method of Russo and Smereka [11] is adopted for the spatial discretization of the equation. The spatial discretization is, simply speaking, the second order ENO finite difference with subcell resolution near the interface. Our main interest is the temporal discretization of the equation. We compare three temporal discretizations: the second order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. Since the time in the equation is fictitious, one hypothesis is that all the temporal discretizations result in the same stationary state. Since the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second order ENO spatial discretization, a second hypothesis is that the forward Euler temporal discretization should invoke numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods attain second order accuracy, and the forward Euler method converges with an order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best among the three: compared to the Runge-Kutta method, it is twice as fast and requires half the memory for the same accuracy.
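
    A first-order 1-D sketch of the reinitialization iteration is given below, using a Godunov upwind gradient and forward Euler pseudo-time stepping; the paper's scheme is second-order ENO with subcell resolution and also compares Runge-Kutta and Gauss-Seidel variants, so this only shows the structure of the iteration.

```python
import numpy as np

def reinit_forward_euler(phi0, dx, dt, steps):
    """
    First-order 1-D sketch of phi_t + sgn(phi0)(|phi_x| - 1) = 0 with a
    Godunov upwind gradient and forward Euler pseudo-time stepping.
    """
    phi = phi0.copy()
    s = phi0 / np.sqrt(phi0**2 + dx**2)              # smoothed sign function
    for _ in range(steps):
        a = (phi - np.roll(phi, 1)) / dx             # backward difference D^-
        b = (np.roll(phi, -1) - phi) / dx            # forward difference D^+
        grad_plus = np.sqrt(np.maximum(np.maximum(a, 0.0)**2,
                                       np.minimum(b, 0.0)**2))
        grad_minus = np.sqrt(np.maximum(np.minimum(a, 0.0)**2,
                                        np.maximum(b, 0.0)**2))
        grad = np.where(s > 0, grad_plus, grad_minus)
        phi = phi - dt * s * (grad - 1.0)
    return phi

# A badly scaled level set of the interface x = 0 relaxes toward signed distance.
x = np.linspace(-1.0, 1.0, 201)
phi = reinit_forward_euler(5.0 * x**3 + 0.5 * x, dx=x[1] - x[0],
                           dt=0.5 * (x[1] - x[0]), steps=400)
print(np.max(np.abs(phi[50:150] - x[50:150])))       # small error away from the domain ends
```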

  15. Computation of rapidly varied unsteady, free-surface flow

    USGS Publications Warehouse

    Basco, D.R.

    1987-01-01

    Many unsteady flows in hydraulics occur with relatively large gradients in free surface profiles. The assumption of hydrostatic pressure distribution with depth is no longer valid. These are the rapidly-varied unsteady flows (RVF) of classical hydraulics, and they also encompass the short wave propagation of coastal hydraulics. The purpose of this report is to present an introductory review of the Boussinesq-type differential equations that describe these flows and to discuss methods for their numerical integration. On variable slopes and for large scale (finite-amplitude) disturbances, three independent derivational methods all gave differences in the motion equation for higher order terms. The importance of these higher-order terms for riverine applications must be determined by numerical experiments. Care must be taken in selection of the appropriate finite-difference scheme to minimize truncation error effects and the possibility of diverging (double mode) numerical solutions. It is recommended that practical hydraulics cases be established and tested numerically to demonstrate the order of differences in solution with those obtained from the long wave equations of St. Venant. (USGS)

  16. Physical activity assessed with three different methods and the Framingham Risk Score on 10-year coronary heart disease risk

    USDA-ARS?s Scientific Manuscript database

    Physical activity (PA) protects against coronary heart disease (CHD) by favorably altering several CHD risk factors. In order to best understand the true nature of the relationship between PA and CHD, the impact different PA assessment methods have on the relationships must first be clarified. The p...

  17. Methods for compressible fluid simulation on GPUs using high-order finite differences

    NASA Astrophysics Data System (ADS)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
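
    For reference, the sketch below applies the standard sixth-order central first-derivative stencil on a periodic one-dimensional grid in plain NumPy. It only illustrates the kind of stencil involved; the paper's solver applies such stencils in three dimensions inside hand-optimized GPU kernels with cache blocking, which is not reproduced here, and the names used are illustrative.

```python
import numpy as np

# Central 6th-order first-derivative coefficients for offsets -3..+3
C6 = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / 60.0

def ddx_6th(f, dx):
    """Sixth-order first derivative on a periodic 1-D grid (plain NumPy)."""
    df = np.zeros_like(f)
    for k, c in zip(range(-3, 4), C6):
        df += c * np.roll(f, -k)              # picks up f[i + k] with wrap-around
    return df / dx

# Example: derivative of sin(x) on a periodic grid; err should be tiny
x = np.linspace(0.0, 2.0 * np.pi, 192, endpoint=False)
err = np.max(np.abs(ddx_6th(np.sin(x), x[1] - x[0]) - np.cos(x)))
```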

  18. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.

  19. Recognition of Risk Information - Adaptation of J. Bertin's Orderable Matrix for social communication

    NASA Astrophysics Data System (ADS)

    Ishida, Keiichi

    2018-05-01

    This paper aims to show the capability of Jacques Bertin's Orderable Matrix, a visualization method for data analysis and a way of recognizing structure in data. The matrix displays data by replacing numbers with visual elements. As an example, using a data set of natural hazard rankings for selected metropolitan cities around the world, this paper describes how the Orderable Matrix handles the data set and brings out its characteristic factors. Beyond providing a risk ranking of cities, the Orderable Matrix shows how the cities differ from one another in the dangers they face. Furthermore, the data visualized with the Orderable Matrix allow the characteristics of the data set to be grasped comprehensively and at a glance.

  20. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    PubMed

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for water flux reabsorption measurement in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by a carrier-mediated mechanism. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats; consequently, all the individual values were combined to compare the reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

    A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for the primitive variables. This method combines the best features of data management and computational efficiency of space marching procedures with the generality and stability of time dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results of adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a better computational efficiency of at least one order of magnitude over the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.

  2. One-dimensional high-order compact method for solving Euler's equations

    NASA Astrophysics Data System (ADS)

    Mohamad, M. A. H.; Basri, S.; Basuno, B.

    2012-06-01

    In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most well-known and relevant are those based on flux-vector splitting and Godunov-type schemes. Previously, this system was developed through computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the shock tube problems of receding flow and shock waves, were not investigated there. Thus, the objective of this study is to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting. From this observation, discretization of the convective flux terms of the Euler equations is based on a hybrid flux-vector splitting, known as the advection upstream splitting method (AUSM) scheme, which combines the accuracy of flux-difference splitting and the robustness of flux-vector splitting. The AUSM scheme applied together with the third-order compact approximation of the finite difference equations was then analyzed in detail. For the one-dimensional problem with first-order schemes, an explicit time-integration method is adopted. In addition, the development and modification of the source code for one-dimensional flow is validated with four test cases, namely an unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results also served to ensure that the Riemann problem is properly identified. Further analysis compared the characteristics of the AUSM scheme against experimental results obtained from previous works, and also against computational results generated by the van Leer, KFVS and AUSMPW schemes. Furthermore, there is a remarkable improvement with the extension of the AUSM scheme from first-order to third-order accuracy in terms of shocks, contact discontinuities and rarefaction waves.
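
    The sketch below implements the classic first-order AUSM (Liou-Steffen) interface flux for the one-dimensional Euler equations, which is the flux-vector-splitting idea at the core of the scheme described; the third-order compact reconstruction used in the paper is omitted, and the function name and calling convention are illustrative.

```python
import numpy as np

GAMMA = 1.4

def ausm_flux(rho_L, u_L, p_L, rho_R, u_R, p_R):
    """First-order AUSM interface flux for the 1-D Euler equations
    (Liou-Steffen Mach and pressure splittings).  Minimal sketch only."""
    def split(rho, u, p):
        a = np.sqrt(GAMMA * p / rho)                          # speed of sound
        M = u / a
        H = GAMMA * p / ((GAMMA - 1.0) * rho) + 0.5 * u**2    # total enthalpy
        # Subsonic polynomial splittings, supersonic one-sided values
        Mp = np.where(np.abs(M) <= 1.0, 0.25 * (M + 1.0)**2, 0.5 * (M + np.abs(M)))
        Mm = np.where(np.abs(M) <= 1.0, -0.25 * (M - 1.0)**2, 0.5 * (M - np.abs(M)))
        Msafe = np.where(M == 0.0, 1.0, M)                    # guard the division
        pp = np.where(np.abs(M) <= 1.0, 0.25 * p * (M + 1.0)**2 * (2.0 - M),
                      0.5 * p * (M + np.abs(M)) / Msafe)
        pm = np.where(np.abs(M) <= 1.0, 0.25 * p * (M - 1.0)**2 * (2.0 + M),
                      0.5 * p * (M - np.abs(M)) / Msafe)
        Phi = np.array([rho * a, rho * a * u, rho * a * H])   # convected vector
        return Mp, Mm, pp, pm, Phi

    MpL, _, ppL, _, PhiL = split(rho_L, u_L, p_L)
    _, MmR, _, pmR, PhiR = split(rho_R, u_R, p_R)
    m = MpL + MmR                                             # interface Mach number
    flux = 0.5 * (m + np.abs(m)) * PhiL + 0.5 * (m - np.abs(m)) * PhiR
    flux[1] += ppL + pmR                                      # add split pressure
    return flux

# Example: flux across a Sod-type interface (left: rho=1, p=1; right: rho=0.125, p=0.1)
F = ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1)
```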

  3. Qualitative and quantitative evaluation of six algorithms for correcting intensity nonuniformity effects.

    PubMed

    Arnold, J B; Liow, J S; Schaper, K A; Stern, J J; Sled, J G; Shattuck, D W; Worth, A J; Cohen, M S; Leahy, R M; Mazziotta, J C; Rottenberg, D A

    2001-05-01

    The desire to correct intensity nonuniformity in magnetic resonance images has led to the proliferation of nonuniformity-correction (NUC) algorithms with different theoretical underpinnings. In order to provide end users with a rational basis for selecting a given algorithm for a specific neuroscientific application, we evaluated the performance of six NUC algorithms. We used simulated and real MRI data volumes, including six repeat scans of the same subject, in order to rank the accuracy, precision, and stability of the nonuniformity corrections. We also compared algorithms using data volumes from different subjects and different (1.5T and 3.0T) MRI scanners in order to relate differences in algorithmic performance to intersubject variability and/or differences in scanner performance. In phantom studies, the correlation of the extracted with the applied nonuniformity was highest in the transaxial (left-to-right) direction and lowest in the axial (top-to-bottom) direction. Two of the six algorithms demonstrated a high degree of stability, as measured by the iterative application of the algorithm to its corrected output. While none of the algorithms performed ideally under all circumstances, locally adaptive methods generally outperformed nonadaptive methods. Copyright 2001 Academic Press.

  4. ECG artifact cancellation in surface EMG signals by fractional order calculus application.

    PubMed

    Miljković, Nadica; Popović, Nenad; Djordjević, Olivera; Konstantinović, Ljubica; Šekara, Tomislav B

    2017-03-01

    New aspects for automatic electrocardiography artifact removal from surface electromyography signals by application of fractional order calculus in combination with linear and nonlinear moving window filters are explored. Surface electromyography recordings of skeletal trunk muscles are commonly contaminated with spike shaped artifacts. This artifact originates from electrical heart activity, recorded by electrocardiography, commonly present in the surface electromyography signals recorded in heart proximity. For appropriate assessment of neuromuscular changes by means of surface electromyography, application of a proper filtering technique of electrocardiography artifact is crucial. A novel method for automatic artifact cancellation in surface electromyography signals by applying fractional order calculus and nonlinear median filter is introduced. The proposed method is compared with the linear moving average filter, with and without prior application of fractional order calculus. 3D graphs for assessment of window lengths of the filters, crest factors, root mean square differences, and fractional calculus orders (called WFC and WRC graphs) have been introduced. For an appropriate quantitative filtering evaluation, the synthetic electrocardiography signal and analogous semi-synthetic dataset have been generated. The examples of noise removal in 10 able-bodied subjects and in one patient with muscular dystrophy are presented for qualitative analysis. The crest factors, correlation coefficients, and root mean square differences of the recorded and semi-synthetic electromyography datasets showed that the most successful method was the median filter in combination with fractional order calculus of the order 0.9. A statistically significantly greater (p < 0.001) ECG peak reduction was obtained with the median filter than with the moving average filter in cases where the amplitude of muscle contraction was low relative to the ECG spikes. The presented results suggest that the novel method combining a median filter and fractional order calculus can be used for automatic filtering of electrocardiography artifacts in the surface electromyography signal envelopes recorded in trunk muscles. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  5. High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs

    NASA Technical Reports Server (NTRS)

    Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.

    2014-01-01

    This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
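
    As a minimal illustration of the implicit-explicit idea, the sketch below advances the viscous Burgers' equation with a first-order IMEX Euler step, treating convection explicitly and diffusion implicitly. It is a toy example under standard discretizations; the paper instead uses 6-stage additive Runge-Kutta schemes with SBP operators and SAT boundary penalties, and all names and parameters here are illustrative.

```python
import numpy as np

def imex_euler_burgers(u0, nu, dx, dt, n_steps):
    """First-order IMEX (implicit-explicit) Euler for the viscous Burgers'
    equation on a periodic grid: convection explicit, diffusion implicit."""
    n = len(u0)
    # Second-difference matrix (periodic) for the implicit diffusion term
    D2 = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
          + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx**2
    A = np.eye(n) - dt * nu * D2              # (I - dt*nu*D2) u^{n+1} = rhs
    u = u0.copy()
    for _ in range(n_steps):
        # Explicit convection term: central difference of the flux u^2/2
        conv = (np.roll(u**2, -1) - np.roll(u**2, 1)) / (4.0 * dx)
        u = np.linalg.solve(A, u - dt * conv)
    return u

# Example: smooth initial profile on a periodic domain
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = imex_euler_burgers(np.sin(x), nu=0.05, dx=x[1] - x[0], dt=0.01, n_steps=200)
```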

  6. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to develop automatically and to implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  7. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  8. Peacekeeping. Perspectives in World Order.

    ERIC Educational Resources Information Center

    Fraenkel, Jack R., Ed.; And Others

    This pamphlet, intended for senior high classroom use, defines war, peace, and peacekeeping systems; discusses the destructiveness of war; and proposes the case study method for studying world order. The major portion of the booklet explores ways of peacekeeping through analysis of four different models: collective security, collective force,…

  9. Comparison among Magnus/Floquet/Fer expansion schemes in solid-state NMR.

    PubMed

    Takegoshi, K; Miyazawa, Norihiro; Sharma, Kshama; Madhu, P K

    2015-04-07

    We here revisit expansion schemes used in nuclear magnetic resonance (NMR) for the calculation of effective Hamiltonians and propagators, namely, Magnus, Floquet, and Fer expansions. While all the expansion schemes are powerful methods there are subtle differences among them. To understand the differences, we performed explicit calculation for heteronuclear dipolar decoupling, cross-polarization, and rotary-resonance experiments in solid-state NMR. As the propagator from the Fer expansion takes the form of a product of sub-propagators, it enables us to appreciate effects of time-evolution under Hamiltonians with different orders separately. While 0th-order average Hamiltonian is the same for the three expansion schemes with the three cases examined, there is a case that the 2nd-order term for the Magnus/Floquet expansion is different from that obtained with the Fer expansion. The difference arises due to the separation of the 0th-order term in the Fer expansion. The separation enables us to appreciate time-evolution under the 0th-order average Hamiltonian, however, for that purpose, we use a so-called left-running Fer expansion. Comparison between the left-running Fer expansion and the Magnus expansion indicates that the sign of the odd orders in Magnus may better be reversed if one would like to consider its effect in order.
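
    For reference, the zeroth- and first-order terms produced by such expansions take the familiar average Hamiltonian form shown below, written for one cycle of duration t_c in the toggling frame (standard textbook expressions, not the paper's derivation); the comparison in the paper concerns how the higher-order terms differ between the Magnus/Floquet and Fer schemes.

```latex
\bar{H}^{(0)} = \frac{1}{t_c}\int_{0}^{t_c} \tilde{H}(t)\,dt,
\qquad
\bar{H}^{(1)} = \frac{-i}{2 t_c}\int_{0}^{t_c}\! dt_2 \int_{0}^{t_2}\! dt_1\,
\bigl[\tilde{H}(t_2),\,\tilde{H}(t_1)\bigr].
```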

  10. Comparison among Magnus/Floquet/Fer expansion schemes in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Takegoshi, K.; Miyazawa, Norihiro; Sharma, Kshama; Madhu, P. K.

    2015-04-01

    We here revisit expansion schemes used in nuclear magnetic resonance (NMR) for the calculation of effective Hamiltonians and propagators, namely, Magnus, Floquet, and Fer expansions. While all the expansion schemes are powerful methods there are subtle differences among them. To understand the differences, we performed explicit calculation for heteronuclear dipolar decoupling, cross-polarization, and rotary-resonance experiments in solid-state NMR. As the propagator from the Fer expansion takes the form of a product of sub-propagators, it enables us to appreciate effects of time-evolution under Hamiltonians with different orders separately. While 0th-order average Hamiltonian is the same for the three expansion schemes with the three cases examined, there is a case that the 2nd-order term for the Magnus/Floquet expansion is different from that obtained with the Fer expansion. The difference arises due to the separation of the 0th-order term in the Fer expansion. The separation enables us to appreciate time-evolution under the 0th-order average Hamiltonian, however, for that purpose, we use a so-called left-running Fer expansion. Comparison between the left-running Fer expansion and the Magnus expansion indicates that the sign of the odd orders in Magnus may better be reversed if one would like to consider its effect in order.

  11. Comparison among Magnus/Floquet/Fer expansion schemes in solid-state NMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takegoshi, K., E-mail: takeyan@kuchem.kyoto-u.ac.jp; Miyazawa, Norihiro; Sharma, Kshama

    2015-04-07

    We here revisit expansion schemes used in nuclear magnetic resonance (NMR) for the calculation of effective Hamiltonians and propagators, namely, Magnus, Floquet, and Fer expansions. While all the expansion schemes are powerful methods there are subtle differences among them. To understand the differences, we performed explicit calculation for heteronuclear dipolar decoupling, cross-polarization, and rotary-resonance experiments in solid-state NMR. As the propagator from the Fer expansion takes the form of a product of sub-propagators, it enables us to appreciate effects of time-evolution under Hamiltonians with different orders separately. While 0th-order average Hamiltonian is the same for the three expansion schemes with the three cases examined, there is a case that the 2nd-order term for the Magnus/Floquet expansion is different from that obtained with the Fer expansion. The difference arises due to the separation of the 0th-order term in the Fer expansion. The separation enables us to appreciate time-evolution under the 0th-order average Hamiltonian, however, for that purpose, we use a so-called left-running Fer expansion. Comparison between the left-running Fer expansion and the Magnus expansion indicates that the sign of the odd orders in Magnus may better be reversed if one would like to consider its effect in order.

  12. Spectral (Finite) Volume Method for Conservation Laws on Unstructured Grids II: Extension to Two Dimensional Scalar Equation

    NASA Technical Reports Server (NTRS)

    Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The framework for constructing a high-order, conservative Spectral (Finite) Volume (SV) method is presented for two-dimensional scalar hyperbolic conservation laws on unstructured triangular grids. Each triangular grid cell forms a spectral volume (SV), and the SV is further subdivided into polygonal control volumes (CVs) to support high-order data reconstructions. Cell-averaged solutions from these CVs are used to reconstruct a high order polynomial approximation in the SV. Each CV is then updated independently with a Godunov-type finite volume method and a high-order Runge-Kutta time integration scheme. A universal reconstruction is obtained by partitioning all SVs in a geometrically similar manner. The convergence of the SV method is shown to depend on how a SV is partitioned. A criterion based on the Lebesgue constant has been developed and used successfully to determine the quality of various partitions. Symmetric, stable, and convergent linear, quadratic, and cubic SVs have been obtained, and many different types of partitions have been evaluated. The SV method is tested for both linear and non-linear model problems with and without discontinuities.

  13. High-order time-marching reinitialization for regional level-set functions

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-02-01

    In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method which is a combination of the closest-point finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.

  14. Numerical Modelling of Ground Penetrating Radar Antennas

    NASA Astrophysics Data System (ADS)

    Giannakis, Iraklis; Giannopoulos, Antonios; Pajewski, Lara

    2014-05-01

    Numerical methods are needed in order to solve Maxwell's equations in complicated and realistic problems. Over the years a number of numerical methods have been developed to do so. Amongst them the most popular are the finite element method, implicit finite difference techniques, frequency-domain solution of the Helmholtz equation, the method of moments, and the transmission line matrix method. However, the finite-difference time-domain (FDTD) method is considered to be one of the most attractive choices, basically because of its simplicity, speed and accuracy. FDTD was first introduced in 1966 by Kane Yee. Since then, FDTD has been established and developed into a very rigorous and well defined numerical method for solving Maxwell's equations. Its order characteristics, accuracy and limitations are rigorously and mathematically defined. This makes FDTD reliable and easy to use. Numerical modelling of Ground Penetrating Radar (GPR) is a very useful tool which can give us insight into the scattering mechanisms and can also be used as an alternative approach to aid data interpretation. Numerical modelling has been used in a wide range of GPR applications including archeology, geophysics, forensics, landmine detection etc. In engineering, some applications of numerical modelling include the estimation of the effectiveness of GPR to detect voids in bridges, to detect metal bars in concrete, to estimate shielding effectiveness etc. The main challenges in numerical modelling of GPR for engineering applications are A) the implementation of the dielectric properties of the media (soils, concrete etc.) in a realistic way, B) the implementation of the geometry of the media (soil inhomogeneities, rough surfaces, vegetation, concrete features like fractures and rock fragments etc.) and C) the detailed modelling of the antenna units. The main focus of this work (which is part of the COST Action TU1208) is the accurate and realistic implementation of GPR antenna units into the FDTD model. Accurate models based on general characteristics of the commercial antennas GSSI 1.5 GHz and MALA 1.2 GHz have already been incorporated in GprMax, a free software package which solves Maxwell's equations using an FDTD algorithm that is second order in space and time. This work presents the implementation of horn antennas with different parameters, as well as ridged horn antennas, into this FDTD model, and their effectiveness is tested in realistically modelled situations. Accurate models of soils and concrete are used to test and compare different antenna units. Stochastic methods are used in order to realistically simulate the geometrical characteristics of the medium. Regarding the dielectric properties, Debye approximations are incorporated in order to simulate realistically the dielectric properties of the medium over the frequency range of interest.
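
    A minimal one-dimensional Yee (FDTD) update for free space is sketched below to illustrate the algorithm that such solvers are built on; it is second order in space and time. Real GPR models such as those in GprMax add dispersive (Debye) media, absorbing boundaries, and detailed antenna geometry, none of which are included here, and the grid and source parameters are arbitrary choices.

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=800, source_cell=50):
    """Minimal 1-D FDTD (Yee) leapfrog update for Maxwell's equations in
    free space with reflecting (PEC) ends.  Illustrative toy example only."""
    c0, eps0, mu0 = 299792458.0, 8.8541878128e-12, 4.0e-7 * np.pi
    dz = 1.0e-3                                 # 1 mm cells
    dt = dz / (2.0 * c0)                        # well inside the CFL limit
    Ex = np.zeros(n_cells)                      # E nodes
    Hy = np.zeros(n_cells - 1)                  # staggered H nodes
    for n in range(n_steps):
        # Update H from the spatial difference of E (half step on the staggered grid)
        Hy -= dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
        # Update interior E nodes from the spatial difference of H
        Ex[1:-1] -= dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])
        # Soft source: differentiated-Gaussian pulse injected at one cell
        arg = (n * dt - 40.0 * dt) / (15.0 * dt)
        Ex[source_cell] += -2.0 * arg * np.exp(-arg**2)
    return Ex, Hy

Ex, Hy = fdtd_1d()
```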

  15. De-Aliasing Through Over-Integration Applied to the Flux Reconstruction and Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).

  16. Sufficient conditions for asymptotic stability and stabilization of autonomous fractional order systems

    NASA Astrophysics Data System (ADS)

    Lenka, Bichitra Kumar; Banerjee, Soumitro

    2018-03-01

    We discuss the asymptotic stability of autonomous linear and nonlinear fractional order systems where the state equations contain same or different fractional orders which lie between 0 and 2. First, we use the Laplace transform method to derive some sufficient conditions which ensure asymptotic stability of linear fractional order systems. Then by using the obtained results and linearization technique, a stability theorem is presented for autonomous nonlinear fractional order system. Finally, we design a control strategy for stabilization of autonomous nonlinear fractional order systems, and apply the results to the chaotic fractional order Lorenz system in order to verify its effectiveness.
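
    For the commensurate-order linear case, a commonly quoted condition (Matignon's) is that D^q x = A x with 0 < q < 1 is asymptotically stable when every eigenvalue of A satisfies |arg(lambda)| > q*pi/2. The sketch below simply checks that condition numerically; it is not the paper's criterion for systems with different fractional orders, and the example matrix is hypothetical.

```python
import numpy as np

def matignon_stable(A, q):
    """Check |arg(lambda_i(A))| > q*pi/2 for all eigenvalues of A, the usual
    asymptotic stability condition for the commensurate-order linear system
    D^q x = A x with 0 < q < 1."""
    eig = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(np.abs(np.angle(eig)) > q * np.pi / 2.0))

# Hypothetical example: eigenvalues -0.5 +/- 1.32i lie outside the stability sector
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
print(matignon_stable(A, q=0.8), matignon_stable(A, q=0.99))
```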

  17. Dangerous gas detection based on infrared video

    NASA Astrophysics Data System (ADS)

    Ding, Kang; Hong, Hanyu; Huang, Likun

    2018-03-01

    Gas leak infrared imaging detection technology has the significant advantages of high efficiency and remote imaging detection. In order to enhance the detail perception of observers and, equivalently, improve the detection limit, we propose a new gas leak infrared image detection method that combines the background difference method with a multi-frame interval difference method. Compared with traditional frame difference methods, the proposed multi-frame interval difference method can extract a more complete target image. By fusing the background difference image and the multi-frame interval difference image, we accumulate information about the infrared target image of the gas leak from several aspects. The experiments demonstrate that the completeness of the gas leakage trace information is enhanced significantly and that real-time detection can be achieved.
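
    The sketch below combines a background difference with a multi-frame interval difference and fuses the two binary masks, which is the general idea described; the threshold, interval, fusion rule, and all names are illustrative assumptions, not the authors' exact algorithm, and the data in the example are synthetic.

```python
import numpy as np

def detect_gas_leak(frames, bg, interval=5, thresh=8.0):
    """Fuse a background difference with a multi-frame interval difference on
    an infrared sequence.  `frames` is a (T, H, W) float array and `bg` a
    (H, W) background estimate.  Generic sketch of the fusion idea."""
    masks = []
    for t in range(interval, len(frames)):
        bg_diff = np.abs(frames[t] - bg) > thresh                       # background cue
        frame_diff = np.abs(frames[t] - frames[t - interval]) > thresh  # interval cue
        masks.append(bg_diff | frame_diff)                              # fuse both cues
    return np.array(masks)

# Synthetic example: a faint "plume" appearing over a static noisy background
rng = np.random.default_rng(0)
bg = 100.0 + rng.normal(0.0, 1.0, (64, 64))
frames = np.repeat(bg[None], 40, axis=0) + rng.normal(0.0, 1.0, (40, 64, 64))
frames[20:, 30:40, 30:40] += 15.0                                       # simulated leak region
masks = detect_gas_leak(frames, bg)
```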

  18. A higher-order conservation element solution element method for solving hyperbolic differential equations on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Bilyeu, David

    This dissertation presents an extension of the Conservation Element Solution Element (CESE) method from second- to higher-order accuracy. The new method retains the favorable characteristics of the original second-order CESE scheme, including (i) the use of the space-time integral equation for conservation laws, (ii) a compact mesh stencil, (iii) stability up to a CFL number of unity, (iv) a fully explicit, time-marching integration scheme, (v) true multidimensionality without using directional splitting, and (vi) the ability to handle two- and three-dimensional geometries by using unstructured meshes. This algorithm has been thoroughly tested in one, two and three spatial dimensions and has been shown to obtain the desired order of accuracy for solving both linear and non-linear hyperbolic partial differential equations. The scheme has also shown its ability to accurately resolve discontinuities in the solutions. Higher order unstructured methods such as the Discontinuous Galerkin (DG) method and the Spectral Volume (SV) methods have been developed for one-, two- and three-dimensional applications. Although these schemes have seen extensive development and use, certain drawbacks of these methods have been well documented. For example, the explicit versions of these two methods have very stringent stability criteria, which require that the time step be reduced as the order of the solver increases, for a given simulation on a given mesh. The research presented in this dissertation builds upon the work of Chang, who developed a fourth-order CESE scheme to solve a scalar one-dimensional hyperbolic partial differential equation. The completed research has resulted in two key deliverables. The first is a detailed derivation of high-order CESE methods on unstructured meshes for solving the conservation laws in two- and three-dimensional spaces. The second is the implementation of these numerical methods in a computer code. For code development, a one-dimensional solver for the Euler equations was developed. This work is an extension of Chang's work on the fourth-order CESE method for solving a one-dimensional scalar convection equation. A generic formulation for the nth-order CESE method, where n ≥ 4, was derived. Indeed, numerical implementation of the scheme confirmed that the order of convergence was consistent with the order of the scheme. For the two- and three-dimensional solvers, SOLVCON was used as the basic framework for code implementation. A new solver kernel for the fourth-order CESE method has been developed and integrated into the framework provided by SOLVCON. The main part of SOLVCON, which deals with unstructured meshes and parallel computing, remains intact; the SOLVCON code is used for data transmission between computer nodes for High Performance Computing (HPC). To validate and verify the newly developed high-order CESE algorithms, several one-, two- and three-dimensional simulations were conducted. For the arbitrary order, one-dimensional CESE solver, three sets of governing equations were selected for simulation: (i) the linear convection equation, (ii) the linear acoustic equations, (iii) the nonlinear Euler equations. All three systems of equations were used to verify the order of convergence through mesh refinement. In addition, the Euler equations were used to solve the Shu-Osher and Blastwave problems. These two simulations demonstrated that the new high-order CESE methods can accurately resolve discontinuities in the flow field. For the two-dimensional, fourth-order CESE solver, the Euler equations were employed in four different test cases. The first case was used to verify the order of convergence through mesh refinement. The next three cases demonstrated the ability of the new solver to accurately resolve discontinuities in the flows. This was demonstrated through: (i) the interaction between acoustic waves and an entropy pulse, (ii) supersonic flow over a circular blunt body, (iii) supersonic flow over a guttered wedge. To validate and verify the three-dimensional, fourth-order CESE solver, two different simulations were selected. The first used the linear convection equations to demonstrate fourth-order convergence. The second used the Euler equations to simulate supersonic flow over a spherical body to demonstrate the scheme's ability to accurately resolve shocks. All test cases used are well-known benchmark problems, and as such there are multiple sources available to validate the numerical results. Furthermore, the simulations showed that the high-order CESE solver was stable at a CFL number near unity.

  19. A range-free method to determine Antoine vapor-pressure heat transfer-related equation coefficients using the Boubaker polynomial expansion scheme

    NASA Astrophysics Data System (ADS)

    Koçak, H.; Dahong, Z.; Yildirim, A.

    2011-05-01

    In this study, a range-free method is proposed in order to determine the Antoine constants for a given material (salicylic acid). The advantage of this method is that it mainly yields analytical expressions which fit different temperature ranges.

  20. An algorithm for solving the perturbed gas dynamic equations

    NASA Technical Reports Server (NTRS)

    Davis, Sanford

    1993-01-01

    The present application of a compact, higher-order central-difference approximation to the linearized Euler equations illustrates the multimodal character of these equations by means of computations for acoustic, vortical, and entropy waves. Such dissipationless central-difference methods are shown to propagate waves exhibiting excellent phase and amplitude resolution on the basis of relatively large time-steps; they can be applied to wave problems governed by systems of first-order partial differential equations.

  1. Rotordynamic coefficients for labyrinth seals calculated by means of a finite difference technique

    NASA Technical Reports Server (NTRS)

    Nordmann, R.; Weiser, P.

    1989-01-01

    The compressible, turbulent, time dependent and three dimensional flow in a labyrinth seal can be described by the Navier-Stokes equations in conjunction with a turbulence model. Additionally, equations for mass and energy conservation and an equation of state are required. To solve these equations, a perturbation analysis is performed yielding zeroth order equations for centric shaft position and first order equations describing the flow field for small motions around the seal center. For numerical solution a finite difference method is applied to the zeroth and first order equations resulting in leakage and dynamic seal coefficients respectively.

  2. Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul

    1993-01-01

    We present a systematic method for constructing boundary conditions (numerical and physical) of the required accuracy, for compact (Pade-like) high-order finite-difference schemes for hyperbolic systems. First, a proper summation-by-parts formula is found for the approximate derivative. A 'simultaneous approximation term' (SAT) is then introduced to treat the boundary conditions. This procedure leads to time-stable schemes even in the system case. An explicit construction of the fourth-order compact case is given. Numerical studies are presented to verify the efficacy of the approach.
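
    The sketch below applies the SBP-SAT idea to the scalar advection equation u_t + a u_x = 0, using the classical second-order SBP first-derivative operator and a SAT penalty that enforces the inflow boundary condition weakly. It is a minimal illustration of the methodology, not the paper's fourth-order compact construction, and all names and parameter choices are illustrative.

```python
import numpy as np

def sbp_sat_advection(n=101, a=1.0, t_final=1.0, cfl=0.4):
    """Advect u_t + a*u_x = 0 on [0, 1] with a second-order SBP operator and
    a SAT penalty at the inflow; classical RK4 in time.  Minimal sketch."""
    dx = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    h = np.full(n, dx)
    h[0] = h[-1] = 0.5 * dx                    # SBP norm (quadrature) weights
    D = np.zeros((n, n))                       # D = H^{-1} Q, second order
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / dx, 0.5 / dx
    D[0, 0], D[0, 1] = -1.0 / dx, 1.0 / dx     # one-sided boundary rows
    D[-1, -2], D[-1, -1] = -1.0 / dx, 1.0 / dx
    g = lambda t: np.sin(2.0 * np.pi * (0.0 - a * t))   # exact inflow data
    def rhs(u, t):
        r = -a * (D @ u)
        r[0] += (-a / h[0]) * (u[0] - g(t))    # SAT penalty, strength tau = -a
        return r
    u = np.sin(2.0 * np.pi * x)                # initial condition
    n_steps = int(np.ceil(t_final / (cfl * dx / a)))
    dt = t_final / n_steps
    for k in range(n_steps):
        t = k * dt
        k1 = rhs(u, t)
        k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = rhs(u + dt * k3, t + dt)
        u = u + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x, u

x, u = sbp_sat_advection()
err = np.max(np.abs(u - np.sin(2.0 * np.pi * (x - 1.0))))   # exact solution at t = 1
```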

  3. Analysis of evolutionary conservation patterns and their influence on identifying protein functional sites.

    PubMed

    Fang, Chun; Noguchi, Tamotsu; Yamana, Hayato

    2014-10-01

    Evolutionary conservation information included in the position-specific scoring matrix (PSSM) has been widely adopted by sequence-based methods for identifying protein functional sites, because all functional sites, whether in ordered or disordered proteins, are found to be conserved to some extent. However, different functional sites have different conservation patterns: some are linear and contextual, some are mingled with highly variable residues, and others seem to be conserved independently. Every value in a PSSM is calculated independently of the others, without carrying the contextual information of residues in the sequence. Therefore, adopting the direct output of the PSSM for prediction fails to consider the relationship between the conservation patterns of residues and the distribution of conservation scores in PSSMs. In order to demonstrate the importance of combining PSSMs with the specific conservation patterns of functional sites for prediction, three different PSSM-based methods for identifying three kinds of functional sites have been analyzed. Results suggest that different PSSM-based methods differ in their capability to identify different patterns of functional sites, and that better combining PSSMs with the specific conservation patterns of residues would largely facilitate prediction.

  4. On the ranking of chemicals based on their PBT characteristics: comparison of different ranking methodologies using selected POPs as an illustrative example.

    PubMed

    Sailaukhanuly, Yerbolat; Zhakupbekova, Arai; Amutova, Farida; Carlsen, Lars

    2013-01-01

    Knowledge of the environmental behavior of chemicals is a fundamental part of the risk assessment process. The present paper discusses various methods of ranking a series of persistent organic pollutants (POPs) according to their persistence, bioaccumulation and toxicity (PBT) characteristics. Traditionally, ranking has been done as an absolute (total) ranking applying various multicriteria data analysis methods such as simple additive ranking (SAR) or various utility function (UF) based rankings. An attractive alternative to these ranking methodologies appears to be partial order ranking (POR). The present paper compares different ranking methods, namely SAR, UF and POR. Significant discrepancies between the rankings are noted, and it is concluded that partial order ranking, as a method without any pre-assumptions concerning possible relations between the single parameters, appears to be the most attractive ranking methodology. In addition to the initial ranking, partial order methodology offers a wide variety of analytical tools to elucidate the interplay between the objects to be ranked and the ranking parameters. The present study includes an analysis of the relative importance of the single P, B and T parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
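
    The core of partial order ranking is the dominance relation sketched below: one chemical ranks above another only if it scores at least as high on every criterion and strictly higher on at least one, while pairs that conflict across criteria remain incomparable. The profiles in the example are hypothetical placeholders, and the paper's analysis adds further tools (average ranks, parameter sensitivity) that are not shown.

```python
from itertools import combinations

def dominates(a, b):
    """True if profile a is >= b in every criterion and > b in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def partial_order(profiles):
    """Return the dominance pairs and the incomparable pairs for a dict of
    {name: (P, B, T)} scores.  Generic illustration of partial order ranking."""
    greater, incomparable = [], []
    for x, y in combinations(profiles, 2):
        if dominates(profiles[x], profiles[y]):
            greater.append((x, y))
        elif dominates(profiles[y], profiles[x]):
            greater.append((y, x))
        else:
            incomparable.append((x, y))
    return greater, incomparable

# Hypothetical illustrative P, B, T scores (not measured values)
pops = {"chem_A": (3, 2, 3), "chem_B": (2, 2, 1), "chem_C": (1, 3, 2)}
order, ties = partial_order(pops)
```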

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, J.C.

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden, et al. The term `hp` refers to the method of spatial refinement (h), in conjunction with the order of polynomials used as a part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  6. Techniques and Methods for Testing the Postural Function in Healthy and Pathological Subjects

    PubMed Central

    Paillard, Thierry; Noé, Frédéric

    2015-01-01

    The different techniques and methods employed as well as the different quantitative and qualitative variables measured in order to objectify postural control are often chosen without taking into account the population studied, the objective of the postural test, and the environmental conditions. For these reasons, the aim of this review was to present and justify the different testing techniques and methods with their different quantitative and qualitative variables to make it possible to precisely evaluate each sensory, central, and motor component of the postural function according to the experiment protocol under consideration. The main practical and technological methods and techniques used in evaluating postural control were explained and justified according to the experimental protocol defined. The main postural conditions (postural stance, visual condition, balance condition, and test duration) were also analyzed. Moreover, the mechanistic exploration of the postural function often requires implementing disturbing postural conditions by using motor disturbance (mechanical disturbance), sensory stimulation (sensory manipulation), and/or cognitive disturbance (cognitive task associated with maintaining postural balance) protocols. Each type of disturbance was tackled in order to facilitate understanding of subtle postural control mechanisms and the means to explore them. PMID:26640800

  7. Techniques and Methods for Testing the Postural Function in Healthy and Pathological Subjects.

    PubMed

    Paillard, Thierry; Noé, Frédéric

    2015-01-01

    The different techniques and methods employed as well as the different quantitative and qualitative variables measured in order to objectify postural control are often chosen without taking into account the population studied, the objective of the postural test, and the environmental conditions. For these reasons, the aim of this review was to present and justify the different testing techniques and methods with their different quantitative and qualitative variables to make it possible to precisely evaluate each sensory, central, and motor component of the postural function according to the experiment protocol under consideration. The main practical and technological methods and techniques used in evaluating postural control were explained and justified according to the experimental protocol defined. The main postural conditions (postural stance, visual condition, balance condition, and test duration) were also analyzed. Moreover, the mechanistic exploration of the postural function often requires implementing disturbing postural conditions by using motor disturbance (mechanical disturbance), sensory stimulation (sensory manipulation), and/or cognitive disturbance (cognitive task associated with maintaining postural balance) protocols. Each type of disturbance was tackled in order to facilitate understanding of subtle postural control mechanisms and the means to explore them.

  8. Evaluation of the direct and diffusion methods for the determination of fluoride content in table salt

    PubMed Central

    Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.

    2010-01-01

    Objective The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultra pure NaCl and non-fluoridated commercial salt samples and Ca and Mg were added to fluoride samples in order to assess fluoride recoveries using modifications to the methods. Results Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%–100% for diffusion vs. 67%–90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ion strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and other ions in the salt. Conclusion Direct and diffusion techniques for analysis of fluoridated salt are suitable methods for fluoride analysis. The choice of method should depend on the purpose of the analysis. PMID:20088217

  9. Construction of moment-matching multinomial lattices using Vandermonde matrices and Gröbner bases

    NASA Astrophysics Data System (ADS)

    Lundengârd, Karl; Ogutu, Carolyne; Silvestrov, Sergei; Ni, Ying; Weke, Patrick

    2017-01-01

    In order to describe and analyze the quantitative behavior of stochastic processes, such as the process followed by a financial asset, various discretization methods are used. One such set of methods are lattice models where a time interval is divided into equal time steps and the rate of change for the process is restricted to a particular set of values in each time step. The well-known binomial- and trinomial models are the most commonly used in applications, although several kinds of higher order models have also been examined. Here we will examine various ways of designing higher order lattice schemes with different node placements in order to guarantee moment-matching with the process.
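
    The moment-matching step can be phrased as a small Vandermonde system: given the node increments of one lattice step and the target moments of the underlying process increment, solve for the branch probabilities. The sketch below does this for a trinomial step under a normal increment; the node spacing and parameter values are illustrative assumptions, and the Gröbner-basis machinery discussed in the paper is not reproduced.

```python
import numpy as np

def moment_matching_probabilities(nodes, moments):
    """Solve the Vandermonde system sum_j p_j * x_j**k = m_k, k = 0..n-1,
    for the branch probabilities p of one lattice step."""
    nodes = np.asarray(nodes, dtype=float)
    V = np.vander(nodes, increasing=True).T      # row k holds x_j**k
    return np.linalg.solve(V, np.asarray(moments, dtype=float))

# Example: trinomial step matching the raw moments of a N(mu*dt, sigma^2*dt) increment
mu, sigma, dt = 0.05, 0.2, 1.0 / 12.0
dx = sigma * np.sqrt(3.0 * dt)                   # a common trinomial node spacing
m = [1.0, mu * dt, sigma**2 * dt + (mu * dt)**2] # raw moments m0, m1, m2
p = moment_matching_probabilities([-dx, 0.0, dx], m)
```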

  10. Research in computational fluid dynamics and analysis of algorithms

    NASA Technical Reports Server (NTRS)

    Gottlieb, David

    1992-01-01

    Recently, higher-order compact schemes have seen increasing use in the DNS (Direct Numerical Simulations) of the Navier-Stokes equations. Although they do not have the spatial resolution of spectral methods, they offer significant increases in accuracy over conventional second order methods. They can be used on any smooth grid, and do not have an overly restrictive CFL dependence as compared with the O(N(exp -2)) CFL dependence observed in Chebyshev spectral methods on finite domains. In addition, they are generally more robust and less costly than spectral methods. The relative cost of higher-order schemes (accuracy weighted against physical and numerical cost) is a far more complex issue, depending ultimately on what features of the solution are sought and how accurately they must be resolved. In any event, the further development of the underlying stability theory of these schemes is important. Devising suitable boundary closures and then testing them with various stability techniques (such as finding the norm) is entirely the wrong approach when dealing with high-order methods: very seldom are high-order boundary closures stable, making them difficult to isolate. An alternative approach is to begin with a norm which satisfies all the stability criteria for the hyperbolic system, and look for the boundary closure forms which match the norm exactly. This method was used recently by Strand to isolate stable boundary closure schemes for the explicit central fourth- and sixth-order schemes. The norm used was an energy norm mimicking the norm for the differential equations. Further research should be devoted to boundary conditions for high-order schemes in order to make sure that the results obtained are reliable. The compact fourth-order and sixth-order finite difference schemes have been incorporated into a code to simulate flow past circular cylinders. This code will serve as a verification of the full spectral codes. A detailed stability analysis by Carpenter (from the Fluid Mechanics Division) and Gottlieb gave analytic conditions for stability as well as asymptotic stability. This has been incorporated in the code in the form of stable boundary conditions. The effects of cylinder rotation have been studied; the results differ from the known theoretical results, and we are in the middle of analyzing them. A detailed analysis of the effect of heating the cylinder on the shedding frequency has also been carried out using the above schemes. It has been found that the shedding frequency decreases when the wire is heated. Experimental work is being carried out to confirm this result.

  11. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by using the second-order central difference interval method. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error of their own. All calculations were performed in floating-point interval arithmetic.

  12. Simultaneous determination of mebeverine hydrochloride and chlordiazepoxide in their binary mixture using novel univariate spectrophotometric methods via different manipulation pathways.

    PubMed

    Lotfy, Hayam M; Fayez, Yasmin M; Michael, Adel M; Nessim, Christine K

    2016-02-15

    Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra namely, absorbance subtraction (AS) or based on the recovered zero order absorption spectra via a decoding technique namely, derivative transformation (DT) or via ratio spectra namely, ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely, ratio difference (RD) and amplitude modulation (AM) methods or applying a derivative to these ratio spectra namely, derivative ratio (DD(1)) or second derivative (D(2)). Finally, the pathway based on the ratio spectra of derivative spectra is namely, derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing the laboratory mixtures and was successfully applied for their combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28μg/mL for mebeverine hydrochloride and 1-12μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student t-test, F-test, and one way ANOVA, showing no significant difference with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Simultaneous determination of mebeverine hydrochloride and chlordiazepoxide in their binary mixture using novel univariate spectrophotometric methods via different manipulation pathways

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Fayez, Yasmin M.; Michael, Adel M.; Nessim, Christine K.

    2016-02-01

    Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra, namely absorbance subtraction (AS), or based on the recovered zero order absorption spectra via a decoding technique, namely derivative transformation (DT), or via ratio spectra, namely ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely the ratio difference (RD) and amplitude modulation (AM) methods, or applying a derivative to these ratio spectra, namely derivative ratio (DD1) or second derivative (D2). Finally, the pathway based on the ratio spectra of derivative spectra is namely derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28 μg/mL for mebeverine hydrochloride and 1-12 μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student's t-test, the F-test, and one-way ANOVA, showing no significant difference with respect to accuracy and precision.

  14. On the control of the chaotic attractors of the 2-d Navier-Stokes equations.

    PubMed

    Smaoui, Nejib; Zribi, Mohamed

    2017-03-01

    The control problem of the chaotic attractors of the two dimensional (2-d) Navier-Stokes (N-S) equations is addressed in this paper. First, the Fourier Galerkin method based on a reduced-order modelling approach developed by Chen and Price is applied to the 2-d N-S equations to construct a fifth-order system of nonlinear ordinary differential equations (ODEs). The dynamics of the fifth-order system were studied by analyzing the system's attractor for different values of the Reynolds number, Re. Then, control laws are proposed to drive the states of the ODE system to a desired attractor. Finally, an adaptive controller is designed to synchronize two reduced order ODE models having different Reynolds numbers and starting from different initial conditions. Simulation results indicate that the proposed control schemes work well.
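
    The sketch below illustrates the third step, adaptive master-slave synchronization, on the Lorenz system rather than on the fifth-order Chen-Price reduced-order model of the paper (which is not reproduced here). The slave copy has a different "Reynolds-like" parameter and a feedback term with an adaptively grown gain; the parameter values and the gain law dk/dt = gamma*||e||^2 are generic textbook choices.

    ```python
    # Adaptive synchronization of two Lorenz systems with mismatched parameters
    # and different initial conditions (illustrative stand-in only).
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(s, sigma=10.0, beta=8.0/3.0, rho=28.0):
        x, y, z = s
        return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

    def coupled(t, w, gamma=2.0):
        x, y, k = w[:3], w[3:6], w[6]
        e = y - x
        dx = lorenz(x, rho=28.0)                # master
        dy = lorenz(y, rho=35.0) - k * e        # slave + adaptive feedback u = -k e
        return np.concatenate([dx, dy, [gamma * np.dot(e, e)]])

    w0 = np.array([1.0, 1.0, 1.0, -8.0, 7.0, 25.0, 0.0])   # different starts, k(0) = 0
    sol = solve_ivp(coupled, (0.0, 40.0), w0, rtol=1e-8, atol=1e-10)
    err = np.linalg.norm(sol.y[3:6] - sol.y[:3], axis=0)
    i20 = np.searchsorted(sol.t, 20.0)
    print("sync error at t = 0, 20, 40:", err[0], err[i20], err[-1])
    ```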

  15. On the control of the chaotic attractors of the 2-d Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Smaoui, Nejib; Zribi, Mohamed

    2017-03-01

    The control problem of the chaotic attractors of the two dimensional (2-d) Navier-Stokes (N-S) equations is addressed in this paper. First, the Fourier Galerkin method based on a reduced-order modelling approach developed by Chen and Price is applied to the 2-d N-S equations to construct a fifth-order system of nonlinear ordinary differential equations (ODEs). The dynamics of the fifth-order system was studied by analyzing the system's attractor for different values of Reynolds number, Re. Then, control laws are proposed to drive the states of the ODE system to a desired attractor. Finally, an adaptive controller is designed to synchronize two reduced order ODE models having different Reynolds numbers and starting from different initial conditions. Simulation results indicate that the proposed control schemes work well.

  16. Adaptive temperature-accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2011-02-01

    We present three adaptive methods for optimizing the high temperature T_high on-the-fly in temperature-accelerated dynamics (TAD) simulations. In all three methods, the high temperature is adjusted periodically in order to maximize the performance. While in the first two methods the adjustment depends on the number of observed events, the third method depends on the minimum activation barrier observed so far and requires an a priori knowledge of the optimal high temperature T_high^opt(E_a) as a function of the activation barrier E_a for each accepted event. In order to determine the functional form of T_high^opt(E_a), we have carried out extensive simulations of submonolayer annealing on the (100) surface for a variety of metals (Ag, Cu, Ni, Pd, and Au). While the results for all five metals are different, when they are scaled with the melting temperature T_m, we find that they all lie on a single scaling curve. Similar results have also been obtained for (111) surfaces although in this case the scaling function is slightly different. In order to test the performance of all three methods, we have also carried out adaptive TAD simulations of Ag/Ag(100) annealing and growth at T = 80 K and compared with fixed high-temperature TAD simulations for different values of T_high. We find that the performance of all three adaptive methods is typically as good as or better than that obtained in fixed high-temperature TAD simulations carried out using the effective optimal fixed high temperature. In addition, we find that the final high temperatures obtained in our adaptive TAD simulations are very close to our results for T_high^opt(E_a). The applicability of the adaptive methods to a variety of TAD simulations is also briefly discussed.

  17. Comparison through a prospective and randomized study of two replenishment methods at polyvalent hospitalization units with two-bin storage systems

    PubMed

    Bernal, José Luis; Mera-Flores, Ana María; Baena Lázaro, Pedro Pablo; Sebastián Viana, Tomás

    2017-11-27

    Two-bin storage systems increase nursing staff satisfaction and decrease inventories, but the implications of having logistics staff determine the replenishment needs are unknown. This study aimed to evaluate whether entrusting this responsibility to logistics staff at polyvalent hospitalization units with two-bin storage is associated with a higher risk of outstanding orders. This was a prospective randomized experiment with masking. Outstanding orders were the response variable; assessments made by the logistics staff were assigned to the intervention group and those made by the nursing staff to the control group. Concordance between observers was analyzed using the Bland-Altman method, the difference between groups was tested with the Mann-Whitney U test, and the cumulative incidence of outstanding orders and their relative risk were calculated. The mean amount requested by the logistics and nursing staff was 29.9 (SD: 167.4) and 36 (SD: 190) units respectively, the mean difference between observers was 6.11 (SD: 128.95) units, and no significant difference was found between groups (p = 0.430). The incidence of outstanding orders was 0.64% in the intervention group and 0.15% in the control group; the relative risk was 2.31 (0.83-6.48), and the number of cases required for one outstanding order was 516. The relative risk of an outstanding order is not associated with the category of the staff that identifies the replenishment needs at polyvalent hospitalization units.
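
    The sketch below reproduces the two named analyses on invented replenishment data: Bland-Altman limits of agreement between the paired assessments, and the relative risk of an outstanding order between groups. All counts and distributions are hypothetical and are not the study data.

    ```python
    # Bland-Altman agreement and relative risk on synthetic replenishment data.
    import numpy as np

    rng = np.random.default_rng(0)
    logistics = rng.poisson(30, size=200).astype(float)        # units assessed by logistics staff
    nursing = logistics + rng.normal(6.0, 20.0, size=200)      # paired assessments by nursing staff

    diff = nursing - logistics
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                              # limits of agreement
    print(f"Bland-Altman bias {bias:.1f} units, limits {bias - loa:.1f} .. {bias + loa:.1f}")

    # 2x2 outcome counts: (outstanding orders, assessed lines) per group, invented
    events, totals = np.array([16, 4]), np.array([2500, 2600])  # intervention, control
    risk = events / totals
    rr = risk[0] / risk[1]
    se_log_rr = np.sqrt(1/events[0] - 1/totals[0] + 1/events[1] - 1/totals[1])
    ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
    print(f"relative risk {rr:.2f} (95% CI {ci[0]:.2f} - {ci[1]:.2f})")
    ```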

  18. The Finite-Surface Method for incompressible flow: a step beyond staggered grid

    NASA Astrophysics Data System (ADS)

    Hokpunna, Arpiruk; Misaka, Takashi; Obayashi, Shigeru

    2017-11-01

    We present a newly developed higher-order finite-surface method for the incompressible Navier-Stokes equations (NSE). This method defines the velocities as surface-averaged values on the surfaces of the pressure cells. Consequently, mass conservation on the pressure cells becomes an exact equation. The only things left to approximate are the momentum equation and the pressure at the new time step. Under certain conditions, the exact mass conservation enables an explicit nth-order accurate NSE solver to be used with a pressure treatment that is two or four orders less accurate without losing the apparent convergence rate. This feature is not possible with finite volume or finite difference methods. We use Fourier analysis with a model spectrum to determine the condition and find that the range covers standard boundary layer flows. The formal convergence and the performance of the proposed scheme are compared with a sixth-order finite volume method. Finally, the accuracy and performance of the method are evaluated in turbulent channel flows. This work is partially funded by a research collaboration from IFS, Tohoku University and the ASEAN+3 funding scheme from CMUIC, Chiang Mai University.

  19. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405
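
    As a rough illustration of the identification step, the sketch below fits the parameters of a simple integer-order 1-RC Thevenin battery model to a synthetic constant-current transient with a global evolutionary optimizer. It stands in for, and is not, the paper's fractional-order model and least-square genetic algorithm; the model form, parameter values and noise level are assumptions.

    ```python
    # Fit OCV, R0, R1, C1 of a 1-RC Thevenin model to a synthetic discharge transient.
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0.0, 600.0, 601)          # s
    I = 2.0                                   # A, constant discharge current

    def model_voltage(p, t, I):
        ocv, r0, r1, c1 = p
        return ocv - I * r0 - I * r1 * (1.0 - np.exp(-t / (r1 * c1)))

    true_p = (3.6, 0.05, 0.03, 1500.0)
    v_meas = model_voltage(true_p, t, I) + np.random.default_rng(1).normal(0, 2e-3, t.size)

    cost = lambda p: np.mean((model_voltage(p, t, I) - v_meas) ** 2)
    bounds = [(3.0, 4.2), (0.001, 0.2), (0.001, 0.2), (100.0, 5000.0)]
    res = differential_evolution(cost, bounds, seed=0, tol=1e-10)
    print("estimated OCV, R0, R1, C1:", np.round(res.x, 4))
    ```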

  20. Design and development of second order MEMS sound pressure gradient sensor

    NASA Astrophysics Data System (ADS)

    Albahri, Shehab

    The design and development of a second-order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing ability of the parasitoid fly Ormia ochracea, a novel first-order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While first-order directional microphones can be very beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second-order systems. The second-order directional microphone is able to provide a theoretical improvement in signal-to-noise ratio (SNR) of 9.5 dB, compared to the first-order system, which has a maximum SNR of 6 dB. Although the second-order microphone is more sensitive to the sound angle of incidence, the nature of the design and fabrication process imposes different factors that can lead to deterioration in its performance. The first Ormia ochracea second-order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results of the tested parts indicate that it performs mostly as an omnidirectional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing the finite element package ANSYS is used to examine the previous design's response. This new tool is used to study factors that were ignored in the previous design, mainly response mismatch and fabrication uncertainty. A continuous model using Hamilton's principle is introduced to verify the results of the new method. Both models agree well and suggest a new way of optimizing the second-order directional microphone using geometrical manipulation. In this work we also introduce a new fabrication process flow to increase the fabrication yield. The newly suggested method uses the layered-shell analysis capability in ANSYS; the developed models simulate the fabricated chips at different stages, with the stress at each layer introduced using thermal loading. The results indicate a new fabrication process flow that increases the rigidity of the composite layers and counters the deformation caused by the high stress in the thermal oxide layer.

  1. Efficient composite broadband polarization retarders and polarization filters

    NASA Astrophysics Data System (ADS)

    Dimova, E.; Ivanov, S. S.; Popkirov, G.; Vitanov, N. V.

    2014-12-01

    A new type of broadband polarization half-wave retarder and narrowband polarization filters are described and experimentally tested. Both the retarders and the filters are designed as composite stacks of standard optical half-wave plates, each twisted at a specific angle. The theoretical background of the proposed optical devices was obtained by analogy with the method of composite pulses known from nuclear and quantum physics. We show that by combining two composite filters built from different numbers and types of waveplates, the transmission spectrum is narrowed from a width of about 700 nm to about 10 nm. We experimentally demonstrate that this method can be applied to different types of waveplates (broadband, zero-order, multiple-order, etc.).
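
    The sketch below shows how such a stack can be evaluated with the Jones calculus: each plate is a linear retarder whose retardance scales inversely with wavelength, and the stack's ability to convert horizontal to vertical polarization is checked at a few wavelengths. The twist angles and the flat-dispersion assumption are placeholders, not the optimized composite designs reported in the paper.

    ```python
    # Jones-calculus evaluation of a stack of twisted half-wave plates.
    import numpy as np

    def retarder(theta, gamma):
        """Jones matrix of a linear retarder, fast axis at angle theta, retardance gamma."""
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        core = np.diag([np.exp(-1j * gamma / 2.0), np.exp(1j * gamma / 2.0)])
        return rot @ core @ rot.T

    def stack_matrix(angles_deg, lam, lam0=633.0):
        gamma = np.pi * lam0 / lam                 # nominal half-wave retardance at lam0
        m = np.eye(2, dtype=complex)
        for a in angles_deg:                       # plates applied in list order
            m = retarder(np.deg2rad(a), gamma) @ m
        return m

    angles = [15.0, 75.0, 15.0]                    # hypothetical twist angles
    h_in = np.array([1.0, 0.0])                    # horizontal input polarization
    for lam in (500.0, 633.0, 800.0):
        out = stack_matrix(angles, lam) @ h_in
        print(f"{lam:6.1f} nm: fraction converted to vertical = {abs(out[1])**2:.3f}")
    ```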

  2. Study on Measuring the Viscosity of Lubricating Oil by Viscometer Based on Hele - Shaw Principle

    NASA Astrophysics Data System (ADS)

    Li, Longfei

    2017-12-01

    In order to explore how to accurately measure the viscosity of oil samples with a viscometer based on the Hele-Shaw principle, three different measurement methods were designed in the laboratory and the statistical characteristics of the measured values were compared so as to identify the best measurement method. The results show that when the oil sample to be measured is placed in the magnetic field formed by the magnet and is drawn in from the same distance from the magnet each time, the viscosity of the sample can be measured accurately.

  3. Determination of triclosan in antiperspirant gels by first-order derivative spectrophotometry.

    PubMed

    Du, Lina; Li, Miao; Jin, Yiguang

    2011-10-01

    A first-order derivative UV spectrophotometric method was developed to determine triclosan, a broad-spectrum antimicrobial agent, in health care products containing fragrances that could interfere with the determination as impurities. Different extraction methods were compared. Triclosan was extracted with chloroform and diluted with ethanol, followed by the derivative spectrophotometric measurement. The interference of the fragrances was completely eliminated. The calibration graph was found to be linear in the range of 7.5-45 μg/mL. The method is simple, rapid, sensitive, and suitable for determining triclosan in fragrance-containing health care products.
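
    The sketch below illustrates why a first-derivative spectrum suppresses a broad, slowly varying interferent while keeping a measurable analyte signal. The narrow band, the wide background and the concentrations are synthetic stand-ins; Savitzky-Golay differentiation is one common way to compute the derivative, not necessarily the one used in the paper.

    ```python
    # First-derivative spectrophotometry on synthetic spectra (arbitrary units).
    import numpy as np
    from scipy.signal import savgol_filter

    wl = np.linspace(250.0, 320.0, 701)
    analyte = np.exp(-0.5 * ((wl - 281.0) / 4.0) ** 2)             # narrow analyte band
    background = 0.6 * np.exp(-0.5 * ((wl - 290.0) / 60.0) ** 2)   # broad "fragrance" interferent

    for conc in (0.25, 0.5, 1.0):
        spectrum = conc * analyte + background
        d1 = savgol_filter(spectrum, window_length=21, polyorder=3, deriv=1,
                           delta=wl[1] - wl[0])
        # the peak-to-trough amplitude of the derivative scales with concentration
        print(f"conc {conc:.2f} -> derivative amplitude {d1.max() - d1.min():.4f}")
    ```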

  4. Linguistic Analysis of the Human Heartbeat Using Frequency and Rank Order Statistics

    NASA Astrophysics Data System (ADS)

    Yang, Albert C.-C.; Hseu, Shu-Shya; Yien, Huey-Wen; Goldberger, Ary L.; Peng, C.-K.

    2003-03-01

    Complex physiologic signals may carry unique dynamical signatures that are related to their underlying mechanisms. We present a method based on rank order statistics of symbolic sequences to investigate the profile of different types of physiologic dynamics. We apply this method to heart rate fluctuations, the output of a central physiologic control system. The method robustly discriminates patterns generated from healthy and pathologic states, as well as aging. Furthermore, we observe increased randomness in the heartbeat time series with physiologic aging and pathologic states and also uncover nonrandom patterns in the ventricular response to atrial fibrillation.
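
    A minimal sketch of the symbolic / rank-order idea follows: successive interval differences are mapped to binary symbols, short words are counted, and the rank-frequency profiles of a strongly patterned series and of its shuffled (more random) version are compared. The two synthetic series are stand-ins for real RR-interval data.

    ```python
    # Rank-order statistics of symbolic words from two synthetic interval series.
    import numpy as np

    def word_ranks(rr, m=3):
        symbols = (np.diff(rr) > 0).astype(int)                # 1 = interval increased
        words = [tuple(symbols[i:i + m]) for i in range(len(symbols) - m + 1)]
        labels, counts = np.unique(words, axis=0, return_counts=True)
        order = np.argsort(-counts)
        return labels[order], counts[order] / counts.sum()

    rng = np.random.default_rng(2)
    patterned = 800.0 + 50.0 * np.sin(2 * np.pi * np.arange(2000) / 40.0) \
                + rng.normal(0.0, 1.0, 2000)                   # strongly patterned series
    shuffled = rng.permutation(patterned)                      # same values, random order

    for name, series in [("patterned", patterned), ("shuffled ", shuffled)]:
        _, freq = word_ranks(series)
        print(name, "top-3 word frequencies:", np.round(freq[:3], 3))
    ```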

  5. The status of augmented reality in laparoscopic surgery as of 2016.

    PubMed

    Bernhardt, Sylvain; Nicolau, Stéphane A; Soler, Luc; Doignon, Christophe

    2017-04-01

    This article establishes a comprehensive review of all the different methods proposed by the literature concerning augmented reality in intra-abdominal minimally invasive surgery (also known as laparoscopic surgery). A solid background of surgical augmented reality is first provided in order to support the survey. Then, the various methods of laparoscopic augmented reality as well as their key tasks are categorized in order to better grasp the current landscape of the field. Finally, the various issues gathered from these reviewed approaches are organized in order to outline the remaining challenges of augmented reality in laparoscopic surgery. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Least-squares finite element solution of 3D incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, Tsung-Liang; Povinelli, Louis A.

    1992-01-01

    Although significant progress has been made in the finite element solution of incompressible viscous flow problems, the development of more efficient methods is still needed before large-scale computation of 3D problems becomes feasible. This paper presents such a development. The most popular finite element method for the solution of the incompressible Navier-Stokes equations is the classic Galerkin mixed method based on the velocity-pressure formulation. The mixed method requires the use of different elements to interpolate the velocity and the pressure in order to satisfy the Ladyzhenskaya-Babuska-Brezzi (LBB) condition for the existence of the solution. On the other hand, due to the lack of symmetry and positive definiteness of the linear equations arising from the mixed method, iterative methods for the solution of the linear systems have been hard to come by, and direct Gaussian elimination has been considered the only viable option. But for three-dimensional problems, the computer resources required by a direct method become prohibitively large. In order to overcome these difficulties, a least-squares finite element method (LSFEM) has been developed. This method is based on the first-order velocity-pressure-vorticity formulation. In this paper the LSFEM is extended to the solution of the three-dimensional incompressible Navier-Stokes equations written in first-order quasi-linear velocity-pressure-vorticity form.

  7. Identification and quantification of homologous series of compound in complex mixtures: autocovariance study of GC/MS chromatograms.

    PubMed

    Pietrogrande, Maria Chiara; Zampolli, Maria Grazia; Dondi, Francesco

    2006-04-15

    The paper describes a method for determining homologous classes of compounds in a multicomponent complex chromatogram obtained under programmed elution conditions. The method is based on the computation of the autocovariance function of the experimental chromatogram (EACVF). The EACVF plot, if properly interpreted, can be regarded as a "class chromatogram", i.e., a virtual chromatogram formed by peaks whose positions and heights allow identification and quantification of the different homologous series, even if they are embedded in a random complex chromatogram. Theoretical models were developed to describe complex chromatograms displaying random retention patterns, ordered sequences or a combination of them. On the basis of the theoretical autocovariance function, the properties of the chromatogram can be experimentally evaluated under well-defined conditions: in particular, the two components of the chromatogram, ordered and random, can be identified. Moreover, the total number of single components (SCs) and the separate numbers of SCs belonging to the random and ordered components can be determined, when the two components display the same concentration. If the mixture contains several homologous series with a common frequency and different phase values, the number and identity of the different homologous series as well as the number of SCs belonging to each of them can be evaluated. Moreover, the power of the EACVF method can be magnified by applying it to the single ion monitoring (SIM) signals to selectively detect specific compound classes in order to identify the different homologous series. In this way, a full "decoding" of the complex multicomponent chromatogram is achieved. The method was validated on synthetic mixtures containing known amounts of SCs belonging to homologous series of hydrocarbons, alcohols, ketones, and aromatic compounds in addition to other, structurally unrelated SCs. The method was applied to both the total ion monitoring (TIC) and the SIM signals, to describe step by step the essence of the procedure. Moreover, the systematic use of both SIM and TIC can simplify the decoding procedure of complex chromatograms by singling out only specific compound classes or by confirming the identification of the different homologous series. The method was further applied to a sample containing an unknown number of compounds and homologous series (a petroleum benzin, bp 140-160 °C): the results obtained were meaningful in terms of both the identified number of components and the identified homologous series.
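
    The sketch below computes the autocovariance of a synthetic chromatogram in which an "ordered" homologous series of equally spaced peaks is buried among randomly placed peaks; the EACVF retains maxima at multiples of the series spacing. Retention times, spacing and peak widths are invented for illustration.

    ```python
    # Autocovariance (EACVF) of a synthetic chromatogram with an ordered series.
    import numpy as np

    t = np.arange(0.0, 60.0, 0.01)                             # retention time, min
    peak = lambda t0: np.exp(-0.5 * ((t - t0) / 0.05) ** 2)

    rng = np.random.default_rng(3)
    ordered = sum(peak(5.0 + 2.5 * k) for k in range(20))      # series spaced by 2.5 min
    random_part = sum(peak(t0) for t0 in rng.uniform(2.0, 58.0, 40))
    signal = ordered + random_part

    y = signal - signal.mean()
    acvf = np.correlate(y, y, mode="full")[y.size - 1:] / y.size
    lags = t - t[0]
    for k in (1, 2, 3):
        i = np.argmin(np.abs(lags - 2.5 * k))
        window = acvf[max(0, i - 20):i + 20]
        print(f"EACVF near lag {2.5 * k:.1f} min: local max = {window.max():.4f}")
    ```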

  8. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time-domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-in-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.

  9. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing the subjective audiovisual quality and the quality-related features of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimation with audiovisual material as well. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  10. MICROBIAL SOURCE TRACKING: DIFFERENT USES AND APPROACHES

    EPA Science Inventory

    Microbial Source Tracking (MST) methods are used to determine the origin of fecal pollution impacting natural water systems. Several methods require the isolation of pure cultures in order to develop phenotypic or genotypic fingerprint libraries of both source and water bacterial...

  11. Green's function enriched Poisson solver for electrostatics in many-particle systems

    NASA Astrophysics Data System (ADS)

    Sutmann, Godehard

    2016-06-01

    A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator-adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, thereby compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independently of the order of the finite difference scheme, but the computational efficiency of higher order methods is found to be superior due to a faster convergence to the exact result as a function of the charge support.

  12. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.

  13. Synergies from using higher order symplectic decompositions both for ordinary differential equations and quantum Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matuttis, Hans-Georg; Wang, Xiaoxing

    Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
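
    A minimal sketch of the lowest-order member of this family follows: the second-order (Strang) Suzuki-Trotter splitting applied to the classical harmonic oscillator, with the kinetic and potential parts advanced separately. It only illustrates the splitting idea and the expected second-order convergence, not the higher-order decompositions or the quantum Monte Carlo application studied in the record.

    ```python
    # Strang splitting exp(dt(A+B)) ~ exp(dt/2 A) exp(dt B) exp(dt/2 A)
    # for the harmonic oscillator H = p^2/2 + q^2/2.
    import numpy as np

    def strang_step(q, p, dt):
        q = q + 0.5 * dt * p        # half drift (kinetic part)
        p = p - dt * q              # full kick (potential part)
        q = q + 0.5 * dt * p        # half drift
        return q, p

    def integrate(dt, t_end=10.0):
        q, p = 1.0, 0.0
        for _ in range(int(round(t_end / dt))):
            q, p = strang_step(q, p, dt)
        return q

    for dt in (0.1, 0.05, 0.025):
        err = abs(integrate(dt) - np.cos(10.0))
        print(f"dt={dt:<6} error at t=10: {err:.2e}")   # halving dt cuts the error ~4x
    ```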

  14. On the superconvergence of Galerkin methods for hyperbolic IBVP

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Gustafsson, Bertil; Olsson, Pelle; Strand, BO

    1993-01-01

    Finite element Galerkin methods for periodic first order hyperbolic equations exhibit superconvergence on uniform grids at the nodes, i.e., there is an error estimate O(h^(2r)) instead of the expected approximation order O(h^r). It will be shown that no matter how the approximating subspace S^h is chosen, the superconvergence property is lost if there are characteristics leaving the domain. The implications of this result when constructing compact implicit difference schemes are also discussed.

  15. Errors from approximation of ODE systems with reduced order models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevska, Tanya

    2016-12-30

    This is a code to calculate the error from approximation of systems of ordinary differential equations (ODEs) by using Proper Orthogonal Decomposition (POD) Reduced Order Models (ROM) methods and to compare and analyze the errors for two POD ROM variants. The first variant is the standard POD ROM, the second variant is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
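
    The sketch below performs the kind of comparison the record describes, but for the standard POD ROM variant only: snapshots of a linear ODE system are collected, a POD basis is built from their SVD, the system is Galerkin-projected onto a few modes, and the ROM trajectory is compared with the full one. The system, its size and the time step are arbitrary toy choices, and the time-derivative-snapshot variant is not reproduced.

    ```python
    # Error of a standard POD Galerkin ROM for a linear ODE system du/dt = A u.
    import numpy as np
    from scipy.linalg import expm

    n, dt, nt = 40, 0.02, 300
    rng = np.random.default_rng(4)
    A = -np.eye(n) + 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)   # stable toy operator
    u0 = rng.standard_normal(n)

    P = expm(A * dt)                          # one-step propagator of the full system
    snaps = [u0]
    for _ in range(nt):
        snaps.append(P @ snaps[-1])
    snapshots = np.column_stack(snaps)

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    for r in (2, 4, 8):
        Phi = U[:, :r]                        # POD basis of r modes
        Pr = expm((Phi.T @ A @ Phi) * dt)     # propagator of the Galerkin-projected ROM
        a, rom = Phi.T @ u0, [Phi @ (Phi.T @ u0)]
        for _ in range(nt):
            a = Pr @ a
            rom.append(Phi @ a)
        err = np.linalg.norm(np.column_stack(rom) - snapshots) / np.linalg.norm(snapshots)
        print(f"r={r}: relative POD-ROM trajectory error {err:.2e}")
    ```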

  16. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.

  17. 3D CSEM data inversion using Newton and Halley class methods

    NASA Astrophysics Data System (ADS)

    Amaya, M.; Hansen, K. R.; Morten, J. P.

    2016-05-01

    For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allow us to build the second-order derivatives of the residuals keeping the memory cost in the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that the convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes are either similar or slightly superior with respect to the convergence speed of the GN scheme, close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those applied in this paper.

  18. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.

  19. Measurement of Cognitive Load in Multimedia Learning: A Comparison of Different Objective Measures

    ERIC Educational Resources Information Center

    Korbach, Andreas; Brünken, Roland; Park, Babette

    2017-01-01

    Different indicators are interesting for analyzing human learning processes. Recent studies analyze learning performance in combination with cognitive load, as an indicator for learners' invested mental effort. In order to compare different measures of cognitive load research, the present study uses three different objective methods and one…

  20. High-order flux correction/finite difference schemes for strand grids

    NASA Astrophysics Data System (ADS)

    Katz, Aaron; Work, Dalon

    2015-02-01

    A novel high-order method combining unstructured flux correction along body surfaces and high-order finite differences normal to surfaces is formulated for unsteady viscous flows on strand grids. The flux correction algorithm is applied in each unstructured layer of the strand grid, and the layers are then coupled together via a source term containing derivatives in the strand direction. Strand-direction derivatives are approximated to high-order via summation-by-parts operators for first derivatives and second derivatives with variable coefficients. We show how this procedure allows for the proper truncation error canceling properties required for the flux correction scheme. The resulting scheme possesses third-order design accuracy, but often exhibits fourth-order accuracy when higher-order derivatives are employed in the strand direction, especially for highly viscous flows. We prove discrete conservation for the new scheme and time stability in the absence of the flux correction terms. Results in two dimensions are presented that demonstrate improvements in accuracy with minimal computational and algorithmic overhead over traditional second-order algorithms.
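
    The sketch below builds the classical second-order summation-by-parts (SBP) first-derivative operator D = H^{-1} Q and checks the SBP property Q + Q^T = diag(-1, 0, ..., 0, 1) numerically; this is the algebraic structure that energy-stability arguments for the strand-direction derivatives rely on. Only the simplest second-order operator is shown, not the higher-order variable-coefficient operators used in the paper.

    ```python
    # Second-order SBP first-derivative operator and a check of its SBP property.
    import numpy as np

    def sbp_second_order(n, h):
        H = h * np.eye(n)
        H[0, 0] = H[-1, -1] = 0.5 * h                      # boundary-modified norm matrix
        Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))        # central difference core
        Q[0, 0], Q[-1, -1] = -0.5, 0.5                      # boundary closure
        return np.linalg.inv(H) @ Q, H, Q

    n, h = 50, 1.0 / 49
    D, H, Q = sbp_second_order(n, h)
    B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
    print("SBP property residual:", np.max(np.abs(Q + Q.T - B)))          # ~ machine zero

    x = np.linspace(0.0, 1.0, n)
    print("max derivative error on x**2:", np.max(np.abs(D @ x**2 - 2*x)))  # boundary-limited
    ```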

  1. Well-balanced high-order centered schemes on unstructured meshes for shallow water equations with fixed and mobile bed

    NASA Astrophysics Data System (ADS)

    Canestrelli, Alberto; Dumbser, Michael; Siviglia, Annunziato; Toro, Eleuterio F.

    2010-03-01

    In this paper, we study the numerical approximation of the two-dimensional morphodynamic model governed by the shallow water equations and bed-load transport following a coupled solution strategy. The resulting system of governing equations contains non-conservative products and it is solved simultaneously within each time step. The numerical solution is obtained using a new high-order accurate centered scheme of the finite volume type on unstructured meshes, which is an extension of the one-dimensional PRICE-C scheme recently proposed in Canestrelli et al. (2009) [5]. The resulting first-order accurate centered method is then extended to high order of accuracy in space via a high order WENO reconstruction technique and in time via a local continuous space-time Galerkin predictor method. The scheme is applied to the shallow water equations and the well-balanced properties of the method are investigated. Finally, we apply the new scheme to different test cases with both fixed and movable bed. An attractive feature of the proposed method is that it is particularly suitable for engineering applications since it allows practitioners to adopt the sediment transport formula that best fits the field data.

  2. NLO properties of ester containing fluorescent carbazole based styryl dyes - Consolidated spectroscopic and DFT approach

    NASA Astrophysics Data System (ADS)

    Rajeshirke, Manali; Sekar, Nagaiyan

    2018-02-01

    The linear and nonlinear optical (NLO) properties of new fluorescent styryl dyes, based on an anchoring ester-containing carbazole donor appended to different acceptor groups to form a conjugated π-system with push-pull geometry, are studied. The NLO properties have been determined using solvatochromic and computational methods. Three different TD-DFT functionals are used, namely B3LYP, BHandHLYP, and CAM-B3LYP, with the aim of elucidating the better functional for NLOphores. Further, the two-photon properties (σ2PA) have been described theoretically by a two-level model considering the dipole moment difference between the ground and final electronic states and bypassing the intermediate resonance state. The compounds with a high charge transfer from the acceptor group to the carbazole ring have relatively high two-photon absorption cross-sections (60-317 GM). The linear polarizability (αCT), first-order hyperpolarizability (β) and second-order hyperpolarizability (γ) of dye 4c were the highest among the studied dyes, which is attributed to the smaller energy gap evident from both methods. In contrast, the σ2PA cross-section value was low for dye 4c, which is due to the presence of a freely rotatable twisted phenyl ring in the conjugation path that pulls the electron density towards itself and thus decreases the σ2PA cross-section. The structure-property relationship is better understood from the correlation of bond length alternation/bond order alternation (BLA/BOA) with the NLO properties of the dyes. Thus, by a simple solvatochromic method and a computational method, we have screened the carbazole styryls as NLO candidates with good first-order hyperpolarizability and good two-photon cross-sections.

  3. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy of a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagations.
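
    The sketch below conveys the windowed convolutional differentiator idea on a collocated periodic grid: the exact band-limited derivative stencil is truncated, tapered with a Hann window, and compared with the standard second-order central difference. The paper's operators live on staggered grids and use optimized Gaussian windows, so this collocated Hann-tapered version is only a simplified stand-in.

    ```python
    # Tapered convolutional differentiator vs. 2nd-order central difference.
    import numpy as np

    def windowed_cd_coefficients(M, h):
        """Exact derivative stencil (-1)^(m+1)/(m*h), truncated and Hann-tapered."""
        m = np.arange(1, M + 1)
        c = (-1.0) ** (m + 1) / (m * h)
        w = 0.5 * (1.0 + np.cos(np.pi * m / (M + 1)))
        return c * w

    def apply_cd(f, coeff):
        d = np.zeros_like(f)
        for m, cm in enumerate(coeff, start=1):
            d += cm * (np.roll(f, -m) - np.roll(f, m))   # periodic signal assumed
        return d

    N = 128
    x = 2.0 * np.pi * np.arange(N) / N
    h = x[1] - x[0]
    f, exact = np.sin(4.0 * x), 4.0 * np.cos(4.0 * x)

    cd8 = apply_cd(f, windowed_cd_coefficients(8, h))
    fd2 = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
    print("8-term tapered CD max error  :", np.max(np.abs(cd8 - exact)))
    print("2nd-order central FD max error:", np.max(np.abs(fd2 - exact)))
    ```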

  4. Light field creating and imaging with different order intensity derivatives

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Jiang, Huan

    2014-10-01

    Microscopic image restoration and reconstruction is a challenging topic in image processing and computer vision, with wide applications in the life sciences, biology, and medicine. A microscopic light-field creation and three-dimensional (3D) reconstruction method is proposed for transparent or partially transparent microscopic samples, based on the Taylor expansion theorem and polynomial fitting. First, the image stack of the specimen is divided into several groups in an overlapping or non-overlapping way along the optical axis, and the first image of every group is regarded as the reference image. Then different-order intensity derivatives are calculated using all the images of every group and a polynomial fitting method, under the assumption that the structure of the specimen contained in the image stack varies smoothly and linearly over a small range along the optical axis. Subsequently, new images located at any position a distance Δz from the reference image along the optical axis can be generated by means of the Taylor expansion theorem and the calculated intensity derivatives of different orders. Finally, the microscopic specimen can be reconstructed in 3D using deconvolution technology and all the images, including both the observed and the generated ones. The experimental results show the effectiveness and feasibility of our method.

  5. Newton's method applied to finite-difference approximations for the steady-state compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bailey, Harry E.; Beam, Richard M.

    1991-01-01

    Finite-difference approximations for the steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are solved here by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.

  6. Selecting supplier combination based on fuzzy multicriteria analysis

    NASA Astrophysics Data System (ADS)

    Han, Zhi-Qiu; Luo, Xin-Xing; Chen, Xiao-Hong; Yang, Wu-E.

    2015-07-01

    Existing multicriteria analysis (MCA) methods are probably ineffective in selecting a supplier combination. Thus, an MCA-based fuzzy 0-1 programming method is introduced. The programming relates to a simple MCA matrix that is used to select a single supplier. By solving the programming, the most feasible combination of suppliers is selected. Importantly, this result differs from selecting suppliers one by one according to a single-selection order, which is used to rank sole suppliers in existing MCA methods. An example highlights such difference and illustrates the proposed method.

  7. Comparison of methods for estimating flood magnitudes on small streams in Georgia

    USGS Publications Warehouse

    Hess, Glen W.; Price, McGlone

    1989-01-01

    The U.S. Geological Survey has collected flood data for small, natural streams at many sites throughout Georgia during the past 20 years. Flood-frequency relations were developed for these data using four methods: (1) observed (log-Pearson Type III analysis) data, (2) rainfall-runoff model, (3) regional regression equations, and (4) map-model combination. The results of the latter three methods were compared to the analyses of the observed data in order to quantify the differences in the methods and determine if the differences are statistically significant.

  8. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    PubMed

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model and are approximately estimated in a simple way. A necessary condition regarding the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must fulfill K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.

  9. The Use of Triangulation Methods in Qualitative Educational Research

    ERIC Educational Resources Information Center

    Oliver-Hoyo, Maria; Allen, DeeDee

    2006-01-01

    Triangulation involves the careful reviewing of data collected through different methods in order to achieve a more accurate and valid estimate of qualitative results for a particular construct. This paper describes how we used three qualitative methods of data collection to study attitudes of students toward graphing, hands-on activities, and…

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr; Choi, Hyeok

    A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) in single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and has less noise. This method provides the second derivative at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of the interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
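
    The sketch below follows the description above directly: the second derivative of a synthetic probe I-V characteristic is taken as the average of central second differences computed with several voltage intervals, each weighted by the square of its interval, and its error is compared with plain repeated differencing. The exponential electron current, temperature and noise level are invented test values.

    ```python
    # MCDM-style second derivative of a noisy synthetic I-V characteristic.
    import numpy as np

    def mcdm_second_derivative(V, I, kmax=4):
        """Weighted average of central second differences at intervals k*dV, weights (k*dV)^2."""
        dV = V[1] - V[0]
        ks = np.arange(1, kmax + 1)
        w = (ks * dV) ** 2
        d2 = np.full_like(I, np.nan)
        for i in range(kmax, len(V) - kmax):
            cd2 = [(I[i + k] - 2.0 * I[i] + I[i - k]) / (k * dV) ** 2 for k in ks]
            d2[i] = np.dot(w, cd2) / w.sum()
        return d2

    V = np.linspace(-10.0, 5.0, 301)                         # probe bias, V
    Te = 2.0                                                 # eV, electron temperature
    rng = np.random.default_rng(5)
    I_e = np.exp(V / Te) + rng.normal(0.0, 2e-3, V.size)     # noisy exponential electron current

    d2_mcdm = mcdm_second_derivative(V, I_e)
    d2_plain = np.gradient(np.gradient(I_e, V), V)           # repeated simple differencing
    true = np.exp(V / Te) / Te ** 2
    mask = ~np.isnan(d2_mcdm)
    rms = lambda e: np.sqrt(np.mean(e[mask] ** 2))
    print("RMS error, MCDM vs plain differencing:", rms(d2_mcdm - true), rms(d2_plain - true))
    ```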

  11. OrderRex: clinical order decision support and outcome predictions by data-mining electronic medical records

    PubMed Central

    Chen, Jonathan H; Podchiyska, Tanya

    2016-01-01

    Objective: To answer a “grand challenge” in clinical decision support, the authors produced a recommender system that automatically data-mines inpatient decision support from electronic medical records (EMR), analogous to Netflix or Amazon.com’s product recommender. Materials and Methods: EMR data were extracted from 1 year of hospitalizations (>18K patients with >5.4M structured items including clinical orders, lab results, and diagnosis codes). Association statistics were counted for the ∼1.5K most common items to drive an order recommender. The authors assessed the recommender’s ability to predict hospital admission orders and outcomes based on initial encounter data from separate validation patients. Results: Compared to a reference benchmark of using the overall most common orders, the recommender using temporal relationships improves precision at 10 recommendations from 33% to 38% (P < 10−10) for hospital admission orders. Relative risk-based association methods improve inverse frequency weighted recall from 4% to 16% (P < 10−16). The framework yields a prediction receiver operating characteristic area under curve (c-statistic) of 0.84 for 30 day mortality, 0.84 for 1 week need for ICU life support, 0.80 for 1 week hospital discharge, and 0.68 for 30-day readmission. Discussion: Recommender results quantitatively improve on reference benchmarks and qualitatively appear clinically reasonable. The method assumes that aggregate decision making converges appropriately, but ongoing evaluation is necessary to discern common behaviors from “correct” ones. Conclusions: Collaborative filtering recommender algorithms generate clinical decision support that is predictive of real practice patterns and clinical outcomes. Incorporating temporal relationships improves accuracy. Different evaluation metrics satisfy different goals (predicting likely events vs. “interesting” suggestions). PMID:26198303

  12. A novel method based on new adaptive LVQ neural network for predicting protein-protein interactions from protein sequences.

    PubMed

    Yousef, Abdulaziz; Moghadam Charkari, Nasrollah

    2013-11-07

    Protein-protein interaction (PPI) is one of the most important data in understanding cellular processes. Many interesting methods have been proposed in order to predict PPIs. However, methods that are based on the sequence of proteins as prior knowledge are more universal. In this paper, a sequence-based, fast, and adaptive PPI prediction method is introduced to assign two proteins to an interaction class (yes, no). First, in order to improve the representation of the sequences, twelve physicochemical properties of amino acids have been used by different representation methods to transform the sequence of protein pairs into different feature vectors. Then, to speed up the learning process and reduce the effect of noisy PPI data, principal component analysis (PCA) is carried out as a proper feature extraction algorithm. Finally, a new and adaptive Learning Vector Quantization (LVQ) predictor is designed to deal with different models of datasets, classified into balanced and imbalanced datasets. Accuracies of 93.88%, 90.03%, and 89.72% have been found on the S. cerevisiae, H. pylori, and independent datasets, respectively. The results of various experiments indicate the efficiency and validity of the method. © 2013 Published by Elsevier Ltd.

  13. Low-lying excited states of model proteins: Performances of the CC2 method versus multireference methods

    NASA Astrophysics Data System (ADS)

    Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie

    2018-05-01

    A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.

  14. Low-lying excited states of model proteins: Performances of the CC2 method versus multireference methods.

    PubMed

    Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie

    2018-05-14

    A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.

  15. Direct Delta-MBPT(2) method for ionization potentials, electron affinities, and excitation energies using fractional occupation numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beste, Ariana; Vazquez-Mayagoitia, Alvaro; Ortiz, J. Vincent

    2013-01-01

    A direct method (D-Delta-MBPT(2)) to calculate second-order ionization potentials (IPs), electron affinities (EAs), and excitation energies is developed. The Delta-MBPT(2) method is defined as the correlated extension of the Delta-HF method. Energy differences are obtained by integrating the energy derivative with respect to occupation numbers over the appropriate parameter range. This is made possible by writing the second-order energy as a function of the occupation numbers. Relaxation effects are fully included at the SCF level. This is in contrast to linear response theory, which makes the D-Delta-MBPT(2) applicable not only to singly excited but also to higher excited states. We show the relationship of the D-Delta-MBPT(2) method for IPs and EAs to a second-order approximation of the effective Fock-space coupled-cluster Hamiltonian and a second-order electron propagator method. We also discuss the connection between the D-Delta-MBPT(2) method for excitation energies and the CIS-MP2 method. Finally, as a proof of principle, we apply our method to calculate ionization potentials and excitation energies of some small molecules. For IPs, the Delta-MBPT(2) results compare well to the second-order solution of the Dyson equation. For excitation energies, the deviation from EOM-CCSD increases when correlation becomes more important. When using the numerical integration technique, we encountered difficulties that prevented us from reaching the Delta-MBPT(2) values. Most importantly, relaxation beyond the Hartree-Fock level is significant and needs to be included in future research.

  16. High-order dynamic modeling and parameter identification of structural discontinuities in Timoshenko beams by using reflection coefficients

    NASA Astrophysics Data System (ADS)

    Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue

    2013-02-01

    Properties of discontinuities, such as bolt joints and cracks in waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters and these parameters are then identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through a least-squares fitting iteration, in which the undetermined model parameters are updated at each iteration to fit the dynamic reflection coefficient curve to the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite and finite beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, the effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately with maximum errors of 6.8% and 8.7%, respectively; (2) the high-order discontinuities can be identified with maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model predicts a complex discontinuity much more accurately than the one-order model.

  17. Mass spectrometry: Raw protein from the top down

    NASA Astrophysics Data System (ADS)

    Breuker, Kathrin

    2018-02-01

    Mass spectrometry is a powerful technique for analysing proteins, yet linking higher-order protein structure to amino acid sequence and post-translational modifications is far from simple. Now, a native top-down method has been developed that can provide information on higher-order protein structure and different proteoforms at the same time.

  18. Quantum corrections for the phase diagram of systems with competing order.

    PubMed

    Silva, N L; Continentino, Mucio A; Barci, Daniel G

    2018-06-06

    We use the effective potential method of quantum field theory to obtain the quantum corrections to the zero temperature phase diagram of systems with competing order parameters. We are particularly interested in two different scenarios: regions of the phase diagram where there is a bicritical point, at which both phases vanish continuously, and the case where both phases coexist homogeneously. We consider different types of couplings between the order parameters, including a bilinear one. This kind of coupling breaks time-reversal symmetry and it is only allowed if both order parameters transform according to the same irreducible representation. This occurs in many physical systems of actual interest like competing spin density waves, different types of orbital antiferromagnetism, elastic instabilities of crystal lattices, vortices in a multigap SC and also applies to describe the unusual magnetism of the heavy fermion compound URu 2 Si 2 . Our results show that quantum corrections have an important effect on the phase diagram of systems with competing orders.

  19. Quantum corrections for the phase diagram of systems with competing order

    NASA Astrophysics Data System (ADS)

    Silva, N. L., Jr.; Continentino, Mucio A.; Barci, Daniel G.

    2018-06-01

    We use the effective potential method of quantum field theory to obtain the quantum corrections to the zero temperature phase diagram of systems with competing order parameters. We are particularly interested in two different scenarios: regions of the phase diagram where there is a bicritical point, at which both phases vanish continuously, and the case where both phases coexist homogeneously. We consider different types of couplings between the order parameters, including a bilinear one. This kind of coupling breaks time-reversal symmetry and it is only allowed if both order parameters transform according to the same irreducible representation. This occurs in many physical systems of actual interest like competing spin density waves, different types of orbital antiferromagnetism, elastic instabilities of crystal lattices, vortices in a multigap SC and also applies to describe the unusual magnetism of the heavy fermion compound URu2Si2. Our results show that quantum corrections have an important effect on the phase diagram of systems with competing orders.

  20. Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu

    2016-02-15

    We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei is amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
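
    For readers unfamiliar with the accuracy gain referred to above, the following sketch (not the authors' OF-DFT code) compares second- and fourth-order central differences for a 1D second derivative on a simple test function and prints the errors as the mesh is refined; the test function and step sizes are arbitrary choices.

      import numpy as np

      def d2_2nd(f, x, h):
          # Standard 3-point, O(h^2) central difference for f''(x).
          return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

      def d2_4th(f, x, h):
          # 5-point, O(h^4) central difference for f''(x).
          return (-f(x + 2*h) + 16*f(x + h) - 30*f(x)
                  + 16*f(x - h) - f(x - 2*h)) / (12 * h**2)

      f, d2f_exact = np.sin, -np.sin(1.0)   # test function and exact f''(1)
      for h in (0.1, 0.05, 0.025):
          e2 = abs(d2_2nd(f, 1.0, h) - d2f_exact)
          e4 = abs(d2_4th(f, 1.0, h) - d2f_exact)
          print(f"h={h:<6} err(2nd)={e2:.2e}  err(4th)={e4:.2e}")

    Halving h reduces the second-order error by roughly 4x and the fourth-order error by roughly 16x, which is the behavior exploited by the higher-order discretization above.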

  1. Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals

    NASA Astrophysics Data System (ADS)

    Deimert, C.; Potter, M. E.; Okoniewski, M.

    2016-12-01

    The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.

  2. Monte Carlo simulations on marker grouping and ordering.

    PubMed

    Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C

    2003-08-01

    Four global algorithms, maximum likelihood (ML), sum of adjacent LOD score (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fractions (PARF), and one approximation algorithm, seriation (SER), were used to compare marker ordering efficiencies for correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated that the marker ordering powers of the five methods were almost identical. Correlation coefficients between grouping power and ordering power were greater than 0.99, indicating that all of these marker ordering methods were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For more generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than a small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed for the various cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM was recommended for achieving acceptable grouping and ordering power across the different cases.
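
    The SARF criterion (sum of adjacent recombinant fractions) mentioned above can be illustrated with a brute-force sketch: given a matrix of pairwise recombination fractions, the marker order minimizing the sum over adjacent pairs is taken as the best order. This is only a toy illustration under an assumed Haldane mapping of made-up map positions; the paper's SER and global algorithms are far more efficient than exhaustive search.

      import itertools
      import numpy as np

      def sarf(order, rf):
          """Sum of recombination fractions between adjacent markers in `order`."""
          return sum(rf[a, b] for a, b in zip(order[:-1], order[1:]))

      def best_order_bruteforce(rf):
          """Exhaustive SARF minimization; only feasible for a handful of markers."""
          n = rf.shape[0]
          return min(itertools.permutations(range(n)), key=lambda o: sarf(o, rf))

      # Toy example: five markers at assumed map positions (in Morgans),
      # pairwise fractions from the Haldane mapping function.
      pos = np.array([0.0, 0.1, 0.25, 0.4, 0.6])
      rf = 0.5 * (1 - np.exp(-2 * np.abs(pos[:, None] - pos[None, :])))
      print(best_order_bruteforce(rf))   # recovers the true order (up to reversal)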

  3. Evaluating the role of higher order nonlinearity in water of finite and shallow depth with a direct numerical simulation method of Euler equations

    NASA Astrophysics Data System (ADS)

    Fernandez, L.; Toffoli, A.; Monbaliu, J.

    2012-04-01

    In deep water, the dynamics of surface gravity waves is dominated by the instability of wave packets to side band perturbations. This mechanism, which is a nonlinear effect of third order in wave steepness, can lead to a particularly strong focusing of wave energy, which in turn results in the formation of waves of very large amplitude also known as freak or rogue waves [1]. In finite water depth, however, the interaction between waves and the ocean floor induces a mean current. This subtracts energy from the wave instability and causes it to cease when the relative water depth kh falls below a critical value, where k is the wavenumber and h the water depth [2]. Yet, this contradicts field observations of extreme waves such as the infamous 26-m "New Year" wave that have mainly been recorded in regions of relatively shallow water. In this respect, recent studies [3] seem to suggest that higher order nonlinearity in water of finite depth may sustain instability. In order to assess the role of higher order nonlinearity in water of finite and shallow depth, here we use a Higher Order Spectral Method [4] to simulate the evolution of surface gravity waves according to the Euler equations of motion. This method is based on an expansion of the vertical velocity about the surface elevation under the assumption of weak nonlinearity and has the great advantage of allowing the activation or deactivation of different orders of nonlinearity. The model is constructed to deal with an arbitrary order of nonlinearity and water depths so that finite and shallow water regimes can be analyzed. Several wave configurations are considered, with disturbances oblique and collinear to the primary waves and different water depths. The analysis confirms that nonlinearity higher than third order plays a substantial role in the destabilization of a primary wave train and subsequent growth of side band perturbations.

  4. Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic

    NASA Technical Reports Server (NTRS)

    Neuman, Frank; Erzberger, Heinz

    1991-01-01

    The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to an arrival sequence for aircraft. First-come-first-served sequencing (FCFS) establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain with a mix of heavy and large aircraft. Spacing requirements differ for different types of aircraft trailing each other. Traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject. It includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.

  5. Parametric instability analysis of truncated conical shells using the Haar wavelet method

    NASA Astrophysics Data System (ADS)

    Dai, Qiyi; Cao, Qingjie

    2018-05-01

    In this paper, the Haar wavelet method is employed to analyze the parametric instability of truncated conical shells under static and time-dependent periodic axial loads. The present work is based on the Love first-approximation theory for classical thin shells. The displacement field is expressed as a Haar wavelet series in the axial direction and trigonometric functions in the circumferential direction. The partial differential equations are then reduced to a system of coupled Mathieu-type ordinary differential equations describing the dynamic instability behavior of the shell. Using Bolotin's method, the first-order and second-order approximations of the principal instability regions are determined. The correctness of the present method is examined by comparing the results with those in the literature, and very good agreement is observed. The difference between the first-order and second-order approximations of the principal instability regions for tensile and compressive loads is also investigated. Finally, numerical results are presented to bring out the influences of various parameters like static load factors, boundary conditions and shell geometrical characteristics on the domains of parametric instability of conical shells.

  6. A compatible high-order meshless method for the Stokes equations with applications to suspension flows

    NASA Astrophysics Data System (ADS)

    Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe

    2018-02-01

    A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.

  7. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  8. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  9. Multiplicative noise removal through fractional order tv-based model and fast numerical schemes for its approximation

    NASA Astrophysics Data System (ADS)

    Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad

    2017-07-01

    This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, which is a highly non-linear partial differential equation (PDE), is obtained by minimizing the energy functional for image restoration. Two numerical schemes, namely an iterative scheme based on the dual theory and a majorization-minimization algorithm (MMA), are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model, applying a trial-and-error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal-to-noise ratio compared with corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.

  10. Toward a zero VAP rate: personal and team approaches in the ICU.

    PubMed

    Fox, Maria Y

    2006-01-01

    In a fast-paced setting like the intensive care unit (ICU), nurses must have appropriate tools and resources in order to implement appropriate and timely interventions. Ventilator-associated pneumonia (VAP) is a costly and potentially fatal outcome for ICU patients that requires timely interventions. Even with established guidelines and care protocols, nurses do not always incorporate best practice interventions into their daily plan of care. Despite the plethora of information and guidelines about how to apply interventions in order to save lives, managers of ICUs are challenged to involve the bedside nurse and other ICU team members to apply these bundles of interventions in a proactive, rather than reactive, manner in order to prevent complications of care. The purpose of this article is to illustrate the success of 2 different methods utilized to improve patient care in the ICU. The first method is a personal process improvement model, and the second method is a team approach model. Both methods were utilized in order to implement interventions in a timely and complete manner to prevent VAP and its related problem, hospital-associated pneumonia, in the ICU setting. Success with these 2 methods has spurred an interest in other patient care initiatives.

  11. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
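
    The central finite differences mentioned above as the baseline for comparison can be sketched as follows for first- and second-order sensitivities of a scalar response; the two-variable test function is an assumption standing in for the CFD functional of geometric-shape and flow-condition design variables.

      import numpy as np

      def f(x):
          # Stand-in "response" function of two design variables.
          return np.sin(x[0]) * np.exp(x[1])

      def grad_fd(f, x, h=1e-5):
          """First-order sensitivities by central finite differences."""
          g = np.zeros_like(x)
          for i in range(len(x)):
              e = np.zeros_like(x); e[i] = h
              g[i] = (f(x + e) - f(x - e)) / (2 * h)
          return g

      def hess_fd(f, x, h=1e-4):
          """Second-order sensitivities by nested central differences."""
          n = len(x)
          H = np.zeros((n, n))
          for j in range(n):
              e = np.zeros(n); e[j] = h
              H[:, j] = (grad_fd(f, x + e, h) - grad_fd(f, x - e, h)) / (2 * h)
          return 0.5 * (H + H.T)   # symmetrize

      x0 = np.array([0.3, -0.2])
      print(grad_fd(f, x0))   # approx. [cos(0.3)exp(-0.2), sin(0.3)exp(-0.2)]
      print(hess_fd(f, x0))

    Each gradient costs two response evaluations per design variable, which is the cost the hybrid incremental iterative scheme with automatic differentiation is reported to beat by a factor of two to four.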

  12. Kinds of access: different methods for report reveal different kinds of metacognitive access

    PubMed Central

    Overgaard, Morten; Sandberg, Kristian

    2012-01-01

    In experimental investigations of consciousness, participants are asked to reflect upon their own experiences by issuing reports about them in different ways. For this reason, a participant needs some access to the content of her own conscious experience in order to report. In such experiments, the reports typically consist of some variety of ratings of confidence or direct descriptions of one's own experiences. Whereas different methods of reporting are typically used interchangeably, recent experiments indicate that different results are obtained with different kinds of reporting. We argue that there is not only a theoretical, but also an empirical difference between different methods of reporting. We hypothesize that differences in the sensitivity of different scales may reveal that different types of access are used to issue direct reports about experiences and metacognitive reports about the classification process. PMID:22492747

  13. Measuring and partitioning the high-order linkage disequilibrium by multiple order Markov chains.

    PubMed

    Kim, Yunjung; Feng, Sheng; Zeng, Zhao-Bang

    2008-05-01

    A map of the background levels of disequilibrium between nearby markers can be useful for association mapping studies. In order to assess the background levels of linkage disequilibrium (LD), multilocus LD measures are more advantageous than pairwise LD measures because the combined analysis of pairwise LD measures is not adequate to detect simultaneous allele associations among multiple markers. Various multilocus LD measures based on haplotypes have been proposed. However, most of these measures provide a single index of association among multiple markers and do not reveal the complex patterns and different levels of LD structure. In this paper, we employ non-homogeneous, multiple order Markov Chain models as a statistical framework to measure and partition the LD among multiple markers into components due to different orders of marker associations. Using a sliding window of multiple markers on phased haplotype data, we compute corresponding likelihoods for different Markov Chain (MC) orders in each window. The log-likelihood difference between the lowest MC order model (MC0) and the highest MC order model in each window is used as a measure of the total LD or the overall deviation from the gametic equilibrium for the window. Then, we partition the total LD into lower order disequilibria and estimate the effects from two-, three-, and higher order disequilibria. The relationship between different orders of LD and the log-likelihood difference involving two different orders of MC models is explored. By applying our method to the phased haplotype data in the ENCODE regions of the HapMap project, we are able to identify high/low multilocus LD regions. Our results reveal that most of the LD in the HapMap data is attributable to the LD between adjacent pairs of markers across the whole region. LD between adjacent pairs of markers appears to be more significant in high multilocus LD regions than in low multilocus LD regions. We also find that as the multilocus total LD increases, the effects of high-order LD tend to become weaker due to the lack of observed multilocus haplotypes. The overall estimates of first, second, third, and fourth order LD across the ENCODE regions are 64%, 23%, 9%, and 3%, respectively.
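
    The core quantity above, a log-likelihood difference between Markov chain orders for a window of phased haplotypes, can be sketched as follows. The toy window of simulated 0/1 haplotypes, the plug-in count estimation, and the absence of any smoothing or partitioning into order components are simplifications of the paper's procedure.

      import numpy as np
      from collections import Counter

      def loglik_markov(haps, order, start=None):
          """Plug-in log-likelihood of 0/1 haplotype strings under an order-k Markov
          chain; positions before `start` (default: `order`) are conditioned on."""
          start = order if start is None else start
          trans, ctx = Counter(), Counter()
          for h in haps:
              for i in range(start, len(h)):
                  c = h[i - order:i]
                  trans[(c, h[i])] += 1
                  ctx[c] += 1
          return sum(n * np.log(n / ctx[c]) for (c, a), n in trans.items())

      # Toy window of phased haplotypes: 5 markers with strong adjacent-marker LD.
      rng = np.random.default_rng(1)
      haps = []
      for _ in range(200):
          h = [rng.integers(2)]
          for _ in range(4):
              h.append(h[-1] if rng.random() < 0.9 else 1 - h[-1])
          haps.append("".join(str(a) for a in h))

      for k in (1, 2, 3):
          total_ld = loglik_markov(haps, k) - loglik_markov(haps, 0, start=k)
          print(f"MC{k} vs MC0 log-likelihood difference: {total_ld:.1f}")

    For these strongly adjacent-correlated haplotypes, most of the gain over MC0 is already captured at order 1, mirroring the paper's observation that adjacent-pair LD dominates.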

  14. Physician Utilization of a Hospital Information System: A Computer Simulation Model

    PubMed Central

    Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.

    1988-01-01

    The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.

  15. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    NASA Astrophysics Data System (ADS)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
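
    In generic notation (not the paper's), a second-order extrapolation of the driving force about a reference composition c_0 is a truncated Taylor expansion,

      \Delta G(c) \;\approx\; \Delta G(c_0)
        + \left.\frac{\partial \Delta G}{\partial c}\right|_{c_0}\,(c - c_0)
        + \tfrac{1}{2}\left.\frac{\partial^2 \Delta G}{\partial c^2}\right|_{c_0}\,(c - c_0)^2 ,

    where truncating after the linear term gives a first-order scheme; for a ternary alloy such as Ni-Al-Cr, c is a composition vector and the derivatives become a gradient and a Hessian, so the thermodynamic database needs to be queried only at the reference state.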

  16. Solution of an eigenvalue problem for the Laplace operator on a spherical surface. M.S. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Walden, H.

    1974-01-01

    Methods for obtaining approximate solutions for the fundamental eigenvalue of the Laplace-Beltrami operator (also referred to as the membrane eigenvalue problem for the vibration equation) on the unit spherical surface are developed. Two specific types of spherical surface domains are considered: (1) the interior of a spherical triangle, i.e., the region bounded by arcs of three great circles, and (2) the exterior of a great circle arc extending for less than pi radians on the sphere (a spherical surface with a slit). In both cases, zero boundary conditions are imposed. In order to solve the resulting second-order elliptic partial differential equations in two independent variables, a finite difference approximation is derived. The symmetric (generally five-point) finite difference equations that develop are written in matrix form and then solved by the iterative method of point successive overrelaxation. Upon convergence of this iterative method, the fundamental eigenvalue is approximated by iteration utilizing the power method as applied to the finite Rayleigh quotient.
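
    A planar analogue of the procedure described above can be sketched in a few lines: the five-point finite-difference Laplacian with zero boundary conditions, with the fundamental eigenvalue obtained by power-type iteration on the finite Rayleigh quotient. The sketch uses the unit square rather than a spherical-surface domain and a sparse LU solve rather than point successive overrelaxation, so it only illustrates the structure of the method.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 60                      # interior grid points per direction
      h = 1.0 / (n + 1)
      I = sp.identity(n)
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      A = ((sp.kron(I, T) + sp.kron(T, I)) / h**2).tocsc()   # 5-point Laplacian

      lu = spla.splu(A)
      v = np.ones(n * n)
      for _ in range(50):         # inverse power iteration for the smallest eigenvalue
          v = lu.solve(v)
          v /= np.linalg.norm(v)
      lam = v @ (A @ v)           # finite Rayleigh quotient
      print(lam, 2 * np.pi**2)    # exact fundamental eigenvalue of the unit square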

  17. BIMLR: a method for constructing rooted phylogenetic networks from rooted phylogenetic trees.

    PubMed

    Wang, Juan; Guo, Maozu; Xing, Linlin; Che, Kai; Liu, Xiaoyan; Wang, Chunyu

    2013-09-15

    Rooted phylogenetic trees constructed from different datasets (e.g. from different genes) are often conflicting with one another, i.e. they cannot be integrated into a single phylogenetic tree. Phylogenetic networks have become an important tool in molecular evolution, and rooted phylogenetic networks are able to represent conflicting rooted phylogenetic trees. Hence, the development of appropriate methods to compute rooted phylogenetic networks from rooted phylogenetic trees has attracted considerable research interest of late. The CASS algorithm proposed by van Iersel et al. is able to construct much simpler networks than other available methods, but it is extremely slow, and the networks it constructs are dependent on the order of the input data. Here, we introduce an improved CASS algorithm, BIMLR. We show that BIMLR is faster than CASS and less dependent on the input data order. Moreover, BIMLR is able to construct much simpler networks than almost all other methods. BIMLR is available at http://nclab.hit.edu.cn/wangjuan/BIMLR/. © 2013 Elsevier B.V. All rights reserved.

  18. Controlling exfoliation in order to minimize damage during dispersion of long SWCNTs for advanced composites

    PubMed Central

    Yoon, Howon; Yamashita, Motoi; Ata, Seisuke; Futaba, Don N.; Yamada, Takeo; Hata, Kenji

    2014-01-01

    We propose an approach to disperse long single-wall carbon nanotubes (SWCNTs) in a manner that is most suitable for the fabrication of high-performance composites. We compare three general classes of dispersion mechanisms, which encompass 11 different dispersion methods, and we have dispersed long SWCNTs, short multi-wall carbon nanotubes, and short SWCNTs in order to understand the most appropriate dispersion methods for the different types of CNTs. From this study, we have found that the turbulent flow methods, as represented by the Nanomizer and high-pressure jet mill methods, produced unique and superior dispersibility of long SWCNTs, which was advantageous for the fabrication of highly conductive composites. The results were interpreted to imply that the biaxial shearing force caused an exfoliation effect to disperse the long SWCNTs homogeneously while suppressing damage. A conceptual model was developed to explain this dispersion mechanism, which is important for future work on advanced CNT composites. PMID:24469607

  19. Pulp properties resulting from different pretreatments of wheat straw and their influence on enzymatic hydrolysis rate.

    PubMed

    Rossberg, Christine; Steffien, Doreen; Bremer, Martina; Koenig, Swetlana; Carvalheiro, Florbela; Duarte, Luís C; Moniz, Patrícia; Hoernicke, Max; Bertau, Martin; Fischer, Steffen

    2014-10-01

    Wheat straw was subjected to three different processes prior to saccharification, namely alkaline pulping, natural pulping and autohydrolysis, in order to study their effect on the rate of enzymatic hydrolysis. Parameters like medium concentration, temperature and time have been varied in order to optimize each method. Milling the raw material to a length of 4mm beforehand showed the best cost-value-ratio compared to other grinding methods studied. Before saccharification the pulp can be stored in dried form, leading to a high yield of glucose. Furthermore the relation of pulp properties (i.e. intrinsic viscosity, Klason-lignin and hemicelluloses content, crystallinity, morphology) to cellulose hydrolysis is discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become an urgent issue because of increasing immigration and the need to estimate accurately the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones, and among the latter, Olze's method is one of the most recent: it was introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43% and 57% for the right lower third molar (M48) and between 23% and 49% for the left lower third molar (M38). The chi-square test did not show significant differences according to tooth side or type of professional. The results show that Olze's method is not easy to apply when used by personnel without adequate training, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.

  1. Thermographic techniques and adapted algorithms for automatic detection of foreign bodies in food

    NASA Astrophysics Data System (ADS)

    Meinlschmidt, Peter; Maergner, Volker

    2003-04-01

    At the moment, foreign substances in food are detected mainly by mechanical and optical methods as well as ultrasonic techniques, and are then removed from the process. These techniques detect a large portion of the foreign substances due to their different mass (mechanical sieving), their different colour (optical method) and their different surface density (ultrasonic detection). Despite the numerous different methods, a considerable portion of the foreign substances remains undetected. In order to recognise these materials, a complementary detection method would be desirable that removes from the production process the foreign substances not registered by the aforementioned methods. In a project with 13 partners from the food industry, the Fraunhofer-Institut für Holzforschung (WKI) and the Technische Universität are trying to adapt thermography for the detection of foreign bodies in food. After initial tests turned out to be very promising for differentiating foodstuffs and foreign substances, more detailed investigations were carried out to develop suitable algorithms for the automatic detection of foreign bodies. In order to achieve, besides the mere visual detection of foreign substances, automatic detection under production conditions, extensive experience in image processing and pattern recognition is exploited. Results for the detection of foreign bodies will be presented at the conference, showing the advantages and disadvantages of grey-level, statistical and morphological image processing techniques.

  2. Acquisition and production of skilled behavior in dynamic decision-making tasks

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1992-01-01

    Currently, two main approaches exist for improving the human-machine interface component of a system in order to improve overall system performance - display enhancement and intelligent decision making. Discussed here are the characteristic issues of these two decision-making strategies. Differences in expert and novice decision making are described in order to help determine whether a particular strategy may be better for a particular type of user. Research is outlined to compare and contrast the two technologies, as well as to examine the interaction effects introduced by the different skill levels and the different methods for training operators.

  3. Exact solutions to the time-fractional differential equations via local fractional derivatives

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan; Bekir, Ahmet

    2018-01-01

    This article utilizes the local fractional derivative and the exp-function method to construct the exact solutions of nonlinear time-fractional differential equations (FDEs). To illustrate the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional generalized fifth-order KdV equation. Moreover, exact solutions are obtained for the equations formed by different parameter values related to the time-fractional generalized fifth-order KdV equation. This method is a reliable and efficient mathematical tool for solving FDEs and it can be applied to other non-linear FDEs.

  4. Inspection system calibration methods

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2004-12-28

    An inspection system calibration method includes producing two sideband signals of a first wavefront; interfering the two sideband signals in a photorefractive material, producing an output signal therefrom having a frequency and a magnitude; and producing a phase modulated operational signal having a frequency different from the output signal frequency, a magnitude, and a phase modulation amplitude. The method includes determining a ratio of the operational signal magnitude to the output signal magnitude, determining a ratio of a 1st order Bessel function of the operational signal phase modulation amplitude to a 0th order Bessel function of the operational signal phase modulation amplitude, and comparing the magnitude ratio to the Bessel function ratio.
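
    The comparison step described above involves the ratio of the first- to zeroth-order Bessel functions of the phase-modulation amplitude. As a loose illustration only (the signal model, the assumed measured ratio, and the bracketing interval are not from the patent), that relationship can be inverted numerically:

      import numpy as np
      from scipy.special import j0, j1
      from scipy.optimize import brentq

      def bessel_ratio(m):
          """Ratio J1(m)/J0(m) as a function of the phase-modulation amplitude m."""
          return j1(m) / j0(m)

      measured_ratio = 0.42            # assumed operational/output magnitude ratio
      # Invert J1(m)/J0(m) = measured_ratio on (0, 2.3), below the first zero of J0.
      m_est = brentq(lambda m: bessel_ratio(m) - measured_ratio, 1e-6, 2.3)
      print(f"estimated phase-modulation amplitude: {m_est:.4f} rad")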

  5. Interplay of symmetries and other integrability quantifiers in finite-dimensional integrable nonlinear dynamical systems

    PubMed Central

    Mohanasubha, R.; Chandrasekar, V. K.; Lakshmanan, M.

    2016-01-01

    In this work, we establish a connection between the extended Prelle–Singer procedure and other widely used analytical methods to identify integrable systems in the case of nth-order nonlinear ordinary differential equations (ODEs). By synthesizing these methods, we bring out the interlink between Lie point symmetries, contact symmetries, λ-symmetries, adjoint symmetries, null forms, Darboux polynomials, integrating factors, the Jacobi last multiplier and generalized λ-symmetries corresponding to the nth-order ODEs. We also prove these interlinks with suitable examples. By exploiting these interconnections, the characteristic quantities associated with different methods can be deduced without solving the associated determining equations. PMID:27436964

  6. Numerical-analytic implementation of the higher-order canonical Van Vleck perturbation theory for the interpretation of medium-sized molecule vibrational spectra.

    PubMed

    Krasnoshchekov, Sergey V; Isayeva, Elena V; Stepanov, Nikolay F

    2012-04-12

    Anharmonic vibrational states of semirigid polyatomic molecules are often studied using the second-order vibrational perturbation theory (VPT2). For efficient higher-order analysis, an approach based on the canonical Van Vleck perturbation theory (CVPT), the Watson Hamiltonian and operators of creation and annihilation of vibrational quanta is employed. This method allows analysis of the convergence of perturbation theory and solves a number of theoretical problems of VPT2, e.g., yields anharmonic constants y(ijk), z(ijkl), and allows the reliable evaluation of vibrational IR and Raman anharmonic intensities in the presence of resonances. Darling-Dennison and higher-order resonance coupling coefficients can be reliably evaluated as well. The method is illustrated on classic molecules: water and formaldehyde. A number of theoretical conclusions results, including the necessity of using sextic force field in the fourth order (CVPT4) and the nearly vanishing CVPT4 contributions for bending and wagging modes. The coefficients of perturbative Dunham-type Hamiltonians in high-orders of CVPT are found to conform to the rules of equality at different orders as earlier proven analytically for diatomic molecules. The method can serve as a good substitution of the more traditional VPT2.

  7. Investigation of production method, geographical origin and species authentication in commercially relevant shrimps using stable isotope ratio and/or multi-element analyses combined with chemometrics: an exploratory analysis.

    PubMed

    Ortea, Ignacio; Gallardo, José M

    2015-03-01

    Three factors defining the traceability of a food product are production method (wild or farmed), geographical origin and biological species, which have to be checked and guaranteed, not only in order to avoid mislabelling and commercial fraud, but also to address food safety issues and to comply with legal regulations. The aim of this study was to determine whether these three factors could be differentiated in shrimps using stable isotope ratio analysis of carbon and nitrogen and/or multi-element composition. Different multivariate statistics methods were applied to different data subsets in order to evaluate their performance in terms of classification or predictive ability. Although the success rates varied depending on the dataset used, the combination of both techniques allowed the correct classification of 100% of the samples according to their actual origin and method of production, and 93.5% according to biological species. Even though further studies including a larger number of samples in each group are needed in order to validate these findings, we can conclude that these methodologies should be considered for studies regarding seafood product authenticity. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. [Research on discrimination of cabbage and weeds based on visible and near-infrared spectrum analysis].

    PubMed

    Zu, Qin; Zhao, Chun-Jiang; Deng, Wei; Wang, Xiu

    2013-05-01

    The automatic identification of weeds forms the basis for the precision spraying of infested crops. The canopy spectral reflectance within the 350-2500 nm band of two strains of cabbage and five kinds of weeds (barnyard grass, setaria, crabgrass, goosegrass and pigweed) was acquired with an ASD spectrometer. According to the spectral curve characteristics, the data in different bands were compressed to different degrees to improve computational efficiency. First, the spectra were denoised with the multiple scattering correction (MSC) method and the Savitzky-Golay (SG) convolution smoothing method set with different orders and parameters; then a model was built using principal component analysis (PCA) to extract principal components; finally, all kinds of plants were classified using the soft independent modeling of class analogy (SIMCA) taxonomy and the classification results were compared. The test results indicate that after pretreatment of the spectral data with the combination of MSC and SG (3rd order, 5th-degree polynomial, 21 smoothing points), and with the top 10 principal components extracted by PCA as the classification model input variables, a 100% correct classification rate was achieved, and the approach is able to identify cabbage and several kinds of common weeds quickly and nondestructively.
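
    A compact sketch of the preprocessing chain described above (MSC against the mean spectrum, Savitzky-Golay smoothing with a 5th-degree polynomial and 21-point window, then PCA scores) is given below. The random placeholder "spectra" stand in for the ASD reflectance data, and the SIMCA class modelling itself is omitted.

      import numpy as np
      from scipy.signal import savgol_filter

      def msc(spectra):
          """Multiplicative scatter correction against the mean spectrum."""
          ref = spectra.mean(axis=0)
          out = np.empty_like(spectra)
          for i, s in enumerate(spectra):
              b, a = np.polyfit(ref, s, 1)        # fit s ~ a + b * ref
              out[i] = (s - a) / b
          return out

      def preprocess(spectra, window=21, poly=5, n_pc=10):
          x = msc(spectra)
          x = savgol_filter(x, window_length=window, polyorder=poly, axis=1)
          xc = x - x.mean(axis=0)
          _, _, vt = np.linalg.svd(xc, full_matrices=False)
          return xc @ vt[:n_pc].T                 # PCA scores used as SIMCA inputs

      # Placeholder data: 30 "spectra" of 500 wavelengths each.
      rng = np.random.default_rng(0)
      scores = preprocess(rng.normal(size=(30, 500)) + np.linspace(0, 1, 500))
      print(scores.shape)                         # (30, 10)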

  9. Equilibrium Molecular Thermodynamics from Kirkwood Sampling

    PubMed Central

    2015-01-01

    We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys.2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525
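
    The first variant above (biased Monte Carlo with Kirkwood-generated trials) has the structure of an independence Metropolis sampler: trial conformations are drawn from a fixed geometric proposal density, and acceptance involves both the Boltzmann weights and the proposal densities. The sketch below uses a 1D toy "conformation" with a Gaussian stand-in for the Kirkwood distribution, so it only illustrates the acceptance rule, not the Kirkwood construction itself.

      import numpy as np

      rng = np.random.default_rng(0)
      beta = 1.0

      def U(x):                      # toy potential energy surface
          return (x**2 - 1.0)**2

      def q_sample():                # stand-in for drawing a Kirkwood-sampled conformation
          return rng.normal(0.0, 1.5)

      def q_logpdf(x):               # log-density of that proposal (up to a constant)
          return -0.5 * (x / 1.5)**2

      x, samples = q_sample(), []
      for _ in range(20000):
          y = q_sample()             # independent trial move
          # Independence-sampler acceptance: target ratio divided by proposal ratio.
          log_acc = -beta * (U(y) - U(x)) + q_logpdf(x) - q_logpdf(y)
          if np.log(rng.random()) < log_acc:
              x = y
          samples.append(x)
      print(np.mean(samples), np.var(samples))

    Because trials are generated independently of the current state, they could be produced in advance and in parallel, which is the feature the abstract highlights for distributed computing.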

  10. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    NASA Astrophysics Data System (ADS)

    Britt, Darrell Steven, Jr.

    Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means for restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.


  11. Development of a process-oriented vulnerability concept for water travel time in karst aquifers-case study of Tanour and Rasoun springs catchment area.

    NASA Astrophysics Data System (ADS)

    Hamdan, Ibraheem; Sauter, Martin; Ptak, Thomas; Wiegand, Bettina; Margane, Armin; Toll, Mathias

    2017-04-01

    Key words: Karst aquifer, water travel time, vulnerability assessment, Jordan. Understanding groundwater pathways and movement through karst aquifers, and the karst aquifer response to precipitation events, especially in arid to semi-arid areas, is fundamental for evaluating pollution risks from point and non-point sources. In spite of their great importance for drinking water supply, karst aquifers are highly sensitive to contamination events due to the fast connections between the land surface and the groundwater (through the karst features), which makes groundwater quality issues within karst systems very complicated. Within this study, different methods and approaches were developed and applied in order to characterise the karst aquifer system of the Tanour and Rasoun springs (NW-Jordan) and the flow dynamics within the aquifer, and to develop a process-oriented method for vulnerability assessment based on the monitoring of different spatially variable parameters of water travel time in the karst aquifer. In general, this study aims to achieve two main objectives: 1. Characterization of the karst aquifer system and flow dynamics. 2. Development of a process-oriented method for vulnerability assessment based on spatially variable parameters of travel time. In order to achieve these aims, different approaches and methods were applied, starting from the understanding of the geological and hydrogeological characteristics of the karst aquifer and its vulnerability to pollutants, to using different methods, procedures and monitored parameters in order to determine the water travel time within the aquifer and investigate its response to precipitation events, and, finally, to the study of the aquifer response to pollution events. The integrated results obtained from the applied methods and procedures, including the use of stable isotopes of oxygen and hydrogen, the monitoring of multiple qualitative and quantitative parameters using automated probes and data loggers, and the development of a travel-time, physics-based vulnerability assessment method, show good agreement and are applicable for determining the water travel time in karst aquifers and for investigating its response to precipitation and pollution events.

  12. Reduced-order model for dynamic optimization of pressure swing adsorption processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, A.; Biegler, L.; Zitney, S.

    2007-01-01

    Over the past decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas and liquid separation techniques, especially for high purity hydrogen purification from refinery gases. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep concentration and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. The study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. Initially, a representative ensemble of solutions of the dynamic PDE system is constructed by solving a higher-order discretization of the model using the method of lines, a two-stage approach that discretizes the PDEs in space and then integrates the resulting DAEs over time. Next, the ROM method applies the Karhunen-Loeve expansion to derive a small set of empirical eigenfunctions (POD modes) which are used as basis functions within a Galerkin's projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDE system. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the number of states has been achieved. The gas-phase mole fraction, solid-state loading and temperature profiles from the low-order ROM and from the high-order simulations have been compared. Moreover, the profiles for a different set of inputs and parameter values fed to the same ROM were compared with the accurate profiles from the high-order simulations. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes. Moreover, deviations from the ROM for different sets of inputs and parameters suggest that a recalibration of the model is required for the optimization studies. Results for these will also be presented with the aforementioned results.
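
    The POD/Galerkin recipe referred to above can be sketched generically on a linear toy system (a semi-discretized 1D diffusion equation standing in for the PSA PDE/DAE model): collect snapshots, take the dominant left singular vectors as POD modes, and project the operator to obtain a low-order system. All dimensions and the snapshot time window are arbitrary illustrative choices.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Full-order model: du/dt = A u, a toy stand-in for the PSA PDE/DAE system.
      n = 200
      h = 1.0 / (n + 1)
      A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
      x = np.linspace(h, 1 - h, n)
      u0 = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)

      # 1) Snapshot ensemble from the full-order model.
      t_snap = np.linspace(0.0, 0.02, 40)
      snaps = solve_ivp(lambda t, u: A @ u, (0, 0.02), u0,
                        method="BDF", t_eval=t_snap).y

      # 2) POD modes = dominant left singular vectors of the snapshot matrix.
      U, s, _ = np.linalg.svd(snaps, full_matrices=False)
      r = 4
      Phi = U[:, :r]

      # 3) Galerkin projection gives an r-dimensional reduced model.
      A_r = Phi.T @ A @ Phi
      a = solve_ivp(lambda t, a: A_r @ a, (0, 0.02), Phi.T @ u0,
                    method="BDF", t_eval=[0.02]).y[:, -1]
      u_rom = Phi @ a                    # reconstructed full state at t = 0.02

      u_full = solve_ivp(lambda t, u: A @ u, (0, 0.02), u0,
                         method="BDF", t_eval=[0.02]).y[:, -1]
      print("relative ROM error:",
            np.linalg.norm(u_rom - u_full) / np.linalg.norm(u_full))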

  13. The study techniques of Asian, American, and European medical students during gross anatomy and neuroanatomy courses in Poland.

    PubMed

    Zurada, Anna; Gielecki, Jerzy St; Osman, Nilab; Tubbs, R Shane; Loukas, Marios; Zurada-Zielińska, Agnieszka; Bedi, Neru; Nowak, Dariusz

    2011-03-01

    Past research in medical education has addressed the study of gross anatomy, including the most effective learning techniques, comparing the use of cadavers, dissection, anatomy atlases, and multimedia tools. The aim of this study was to demonstrate similarities and differences among American, Asian, and European medical students (MS) regarding different study methods and to see how these methods affected their clinical skills. A survey was conducted to analyze the varying study methods of European, American, and Asian MS in our program and to elucidate any ethnic and cultural differences. A total of 705 international MS, from the Polish (PD), American (AD), and Taiwanese (TD) divisions, were asked to voluntarily participate in the questionnaire. Students were asked which methods they used to study anatomy and which of these methods they believed were most efficient for comprehension, memorization, and review. The questions were based on a 5-point Likert scale, where 5 was 'strongly agree' and 1 was 'strongly disagree'. The PD and AD preferred the use of dissections and prosected specimens to study anatomy. The TD showed less interest in studying from prosected specimens, but did acknowledge that this method was more effective than using atlases, plastic models, or CD-ROMs. Multimedia tools were mainly used for radiological anatomy and review, and also for correctly typing proper names of structures using exact anatomical terminology. The findings highlight the differences in study techniques among students from different ethnic backgrounds. The study approaches used to accomplish learning objectives were affected by cultural norms that influenced each student group. These differences may be rooted in technological, religious, and language barriers, which can shape the way MS approach learning.

  14. Mixture IRT Model with a Higher-Order Structure for Latent Traits

    ERIC Educational Resources Information Center

    Huang, Hung-Yu

    2017-01-01

    Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…

  15. Detecting brain tumor in computed tomography images using Markov random fields and fuzzy C-means clustering techniques

    NASA Astrophysics Data System (ADS)

    Abdulbaqi, Hayder Saad; Jafri, Mohd Zubir Mat; Omar, Ahmad Fairuz; Mustafa, Iskandar Shahrim Bin; Abood, Loay Kadom

    2015-04-01

    Brain tumors are an abnormal growth of tissue in the brain. They may arise in people of any age. They must be detected early, diagnosed accurately, monitored carefully, and treated effectively in order to optimize patient outcomes regarding both survival and quality of life. Manual segmentation of brain tumors from CT scan images is a challenging and time-consuming task. Accurate detection of tumor size and location plays a vital role in the successful diagnosis and treatment of tumors. Brain tumor detection is considered a challenging task in medical image processing. The aim of this paper is to introduce a scheme for tumor detection in CT scan images using two different techniques: Hidden Markov Random Fields (HMRF) and Fuzzy C-means (FCM). The proposed method developed in this research constructs a hybrid of HMRF and thresholding. These methods have been applied to 4 different patient data sets. The comparison among these methods shows that the proposed method gives good results for brain tissue detection, and is more robust and effective compared with the FCM technique.
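
    The fuzzy C-means step referred to above can be illustrated with a short, self-contained implementation that clusters image intensities. This is a generic FCM sketch, not the paper's hybrid HMRF/threshold pipeline; the fuzzifier m, cluster count c, and synthetic "CT slice" are illustrative assumptions.

    ```python
    import numpy as np

    def fuzzy_c_means(x, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Cluster 1D samples x (e.g. flattened image intensities) with fuzzy C-means."""
        rng = np.random.default_rng(seed)
        u = rng.random((c, x.size))
        u /= u.sum(axis=0)                      # fuzzy memberships sum to 1 per sample
        for _ in range(n_iter):
            um = u ** m
            centers = um @ x / um.sum(axis=1)   # membership-weighted cluster centers
            d = np.abs(x[None, :] - centers[:, None]) + 1e-12
            u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
            u_new /= u_new.sum(axis=0)          # standard FCM membership update
            if np.abs(u_new - u).max() < tol:
                u = u_new
                break
            u = u_new
        return centers, u

    # toy "CT slice" intensities: background, normal tissue, bright lesion
    rng = np.random.default_rng(1)
    img = np.concatenate([np.full(500, 0.1), np.full(400, 0.5), np.full(100, 0.9)])
    img = img + 0.03 * rng.standard_normal(img.size)

    centers, u = fuzzy_c_means(img)
    labels = u.argmax(axis=0)                   # hard segmentation from fuzzy memberships
    print("cluster centers:", np.sort(centers))
    ```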

  16. ACCURATE SOLUTION AND GRADIENT COMPUTATION FOR ELLIPTIC INTERFACE PROBLEMS WITH VARIABLE COEFFICIENTS

    PubMed Central

    LI, ZHILIN; JI, HAIFENG; CHEN, XIAOHONG

    2016-01-01

    A new augmented method is proposed for elliptic interface problems with a piecewise variable coefficient that has a finite jump across a smooth interface. The main motivation is not only to obtain a second-order accurate solution but also a second-order accurate gradient from each side of the interface. The key to the new method is to introduce the jump in the normal derivative of the solution as an augmented variable and to rewrite the interface problem as a new PDE that consists of a leading Laplacian operator plus lower-order derivative terms near the interface. In this way, the jump relations for the leading second-order derivatives are independent of the jump in the coefficient, which appears only in the lower-order terms after the scaling. An upwind-type discretization is used for the finite difference discretization at the irregular grid points near or on the interface so that the resulting coefficient matrix is an M-matrix. A multigrid solver is used to solve the linear system of equations, and the GMRES iterative method is used to solve for the augmented variable. Second-order convergence for the solution and the gradient from each side of the interface has also been proved in this paper. Numerical examples for general elliptic interface problems have confirmed the theoretical analysis and the efficiency of the new method. PMID:28983130

  17. Sample preparation: a critical step in the analysis of cholesterol oxidation products.

    PubMed

    Georgiou, Christiana A; Constantinou, Michalis S; Kapnissi-Christodoulou, Constantina P

    2014-02-15

    In recent years, cholesterol oxidation products (COPs) have drawn scientific interest, particularly due to their implications for human health. A large number of these compounds have been demonstrated to be cytotoxic, mutagenic, and carcinogenic. The main source of COPs is the diet, particularly the consumption of cholesterol-rich foods. This raises questions about consumer safety and suggests the need to develop a sensitive and reliable analytical method to identify and quantify these components in food samples. Sample preparation is a necessary step in the analysis of COPs in order to eliminate interferences and increase sensitivity. Numerous publications have, over the years, reported the use of different methods for the extraction and purification of COPs. However, no method has, so far, been established as a routine method for the analysis of COPs in foods. Therefore, it was considered important to overview different sample preparation procedures and evaluate the different preparative parameters, such as the saponification time, the type of organic solvent for fat extraction, the stationary phase in solid phase extraction, etc., according to recovery, precision and simplicity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    Mask image dodging based on the Fourier transform is an effective algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform, but it is deficient in local contrast; meanwhile, the ratio method works better for local contrast, but sometimes it makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, building on an analysis of both, proposes a balanced solution. Experiments show that the scheme can not only combine the advantages of the difference method and the ratio method, but also avoid the deficiencies of the two algorithms.
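
    A minimal sketch of the two classical corrections discussed above: estimate the slowly varying illumination ("mask") with a Fourier low-pass filter, then compensate either by subtracting it (difference method) or by dividing by it (ratio method), and blend the two results. The Gaussian transfer function, cutoff and blend weight are illustrative choices, not the balance scheme proposed in the paper.

    ```python
    import numpy as np

    def low_pass_background(img, sigma_frac=0.02):
        """Estimate the slowly varying illumination via a Gaussian low-pass in the Fourier domain."""
        rows, cols = img.shape
        fy = np.fft.fftfreq(rows)[:, None]
        fx = np.fft.fftfreq(cols)[None, :]
        H = np.exp(-(fx**2 + fy**2) / (2.0 * sigma_frac**2))   # Gaussian transfer function
        return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

    def dodge(img, alpha=0.5):
        """Blend the difference and ratio corrections of Mask dodging."""
        bg = low_pass_background(img)
        diff = img - bg + bg.mean()                 # difference method: evens out brightness
        ratio = img / (bg + 1e-6) * bg.mean()       # ratio method: better local contrast
        return alpha * diff + (1.0 - alpha) * ratio # simple balance between the two

    # toy aerial image with an uneven illumination gradient
    rng = np.random.default_rng(0)
    scene = rng.random((256, 256))
    gradient = np.linspace(0.3, 1.0, 256)[None, :]
    corrected = dodge(scene * gradient)
    ```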

  19. Rogue-wave solutions of the Zakharov equation

    NASA Astrophysics Data System (ADS)

    Rao, Jiguang; Wang, Lihong; Liu, Wei; He, Jingsong

    2017-12-01

    Using the bilinear transformation method, we derive general rogue-wave solutions of the Zakharov equation. We present these Nth-order rogue-wave solutions explicitly in terms of Nth-order determinants whose matrix elements have simple expressions. We show that the fundamental rogue wave is a line rogue wave with a line profile on the (x, y) plane, arising from a constant background at t ≪ 0 and then gradually tending to the constant background for t ≫ 0. Higher-order rogue waves arising from a constant background and later disappearing into it describe the interaction of several fundamental line rogue waves. We also consider different structures of higher-order rogue waves. We present differences between rogue waves of the Zakharov equation and of the first type of the Davey-Stewartson equation analytically and graphically.

  20. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    NASA Astrophysics Data System (ADS)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: the FDTD method depends strongly on the grids and time steps, while FETD requires a large number of grid cells for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can deliver an accurate result by increasing the order of the polynomials and suppressing spurious solutions. The BE method is a stable time discretization technique that has no limitation on time steps and can guarantee higher accuracy during the iteration process. To minimize the number of non-zeros in the sparse matrix and obtain a diagonal mass matrix, we apply a reduced-order integration technique. A direct solver, whose speed is independent of the condition number, is adopted for quickly solving the large-scale sparse linear system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time lapse 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and the magnetic induction are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both the 1D and 3D models, we draw the following conclusions: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order brings little improvement in modeling accuracy, but the time interval plays an important role. This research is supported by Key Program of National Natural Science Foundation of China (41530320), China Natural Science Foundation for Young Scientists (41404093), and Key National Research Project of China (2016YFC0303100, 2017YFC0601900). Figure 1: (a) AEM system over a 3D earth model; (b) magnetic field Bz; (c) magnetic induction dBz/dt.
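
    The unconditionally stable backward Euler time stepping mentioned above amounts to solving one sparse linear system per time step, with the factorization reused across steps. The sketch below applies it to a generic semi-discretized diffusion-type system M du/dt = -K u; the matrices are simple stand-ins, not the spectral-element AEM operators.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    def backward_euler(M, K, u0, dt, n_steps):
        """Solve M du/dt = -K u with the implicit backward Euler scheme.

        Each step requires (M + dt*K) u^{n+1} = M u^n; the sparse LU factorization
        is computed once and reused, mimicking a direct solver whose per-step cost
        is independent of the conditioning of the system.
        """
        lu = splu((M + dt * K).tocsc())     # factor once, reuse every step
        u = u0.copy()
        history = [u0]
        for _ in range(n_steps):
            u = lu.solve(M @ u)
            history.append(u)
        return np.array(history)

    # toy 1D diffusion operator (stand-in for the EM stiffness matrix)
    n = 100
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * n**2
    M = sp.identity(n, format="csc")
    u0 = np.sin(np.pi * np.linspace(0.0, 1.0, n))
    sol = backward_euler(M, K, u0, dt=1e-3, n_steps=50)
    ```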

  1. Computation of Nonlinear Backscattering Using a High-Order Numerical Method

    NASA Technical Reports Server (NTRS)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2001-01-01

    The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.
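
    For reference, a standard fourth-order central difference for a second derivative — one typical building block of such high-order schemes, though not necessarily the exact stencil used in this study — is u''(x_i) ≈ (-u_{i-2} + 16 u_{i-1} - 30 u_i + 16 u_{i+1} - u_{i+2}) / (12 h^2). A small sketch checking its order of accuracy on a smooth periodic function:

    ```python
    import numpy as np

    def d2_fourth_order(u, h):
        """Fourth-order central difference for u'' on a periodic grid."""
        return (-np.roll(u, 2) + 16.0 * np.roll(u, 1) - 30.0 * u
                + 16.0 * np.roll(u, -1) - np.roll(u, -2)) / (12.0 * h**2)

    # convergence check on u = sin(x), whose exact second derivative is -sin(x)
    for n in (32, 64, 128):
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        h = x[1] - x[0]
        err = np.abs(d2_fourth_order(np.sin(x), h) + np.sin(x)).max()
        print(n, err)   # the error should drop roughly by a factor of 16 per refinement
    ```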

  2. A fast direct solver for a class of two-dimensional separable elliptic equations on the sphere

    NASA Technical Reports Server (NTRS)

    Moorthi, Shrinivas; Higgins, R. Wayne

    1992-01-01

    An efficient, direct, second-order solver for the discrete solution of two-dimensional separable elliptic equations on the sphere is presented. The method involves a Fourier transformation in longitude and a direct solution of the resulting coupled second-order finite difference equations in latitude. The solver is made efficient by vectorizing over longitudinal wavenumber and by using a vectorized fast Fourier transform routine. It is evaluated using a prescribed solution method and compared with a multigrid solver and the standard direct solver from FISHPAK.
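
    The structure of such a solver — transform in the periodic direction, then solve one small tridiagonal system per wavenumber — can be sketched for a flat Cartesian Poisson problem that is periodic in x and has homogeneous Dirichlet conditions in y. This is a simplified analogue of the approach, not the spherical, vectorized FISHPAK-style solver of the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def poisson_fft_tridiag(f, hx, hy):
        """Solve u_xx + u_yy = f with second-order differences.

        x-direction: periodic, handled by an FFT.
        y-direction: homogeneous Dirichlet; the ny rows of f are the interior
        y-levels, with u = 0 on the boundary rows just outside, so the standard
        tridiagonal stencil needs no modification.
        """
        ny, nx = f.shape
        fhat = np.fft.fft(f, axis=1)                      # transform along x
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=hx)
        lam = -(2.0 - 2.0 * np.cos(kx * hx)) / hx**2      # symbol of the x second difference
        uhat = np.zeros_like(fhat)
        for j in range(nx):                               # one tridiagonal solve per wavenumber
            ab = np.zeros((3, ny), dtype=complex)
            ab[0, 1:] = 1.0 / hy**2                       # super-diagonal
            ab[1, :] = -2.0 / hy**2 + lam[j]              # main diagonal
            ab[2, :-1] = 1.0 / hy**2                      # sub-diagonal
            uhat[:, j] = solve_banded((1, 1), ab, fhat[:, j])
        return np.real(np.fft.ifft(uhat, axis=1))

    # toy usage on a rectangle, periodic in x
    nx, ny = 64, 63
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    y = np.linspace(0.0, 1.0, ny + 2)[1:-1]               # interior y-levels only
    f = np.sin(x)[None, :] * np.sin(np.pi * y)[:, None]
    u = poisson_fft_tridiag(f, hx=x[1] - x[0], hy=y[1] - y[0])
    ```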

  3. A stage structure pest management model with impulsive state feedback control

    NASA Astrophysics Data System (ADS)

    Pang, Guoping; Chen, Lansun; Xu, Weijian; Fu, Gang

    2015-06-01

    A stage structure pest management model with impulsive state feedback control is investigated. We obtain a sufficient condition for the existence of the order-1 periodic solution by differential equation geometry theory and the successor function. Further, we obtain a new criterion for the stability of the order-1 periodic solution of semi-continuous systems by referring to the stability analysis for limit cycles of continuous systems, which differs from the previous method based on an analogue of the Poincaré criterion. Finally, we analyze the theoretical results numerically.

  4. High Order Filter Methods for the Non-ideal Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  5. Divergence Free High Order Filter Methods for the Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yea, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  6. Study on Separation of Structural Isomer with Magneto-Archimedes method

    NASA Astrophysics Data System (ADS)

    Kobayashi, T.; Mori, T.; Akiyama, Y.; Mishima, F.; Nishijima, S.

    2017-09-01

    Organic compounds are refined by separating their structural isomers; however, each separation method has some problems. For example, distillation consumes a large amount of energy. In order to solve these problems, a new separation method is needed. Considering that organic compounds are diamagnetic, we focused on the magneto-Archimedes method. With this method, a particle mixture dispersed in a paramagnetic medium can be separated in a magnetic field owing to differences in the density and magnetic susceptibility of the particles. In this study, we succeeded in separating isomers of phthalic acid, as an example of structural isomers, using a MnCl2 solution as the paramagnetic medium. In order to use the magneto-Archimedes method for separating materials for food or medicine, we proposed a harmless medium using oxygen and fluorocarbon instead of the MnCl2 aqueous solution. As a result, the possibility of separating every structural isomer was shown.
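
    For reference, the balance usually quoted for magneto-Archimedes levitation equates the excess weight of the particle with the difference in magnetic force between particle and medium; the standard textbook form (per unit particle volume) is reproduced below and is not taken verbatim from this abstract. Here chi denotes volume magnetic susceptibility, rho density, B the magnetic flux density and z the vertical coordinate.

    ```latex
    % equilibrium (levitation) condition for a particle (p) in a paramagnetic medium (m)
    \left(\rho_p - \rho_m\right) g \;=\; \frac{\chi_p - \chi_m}{\mu_0}\, B \,\frac{\partial B}{\partial z}
    ```

    Because particles of different density and susceptibility satisfy this balance at different positions in the field gradient, species collect at different heights, which is what allows structural isomers to be separated.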

  7. Current HPLC Methods for Assay of Nano Drug Delivery Systems.

    PubMed

    Tekkeli, Serife Evrim Kepekci; Kiziltas, Mustafa Volkan

    2017-01-01

    In nano drug formulations, the release mechanism is a critical process for understanding controlled and targeted drug delivery systems. To achieve high bioavailability and the specificity needed for the drug to reach its therapeutic goal, the active substance must be loaded into the nanoparticles efficiently. Therefore, the amount in biological fluids or tissues and the amount remaining in the nano carriers are very important parameters for understanding the potential of nano drug delivery systems. For this purpose, suitable and validated quantitation methods are required to determine the released drug concentrations from nano pharmaceutical formulations. HPLC (High Performance Liquid Chromatography) is one of the most common techniques used for determination of the released drug content of nano drug formulations, under different physical conditions and over different periods of time. Since there are many types of HPLC methods depending on the detector and column types, it is a challenge for researchers to choose a simple, fast and validated HPLC technique suitable for their nano drug delivery systems. The goal of this review is to compare HPLC methods that are currently used for different nano drug delivery systems in order to provide detailed and useful information for researchers. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with a more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.

  9. Comparison of present global reanalysis datasets in the context of a statistical downscaling method for precipitation prediction

    NASA Astrophysics Data System (ADS)

    Horton, Pascal; Weingartner, Rolf; Brönnimann, Stefan

    2017-04-01

    The analogue method is a statistical downscaling method for precipitation prediction. It uses similarity, in terms of synoptic-scale predictors, with situations in the past in order to provide a probabilistic prediction for the day of interest. It has been used for decades in the context of weather or flood forecasting, and is more recently also applied in climate studies, whether for the reconstruction of past weather conditions or for future climate impact studies. In order to evaluate the relationship between synoptic-scale predictors and the local weather variable of interest, e.g. precipitation, reanalysis datasets are necessary. Nowadays, the number of available reanalysis datasets is increasing. These are generated by different atmospheric models with different assimilation techniques and offer various spatial and temporal resolutions. A major difference between these datasets is also the length of the archive they provide. While some datasets start at the beginning of the satellite era (1980) and assimilate these data, others aim at homogeneity over a longer period (e.g. the 20th century) and only assimilate conventional observations. The context of the application of analogue methods might drive the choice of an appropriate dataset, for example when the archive length is a leading criterion. However, in many studies, a reanalysis dataset is subjectively chosen, according to the user's preferences or the ease of access. The impact of this choice on the results of the downscaling procedure is rarely considered, and no comprehensive comparison has been undertaken so far. In order to fill this gap and to advise on the choice of appropriate datasets, nine different global reanalysis datasets were compared in seven distinct versions of analogue methods, over 300 precipitation stations in Switzerland. Significant differences in terms of prediction performance were identified. Although the impact of the reanalysis dataset on the skill score varies according to the chosen predictor, be it atmospheric circulation or thermodynamic variables, some hierarchy between the datasets is often preserved. This work can thus help in choosing an appropriate dataset for the analogue method, or raise awareness of the consequences of using a certain dataset.
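
    The core of the analogue method — rank archive days by the similarity of their synoptic predictors to the target day and use the precipitation observed on the closest analogues as a probabilistic prediction — can be sketched as follows. The predictor arrays, the similarity criterion (RMSE over a single pressure field) and the number of analogues are illustrative assumptions, not the specific method versions compared in the study.

    ```python
    import numpy as np

    def analogue_prediction(target_field, archive_fields, archive_precip, n_analogues=25):
        """Probabilistic precipitation prediction by the analogue method.

        target_field   : (ny, nx) predictor field (e.g. geopotential) for the target day
        archive_fields : (ndays, ny, nx) same predictor for the archive days
        archive_precip : (ndays,) observed precipitation on the archive days
        """
        # similarity criterion: RMSE between the target field and each archive day
        rmse = np.sqrt(((archive_fields - target_field) ** 2).mean(axis=(1, 2)))
        best = np.argsort(rmse)[:n_analogues]          # indices of the closest analogues
        analogue_precip = archive_precip[best]
        # the empirical distribution of the analogue precipitation is the prediction
        return {
            "quantiles": np.quantile(analogue_precip, [0.2, 0.5, 0.9]),
            "prob_wet": float((analogue_precip > 0.1).mean()),
            "analogue_indices": best,
        }

    # toy archive of reanalysis-like predictor fields (random stand-in data)
    rng = np.random.default_rng(0)
    fields = rng.standard_normal((5000, 9, 13))
    precip = rng.gamma(shape=0.6, scale=4.0, size=5000)
    prediction = analogue_prediction(fields[-1], fields[:-1], precip[:-1])
    ```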

  10. Discontinuous Galerkin Methods and High-Speed Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Atak, Muhammed; Larsson, Johan; Munz, Claus-Dieter

    2014-11-01

    Discontinuous Galerkin methods gain increasing importance within the CFD community as they combine arbitrary high order of accuracy in complex geometries with parallel efficiency. Particularly the discontinuous Galerkin spectral element method (DGSEM) is a promising candidate for both the direct numerical simulation (DNS) and large eddy simulation (LES) of turbulent flows due to its excellent scaling attributes. In this talk, we present a DNS of a compressible turbulent boundary layer along a flat plate at a free-stream Mach number of M = 2.67 and assess the computational efficiency of the DGSEM at performing high-fidelity simulations of both transitional and turbulent boundary layers. We compare the accuracy of the results as well as the computational performance to results using a high order finite difference method.

  11. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
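
    The single-tone frequency estimation step at the heart of these methods can be illustrated with the simplest of such estimators: locate the FFT magnitude peak and refine it by interpolating on neighbouring Fourier coefficients. The parabolic refinement below is a generic textbook interpolator shown only to convey the idea; it is not the IFEIF, ESPRIT or MUSIC estimators evaluated in the paper.

    ```python
    import numpy as np

    def estimate_tone_frequency(x, fs):
        """Estimate the frequency of a (noisy) single tone in a real-valued signal."""
        n = x.size
        spec = np.abs(np.fft.rfft(x * np.hanning(n)))
        k = int(np.argmax(spec[1:-1])) + 1              # coarse peak bin (skip DC and Nyquist)
        # parabolic interpolation on the magnitude spectrum around the peak
        a, b, c = spec[k - 1], spec[k], spec[k + 1]
        delta = 0.5 * (a - c) / (a - 2.0 * b + c)       # fractional bin offset in [-0.5, 0.5]
        return (k + delta) * fs / n

    # toy check: a 123.4 Hz tone sampled at 1 kHz with a little noise
    fs, n = 1000.0, 4096
    t = np.arange(n) / fs
    x = np.cos(2.0 * np.pi * 123.4 * t) + 0.05 * np.random.default_rng(0).standard_normal(n)
    print(estimate_tone_frequency(x, fs))               # close to 123.4
    ```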

  12. Fourth-order partial differential equation noise removal on welding images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan

    2015-10-22

    Partial differential equations (PDEs) have become one of the important topics in mathematics and are widely used in various fields. They can be used for image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method for digital images. The fourth-order PDE is solved computationally using a finite difference approach and then applied to a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using the Peak Signal to Noise Ratio (PSNR). Simulations are carried out with the discretized model at different levels of Gaussian noise in order to obtain the maximum PSNR value. The stopping criterion used to determine the number of iterations required is based on the highest PSNR value. Results obtained show that the fourth-order PDE model produces promising results as an image denoising tool compared with the median filter.
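
    One widely used fourth-order denoising PDE is the You-Kaveh model, u_t = -Lap( c(|Lap u|) Lap u ) with c(s) = 1/(1+(s/k)^2); a minimal explicit finite-difference sketch, together with the PSNR measure, is given below. The time step, edge parameter k and iteration count are illustrative, and this is a generic implementation rather than the exact discretization evaluated in the paper.

    ```python
    import numpy as np

    def laplacian(u):
        """Five-point Laplacian with replicated (Neumann-like) borders, unit spacing."""
        up = np.pad(u, 1, mode="edge")
        return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)

    def fourth_order_denoise(img, n_iter=200, dt=0.01, k=30.0):
        """Explicit You-Kaveh fourth-order PDE denoising: u_t = -Lap(c(Lap u) * Lap u)."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            lap = laplacian(u)
            c = 1.0 / (1.0 + (lap / k) ** 2)      # diffusivity: small near edges, ~1 in flat areas
            u -= dt * laplacian(c * lap)          # small dt keeps the explicit scheme stable
        return u

    def psnr(clean, test, peak=255.0):
        mse = ((clean - test) ** 2).mean()
        return 10.0 * np.log10(peak**2 / mse)

    # toy radiograph-like image with additive Gaussian noise
    rng = np.random.default_rng(0)
    clean = np.zeros((128, 128)); clean[40:90, 40:90] = 180.0
    noisy = clean + 15.0 * rng.standard_normal(clean.shape)
    print(psnr(clean, noisy), psnr(clean, fourth_order_denoise(noisy)))
    ```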

  13. The arbitrary order mimetic finite difference method for a diffusion equation with a non-symmetric diffusion tensor

    NASA Astrophysics Data System (ADS)

    Gyrya, V.; Lipnikov, K.

    2017-11-01

    We present the arbitrary order mimetic finite difference (MFD) discretization for the diffusion equation with a non-symmetric tensorial diffusion coefficient in a mixed formulation on general polygonal meshes. The diffusion tensor is assumed to be positive definite. The asymmetry of the diffusion tensor requires changes to the standard MFD construction. We present a new approach for the construction that guarantees positive definiteness of the non-symmetric mass matrix in the space of discrete velocities. The numerically observed convergence rate for the scalar quantity matches the predicted one in the case of the lowest order mimetic scheme. For higher-order schemes, we observed super-convergence by one order for the scalar variable, which is consistent with the previously published result for a symmetric diffusion tensor. The new scheme was also tested on a time-dependent problem modeling the Hall effect in the resistive magnetohydrodynamics.

  14. Discovering variable fractional orders of advection-dispersion equations from field data using multi-fidelity Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Pang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em

    2017-11-01

    The fractional advection-dispersion equation (FADE) can describe accurately the solute transport in groundwater, but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.

  15. The arbitrary order mimetic finite difference method for a diffusion equation with a non-symmetric diffusion tensor

    DOE PAGES

    Gyrya, V.; Lipnikov, K.

    2017-07-18

    Here, we present the arbitrary order mimetic finite difference (MFD) discretization for the diffusion equation with a non-symmetric tensorial diffusion coefficient in a mixed formulation on general polygonal meshes. The diffusion tensor is assumed to be positive definite. The asymmetry of the diffusion tensor requires changes to the standard MFD construction. We also present a new approach for the construction that guarantees positive definiteness of the non-symmetric mass matrix in the space of discrete velocities. The numerically observed convergence rate for the scalar quantity matches the predicted one in the case of the lowest order mimetic scheme. For higher-order schemes, we observed super-convergence by one order for the scalar variable, which is consistent with the previously published result for a symmetric diffusion tensor. The new scheme was also tested on a time-dependent problem modeling the Hall effect in the resistive magnetohydrodynamics.

  16. The arbitrary order mimetic finite difference method for a diffusion equation with a non-symmetric diffusion tensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyrya, V.; Lipnikov, K.

    Here, we present the arbitrary order mimetic finite difference (MFD) discretization for the diffusion equation with a non-symmetric tensorial diffusion coefficient in a mixed formulation on general polygonal meshes. The diffusion tensor is assumed to be positive definite. The asymmetry of the diffusion tensor requires changes to the standard MFD construction. We also present a new approach for the construction that guarantees positive definiteness of the non-symmetric mass matrix in the space of discrete velocities. The numerically observed convergence rate for the scalar quantity matches the predicted one in the case of the lowest order mimetic scheme. For higher-order schemes, we observed super-convergence by one order for the scalar variable, which is consistent with the previously published result for a symmetric diffusion tensor. The new scheme was also tested on a time-dependent problem modeling the Hall effect in the resistive magnetohydrodynamics.

  17. Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods such as finite-difference, and finite-volume for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction is reduced to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be pre-determined once for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to degree of precision five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter will be determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions. The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.

  18. An unscaled parameter to measure the order of surfaces: a new surface elaboration to increase cells adhesion.

    PubMed

    Bigerelle, M; Anselme, K; Dufresne, E; Hardouin, P; Iost, A

    2002-08-01

    We present a new parameter to quantify the order of a surface. This parameter is scale-independent and can be used to compare the organization of a surface at different scales of range and amplitude. To test the accuracy of this roughness parameter against a hundred existing ones, we created an original statistical bootstrap method. In order to assess the physical relevance of this new parameter, we produced a large number of surfaces with various roughness amplitudes on titanium and titanium-based alloys using different physical processes. Then we studied the influence of the roughness amplitude on the in vitro adhesion and proliferation of human osteoblasts. It was then shown that our new parameter discriminates among the cell adhesion phenomena better than the other parameters (e.g., the average roughness Ra): cells adhere better on isotropic surfaces with a low order, provided this order is quantified on a scale larger than that of the cells. Additionally, on these low-ordered metallic surfaces, the shape of the cells presents the same morphological aspect as that seen on human bone trabeculae. The method used to prepare these isotropic surfaces (electroerosion) could undoubtedly be easily applied to prepare most biomaterials with complex geometries and to improve bone implant integration. Moreover, the new order parameter we developed may be particularly useful for the fundamental understanding of the mechanism of bone cell installation on a relief and of the formation of the bone cell-material interface.

  19. Assessment of numerical techniques for unsteady flow calculations

    NASA Technical Reports Server (NTRS)

    Hsieh, Kwang-Chung

    1989-01-01

    The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected from the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagation, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is more dissipative than the sixth-order central difference scheme. Among the various numerical approaches tested in this paper, the best-performing one is the Runge-Kutta method for time integration combined with sixth-order central differences for spatial discretization.

  20. a Bounded Finite-Difference Discretization of a Two-Dimensional Diffusion Equation with Logistic Nonlinear Reaction

    NASA Astrophysics Data System (ADS)

    Macías-Díaz, J. E.

    In the present manuscript, we introduce a finite-difference scheme to approximate solutions of the two-dimensional version of Fisher's equation from population dynamics, which is a model for which the existence of traveling-wave fronts bounded within (0,1) is a well-known fact. The method presented here is a nonstandard technique which, in the linear regime, approximates the solutions of the original model with a consistency of second order in space and first order in time. The theory of M-matrices is employed here in order to elucidate conditions under which the method is able to preserve the positivity and the boundedness of solutions. In fact, our main result establishes relatively flexible conditions under which the preservation of the positivity and the boundedness of new approximations is guaranteed. Some simulations of the propagation of a traveling-wave solution confirm the analytical results derived in this work; moreover, the experiments evince a good agreement between the numerical result and the analytical solutions.
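
    A common Mickens-type nonstandard discretization of the 2D Fisher equation u_t = D Lap(u) + r u(1-u) treats the nonlinear loss term implicitly, which is one simple way of keeping the iterates inside (0,1) whenever the diffusion step restriction dt*D*4/h^2 <= 1 holds. The sketch below is such a generic scheme under that assumption, not necessarily the exact method analysed in the paper.

    ```python
    import numpy as np

    def laplacian(u, h):
        """Five-point Laplacian with zero-flux (replicated) boundaries."""
        up = np.pad(u, 1, mode="edge")
        return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / h**2

    def fisher_step(u, dt, h, D=1.0, r=1.0):
        """One nonstandard (Mickens-type) step for u_t = D*Lap(u) + r*u*(1-u).

        The loss term is treated implicitly (r*u^n*u^{n+1}), so that, provided
        dt*D*4/h**2 <= 1, the update is a ratio of non-negative quantities and the
        iterates remain within (0, 1).
        """
        return (u + dt * D * laplacian(u, h) + dt * r * u) / (1.0 + dt * r * u)

    # traveling-front-like initial condition on the unit square
    n, h = 101, 0.01
    x = np.linspace(0.0, 1.0, n)
    u = np.where(x[None, :] < 0.2, 0.9, 1e-3) * np.ones((n, n))
    dt = 0.2 * h**2   # respects dt*D*4/h**2 <= 1 for D = 1
    for _ in range(500):
        u = fisher_step(u, dt, h)
    print(u.min(), u.max())   # stays within (0, 1)
    ```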

  1. Clinical outcomes for patients finished with the SureSmile™ method compared with conventional fixed orthodontic therapy

    PubMed Central

    Alford, Timothy J.; Roberts, W. Eugene; Hartsfield, James K.; Eckert, George J.; Snyder, Ronald J.

    2016-01-01

    Objective Utilize American Board of Orthodontics (ABO) cast/radiographic evaluation (CRE) to compare a series of 63 consecutive patients, finished with manual wire bending (conventional) treatment, vs a subsequent series of 69 consecutive patients, finished by the same orthodontist using the SureSmile™ (SS) method. Materials and Methods Records of 132 nonextraction patients were scored by a calibrated examiner blinded to treatment mode. Age and discrepancy index (DI) between groups were compared by t-tests. A chi-square test was used to compare for differences in sex and whether the patient was treated using braces only (no orthopedic correction). Analysis of covariance tested for differences in CRE outcomes and treatment times, with sex and DI included as covariates. A logarithmic transformation of CRE outcomes and treatment times was used because their distributions were skewed. Significance was defined as P < .05. Results Compared with conventional finishing, SS patients had significantly lower DI scores, less treatment time (~7 months), and better CRE scores for first-order alignment-rotation and interproximal space closure; however, second-order root angulation (RA) was inferior. Conclusion SS patients were treated in less time to better CRE scores for first-order rotation (AR) and interproximal space closure (IC) but on the average, malocclusions were less complex and second order root alignment was inferior, compared with patients finished with manual wire bending. PMID:21261488

  2. On the consistency of QCBED structure factor measurements for TiO2 (Rutile)

    DOE PAGES

    Jiang, Bin; Zuo, Jian -Min; Friis, Jesper; ...

    2003-09-16

    The same Bragg reflection in TiO2 from twelve different CBED patterns (from different crystals, orientations and thicknesses) is analysed quantitatively in order to evaluate the consistency of the QCBED method for bond-charge mapping. The standard deviation in the resulting distribution of derived X-ray structure factors is found to be an order of magnitude smaller than that in conventional X-ray work, and the standard error (0.026% for FX(110)) is slightly better than that obtained by the X-ray Pendellosung method applied to silicon. This accuracy is sufficient to distinguish between atomic, covalent and ionic models of bonding. We describe the importance of extracting experimental parameters from CCD camera characterization, and of surface oxidation and crystal shape. Thus, the current experiments show that the QCBED method is now a robust and powerful tool for low order structure factor measurement, which does not suffer from the large extinction (multiple scattering) errors which occur in inorganic X-ray crystallography, and may be applied to nanocrystals. Our results will be used to understand the role of d electrons in the chemical bonding of TiO2.

  3. Comparison between different interdental stripping methods and evaluation of abrasive strips: SEM analysis.

    PubMed

    Grippaudo, Cristina; Cancellieri, Daniela; Grecolini, Maria E; Deli, Roberto

    2010-01-01

    The aim of this study was to evaluate the morphological effects and the surface irregularities produced by different methods of mechanical stripping (abrasive strips and burs) and chemical stripping (37% orthophosphoric acid), and the surface changes following the finishing procedures (polishing strips) or the subsequent application of sealants, in order to establish the right stripping method that can guarantee the smoothest surface. We also analysed the level of wear on the different abrasive strips employed, according to their structure. 160 proximal surfaces of 80 sound molar teeth, extracted for orthodontic and periodontal reasons, were divided into 1 control group with non-treated enamel proximal surfaces and 5 different groups according to the stripping method used, and were observed with scanning electron microscopy (SEM). Each of the 5 treated groups was also divided into 3 different subgroups according to the finishing procedures or the subsequent application of sealants. The finishing stage following the manual reduction proves to be fundamental in reducing the number and depth of the grooves created by the stripping. After the air-rotor stripping method, the use of sealants is advised in order to obtain a smoother surface. The analysis of the combinations of mechanical and chemical stripping showed unsatisfactory results. Concerning the wear of the strips, we have highlighted a different degree of abrasion for the different types of strips analysed with SEM. The enamel damage is limited only if the finishing procedure is applied, independently of the type of abrasive strip employed. The use of sealants after the air-rotor stripping technique would be advisable, though it is clinically seldom possible. Copyright © 2010 Società Italiana di Ortodonzia SIDO. Published by Elsevier Srl. All rights reserved.

  4. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L-sq-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L-sq method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  5. Comparison of Observed and Predicted Abutment Scour at Selected Bridges in Maine

    USGS Publications Warehouse

    Lombard, Pamela J.; Hodgkins, Glenn A.

    2008-01-01

    Maximum abutment-scour depths predicted with five different methods were compared to maximum abutment-scour depths observed at 100 abutments at 50 bridge sites in Maine with a median bridge age of 66 years. Prediction methods included the Froehlich/Hire method, the Sturm method, and the Maryland method published in Federal Highway Administration Hydraulic Engineering Circular 18 (HEC-18); the Melville method; and envelope curves. No correlation was found between scour calculated using any of the prediction methods and observed scour. Abutment scour observed in the field ranged from 0 to 6.8 feet, with an average observed scour of less than 1.0 foot. Fifteen of the 50 bridge sites had no observable scour. Equations frequently overpredicted scour by an order of magnitude and in some cases by two orders of magnitude. The equations also underpredicted scour 4 to 14 percent of the time.

  6. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    PubMed

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series. Copyright © 2015 Elsevier Ltd. All rights reserved.
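
    A fuzzy similarity (sample) entropy of the kind discussed above replaces the hard tolerance test of sample entropy with a smooth membership function of the pattern distance. The sketch below is a generic first-order fuzzy sample entropy with an exponential membership; the pattern size m, tolerance r and test signals are illustrative, and it is not the n-order or maximized variants proposed in the paper.

    ```python
    import numpy as np

    def fuzzy_sample_entropy(x, m=2, r=0.2):
        """Generic fuzzy sample entropy with exponential membership exp(-(d/r)^2)."""
        x = np.asarray(x, dtype=float)
        r = r * x.std()

        def phi(m):
            # all patterns of length m
            patt = np.array([x[i:i + m] for i in range(x.size - m)])
            # Chebyshev distances between every pair of patterns
            d = np.max(np.abs(patt[:, None, :] - patt[None, :, :]), axis=2)
            mem = np.exp(-(d / r) ** 2)         # fuzzy similarity instead of a hard tolerance
            np.fill_diagonal(mem, 0.0)          # exclude self-matches
            return mem.sum() / (mem.shape[0] * (mem.shape[0] - 1))

        return -np.log(phi(m + 1) / phi(m))

    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0.0, 40.0 * np.pi, 1000))
    noisy = rng.standard_normal(1000)
    print(fuzzy_sample_entropy(regular), fuzzy_sample_entropy(noisy))  # low vs high complexity
    ```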

  7. Diversity and community composition of methanogenic archaea in the rumen of Scottish upland sheep assessed by different methods.

    PubMed

    Snelling, Timothy J; Genç, Buğra; McKain, Nest; Watson, Mick; Waters, Sinéad M; Creevey, Christopher J; Wallace, R John

    2014-01-01

    Ruminal archaeomes of two mature sheep grazing in the Scottish uplands were analysed by different sequencing and analysis methods in order to compare the apparent archaeal communities. All methods revealed that the majority of methanogens belonged to the Methanobacteriales order containing the Methanobrevibacter, Methanosphaera and Methanobacteria genera. Sanger sequenced 1.3 kb 16S rRNA gene amplicons identified the main species of Methanobrevibacter present to be a SGMT Clade member Mbb. millerae (≥ 91% of OTUs); Methanosphaera comprised the remainder of the OTUs. The primers did not amplify ruminal Thermoplasmatales-related 16S rRNA genes. Illumina sequenced V6-V8 16S rRNA gene amplicons identified similar Methanobrevibacter spp. and Methanosphaera clades and also identified the Thermoplasmatales-related order as 13% of total archaea. Unusually, both methods concluded that Mbb. ruminantium and relatives from the same clade (RO) were almost absent. Sequences mapping to rumen 16S rRNA and mcrA gene references were extracted from Illumina metagenome data. Mapping of the metagenome data to 16S rRNA gene references produced taxonomic identification to Order level including 2-3% Thermoplasmatales, but was unable to discriminate to species level. Mapping of the metagenome data to mcrA gene references resolved 69% to unclassified Methanobacteriales. Only 30% of sequences were assigned to species level clades: of the sequences assigned to Methanobrevibacter, most mapped to SGMT (16%) and RO (10%) clades. The Sanger 16S amplicon and Illumina metagenome mcrA analyses showed similar species richness (Chao1 Index 19-35), while Illumina metagenome and amplicon 16S rRNA analysis gave lower richness estimates (10-18). The values of the Shannon Index were low in all methods, indicating low richness and uneven species distribution. Thus, although much information may be extracted from the other methods, Illumina amplicon sequencing of the V6-V8 16S rRNA gene would be the method of choice for studying rumen archaeal communities.

  8. Evaluating the Impact of Teaching Methods on Student Motivation

    ERIC Educational Resources Information Center

    Cudney, Elizabeth A.; Ezzell, Julie M.

    2017-01-01

    Educational institutions are consistently looking for ways to prepare students for the competitive workforce. Various methods have been utilized to interpret human differences, such as learning preferences and motivation, in order to make the curriculum more valuable. The objective of this research was to determine the impact of new teaching…

  9. Two-Point Turbulence Closure Applied to Variable Resolution Modeling

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Rubinstein, Robert

    2011-01-01

    Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.

  10. Quantum Monte Carlo analysis of a charge ordered insulating antiferromagnet: The Ti4O7 Magneli phase

    DOE PAGES

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; ...

    2016-06-07

    The Magneli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate Quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Here, a detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.

  11. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. However, in the complex scenes of the real world, false detections, missed detections, and cavities inside the detected body still occur. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the moving object detection more complete and accurate, image repair and morphological processing techniques, which act as spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods (GMM, VIBE, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
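
    A generic combination of the two cues named above — frame differencing for fast motion response and Gaussian-mixture background subtraction for fuller silhouettes — can be sketched with OpenCV as follows. The video file name, thresholds and morphological kernel are illustrative assumptions, and the image-repair step of the proposed method is not reproduced here.

    ```python
    import cv2

    # Gaussian-mixture background subtractor (OpenCV's MOG2 implementation)
    bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input file
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("no input video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # frame-difference mask: responds quickly to motion but leaves holes
        diff = cv2.absdiff(gray, prev_gray)
        _, diff_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

        # background-subtraction mask: fuller silhouettes, but can leave ghosts
        bg_mask = bg_model.apply(frame)

        # combine the two cues, then fill cavities with a morphological closing
        combined = cv2.bitwise_or(diff_mask, bg_mask)
        combined = cv2.morphologyEx(combined, cv2.MORPH_CLOSE, kernel)

        prev_gray = gray
    ```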

  12. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation, based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to dealing with high-order local wavenumbers, as it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation enjoys a relevant feature when applied to the local wavenumber function: the scaling law is in fact independent of the structural index. Thus, unlike the DEXP transformation applied directly to potential fields, the Local Wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  13. Ultrasound – A new approach for non-woven scaffolds investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khramtsova, E. A.; Morokov, E. S.; Levin, V. M.

    2016-05-18

    In this study we verified the method of impulse acoustic microscopy as a tool for scaffold evaluation in tissue engineering research. A cellulose diacetate (CDA) non-woven 3D scaffold was used as a model object. Scanning electron microscopy and optical microscopy were used as reference methods in order to assess the feasibility of the acoustic microscopy method in the regenerative medicine field. A direct comparison of the different methods was carried out.

  14. Effects of Second-Order Hydrodynamics on a Semisubmersible Floating Offshore Wind Turbine: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayati, I.; Jonkman, J.; Robertson, A.

    2014-07-01

    The objective of this paper is to assess the second-order hydrodynamic effects on a semisubmersible floating offshore wind turbine. Second-order hydrodynamics induce loads and motions at the sum- and difference-frequencies of the incident waves. These effects have often been ignored in offshore wind analysis, under the assumption that they are significantly smaller than first-order effects. The sum- and difference-frequency loads can, however, excite eigenfrequencies of the system, leading to large oscillations that strain the mooring system or vibrations that cause fatigue damage to the structure. Observations of supposed second-order responses in wave-tank tests performed by the DeepCwind consortium at the MARIN offshore basin suggest that these effects might be more important than originally expected. These observations inspired interest in investigating how second-order excitation affects floating offshore wind turbines and whether second-order hydrodynamics should be included in offshore wind simulation tools like FAST in the future. In this work, the effects of second-order hydrodynamics on a floating semisubmersible offshore wind turbine are investigated. Because FAST is currently unable to account for second-order effects, a method to assess these effects was applied in which linearized properties of the floating wind system derived from FAST (including the 6x6 mass and stiffness matrices) are used by WAMIT to solve the first- and second-order hydrodynamics problems in the frequency domain. The method has been applied to the OC4-DeepCwind semisubmersible platform, supporting the NREL 5-MW baseline wind turbine. The loads and response of the system due to the second-order hydrodynamics are analysed and compared to first-order hydrodynamic loads and induced motions in the frequency domain. Further, the second-order loads and induced response data are compared to the loads and motions induced by aerodynamic loading as solved by FAST.

  15. Quantization of collagen organization in the stroma with a new order coefficient

    PubMed Central

    Germann, James A.; Martinez-Enriquez, Eduardo; Marcos, Susana

    2017-01-01

    Many optical and biomechanical properties of the cornea, specifically the transparency of the stroma and its stiffness, can be traced to the degree of order and direction of the constituent collagen fibers. To measure the degree of order inside the cornea, a new metric, the order coefficient, was introduced to quantify the organization of the collagen fibers from images of the stroma produced with a custom-developed second harmonic generation microscope. The order coefficient method gave a quantitative assessment of the differences in stromal collagen arrangement across the cornea depths and between untreated stroma and cross-linked stroma. PMID:29359095

  16. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a means of regularization. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions of large magnitude and with motion discontinuities, and that it produces accurate piecewise-smooth motion fields.

  17. Multiswitching combination synchronisation of non-identical fractional-order chaotic systems

    NASA Astrophysics Data System (ADS)

    Bhat, Muzaffar Ahmad; Khan, Ayub

    2018-06-01

    In this paper, a multiswitching combination synchronisation (MSCS) scheme has been investigated in a class of three non-identical fractional-order chaotic systems. The fractional-order Lorenz and Chen systems are taken as the drive systems, and their combination is then synchronised with the fractional-order Lü chaotic system. In MSCS, the state variables of the two drive systems synchronise with different state variables of the response system simultaneously. Based on the stability theory of fractional-order chaotic systems, the MSCS of the three non-identical fractional-order systems has been investigated, and suitable controllers have been designed for the synchronisation. Theoretical analysis and numerical results are presented to demonstrate the validity and feasibility of the applied method.

  18. Description of the atomic disorder (local order) in crystals by the mixed-symmetry method

    NASA Astrophysics Data System (ADS)

    Dudka, A. P.; Novikova, N. E.

    2017-11-01

    An approach to the description of local atomic disorder (short-range order) in single crystals by the mixed-symmetry method based on Bragg scattering data is proposed, and the corresponding software is developed. In defect-containing crystals, each atom in the unit cell can be described by its own symmetry space group. The expression for the calculated structural factor includes summation over different sets of symmetry operations for different atoms. To facilitate the search for new symmetry elements, an "atomic disorder expert" was developed, which estimates the significance of tested models. It is shown that the symmetry lowering for some atoms correlates with the existence of phase transitions (in langasite family crystals) and the anisotropy of physical properties (in rare-earth dodecaborides RB12).

  19. Vehicle track segmentation using higher order random fields

    DOE PAGES

    Quach, Tu -Thach

    2017-01-09

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  20. Vehicle track segmentation using higher order random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quach, Tu -Thach

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  1. Methods of Reflection about Service Learning: Guided vs. Free, Dialogic vs. Expressive, and Public vs. Private

    ERIC Educational Resources Information Center

    Sturgill, Amanda; Motley, Phillip

    2014-01-01

    Reflection is a key component of service learning, but research shows that in order to maximize learning, the reflection must be of high quality. This paper compares the affordances of three different models of written reflection in engendering students' higher-order thought processes. Student reflections were compared across axes of guided versus…

  2. Linear dispersion relation for the mirror instability in context of the gyrokinetic theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porazik, Peter; Johnson, Jay R.

    2013-10-15

    The linear dispersion relation for the mirror instability is discussed in context of the gyrokinetic theory. The objective is to provide a coherent view of different kinetic approaches used to derive the dispersion relation. The method based on gyrocenter phase space transformations is adopted in order to display the origin and ordering of various terms.

  3. Experiments with explicit filtering for LES using a finite-difference method

    NASA Technical Reports Server (NTRS)

    Lund, T. S.; Kaltenbach, H. J.

    1995-01-01

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is also 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and these errors will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
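
    The truncation-error behaviour discussed above can be made concrete with the modified wavenumber of a second-order central difference, a sketch assuming unit grid spacing: the scheme returns sin(kh)/h in place of the exact wavenumber k, so the error is negligible for well-resolved scales but grows toward the grid cutoff.

```python
# Modified-wavenumber sketch: how a 2nd-order central difference
# misrepresents the derivative of exp(ikx) at high wavenumbers.  The
# exact derivative corresponds to wavenumber k; the scheme delivers
# sin(kh)/h, so truncation error grows toward the smallest resolved scales.
import numpy as np

h = 1.0                               # grid spacing (assumed)
kh = np.linspace(0.0, np.pi, 9)       # wavenumbers up to the grid cutoff
k_exact = kh / h
k_modified = np.sin(kh) / h

for a, b in zip(k_exact, k_modified):
    rel_err = 0.0 if a == 0.0 else abs(a - b) / a
    print(f"kh = {a*h:4.2f}   modified k = {b:5.3f}   relative error = {rel_err:6.1%}")
```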

  4. Nonequilibrium scheme for computing the flux of the convection-diffusion equation in the framework of the lattice Boltzmann method.

    PubMed

    Chai, Zhenhua; Zhao, T S

    2014-07-01

    In this paper, we propose a local nonequilibrium scheme for computing the flux of the convection-diffusion equation with a source term in the framework of the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). Both the Chapman-Enskog analysis and the numerical results show that, at the diffusive scaling, the present nonequilibrium scheme has a second-order convergence rate in space. A comparison between the nonequilibrium scheme and the conventional second-order central-difference scheme indicates that, although both schemes have a second-order convergence rate in space, the present nonequilibrium scheme is more accurate than the central-difference scheme. In addition, the flux computation rendered by the present scheme also preserves the parallel computation feature of the LBM, making the scheme more efficient than conventional finite-difference schemes in the study of large-scale problems. Finally, a comparison between the single-relaxation-time model and the MRT model is also conducted, and the results show that the MRT model is more accurate than the single-relaxation-time model, both in solving the convection-diffusion equation and in computing the flux.

  5. Numerical Solution of Incompressible Navier-Stokes Equations Using a Fractional-Step Approach

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    1999-01-01

    A fractional step method for the solution of steady and unsteady incompressible Navier-Stokes equations is outlined. The method is based on a finite volume formulation and uses the pressure in the cell center and the mass fluxes across the faces of each cell as dependent variables. Implicit treatment of convective and viscous terms in the momentum equations enables the numerical stability restrictions to be relaxed. The linearization error in the implicit solution of momentum equations is reduced by using three subiterations in order to achieve second order temporal accuracy for time-accurate calculations. In spatial discretizations of the momentum equations, a high-order (3rd and 5th) flux-difference splitting for the convective terms and a second-order central difference for the viscous terms are used. The resulting algebraic equations are solved with a line-relaxation scheme which allows the use of large time steps. A four color ZEBRA scheme is employed after the line-relaxation procedure in the solution of the Poisson equation for pressure. This procedure is applied to a Couette flow problem using a distorted computational grid to show that the method minimizes grid effects. Additional benchmark cases include the unsteady laminar flow over a circular cylinder for a Reynolds number of 200, and a 3-D, steady, turbulent wingtip vortex wake propagation study. The solution algorithm does a very good job in resolving the vortex core when 5th-order upwind differencing and a modified production term in the Baldwin-Barth one-equation turbulence model are used with adequate grid resolution.

  6. An immersed boundary method for fluid-structure interaction with compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Wang, Li; Currao, Gaetano M. D.; Han, Feng; Neely, Andrew J.; Young, John; Tian, Fang-Bao

    2017-10-01

    This paper presents a two-dimensional immersed boundary method for fluid-structure interaction with compressible multiphase flows involving large structure deformations. This method involves three important parts: flow solver, structure solver and fluid-structure interaction coupling. In the flow solver, the compressible multiphase Navier-Stokes equations for ideal gases are solved by a finite difference method based on a staggered Cartesian mesh, where a fifth-order accuracy Weighted Essentially Non-Oscillation (WENO) scheme is used to handle spatial discretization of the convective term, a fourth-order central difference scheme is employed to discretize the viscous term, the third-order TVD Runge-Kutta scheme is used to discretize the temporal term, and the level-set method is adopted to capture the multi-material interface. In this work, the structure considered is a geometrically non-linear beam which is solved by using a finite element method based on the absolute nodal coordinate formulation (ANCF). The fluid dynamics and the structure motion are coupled in a partitioned iterative manner with a feedback penalty immersed boundary method where the flow dynamics is defined on a fixed Lagrangian grid and the structure dynamics is described on a global coordinate. We perform several validation cases (including fluid over a cylinder, structure dynamics, flow induced vibration of a flexible plate, deformation of a flexible panel induced by shock waves in a shock tube, an inclined flexible plate in a hypersonic flow, and shock-induced collapse of a cylindrical helium cavity in the air), and compare the results with experimental and other numerical data. The present results agree well with the published data and the current experiment. Finally, we further demonstrate the versatility of the present method by applying it to a flexible plate interacting with multiphase flows.

  7. Dynamic analysis of spiral bevel and hypoid gears with high-order transmission errors

    NASA Astrophysics Data System (ADS)

    Yang, J. J.; Shi, Z. H.; Zhang, H.; Li, T. X.; Nie, S. W.; Wei, B. Y.

    2018-03-01

    A new gear surface modification methodology based on curvature synthesis is proposed in this study to improve the transmission performance. The generated high-order transmission error (TE) for spiral bevel and hypoid gears is shown to reduce the vibration of the geared-rotor system. The method comprises the following steps: Firstly, the fully conjugate gear surfaces with the pinion flank modified according to the predesigned relative transmission movement are established based on curvature correction. Secondly, a 14-DOF geared-rotor system model considering backlash nonlinearity is used to evaluate the effect of different orders of TE on the dynamic performance of a hypoid gear transmission system. As a case study, numerical simulation is performed to illustrate the dynamic response of a hypoid gear pair with parabolic, fourth-order and sixth-order transmission errors. The results show that the parabolic TE curve has a higher peak-to-peak amplitude compared to the other two types of TE; thus, the excited dynamic response also shows larger amplitude at response peaks. Dynamic responses excited by fourth- and sixth-order TE also demonstrate distinct response components due to their different TE periods, which are expected to generate different sound quality or other acoustic characteristics.

  8. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  9. Linear flavor-wave theory for fully antisymmetric SU(N ) irreducible representations

    NASA Astrophysics Data System (ADS)

    Kim, Francisco H.; Penc, Karlo; Nataf, Pierre; Mila, Frédéric

    2017-11-01

    The extension of the linear flavor-wave theory to fully antisymmetric irreducible representations (irreps) of SU (N ) is presented in order to investigate the color order of SU (N ) antiferromagnetic Heisenberg models in several two-dimensional geometries. The square, triangular, and honeycomb lattices are considered with m fermionic particles per site. We present two different methods: the first method is the generalization of the multiboson spin-wave approach to SU (N ) which consists of associating a Schwinger boson to each state on a site. The second method adopts the Read and Sachdev bosons which are an extension of the Schwinger bosons that introduces one boson for each color and each line of the Young tableau. The two methods yield the same dispersing modes, a good indication that they properly capture the semiclassical fluctuations, but the first one leads to spurious flat modes of finite frequency not present in the second one. Both methods lead to the same physical conclusions otherwise: long-range Néel-type order is likely for the square lattice for SU(4) with two particles per site, but quantum fluctuations probably destroy order for more than two particles per site, with N =2 m . By contrast, quantum fluctuations always lead to corrections larger than the classical order parameter for the tripartite triangular lattice (with N =3 m ) or the bipartite honeycomb lattice (with N =2 m ) for more than one particle per site, m >1 , making the presence of color very unlikely except maybe for m =2 on the honeycomb lattice, for which the correction is only marginally larger than the classical order parameter.

  10. New robust bilinear least squares method for the analysis of spectral-pH matrix data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C

    2005-07-01

    A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.

  11. A high-order 3-D spectral-element method for the forward modelling and inversion of gravimetric data—Application to the western Pyrenees

    NASA Astrophysics Data System (ADS)

    Martin, Roland; Chevrot, Sébastien; Komatitsch, Dimitri; Seoane, Lucia; Spangenberg, Hannah; Wang, Yi; Dufréchou, Grégory; Bonvalot, Sylvain; Bruinsma, Sean

    2017-04-01

    We image the internal density structure of the Pyrenees by inverting gravity data using an a priori density model derived by scaling a Vp model obtained by full waveform inversion of teleseismic P-waves. Gravity anomalies are computed via a 3-D high-order finite-element integration in the same high-order spectral-element grid as the one used to solve the wave equation and thus to obtain the velocity model. The curvature of the Earth and surface topography are taken into account in order to obtain a density model as accurate as possible. The method is validated through comparisons with exact semi-analytical solutions. We show that the spectral-element method drastically accelerates the computations when compared to other more classical methods. Different scaling relations between compressional velocity and density are tested, and the Nafe-Drake relation is the one that leads to the best agreement between computed and observed gravity anomalies. Gravity data inversion is then performed and the results allow us to put more constraints on the density structure of the shallow crust and on the deep architecture of the mountain range.
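
    The Nafe-Drake scaling mentioned above is often applied in practice through Brocher's (2005) polynomial fit; the sketch below uses that published fit (coefficients quoted from the literature and worth checking against the original) with illustrative velocities, and is not necessarily the exact relation used in this study.

```python
# Brocher's (2005) polynomial fit to the Nafe-Drake curve: density in
# g/cm^3 from Vp in km/s, roughly valid for 1.5-8.5 km/s.  Sample
# velocities are illustrative only.
import numpy as np

def nafe_drake_density(vp_km_s):
    vp = np.asarray(vp_km_s, dtype=float)
    return (1.6612 * vp - 0.4721 * vp**2 + 0.0671 * vp**3
            - 0.0043 * vp**4 + 0.000106 * vp**5)

for vp in (3.0, 4.5, 6.0, 7.5):
    print(f"Vp = {vp:.1f} km/s  ->  rho ~ {nafe_drake_density(vp):.2f} g/cm^3")
```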

  12. M-Adapting Low Order Mimetic Finite Differences for Dielectric Interface Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGregor, Duncan A.; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-03-07

    We consider the problem of reducing numerical dispersion for an electromagnetic wave in a domain with two materials separated by a flat interface in 2D, with a factor-of-two difference in wave speed. The computational mesh in the homogeneous parts of the domain away from the interface consists of square elements. Here the method construction is based on m-adaptation in a homogeneous domain, which leads to fourth-order numerical dispersion (vs. second order in the non-optimized method). The size of the elements in the two domains also differs by a factor of two, so as to preserve the same value of the Courant number in each. Near the interface, where the two meshes merge, the mesh with larger elements consists of degenerate pentagons. We demonstrate that prior to m-adaptation the accuracy of the method falls from second to first order due to the breaking of symmetry in the mesh. Next we develop an m-adaptation framework for the interface region and devise an optimization criterion. We prove that for the interface problem m-adaptation cannot produce an increase in method accuracy. This is in contrast to a homogeneous medium, where m-adaptation can increase accuracy by two orders.

  13. Pollution indices as useful tools for the comprehensive evaluation of the degree of soil contamination-A review.

    PubMed

    Kowalska, Joanna Beata; Mazurek, Ryszard; Gąsiorek, Michał; Zaleski, Tomasz

    2018-04-05

    The paper provides a complex, critical assessment of heavy metal soil pollution using different indices. Pollution indices are widely considered a useful tool for the comprehensive evaluation of the degree of contamination. Moreover, they can have a great importance in the assessment of soil quality and the prediction of future ecosystem sustainability, especially in the case of farmlands. Eighteen indices previously described by several authors (I_geo, PI, EF, C_f, PI_sum, PI_Nemerow, PLI, PI_ave, PI_Vector, PIN, MEC, CSI, MERMQ, C_deg, RI, mC_d and ExF) as well as the newly published Biogeochemical Index (BGI) were compared. The content, as determined by other authors, of the most widely investigated heavy metals (Cd, Pb and Zn) in farmland, forest and urban soils was used as a database for the calculation of all of the presented indices, and this shows, based on statistical methods, the similarities and differences between them. The indices were initially divided into two groups: individual and complex. In order to achieve a more precise classification, our study attempted to further split indices based on their purpose and method of calculation. The strengths and weaknesses of each index were assessed; in addition, a comprehensive method for pollution index choice is presented, in order to best interpret pollution in different soils (farmland, forest and urban). This critical review also contains an evaluation of various geochemical backgrounds (GBs) used in heavy metal soil pollution assessments. The authors propose a comprehensive method in order to assess soil quality, based on the application of local and reference GB.
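
    A short sketch of a few of the individual indices named above, using their standard definitions (I_geo, CF, PLI and EF); the metal concentrations and geochemical background values are hypothetical, and the remaining indices are not reproduced.

```python
# Standard definitions of a few individual pollution indices:
#   I_geo = log2(Cn / (1.5 * Bn)),  CF = Cn / Bn,
#   PLI = (CF_1 * ... * CF_n)^(1/n),
#   EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background.
# All concentrations and background values below are hypothetical.
import math

background = {"Cd": 0.2, "Pb": 20.0, "Zn": 60.0, "Fe": 30000.0}   # mg/kg (assumed GB)
sample = {"Cd": 0.9, "Pb": 55.0, "Zn": 140.0, "Fe": 31000.0}      # mg/kg (assumed)

def igeo(c, b):
    return math.log2(c / (1.5 * b))

def contamination_factor(c, b):
    return c / b

def enrichment_factor(metal, ref="Fe"):
    return (sample[metal] / sample[ref]) / (background[metal] / background[ref])

cfs = {m: contamination_factor(sample[m], background[m]) for m in ("Cd", "Pb", "Zn")}
pli = math.prod(cfs.values()) ** (1.0 / len(cfs))

for m in ("Cd", "Pb", "Zn"):
    print(f"{m}: I_geo = {igeo(sample[m], background[m]):.2f}, "
          f"CF = {cfs[m]:.2f}, EF = {enrichment_factor(m):.2f}")
print(f"PLI = {pli:.2f}")
```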

  14. Exact solitary wave solution for higher order nonlinear Schrodinger equation using He's variational iteration method

    NASA Astrophysics Data System (ADS)

    Rani, Monika; Bhatti, Harbax S.; Singh, Vikramjeet

    2017-11-01

    In optical communication, the behavior of ultrashort pulses of optical solitons can be described by the nonlinear Schrodinger equation. This partial differential equation is widely used to describe a number of physically important phenomena, including optical shock waves, laser and plasma physics, quantum mechanics, and elastic media. The exact analytical solution of the (1+n)-dimensional higher order nonlinear Schrodinger equation by He's variational iteration method is presented. The proposed solutions are very helpful in studying solitary wave phenomena, ensure rapidly convergent series and avoid round-off errors. Different examples with graphical representations are given to justify the capability of the method.
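
    For reference, the variational iteration method builds successive approximations from a correction functional of the following general form (a sketch of the generic construction, not the specific functional derived in this work for the higher-order nonlinear Schrodinger equation):

```latex
% Generic VIM correction functional for an equation L u + N u = g(t):
u_{n+1}(t) = u_n(t)
  + \int_0^{t} \lambda(\tau)\,\bigl[\,L u_n(\tau) + N \tilde{u}_n(\tau) - g(\tau)\,\bigr]\,\mathrm{d}\tau ,
% where \lambda(\tau) is a Lagrange multiplier identified via variational
% theory and \tilde{u}_n denotes a restricted variation
% (\delta \tilde{u}_n = 0).
```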

  15. Carbon nanotube growth density control

    NASA Technical Reports Server (NTRS)

    Delzeit, Lance D. (Inventor); Schipper, John F. (Inventor)

    2010-01-01

    Method and system for combined coarse scale control and fine scale control of growth density of a carbon nanotube (CNT) array on a substrate, using a selected electrical field adjacent to a substrate surface for coarse scale density control (by one or more orders of magnitude) and a selected CNT growth temperature range for fine scale density control (by multiplicative factors of less than an order of magnitude) of CNT growth density. Two spaced apart regions on a substrate may have different CNT growth densities and/or may use different feed gases for CNT growth.

  16. Determination of astaxanthin in Haematococcus pluvialis by first-order derivative spectrophotometry.

    PubMed

    Liu, Xiao Juan; Wu, Ying Hua; Zhao, Li Chao; Xiao, Su Yao; Zhou, Ai Mei; Liu, Xin

    2011-01-01

    A highly selective, convenient, and precise method, first-order derivative spectrophotometry, was applied for the determination of astaxanthin in Haematococcus pluvialis. Ethyl acetate and ethanol (1:1, v/v) was found to be the best extraction solvent tested, owing to its high efficiency and low toxicity compared with nine other organic solvents. Astaxanthin coexisting with chlorophyll and beta-carotene was analyzed by first-order derivative spectrophotometry in order to optimize the conditions for the determination of astaxanthin. The results show that when detection is performed at 432 nm, the interfering substances can be eliminated. The dynamic linear range was 2.0-8.0 microg/mL, with a correlation coefficient of 0.9916. The detection threshold was 0.41 microg/mL. The RSD for the determination of astaxanthin was in the range of 0.01-0.06%, and the recoveries were 98.1-108.0%. A statistical comparison between first-order derivative spectrophotometry and HPLC by t-test showed no significant differences between the two methods. It was proved that first-order derivative spectrophotometry is a rapid and convenient method for the determination of astaxanthin in H. pluvialis that can eliminate the negative effect resulting from the coexistence of astaxanthin with chlorophyll and beta-carotene.
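
    A minimal sketch of the numerical step behind first-order derivative spectrophotometry: differentiate the absorbance spectrum with respect to wavelength and calibrate on the derivative amplitude at the analytical wavelength (432 nm in this record). The spectra below are synthetic placeholders, not real astaxanthin data.

```python
# First-derivative spectrophotometry sketch with synthetic Gaussian
# spectra: build dA/dlambda with numpy.gradient and fit a linear
# calibration at 432 nm.  All spectra and concentrations are placeholders.
import numpy as np

wavelengths = np.arange(400.0, 560.0, 1.0)           # nm

def synthetic_spectrum(conc, center=478.0, width=30.0):
    return conc * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

standards = [2.0, 4.0, 6.0, 8.0]                      # ug/mL calibration levels
i432 = int(np.argmin(np.abs(wavelengths - 432.0)))

signals = [np.gradient(synthetic_spectrum(c), wavelengths)[i432] for c in standards]
slope, intercept = np.polyfit(standards, signals, 1)  # linear calibration
print(f"calibration: dA/dlambda(432 nm) = {slope:.4f} * C + {intercept:.4f}")
```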

  17. Simulated families: A test for different methods of family identification

    NASA Technical Reports Server (NTRS)

    Bendjoya, Philippe; Cellino, Alberto; Froeschle, Claude; Zappala, Vincenzo

    1992-01-01

    A set of families generated in fictitious impact events (leading to a wide range of 'structures' in the orbital element space) has been superimposed on various backgrounds of different densities in order to investigate the efficiency and the limitations of the methods used by Zappala et al. (1990) and by Bendjoya et al. (1990) for identifying asteroid families. In addition, the expected number of interlopers at different significance levels and the possibility of improving the definition of the level of maximum significance of a given family were analyzed.

  18. Comparison of different methods for radiochemical purity testing of [99mTc-EDDA-HYNIC-D-Phe1,Tyr3]-octreotide.

    PubMed

    von Guggenberg, Elisabeth; Penz, Barbara; Kemmler, Georg; Virgolini, Irene; Decristoforo, Clemens

    2006-02-01

    [99mTc-EDDA-HYNIC-D-Phe1,Tyr3]-octreotide (99mTc-EDDA-HYNIC-TOC) is an alternative radioligand for somatostatin receptor (SSTR) scintigraphy of neuroendocrine tumours. In order to allow a rapid and accurate determination of quality in the clinical routine, the aim of this study was to evaluate different methods of radiochemical purity (RCP) testing. Three different methods of RCP testing were compared: high-performance liquid chromatography (HPLC), thin layer chromatography (TLC) and minicolumn purification (Sep-Pak, SPE). HPLC was shown to be the most effective method for quality control. The use of TLC and SPE is only recommended after sufficient practical labelling experience.

  19. Simplified Predictive Models for CO2 Sequestration Performance Assessment: Research Topical Report on Task #4 - Reduced-Order Method (ROM) Based Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Jin, Larry; He, Jincong

    2015-06-30

    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
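
    The POD half of POD-TPWL reduces, in its simplest form, to an SVD of a snapshot matrix assembled from training-run states; the sketch below uses random placeholder snapshots in place of full-order simulator output and omits the TPWL linearization entirely.

```python
# POD basis from a snapshot matrix via SVD.  Snapshots here are random
# placeholders standing in for full-order simulator states; the TPWL
# part of POD-TPWL (linearization around training trajectories) is not shown.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_snapshots, n_modes = 5000, 120, 20
snapshots = rng.standard_normal((n_cells, n_snapshots))   # placeholder states

mean_state = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_state, full_matrices=False)
basis = U[:, :n_modes]                                     # reduced basis Phi

# Reduced coordinates: xi = Phi^T (x - x_mean); reconstruction: x_mean + Phi xi.
x = snapshots[:, [0]]
xi = basis.T @ (x - mean_state)
x_rec = mean_state + basis @ xi
rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"reduced dimension = {xi.shape[0]}, reconstruction error = {rel_err:.3f}")
```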

  20. Ordering of Z-numbers

    NASA Astrophysics Data System (ADS)

    Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah

    2017-08-01

    The concept of the Z-number, introduced by Zadeh in 2010, has attracted the attention of many researchers due to its numerous applications in the area of Computing with Words (CWW). A Z-number is an ordered pair of fuzzy numbers, (A, R), where A essentially plays the role of a fuzzy restriction on a real-valued uncertain variable and R is a measure of reliability of the first component. Besides its theoretical development, Z-numbers have been successfully applied to decision making problems under uncertain environments. In any decision making evaluation using Z-numbers, ideally the final outcome of the calculation should also be a Z-number. A question then arises: how do we order Z-numbers so that the preferences among the alternatives can be ranked appropriately? In this paper, we propose a method of ordering Z-numbers via a transformation of the Z-numbers to fuzzy numbers. The Z-numbers are then ranked using a fuzzy number ranking method. The proposed method is tested on several combinations of Z-numbers to investigate its effectiveness. The effect of different values of A and R on the ordering of Z-numbers is analyzed and discussed.
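
    One widely cited way of reducing a Z-number (A, R) to an ordinary fuzzy number, used by several ranking schemes although not necessarily the transformation proposed in this work, is to defuzzify the reliability R into a crisp weight and dilate A by its square root; a sketch for triangular fuzzy numbers with illustrative values follows.

```python
# Reduction of a Z-number (A, R) to a fuzzy number by defuzzifying R
# into a weight alpha (centroid) and dilating A by sqrt(alpha).  This
# follows a commonly cited conversion, not necessarily this paper's
# proposal; the triangular numbers below are illustrative.
def centroid_triangular(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

def z_to_fuzzy(A, R):
    alpha = centroid_triangular(R)        # crisp reliability weight
    s = alpha ** 0.5
    return tuple(s * x for x in A), alpha

A = (0.4, 0.6, 0.8)    # restriction, e.g. "around 0.6"
R = (0.7, 0.8, 0.9)    # reliability, e.g. "likely"
fuzzy, alpha = z_to_fuzzy(A, R)
print(f"alpha = {alpha:.3f}, converted fuzzy number = "
      f"({fuzzy[0]:.3f}, {fuzzy[1]:.3f}, {fuzzy[2]:.3f})")
```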

  1. CMB spectral distortions as solutions to the Boltzmann equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ota, Atsuhisa, E-mail: a.ota@th.phys.titech.ac.jp

    2017-01-01

    We propose to re-interpret the cosmic microwave background spectral distortions as solutions to the Boltzmann equation. This approach makes it possible to solve the second order Boltzmann equation explicitly, with the spectral y distortion and the momentum independent second order temperature perturbation, while generation of μ distortion cannot be explained even at second order in this framework. We also extend our method to higher order Boltzmann equations systematically and find new type spectral distortions, assuming that the collision term is linear in the photon distribution functions, namely, in the Thomson scattering limit. As an example, we concretely construct solutions to the cubic order Boltzmann equation and show that the equations are closed with additional three parameters composed of a cubic order temperature perturbation and two cubic order spectral distortions. The linear Sunyaev-Zel'dovich effect whose momentum dependence is different from the usual y distortion is also discussed in the presence of the next leading order Kompaneets terms, and we show that higher order spectral distortions are also generated as a result of the diffusion process in a framework of higher order Boltzmann equations. The method may be applicable to a wider class of problems and has potential to give a general prescription to non-equilibrium physics.

  2. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  3. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  4. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, where features learned from RGB videos are used for action recognition on depth videos. More specifically, we take three steps to solve this problem. First, video is more complex than a single image because it carries both spatial and temporal information; to better encode this information, the dynamic image method is used to represent each RGB or depth video as a single image, so that most image feature extraction methods can be applied to video. Secondly, because each video can be represented as an image, a standard CNN model can be used for training and testing; in addition, the CNN model can also be used for feature extraction owing to its powerful representational ability. Thirdly, because RGB videos and depth videos belong to two different domains, domain adaptation is used to make the two feature domains more similar, so that the features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and our method achieves more than a 2% accuracy improvement using domain adaptation from RGB to depth action recognition.
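
    The dynamic-image step described above is often implemented with approximate rank pooling; the sketch below uses one commonly quoted closed-form weighting (alpha_t = 2t - T - 1), which is an assumption about this paper's exact variant, applied to a placeholder clip.

```python
# Collapse a video into a single "dynamic image" with an approximate
# rank-pooling weighting alpha_t = 2t - T - 1 (later frames weighted
# more).  Whether the paper uses exactly this closed form is an
# assumption; the clip below is a random placeholder.
import numpy as np

def dynamic_image(frames):
    """frames: (T, H, W, C) array; returns a single (H, W, C) uint8 image."""
    T = frames.shape[0]
    t = np.arange(1, T + 1, dtype=float)
    alpha = 2.0 * t - T - 1.0
    di = np.tensordot(alpha, frames.astype(float), axes=(0, 0))
    di = (di - di.min()) / (np.ptp(di) + 1e-8)   # rescale for visualisation only
    return (255.0 * di).astype(np.uint8)

clip = np.random.randint(0, 256, size=(30, 64, 64, 3), dtype=np.uint8)
print(dynamic_image(clip).shape)   # -> (64, 64, 3)
```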

  5. Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method

    PubMed Central

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.

    2007-01-01

    Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist, however the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation based methods when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results; each having their strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms, Infomax and FastICA, by running the algorithm a number of times with different initializations and note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281

  6. Performance of blind source separation algorithms for fMRI analysis using a group ICA method.

    PubMed

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D

    2007-06-01

    Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.
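
    As an illustration of the spatial-ICA setup evaluated here, the sketch below runs scikit-learn's FastICA on a synthetic (time points x voxels) matrix so that the recovered components are spatial maps; the toy data are placeholders, and this is not the group-ICA pipeline used in the study.

```python
# Toy spatial ICA with scikit-learn's FastICA: rows of the data matrix
# are time points, columns are voxels, and fitting on the transpose
# makes the estimated sources spatial maps.  Synthetic placeholder data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_timepoints, n_voxels, n_components = 150, 2000, 5

spatial_maps = rng.laplace(size=(n_components, n_voxels))       # super-Gaussian sources
time_courses = rng.standard_normal((n_timepoints, n_components))
data = time_courses @ spatial_maps + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
estimated_maps = ica.fit_transform(data.T).T    # (components, voxels): spatial maps
estimated_timecourses = ica.mixing_             # (timepoints, components)
print(estimated_maps.shape, estimated_timecourses.shape)
```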

  7. disLocate: tools to rapidly quantify local intermolecular structure to assess two-dimensional order in self-assembled systems.

    PubMed

    Bumstead, Matt; Liang, Kunyu; Hanta, Gregory; Hui, Lok Shu; Turak, Ayse

    2018-01-24

    Order classification is particularly important in photonics, optoelectronics, nanotechnology, biology, and biomedicine, as self-assembled and living systems tend to be ordered well but not perfectly. Engineering sets of experimental protocols that can accurately reproduce specific desired patterns can be a challenge when (dis)ordered outcomes look visually similar. Robust comparisons between similar samples, especially with limited data sets, need a finely tuned ensemble of accurate analysis tools. Here we introduce our numerical Mathematica package disLocate, a suite of tools to rapidly quantify the spatial structure of a two-dimensional dispersion of objects. The full range of tools available in disLocate give different insights into the quality and type of order present in a given dispersion, accessing the translational, orientational and entropic order. The utility of this package allows for researchers to extract the variation and confidence range within finite sets of data (single images) using different structure metrics to quantify local variation in disorder. Containing all metrics within one package allows for researchers to easily and rapidly extract many different parameters simultaneously, allowing robust conclusions to be drawn on the order of a given system. Quantifying the experimental trends which produce desired morphologies enables engineering of novel methods to direct self-assembly.
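
    One standard local metric for two-dimensional order, not necessarily among the exact set implemented in disLocate, is the bond-orientational order parameter psi_6; the sketch below computes it from each particle's six nearest neighbours on a slightly perturbed triangular lattice.

```python
# Local bond-orientational order parameter psi_6 for a 2-D dispersion:
# psi_6(i) = mean over the 6 nearest neighbours j of exp(6i*theta_ij);
# |psi_6| = 1 for perfect hexagonal packing.  The point pattern is a
# perturbed triangular lattice used only for illustration.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
rows, cols = 20, 20
jj, ii = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
x = ii + 0.5 * (jj % 2)
y = jj * np.sqrt(3.0) / 2.0
points = np.column_stack([x.ravel(), y.ravel()]).astype(float)
points += 0.03 * rng.standard_normal(points.shape)

tree = cKDTree(points)
_, idx = tree.query(points, k=7)          # each point plus its 6 nearest neighbours

psi6 = np.empty(len(points), dtype=complex)
for i, neigh in enumerate(idx):
    bonds = points[neigh[1:]] - points[i]
    angles = np.arctan2(bonds[:, 1], bonds[:, 0])
    psi6[i] = np.mean(np.exp(6j * angles))

print(f"mean |psi_6| = {np.abs(psi6).mean():.3f}  (1 = perfect hexagonal order)")
```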

  8. Dispersion analysis of the Pn -Pn-1DG mixed finite element pair for atmospheric modelling

    NASA Astrophysics Data System (ADS)

    Melvin, Thomas

    2018-02-01

    Mixed finite element methods provide a generalisation of staggered grid finite difference methods with a framework to extend the method to high orders. The ability to generate a high order method is appealing for applications on the kind of quasi-uniform grids that are popular for atmospheric modelling, so that the method retains an acceptable level of accuracy even around special points in the grid. The dispersion properties of such schemes are important to study as they provide insight into the numerical adjustment to imbalance that is an important component in atmospheric modelling. This paper extends the recent analysis of the P2 - P1DG pair, that is a quadratic continuous and linear discontinuous finite element pair, to higher polynomial orders and also spectral element type pairs. In common with the previously studied element pair, and also with other schemes such as the spectral element and discontinuous Galerkin methods, increasing the polynomial order is found to provide a more accurate dispersion relation for the well resolved part of the spectrum but at the cost of a number of unphysical spectral gaps. The effects of these spectral gaps are investigated and shown to have a varying impact depending upon the width of the gap. Finally, the tensor product nature of the finite element spaces is exploited to extend the dispersion analysis into two-dimensions.

  9. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations including the higher-order reliability methods (HORM) for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter I discusses the fundamental definitions of the probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definition of safety index and most probable point of failure are introduced. Efficient ways of computing the safety index with a fewer number of iterations is emphasized. In chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
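
    For the simplest case named above, a linear limit state with independent normal variables, the safety index has a closed form, and a short sketch can make the FORM quantities concrete; the means and standard deviations below are illustrative, and the MPP search needed for nonlinear limit states is not shown.

```python
# FORM for the linear limit state g = R - S with independent normal
# resistance R and load S: beta = (mu_R - mu_S) / sqrt(s_R^2 + s_S^2)
# and Pf = Phi(-beta), checked here against crude Monte Carlo.
# Parameter values are illustrative.
import numpy as np
from scipy.stats import norm

mu_R, sigma_R = 200.0, 20.0    # resistance (assumed units)
mu_S, sigma_S = 150.0, 15.0    # load effect

beta = (mu_R - mu_S) / np.hypot(sigma_R, sigma_S)
pf_form = norm.cdf(-beta)

rng = np.random.default_rng(3)
n = 1_000_000
g = rng.normal(mu_R, sigma_R, n) - rng.normal(mu_S, sigma_S, n)
pf_mc = np.mean(g < 0.0)

print(f"beta = {beta:.3f}, Pf (FORM) = {pf_form:.3e}, Pf (Monte Carlo) = {pf_mc:.3e}")
```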

  10. Robust Control for Microgravity Vibration Isolation using Fixed Order, Mixed H2/Mu Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark

    2003-01-01

    Many space-science experiments need an active isolation system to provide a sufficiently quiescent microgravity environment. Modern control methods provide the potential for both high performance and robust stability in the presence of the parametric uncertainties that are characteristic of microgravity vibration isolation systems. While H2 and H(infinity) methods are well established, neither provides the required levels of attenuation performance and robust stability in a low-order compensator. Mixed H2/H(infinity) controllers provide a means for maximizing robust stability for a given level of mean-square nominal performance while directly optimizing for controller order constraints. This paper demonstrates the benefit of mixed-norm design from the perspective of robustness to parametric uncertainties and controller order for microgravity vibration isolation. A nominal performance metric, analogous to the mu measure for robust stability assessment, is also introduced in order to define an acceptable trade space from which different control methodologies can be compared.

  11. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models have high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to make transfer functions, and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing the nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics only if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell state-of-charge, temperature and C-rate range.

  12. Impacts of Different Soil Texture and Organic Content on Hydrological Performance of Bioretention

    NASA Astrophysics Data System (ADS)

    Gülbaz, Sezar; Melek Kazezyilmaz Alhan, Cevza

    2015-04-01

    The land development and increase in urbanization in a watershed have adverse effects, such as flooding and water pollution, on both surface water and groundwater resources. Low Impact Development (LID) Best Management Practices (BMPs) such as bioretentions, vegetated rooftops, rain barrels, vegetative swales and permeable pavements have been implemented in order to diminish the adverse effects of urbanization. LID-BMP is a land planning method which is used to manage storm water runoff by reducing peak flows while simultaneously improving water quality. The aim of this study is to develop a functional experimental setup, called the Rainfall-Watershed-Bioretention (RWB) System, in order to investigate and quantify the hydrological performance of bioretention. The RWB System is constructed on the Istanbul University Campus and includes an artificial rainfall system, which allows for variable rainfall intensity, a drainage area of controllable size and slope, and bioretention columns with different soil ratios. Four bioretention columns with different soil textures and organic contents are constructed in order to investigate their effects on water quantity. Using the RWB System, the runoff volume, hydrograph, peak flow rate and delay in peak time at the exit of the bioretention columns may be quantified under various rainfalls in order to understand the role of the soil types used in the bioretention columns and of the rainfall intensities. The data obtained from several experiments conducted in the RWB System are employed in establishing a relation among rainfall, surface runoff and flow reduction after bioretention. Moreover, the results are supported by mathematical models in order to explain the physical mechanism of bioretention. The following conclusions are reached based on the analyses carried out in this study: i) Results show that different local soil types in bioretention implementations affect surface runoff and peak flow considerably. ii) Rainfall intensity and duration affect peak flow reduction, the arrival time of the peak and the shape of the hydrograph. iii) A mathematical representation of the relation among the rainfall, the surface runoff over the watershed and the outflow from the bioretention is developed by incorporating the kinematic wave equation into the modified Green-Ampt method. The rainfall intensity in the modified Green-Ampt method is represented by the inflow per unit surface area of the bioretention, which may be obtained from the kinematic wave solution using the measured rainfall data. Variable rainfall cases may be taken into account by using the modified Green-Ampt method. Thus, employing the modified Green-Ampt method helps significantly in understanding and explaining the hydrological mechanism of a bioretention cell, where the Darcy law or the classical Green-Ampt method, which works only under constant rainfall intensities, is inadequate. Consequently, the rainfall is directly related to the outflow through the bioretention. This study discusses only the water quantity aspect of bioretention.
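
    For reference, the classical Green-Ampt step that the modified method builds on can be sketched as below (ponded conditions, constant properties, fixed-point iteration for cumulative infiltration); the coupling to kinematic-wave inflow described above is not reproduced, and the soil parameters are illustrative.

```python
# Classical Green-Ampt sketch: cumulative infiltration F(t) satisfies
#   F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t,
# solved by fixed-point iteration; infiltration rate f = K*(1 + psi*dtheta/F).
# Soil parameters are illustrative; the record's modification (inflow from
# a kinematic-wave solution instead of constant rainfall) is not shown.
import math

K = 1.0        # saturated hydraulic conductivity (cm/h), assumed
psi = 11.0     # wetting-front suction head (cm), assumed
dtheta = 0.3   # moisture deficit (-), assumed

def green_ampt_F(t_hours, tol=1e-8):
    F = max(K * t_hours, 1e-6)       # initial guess
    for _ in range(200):
        F_new = K * t_hours + psi * dtheta * math.log(1.0 + F / (psi * dtheta))
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

for t in (0.25, 0.5, 1.0, 2.0):
    F = green_ampt_F(t)
    f_rate = K * (1.0 + psi * dtheta / F)
    print(f"t = {t:4.2f} h   F = {F:5.2f} cm   f = {f_rate:5.2f} cm/h")
```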

  13. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

    Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  14. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE PAGES

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

    2017-11-21

    Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  15. Periodic solutions of second-order nonlinear difference equations containing a small parameter. III - Perturbation theory

    NASA Technical Reports Server (NTRS)

    Mickens, R. E.

    1986-01-01

    A technique to construct a uniformly valid perturbation series solution to a particular class of nonlinear difference equations is shown. The method allows the determination of approximations to the periodic solutions to these equations. An example illustrating the technique is presented.

  16. Parental Attributions for Success in Managing the Behavior of Children with ADHD

    ERIC Educational Resources Information Center

    Coles, Erika K.; Pelham, William E.; Gnagy, Elizabeth M.

    2010-01-01

    Objective: The current study evaluated the effects of differing intensities of behavior modification and medication on parents' self-reported success in managing their child's misbehavior and the attributions parents gave for success or failure. Method: Children were randomized to receive in counterbalanced orders different levels of behavior…

  17. Matrix effect on vibrational frequencies: Experiments and simulations for HCl and HNgCl (Ng = Kr and Xe)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinowski, Jaroslaw; Räsänen, Markku; Lignell, Antti

    2014-03-07

    We study the environmental effect on molecules embedded in noble-gas (Ng) matrices. The experimental data on HXeCl and HKrCl in Ng matrices is enriched. As a result, the H−Xe stretching bands of HXeCl are now known in four Ng matrices (Ne, Ar, Kr, and Xe), and HKrCl is now known in Ar and Kr matrices. The order of the H−Xe stretching frequencies of HXeCl in different matrices is ν(Ne) < ν(Xe) < ν(Kr) < ν(Ar), which is a non-monotonous function of the dielectric constant, in contrast to the “classical” order observed for HCl: ν(Xe) < ν(Kr) < ν(Ar) < ν(Ne). The order of the H−Kr stretching frequencies of HKrCl is consistently ν(Kr) < ν(Ar). These matrix effects are analyzed theoretically by using a number of quantum chemical methods. The calculations on these molecules (HCl, HXeCl, and HKrCl) embedded in single Ng′ layer cages lead to very satisfactory results with respect to the relative matrix shifts in the case of the MP4(SDQ) method, whereas the B3LYP-D and MP2 methods fail to fully reproduce these experimental results. The obtained order of frequencies is discussed in terms of the size available for the Ng hydrides in the cages, probably leading to different stresses on the embedded molecule. Taking into account vibrational anharmonicity produces a good agreement of the MP4(SDQ) frequencies of HCl and HXeCl with the experimental values in different matrices. This work also highlights a number of open questions in the field.

  18. Automatic Black-Box Model Order Reduction using Radial Basis Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephanson, M B; Lee, J F; White, D A

    Finite element methods have long made use of model order reduction (MOR), particularly in the context of fast frequency sweeps. In this paper, we discuss a black-box MOR technique, applicable to many solution methods and not restricted only to spectral responses. We also discuss automated methods for generating a reduced order model that meets a given error tolerance. Numerical examples demonstrate the effectiveness and wide applicability of the method. With the advent of improved computing hardware and numerous fast solution techniques, the field of computational electromagnetics has progressed rapidly in terms of the size and complexity of problems that can be solved. Numerous applications, however, require the solution of a problem for many different configurations, including optimization, parameter exploration, and uncertainty quantification, where the parameters that may be changed include frequency, material properties, geometric dimensions, etc. In such cases, thousands of solutions may be needed, so solve times of even a few minutes can be burdensome. Model order reduction (MOR) may alleviate this difficulty by creating a small model that can be evaluated quickly. Many MOR techniques have been applied to electromagnetic problems over the past few decades, particularly in the context of fast frequency sweeps. Recent works have extended these methods to allow more than one parameter and to allow the parameters to represent material and geometric properties. There are still limitations with these methods, however. First, they almost always assume that the finite element method is used to solve the problem, so that the system matrix is a known function of the parameters. Second, although some authors have presented adaptive methods (e.g., [2]), the order of the model is often determined before the MOR process begins, with little insight about what order is actually needed to reach the desired accuracy. Finally, it is not clear how to efficiently extend most methods to the multiparameter case. This paper addresses the above shortcomings by developing a method that uses a black-box approach to the solution method, is adaptive, and is easily extensible to many parameters.
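
    As a generic illustration of the interpolation building block behind such black-box surrogates (not the paper's algorithm; the Gaussian kernel, shape parameter and sampled response are all assumptions), the sketch below fits a radial-basis-function interpolant to a handful of sampled responses and evaluates it at new parameter values.

    import numpy as np

    # Minimal radial-basis-function interpolation: solve for weights so that the
    # surrogate reproduces the sampled responses exactly, then evaluate it cheaply
    # at new parameter values. The "response" is a toy function standing in for an
    # expensive field solve; epsilon is an illustrative shape parameter.

    def gaussian_rbf(r, epsilon=3.0):
        return np.exp(-(epsilon * r) ** 2)

    # Sample the expensive model at a few parameter points (e.g., frequencies).
    x_train = np.linspace(0.0, 1.0, 9)
    y_train = np.sin(2.0 * np.pi * x_train)            # stand-in for solver output

    # Solve the interpolation system  Phi w = y.
    Phi = gaussian_rbf(np.abs(x_train[:, None] - x_train[None, :]))
    w = np.linalg.solve(Phi, y_train)

    def surrogate(x):
        phi = gaussian_rbf(np.abs(np.atleast_1d(x)[:, None] - x_train[None, :]))
        return phi @ w

    x_new = np.linspace(0.05, 0.95, 5)                 # points not all in the training set
    print("surrogate:", np.round(surrogate(x_new), 3))
    print("exact    :", np.round(np.sin(2.0 * np.pi * x_new), 3))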

  19. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination results in a precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
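
    A minimal hand-rolled version of an additive Holt-Winters predictor of the kind used as the statistical component is sketched below; the smoothing constants, seasonal period and synthetic series are illustrative assumptions, not values from the paper.

    import numpy as np

    # Additive Holt-Winters (level + trend + seasonal) smoothing and forecasting.
    # Initialization and update equations follow the standard additive form.

    def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=5):
        level = y[:m].mean()
        trend = (y[m:2 * m].mean() - y[:m].mean()) / m
        season = list(y[:m] - level)
        for t in range(len(y)):
            s = season[t % m]
            level_prev = level
            level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
            trend = beta * (level - level_prev) + (1 - beta) * trend
            season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
        # h-step-ahead forecasts
        return [level + h * trend + season[(len(y) + h - 1) % m] for h in range(1, horizon + 1)]

    t = np.arange(120)
    y = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.05 * np.random.default_rng(0).standard_normal(120)
    print(np.round(holt_winters_additive(y, m=12), 3))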

  20. Computational methods and traveling wave solutions for the fourth-order nonlinear Ablowitz-Kaup-Newell-Segur water wave dynamical equation via two methods and its applications

    NASA Astrophysics Data System (ADS)

    Ali, Asghar; Seadawy, Aly R.; Lu, Dianchen

    2018-05-01

    The aim of this article is to construct some new traveling wave solutions and investigate localized structures for the fourth-order nonlinear Ablowitz-Kaup-Newell-Segur (AKNS) water wave dynamical equation. The simple equation method (SEM) and the modified simple equation method (MSEM) are applied in this paper to construct the analytical traveling wave solutions of the AKNS equation. Different wave solutions are derived by assigning special values to the parameters. The obtained results have their importance in the field of physics and other areas of applied sciences. All the solutions are also graphically represented. The constructed results are often helpful for studying several new localized structures and wave interactions in high-dimensional models.

  1. Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2016-07-01

    The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
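
    As a rough illustration of an MST-style binarization step (generic, not the paper's exact pipeline; the connectivity matrix and the weight-to-distance mapping are assumptions), the sketch below keeps only the spanning-tree edges of a weighted connectivity matrix.

    import numpy as np
    import networkx as nx

    # MST-based binarization of a weighted connectivity matrix: strong connections
    # are mapped to short distances, and only the edges of the minimum spanning
    # tree of that distance graph are kept (set to 1). The random symmetric matrix
    # below stands in for an EEG connectivity estimate.

    rng = np.random.default_rng(0)
    n = 8
    W = np.abs(rng.standard_normal((n, n)))
    W = (W + W.T) / 2.0                      # symmetric connectivity weights
    np.fill_diagonal(W, 0.0)

    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            # distance = 1 / weight so that stronger connections are "closer"
            G.add_edge(i, j, distance=1.0 / W[i, j], weight=W[i, j])

    mst = nx.minimum_spanning_tree(G, weight="distance")

    B = np.zeros_like(W)                     # binarized adjacency matrix
    for i, j in mst.edges():
        B[i, j] = B[j, i] = 1.0

    print("edges kept:", int(B.sum() / 2), "out of", n * (n - 1) // 2)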

  2. Wide spectral range multiple-order and half-wave achromatic phase retarders fabricated from two lithium tantalate single crystal plates

    NASA Astrophysics Data System (ADS)

    Emam-Ismail, M.

    2015-11-01

    In a broad spectral range (300-2500 nm), we report the use of channeled spectra formed from the interference of polarized white light to extract the dispersion of the phase birefringence Δnp(λ) of the x- and y-cuts of lithium tantalate (LiTaO3:LT) plates. A new method, named the wavenumber-difference method, is used to extract the spectral behavior of the phase birefringence of the x- and y-cuts of the LT plates. The correctness of the obtained birefringence data is confirmed by using the Jones vector method to recalculate the plate thicknesses. The spectral variation of the phase birefringence Δnp(λ) of the x- and y-cuts of the LT plates is fitted to a Cauchy dispersion function with a relative error for both cuts of order 2.4×10-4. The group birefringence dispersion Δng(λ) of the x- and y-cuts of the LT plates is also calculated and fitted to a Ghosh dispersion function with a relative error for both cuts of order 2.83×10-4. Furthermore, the phase retardation introduced by the x- and y-cuts of the LT plates is calculated. The amount of phase retardation confirms that the x- and y-cuts of the LT plates can act as multiple-order half- and quarter-wave plates working at many different wavelengths across the spectral range 300-2500 nm. For the x- and y-cuts of the LT plates, a large difference between group and phase birefringence is observed at short wavelength (λ=300 nm); this difference progressively diminishes at longer wavelength (λ=2000 nm). In the near-infrared (NIR) region (700-2500 nm), a broad spectral full width at half maximum (FWHM) is observed for either the x- or y-cut of the LT plate, which can then behave as a zero-order wave plate. Finally, an achromatic half-wave plate working at 598 nm and covering a wide spectral range (300-900 nm) is demonstrated experimentally by combining both x- and y-cuts of the LT plates.

  3. Polarimetric signatures of a canopy of dielectric cylinders based on first and second order vector radiative transfer theory

    NASA Technical Reports Server (NTRS)

    Tsang, Leung; Chan, Chi Hou; Kong, Jin Au; Joseph, James

    1992-01-01

    Complete polarimetric signatures of a canopy of dielectric cylinders overlying a homogeneous half space are studied with the first and second order solutions of the vector radiative transfer theory. The vector radiative transfer equations contain a general nondiagonal extinction matrix and a phase matrix. The energy conservation issue is addressed by calculating the elements of the extinction matrix and the elements of the phase matrix in a manner that is consistent with energy conservation. Two methods are used. In the first method, the surface fields and the internal fields of the dielectric cylinder are calculated by using the fields of an infinite cylinder. The phase matrix is calculated and the extinction matrix is calculated by summing the absorption and scattering to ensure energy conservation. In the second method, the method of moments is used to calculate the elements of the extinction and phase matrices. The Mueller matrix based on the first order and second order multiple scattering solutions of the vector radiative transfer equation are calculated. Results from the two methods are compared. The vector radiative transfer equations, combined with the solution based on method of moments, obey both energy conservation and reciprocity. The polarimetric signatures, copolarized and depolarized return, degree of polarization, and phase differences are studied as a function of the orientation, sizes, and dielectric properties of the cylinders. It is shown that second order scattering is generally important for vegetation canopy at C band and can be important at L band for some cases.

  4. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    DOE PAGES

    Bi, Huan-Yu; Wu, Xing-Gang; Ma, Yang; ...

    2015-06-26

    The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the R_δ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R(e+e−) and the Higgs partial width Γ(H → bb̄). Both methods lead to the same resummed (‘conformal’) series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {βi}-terms in the pQCD expansion are taken into account. In addition, we show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.

  5. Biomarkers of Selenium Action in Prostate Cancer

    DTIC Science & Technology

    2005-01-01

    secretory by conventional methods according to published literature. In addition, we have determined the similarities and differences in global gene… transition zone tissue of a 42-year-old man according to previously described methods [4]. The pre… arrays in the resulting data tables were ordered by their… hundred fifteen genes identified by ELISA method. Replicating the conditions used for the SAM analysis showed significant differential expression… microarray

  6. Sleep and wake phase of heart beat dynamics by artificial insymmetrised patterns

    NASA Astrophysics Data System (ADS)

    Dudkowska, A.; Makowiec, D.

    2004-05-01

    In order to determine differences between healthy patients and patients with congestive heart failure, we apply the artificial insymmetrised pattern (AIP) method. The AIP method, by exploiting the human eye's ability to extract regularities and read symmetries in a dot pattern, serves as a tool for qualitative discrimination of heart-rate states.

  7. HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.

    PubMed

    Song, Chi; Tseng, George C

    2014-01-01

    Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (rth ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculation and simulation show better performance of rOP compared to classical Fisher's method, Stouffer's method, the minimum p-value method and the maximum p-value method under the focused hypothesis setting. Theoretically, rOP is found to be connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate rOP as a more generalizable, robust and sensitive statistical framework to detect disease-related markers.
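
    A minimal sketch of the rOP statistic itself is given below; it relies on the standard fact that the rth order statistic of K independent Uniform(0,1) p-values follows a Beta(r, K−r+1) distribution, and the p-value matrix and the choice of r are illustrative, not taken from the paper.

    import numpy as np
    from scipy import stats

    # For each gene, take the r-th smallest p-value across K combined studies.
    # Under the null, that order statistic follows Beta(r, K - r + 1), which
    # converts it into a combined p-value.

    rng = np.random.default_rng(1)
    K = 8                                    # number of studies
    p = rng.uniform(size=(1000, K))          # genes x studies p-value matrix
    p[:50] = rng.uniform(0, 0.01, size=(50, K))   # 50 genes DE in most studies

    r = 6                                    # require evidence in "the majority of studies"
    rop = np.sort(p, axis=1)[:, r - 1]       # r-th ordered p-value per gene
    combined_p = stats.beta.cdf(rop, r, K - r + 1)

    print("smallest combined p-values:", np.round(np.sort(combined_p)[:5], 5))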

  8. Time difference of arrival estimation of microseismic signals based on alpha-stable distribution

    NASA Astrophysics Data System (ADS)

    Jia, Rui-Sheng; Gong, Yue; Peng, Yan-Jun; Sun, Hong-Mei; Zhang, Xing-Li; Lu, Xin-Ming

    2018-05-01

    Microseismic signals are generally considered to follow the Gaussian distribution. A comparison of the dynamic characteristics of the sample variance and the symmetry of microseismic signals with those of signals that follow an α-stable distribution reveals that microseismic signals have obvious pulse characteristics and that the probability density curve of a microseismic signal is approximately symmetric. Thus, the hypothesis that microseismic signals follow the symmetric α-stable distribution is proposed. On the premise of this hypothesis, the characteristic exponent α of the microseismic signals is obtained by utilizing fractional lower-order statistics, and then a new method of time difference of arrival (TDOA) estimation of microseismic signals based on fractional lower-order covariance (FLOC) is proposed. Upon applying this method to the TDOA estimation of Ricker wavelet simulation signals and real microseismic signals, experimental results show that the FLOC method, which is based on the assumption of the symmetric α-stable distribution, leads to enhanced spatial resolution of the TDOA estimation relative to the generalized cross correlation (GCC) method, which is based on the assumption of the Gaussian distribution.
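
    The sketch below illustrates a TDOA estimate built on a fractional lower-order covariance of the form commonly used for impulsive signals; the signals, delay and fractional exponents are assumptions for illustration, not the paper's data or exact estimator.

    import numpy as np

    # TDOA via a fractional lower-order covariance of the form
    # R(tau) = mean( x(t)^<A> * y(t+tau)^<B> ), with z^<p> = |z|^p * sign(z);
    # the exponents A, B should be kept small relative to the tail index.

    def frac_power(z, p):
        return np.sign(z) * np.abs(z) ** p

    def floc_tdoa(x, y, max_lag, A=0.4, B=0.4):
        xs, ys = frac_power(x, A), frac_power(y, B)
        lags = np.arange(-max_lag, max_lag + 1)
        R = np.empty(lags.size)
        for k, lag in enumerate(lags):
            if lag >= 0:
                R[k] = np.mean(xs[: x.size - lag] * ys[lag:])
            else:
                R[k] = np.mean(xs[-lag:] * ys[: y.size + lag])
        return lags[np.argmax(np.abs(R))]

    rng = np.random.default_rng(2)
    n, true_delay = 4000, 25
    s = rng.standard_t(df=1.5, size=n + true_delay)   # heavy-tailed "microseismic" source
    x = s[true_delay:true_delay + n] + 0.1 * rng.standard_normal(n)
    y = s[:n] + 0.1 * rng.standard_normal(n)          # same wavefront arrives 25 samples later
    print("estimated delay:", floc_tdoa(x, y, max_lag=60), "samples (true:", true_delay, ")")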

  9. The Dalgarno-Lewis summation technique: Some comments and examples

    NASA Astrophysics Data System (ADS)

    Mavromatis, Harry A.

    1991-08-01

    The Dalgarno-Lewis technique [A. Dalgarno and J. T. Lewis, ``The exact calculation of long-range forces between atoms by perturbation theory,'' Proc. R. Soc. London Ser. A 233, 70-74 (1955)] provides an elegant method to obtain exact results for various orders in perturbation theory, while avoiding the infinite sums which arise in each order. In the present paper this technique, which perhaps has not been exploited as much as it could be, is first reviewed with attention to some of its not-so-straightforward details, and then six examples of the method are given using three different one-dimensional bases.
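
    For reference, a standard textbook statement of the Dalgarno-Lewis idea (not the paper's specific examples) is that the first-order correction is obtained from an inhomogeneous equation rather than from a sum over unperturbed states:

    \begin{aligned}
      (H_0 - E_0)\,\lvert \psi_1 \rangle &= -\,(V - E_1)\,\lvert \psi_0 \rangle,
      \qquad E_1 = \langle \psi_0 \rvert V \lvert \psi_0 \rangle,\\
      E_2 &= \langle \psi_0 \rvert V \lvert \psi_1 \rangle
      \quad \text{with } \langle \psi_0 \vert \psi_1 \rangle = 0 ,
    \end{aligned}

    so the second-order energy follows from solving a single inhomogeneous equation for |ψ1⟩ instead of evaluating an infinite sum over intermediate states.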

  10. A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.

    2018-06-01

    A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Comparing with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient by avoiding wide stencils on unstructured meshes. Unlike the traditional CPR method where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.

  11. Colorimetric characterization of digital cameras with unrestricted capture settings applicable for different illumination circumstances

    NASA Astrophysics Data System (ADS)

    Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin

    2016-05-01

    With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method that takes the capture settings into account was proposed. The method for calculating the colorimetric values of a measured image contains five main steps, including conversion of the RGB values to equivalent values under the training settings, using factors based on an imaging-system model so as to build a bridge between different settings, and scaling factors applied in the preparation steps of the transformation mapping to avoid errors resulting from the nonlinearity of the polynomial mapping over different ranges of illumination level. The experimental results indicate that the prediction error of the proposed method, measured with the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method for a particular lighting condition.

  12. High-Order Discontinuous Galerkin Level Set Method for Interface Tracking and Re-Distancing on Unstructured Meshes

    NASA Astrophysics Data System (ADS)

    Greene, Patrick; Nourgaliev, Robert; Schofield, Sam

    2015-11-01

    A new sharp high-order interface tracking method for multi-material flow problems on unstructured meshes is presented. The method combines the marker-tracking algorithm with a discontinuous Galerkin (DG) level set method to implicitly track interfaces. DG projection is used to provide a mapping from the Lagrangian marker field to the Eulerian level set field. For the level set re-distancing, we developed a novel marching method that takes advantage of the unique features of the DG representation of the level set. The method efficiently marches outward from the zero level set with values in the new cells being computed solely from cell neighbors. Results are presented for a number of different interface geometries, including ones with sharp corners and multiple hierarchical level sets. The method can robustly handle the level set discontinuities without explicit utilization of solution limiters. Results show that the expected high order (3rd and higher) of convergence for the DG representation of the level set is obtained for smooth solutions on unstructured meshes. High-order re-distancing on irregular meshes is a must for applications where the interfacial curvature is important for the underlying physics, such as surface tension, wetting and detonation shock dynamics. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Information management release number LLNL-ABS-675636.

  13. Reliability of Hypernasality Rating: Comparison of 3 Different Methods for Perceptual Assessment.

    PubMed

    Yamashita, Renata Paciello; Borg, Elisabet; Granqvist, Svante; Lohmander, Anette

    2018-01-01

    To compare reliability in auditory-perceptual assessment of hypernasality for 3 different methods and to explore the influence of language background. Comparative methodological study. Participants and Materials: Audio recordings of 5-year-old Swedish-speaking children with repaired cleft lip and palate consisting of 73 stimuli of 9 nonnasal single-word strings in 3 different randomized orders. Four experienced speech-language pathologists (2 native speakers of Brazilian-Portuguese and 2 native speakers of Swedish) participated as listeners. After individual training, each listener performed the hypernasality rating task. Each order of stimuli was analyzed individually using the 2-step, VISOR and Borg centiMax scale methods. Comparison of intra- and inter-rater reliability, and consistency  for each method within language of the listener and between listener languages (Swedish and Brazilian-Portuguese). Good to excellent intra-rater reliability was found within each listener for all methods, 2-step: κ = 0.59-0.93; VISOR: intraclass correlation coefficient (ICC) = 0.80-0.99; Borg centiMax (cM) scale: ICC = 0.80-1.00. The highest inter-rater reliability was demonstrated for VISOR (ICC = 0.60-0.90) and Borg cM-scale (ICC = 0.40-0.80). High consistency within each method was found with the highest for the Borg cM scale (ICC = 0.89-0.91). There was a significant difference in the ratings between the Swedish and the Brazilian listeners for all methods. The category-ratio scale Borg cM was considered most reliable in the assessment of hypernasality. Language background of Brazilian-Portuguese listeners influenced the perceptual ratings of hypernasality in Swedish speech samples, despite their experience in perceptual assessment of cleft palate speech disorders.

  14. EVALUATION AND IMPORTANCE OF SELECTED MICROBIOLOGICAL METHODS IN THE DIAGNOSIS OF HUMAN BRUCELLOSIS

    PubMed Central

    Šiširak, Maida; Hukić, Mirsada

    2009-01-01

    Brucellosis is an important public health problem in Bosnia and Herzegovina. The diagnosis of brucellosis in a country without any experience with this kind of infection may be very difficult. The aim of this study was to evaluate the diagnostic methods Rose Bengal test, blood culture and ELISA IgM and IgG in patients with brucellosis. The study included 91 brucellosis patients in the period 2004 to 2007. All the patients were treated at the Clinic for Infectious Diseases, University of Sarajevo Clinics Centre. Blood cultures were positive in 28/91 (30.8%) patients. This method often needs a long period of incubation and specimens need to be obtained early. These limitations make serology the most useful tool for the laboratory diagnosis of Brucella infection. Rose Bengal is a rapid plate agglutination test, very sensitive irrespective of the stage of the disease. In our study, the Rose Bengal test was positive in all patients, 91/91 (100.0%). Brucella IgM antibodies with ELISA were positive in 59/91 (64.8%). Brucella IgG antibodies with ELISA were positive in 51/91 (56.0%). In order to determine the diagnostic value of the different tests, we compared the sensitivity among the test methods: Rose Bengal test 100.0%, blood culture 30.8%, ELISA IgM 64.8% and ELISA IgG 56.1%. Sensitivity of the test methods differed in the different stages of illness. It is necessary to use a combination of different tests, such as blood culture, the Rose Bengal test and ELISA, in order to ensure the diagnosis. The Rose Bengal test is excellent for screening. Blood culture is the method of choice for the diagnosis of acute infection. ELISA is a very good method for diagnosing chronic disease and relapse. PMID:19754473

  15. Analysis and diagnosis of basal cell carcinoma (BCC) via infrared imaging

    NASA Astrophysics Data System (ADS)

    Flores-Sahagun, J. H.; Vargas, J. V. C.; Mulinari-Brenner, F. A.

    2011-09-01

    In this work, a structured methodology is proposed and tested through infrared imaging temperature measurements of a healthy control group, to establish expected normality ranges, and of basal cell carcinoma patients (a type of skin cancer) previously diagnosed through biopsies of the affected regions. A conjugated gradients method is proposed to compare measured dimensionless temperature difference values (Δθ) between two symmetric regions of the patient's body; the method takes into account the skin, surrounding ambient and individual core temperatures, so that interpretation of the results for different individuals becomes simple and nonsubjective. The range of normal temperatures in different regions of the body was determined for seven healthy individuals, and, admitting that the human skin exhibits a unimodal normal distribution, the normal range for each region was taken as the mean dimensionless temperature difference plus or minus twice the standard deviation of the measurements (Δθ±2σ), in order to represent 95% of the population. Eleven patients with basal cell carcinoma previously diagnosed through biopsies were examined with the method, which was capable of detecting skin abnormalities in all cases. Therefore, the conjugated gradients method was considered effective in the identification of basal cell carcinoma through infrared imaging, even with the use of a camera of low optical resolution (160 × 120 pixels) and a thermal resolution of 0.1 °C. The method could also be used to scan a larger area around the lesion in order to detect the presence of other lesions not yet perceptible in the clinical exam. However, it is necessary that a mesh-like mapping of temperature differences of the healthy human body skin be produced, so that the patient's Δθ can be compared with the exact region of such a mapping in order to possibly make a more effective diagnosis. Finally, the infrared image analyzed through the conjugated gradients method could be useful in defining a better safety margin for the surgical removal of the lesion, both minimizing esthetic damage to the patient and possibly avoiding basal cell carcinoma recurrence.

  16. A meta-analysis of in vitro antibiotic synergy against Acinetobacter baumannii.

    PubMed

    March, Gabriel A; Bratos, Miguel A

    2015-12-01

    The aim of the work was to describe the different in vitro models for testing synergism of antibiotics and to gather the results of antibiotic synergy against multidrug-resistant Acinetobacter baumannii (MDR-Ab). The original articles were obtained from different web sites. In order to compare the results obtained by the different methods for synergy testing, the Pearson chi-square and Fisher exact tests were used. Moreover, a non-parametric chi-square test was used in order to compare the frequency distribution in each analysed manuscript. In the current meta-analysis, 24 manuscripts, which encompassed 2016 tests of in vitro synergism of different antimicrobials against MDR-Ab, were reviewed. Checkerboard synergy testing was used in 11 studies, encompassing 1086 tests (53.9%); time-kill assays were applied in 12 studies, encompassing 359 tests (17.8%); gradient diffusion methods were used in seven studies, encompassing 293 tests (14.5%); and, finally, time-kill plus checkerboard were applied in two studies, encompassing 278 tests (13.8%). By comparing these data, checkerboard and time-kill methods were significantly more used than gradient diffusion methods (p<0.005). Regarding synergy rates obtained on the basis of the applied method, checkerboard provided 227 tests (20.9%) with a synergistic effect; time-kill assays yielded 222 tests (61.8%) with a synergistic effect; gradient diffusion methods provided only 29 tests (9.9%) with a synergistic effect; and, finally, time-kill plus checkerboard yielded just 15 tests (5.4%) with a synergistic effect. When comparing these percentages, synergy rates reported by time-kill methods were significantly higher than those obtained by checkerboard and gradient diffusion methods (p<0.005). On the basis of the reviewed data, combinations of a bactericidal antibiotic plus Tigecycline, Vancomycin or Teicoplanin are not recommended. The best combinations of antibiotics are those which include bactericidal antibiotics such as Carbapenems, Fosfomycin, Amikacin, Polymyxins, Rifampicin and Ampicillin/Sulbactam. Copyright © 2015. Published by Elsevier B.V.

  17. Joint modelling rationale for chained equations

    PubMed Central

    2014-01-01

    Background Chained equations imputation is widely used in medical research. It uses a set of conditional models, so is more flexible than joint modelling imputation for the imputation of different types of variables (e.g. binary, ordinal or unordered categorical). However, chained equations imputation does not correspond to drawing from a joint distribution when the conditional models are incompatible. Concurrently with our work, other authors have shown the equivalence of the two imputation methods in finite samples. Methods Taking a different approach, we prove, in finite samples, sufficient conditions for chained equations and joint modelling to yield imputations from the same predictive distribution. Further, we apply this proof in four specific cases and conduct a simulation study which explores the consequences when the conditional models are compatible but the conditions otherwise are not satisfied. Results We provide an additional “non-informative margins” condition which, together with compatibility, is sufficient. We show that the non-informative margins condition is not satisfied, despite compatible conditional models, in a situation as simple as two continuous variables and one binary variable. Our simulation study demonstrates that as a consequence of this violation order effects can occur; that is, systematic differences depending upon the ordering of the variables in the chained equations algorithm. However, the order effects appear to be small, especially when associations between variables are weak. Conclusions Since chained equations is typically used in medical research for datasets with different types of variables, researchers must be aware that order effects are likely to be ubiquitous, but our results suggest they may be small enough to be negligible. PMID:24559129

  18. [An attempt for standardization of serum CA19-9 levels, in order to dissolve the gap between three different methods].

    PubMed

    Hayashi, Kuniki; Hoshino, Tadashi; Yanai, Mitsuru; Tsuchiya, Tatsuyuki; Kumasaka, Kazunari; Kawano, Kinya

    2004-06-01

    It is well known that serious method-related differences exist in serum CA19-9 results, and the necessity of standardization has been pointed out. In this study, the differences in serum tumor marker CA19-9 levels obtained with various immunoassay kits (CLEIA, FEIA, LPIA and RIA) were evaluated in sixty-seven clinical samples and five calibrators, and the possibility of improving the inter-methodological differences was examined not only for clinical samples but also for calibrators. We defined an assumed standard material by means of a calibrator, and recalculated the serum CA19-9 levels for the three different measurement methods using this assumed standard material. It is suggested that CA19-9 values approximated by recalculation with the assumed standard material would be able to correct between-method and between-laboratory discrepancies, in particular systematic errors.

  19. Comparative evaluation of adsorption kinetics of diclofenac and isoproturon by activated carbon.

    PubMed

    Torrellas, Silvia A; Rodriguez, Araceli R; Escudero, Gabriel O; Martín, José María G; Rodriguez, Juan G

    2015-01-01

    Adsorption mechanism of diclofenac and isoproturon onto activated carbon has been proposed using Langmuir and Freundlich isotherms. Adsorption capacity and optimum adsorption isotherms were predicted by nonlinear regression method. Different kinetic equations, pseudo-first-order, pseudo-second-order, intraparticle diffusion model and Bangham kinetic model, were applied to study the adsorption kinetics of emerging contaminants on activated carbon in two aqueous matrices.
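
    For reference, the standard forms of the kinetic models named above (generic textbook expressions, not fitted results from this study) are:

    \begin{aligned}
      \frac{dq_t}{dt} &= k_1\,(q_e - q_t) &&\Rightarrow\;\; \ln(q_e - q_t) = \ln q_e - k_1 t
        &&\text{(pseudo-first-order, Lagergren)},\\
      \frac{dq_t}{dt} &= k_2\,(q_e - q_t)^2 &&\Rightarrow\;\; \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
        &&\text{(pseudo-second-order)},\\
      q_t &= k_{id}\, t^{1/2} + C &&
        &&\text{(intraparticle diffusion, Weber--Morris)},
    \end{aligned}

    where q_t and q_e are the amounts adsorbed at time t and at equilibrium, and k_1, k_2 and k_id are the corresponding rate constants.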

  20. Modelling and simulation of a heat exchanger

    NASA Technical Reports Server (NTRS)

    Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.

    1991-01-01

    Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping a heat exchanger model, a good approximate model of high system order is produced. Model reduction techniques are then applied to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure a valid simulation result.

  1. An efficient numerical technique for calculating thermal spreading resistance

    NASA Technical Reports Server (NTRS)

    Gale, E. H., Jr.

    1977-01-01

    An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.

  2. An Adaptive Moving Target Imaging Method for Bistatic Forward-Looking SAR Using Keystone Transform and Optimization NLCS.

    PubMed

    Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu

    2017-01-23

    Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial-variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying keystone transform over the whole received echo, and then, the relationships among the unknown high-order RCM, the nonlinear spatial-variances of the Doppler parameters, and the speed of the mover, are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but also the nonlinear spatial-variances of the Doppler parameters can be balanced. At last, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.

  3. Mueller matrix imaging study to detect the dental demineralization

    NASA Astrophysics Data System (ADS)

    Chen, Qingguang; Shen, Huanbo; Wang, Binqiang

    2018-01-01

    The Mueller matrix is an optical quantity that non-invasively reveals structural information about anisotropic materials. Dental tissue has an ordered structure, including enamel prisms and dentinal tubules. The ordered structure of the tooth surface is destroyed by demineralization, so this structural information can reflect dental demineralization. In this paper, an experimental setup was built to obtain Mueller matrix images based on the dual-wave-plate rotation method. Two linear polarizers and two quarter-wave plates were rotated by motorized rotation stages to capture 16 images at different combinations of polarization states, from which the Mueller matrix image can be calculated. On this basis, the depolarization index, diattenuation index and retardance of the Mueller matrix were analyzed by the Lu-Chipman polarization decomposition method. Mueller matrix images of artificially demineralized enamel at different stages were analyzed, and the results show the possibility of detecting dental demineralization using Mueller matrix imaging.

  4. Parallel Computing of Upwelling in a Rotating Stratified Flow

    NASA Astrophysics Data System (ADS)

    Cui, A.; Street, R. L.

    1997-11-01

    A code for three-dimensional, unsteady, incompressible, turbulent flow has been implemented on the IBM SP2 using message passing. The effects of rotation and variable density are included. A finite volume method is used to discretize the Navier-Stokes equations in general curvilinear coordinates on a non-staggered grid. All spatial derivatives are approximated using second-order central differences, with the exception of the convection terms, which are handled with special upwind-difference schemes. The semi-implicit, second-order accurate time-advancement scheme employs the Adams-Bashforth method for the explicit terms and Crank-Nicolson for the implicit terms. A multigrid method, with four-color ZEBRA as smoother, is used to solve the Poisson equation for pressure, while the momentum equations are solved with an approximate factorization technique. The code was successfully validated for a variety of test cases. Simulations of a laboratory model of coastal upwelling in a rotating annulus are in progress and will be presented.
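
    A minimal one-dimensional analogue of this space-time discretization (illustrative grid, coefficients and time step; not the actual code) is sketched below: second-order central differences in space, second-order Adams-Bashforth for the explicit advective term and Crank-Nicolson for the implicit diffusive term, on a periodic domain.

    import numpy as np

    # 1-D advection-diffusion u_t + c u_x = nu u_xx with periodic boundaries.
    N, L = 64, 1.0
    dx = L / N
    x = np.arange(N) * dx
    c, nu, dt, nsteps = 1.0, 1e-3, 2e-3, 500

    # Periodic second-order central-difference operators (super-diagonal picks u[i+1]).
    I = np.eye(N)
    Dx = (np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / (2 * dx)        # d/dx
    Dxx = (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1)) / dx**2  # d2/dx2

    u = np.sin(2 * np.pi * x)          # initial condition
    adv_old = -c * (Dx @ u)            # bootstrap AB2 with a forward-Euler first step
    LHS = I - 0.5 * dt * nu * Dxx      # Crank-Nicolson left-hand side

    for n in range(nsteps):
        adv = -c * (Dx @ u)
        rhs = u + dt * (1.5 * adv - 0.5 * adv_old) + 0.5 * dt * nu * (Dxx @ u)
        u = np.linalg.solve(LHS, rhs)
        adv_old = adv

    print("max |u| after", nsteps, "steps:", float(np.max(np.abs(u))))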

  5. Multiscale recurrence quantification analysis of order recurrence plots

    NASA Astrophysics Data System (ADS)

    Xu, Mengjia; Shang, Pengjian; Lin, Aijing

    2017-03-01

    In this paper, we propose a new method, multiscale recurrence quantification analysis (MSRQA), to analyze the structure of order recurrence plots. MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), MSRQA can show richer and more recognizable information on the local characteristics of diverse systems and successfully describes their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ considerably from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and for revealing some of their inherent differences.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Petersson, N. A.; Rodgers, A.

    Acoustic waveform modeling is a computationally intensive task, and full three-dimensional simulations are often impractical for some geophysical applications such as long-range wave propagation and high-frequency sound simulation. In this study, we develop a two-dimensional high-order accurate finite-difference code for acoustic wave modeling. We solve the linearized Euler equations by discretizing them with sixth order accurate finite difference stencils away from the boundary and a third order summation-by-parts (SBP) closure near the boundary. Non-planar topographic boundaries are resolved by formulating the governing equations in curvilinear coordinates following the interface. We verify the implementation of the algorithm by numerical examples and demonstrate the capability of the proposed method for practical acoustic wave propagation problems in the atmosphere.
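
    As a quick reference for the interior discretization, the sketch below checks the standard sixth-order central-difference stencil for the first derivative on a smooth test function; it is a generic illustration, not the study's code.

    import numpy as np

    # Standard sixth-order central-difference stencil for the first derivative:
    # f'(x) ≈ [ -f(x-3h) + 9 f(x-2h) - 45 f(x-h) + 45 f(x+h) - 9 f(x+2h) + f(x+3h) ] / (60 h)

    def d1_sixth_order(f, h):
        c = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / (60.0 * h)
        d = np.zeros_like(f)
        for k, ck in zip(range(-3, 4), c):
            d[3:-3] += ck * f[3 + k : f.size - 3 + k]
        return d[3:-3]                      # derivative at interior points only

    h = 0.01
    x = np.arange(0.0, 2.0 * np.pi, h)
    err = np.max(np.abs(d1_sixth_order(np.sin(x), h) - np.cos(x)[3:-3]))
    print("max interior error:", err)       # expect an error on the order of h**6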

  7. An Upwind Solver for the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2011-01-01

    An upwind solver is presented for the unstructured grid National Combustion Code (NCC). The compressible Navier-Stokes equations with time-derivative preconditioning and preconditioned flux-difference splitting of the inviscid terms are used. First order derivatives are computed on cell faces and used to evaluate the shear stresses and heat fluxes. A new flux limiter uses these same first order derivatives in the evaluation of left and right states used in the flux-difference splitting. The k-epsilon turbulence equations are solved with the same second-order method. The new solver has been installed in a recent version of NCC and the resulting code has been tested successfully in 2D on two laminar cases with known solutions and one turbulent case with experimental data.

  8. Detecting brain tumor in computed tomography images using Markov random fields and fuzzy C-means clustering techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdulbaqi, Hayder Saad; Department of Physics, College of Education, University of Al-Qadisiya, Al-Qadisiya; Jafri, Mohd Zubir Mat

    Brain tumors are abnormal growths of tissue in the brain. They may arise in people of any age. They must be detected early, diagnosed accurately, monitored carefully and treated effectively in order to optimize patient outcomes regarding both survival and quality of life. Manual segmentation of brain tumors from CT scan images is a challenging and time-consuming task. Accurate detection of brain tumor size and location plays a vital role in the successful diagnosis and treatment of tumors. Brain tumor detection is considered a challenging task in medical image processing. The aim of this paper is to introduce a scheme for tumor detection in CT scan images using two different techniques, Hidden Markov Random Fields (HMRF) and Fuzzy C-means (FCM). The proposed method developed in this research constructs a hybrid of HMRF and thresholding. These methods have been applied to 4 different patient data sets. The comparison among these methods shows that the proposed method gives good results for brain tissue detection and is more robust and effective compared with the FCM technique.
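
    A minimal from-scratch fuzzy C-means implementation is sketched below as a generic illustration of the FCM component; the synthetic one-dimensional intensity data and all parameter values are assumptions, not the paper's CT data or hybrid HMRF scheme.

    import numpy as np

    # Standard fuzzy C-means: alternate between updating cluster centers and
    # fuzzy memberships. m is the usual fuzzifier.

    def fcm(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], n_clusters))
        U /= U.sum(axis=1, keepdims=True)            # random initial memberships
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        return centers, U

    # Two synthetic intensity populations standing in for "tumor" and "background".
    rng = np.random.default_rng(1)
    X = np.concatenate([rng.normal(40, 5, 300), rng.normal(120, 10, 100)])[:, None]
    centers, U = fcm(X, n_clusters=2)
    print("cluster centers:", np.round(centers.ravel(), 1))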

  9. Texture analysis with statistical methods for wheat ear extraction

    NASA Astrophysics Data System (ADS)

    Bakhouche, M.; Cointault, F.; Gouton, P.

    2007-01-01

    In the agronomic domain, the simplification of crop counting, necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to conceive a mobile robot for natural image acquisition directly in a field, Arvalis first proposed that we detect the number of wheat ears in images by image processing before counting them, which will provide the first component of the yield. In this paper we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods, which have been applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image. The K-means algorithm is implemented before the choice of a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.

  10. Driver behavior profiling: An investigation with different smartphone sensors and machine learning

    PubMed Central

    Ferreira, Jair; Carvalho, Eduardo; Ferreira, Bruno V.; de Souza, Cleidson; Suhara, Yoshihiko; Pentland, Alex

    2017-01-01

    Driver behavior impacts traffic safety, fuel/energy consumption and gas emissions. Driver behavior profiling tries to understand and positively impact driver behavior. Usually driver behavior profiling tasks involve automated collection of driving data and application of computer models to generate a classification that characterizes the driver aggressiveness profile. Different sensors and classification methods have been employed in this task, however, low-cost solutions and high performance are still research targets. This paper presents an investigation with different Android smartphone sensors, and classification algorithms in order to assess which sensor/method assembly enables classification with higher performance. The results show that specific combinations of sensors and intelligent methods allow classification performance improvement. PMID:28394925

  11. Treatment alternatives of slaughterhouse wastes, and their effect on the inactivation of different pathogens: a review.

    PubMed

    Franke-Whittle, Ingrid H; Insam, Heribert

    2013-05-01

    Slaughterhouse wastes are a potential reservoir of bacterial, viral, prion and parasitic pathogens, capable of infecting both animals and humans. A quick, cost effective and safe disposal method is thus essential in order to reduce the risk of disease following animal slaughter. Different methods for the disposal of such wastes exist, including composting, anaerobic digestion (AD), alkaline hydrolysis (AH), rendering, incineration and burning. Composting is a disposal method that allows a recycling of the slaughterhouse waste nutrients back into the earth. The high fat and protein content of slaughterhouse wastes mean however, that such wastes are an excellent substrate for AD processes, resulting in both the disposal of wastes, a recycling of nutrients (soil amendment with sludge), and in methane production. Concerns exist as to whether AD and composting processes can inactivate pathogens. In contrast, AH is capable of the inactivation of almost all known microorganisms. This review was conducted in order to compare three different methods of slaughterhouse waste disposal, as regards to their ability to inactivate various microbial pathogens. The intention was to investigate whether AD could be used for waste disposal (either alone, or in combination with another process) such that both energy can be obtained and potentially hazardous materials be disposed of.

  12. Chitosan from shrimp shells: A renewable sorbent applied to the clean-up step of the QuEChERS method in order to determine multi-residues of veterinary drugs in different types of milk.

    PubMed

    Arias, Jean Lucas de Oliveira; Schneider, Antunielle; Batista-Andrade, Jahir Antonio; Vieira, Augusto Alves; Caldas, Sergiane Souza; Primel, Ednei Gilberto

    2018-02-01

    Clean extracts are essential in LC-MS/MS, since the matrix effect can interfere with the analysis. Alternative materials which can be used as sorbents in the clean-up step, such as chitosan, are cheap and green options. In this study, chitosan from shrimp shell waste was evaluated as a sorbent in the QuEChERS method in order to determine multi-residues of veterinary drugs in different types of milk, i.e., fatty matrices. After optimization, the method showed correlation coefficients above 0.99, LOQs ranged between 1 and 50 μg kg-1, and recoveries ranged between 62 and 125%, with RSD <20% for all veterinary drugs in all types of milk under study. The clean-up step employing chitosan proved to be effective, since it reduced both the matrix effect (from values between -40 and -10% to values from -10 to +10%) and the extract turbidity (up to 95%). When the proposed method was applied to different milk samples, residues of albendazole (49 μg kg-1), sulfamethazine (

  13. Prediction of oral disintegration time of fast disintegrating tablets using texture analyzer and computational optimization.

    PubMed

    Szakonyi, G; Zelkó, R

    2013-05-20

    One of the promising approaches to predict the in vivo disintegration time of orally disintegrating tablets (ODT) is the use of a texture analyzer instrument. Once the method is able to provide good in vitro-in vivo correlation (IVIVC) for different tablets, it might be able to predict the oral disintegration time of similar products. However, many tablet parameters influence the in vivo and in vitro disintegration times of ODT products. Therefore, the measured in vitro and in vivo disintegration times can occasionally differ, even if they coincide for most of the investigated products, and the in vivo disintegration times may also change if the target patient group suffers from a particular illness. If the method is no longer able to provide good IVIVC, then the modification of a single instrumental parameter may not be successful, and the in vitro method must be re-set in a complex manner in order to provide satisfactory results. In the present experiment, an optimization process was developed based on texture analysis measurements using five different tablets in order to predict their in vivo disintegration times, and the optimized texture analysis method was evaluated using independent tablets. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Treatment alternatives of slaughterhouse wastes, and their effect on the inactivation of different pathogens: A review

    PubMed Central

    2013-01-01

    Slaughterhouse wastes are a potential reservoir of bacterial, viral, prion and parasitic pathogens capable of infecting both animals and humans. A quick, cost-effective and safe disposal method is thus essential in order to reduce the risk of disease following animal slaughter. Different methods for the disposal of such wastes exist, including composting, anaerobic digestion (AD), alkaline hydrolysis (AH), rendering, incineration and burning. Composting is a disposal method that allows the slaughterhouse waste nutrients to be recycled back into the soil. The high fat and protein content of slaughterhouse wastes means, however, that such wastes are an excellent substrate for AD processes, resulting in the disposal of wastes, the recycling of nutrients (soil amendment with sludge) and the production of methane. Concerns exist as to whether AD and composting processes can inactivate pathogens. In contrast, AH is capable of inactivating almost all known microorganisms. This review was conducted in order to compare three different methods of slaughterhouse waste disposal with regard to their ability to inactivate various microbial pathogens. The intention was to investigate whether AD could be used for waste disposal (either alone or in combination with another process) such that energy can be obtained and potentially hazardous materials disposed of. PMID:22694189

  15. Ionization energies and electron affinities from a random-phase-approximation many-body Green's-function method including exchange interactions

    NASA Astrophysics Data System (ADS)

    Heßelmann, Andreas

    2017-06-01

    A many-body Green's-function method employing an infinite order summation of ring and exchange-ring contributions to the self-energy is presented. The individual correlation and relaxation contributions to the quasiparticle energies are calculated using an iterative scheme which utilizes density fitting of the particle-hole, particle-particle and hole-hole densities. It is shown that the ionization energies and electron affinities of this approach agree better with highly accurate coupled-cluster singles and doubles with perturbative triples energy difference results than those obtained with second-order Green's-function approaches. An analysis of the correlation and relaxation terms of the self-energy for the direct- and exchange-random-phase-approximation (RPA) Green's-function methods shows that the inclusion of exchange interactions leads to a reduction of the two contributions in magnitude. These differences, however, strongly cancel each other when summing the individual terms to the quasiparticle energies. Due to this, the direct- and exchange-RPA methods perform similarly for the description of ionization energies (IPs) and electron affinities (EAs). The coupled-cluster reference IPs and EAs, if corrected to the adiabatic energy differences between the neutral and charged molecules, were shown to be in very good agreement with experimental measurements.

  16. Method of moments for the dilute granular flow of inelastic spheres

    NASA Astrophysics Data System (ADS)

    Strumendo, Matteo; Canu, Paolo

    2002-10-01

    Some peculiar features of granular materials (smooth, identical spheres) in rapid flow are the normal pressure differences and the related anisotropy of the velocity distribution function f(1). Kinetic theories have been proposed that account for the anisotropy, mostly based on a generalization of the Chapman-Enskog expansion [N. Sela and I. Goldhirsch, J. Fluid Mech. 361, 41 (1998)]. In the present paper, we approach the problem differently by means of the method of moments; similar theories have previously been constructed for the nearly elastic behavior of granular matter but were not able to predict the normal pressure differences. To overcome these restrictions, we approximate f(1) by a truncated series expansion in Hermite polynomials around the Maxwellian distribution function. We used the approximated f(1) to evaluate the collisional source term and calculated all the resulting integrals; the difference in the mean velocity of the two colliding particles has also been taken into account. To simulate the granular flows, all the second-order moment balances are considered together with the mass and momentum balances. In the balance equations of the Nth-order moments, the (N+1)th-order moments (and their derivatives) appear: we therefore introduced closure equations to express them as functions of lower-order moments by a generalization of the "elementary kinetic theory," instead of the classical procedure of neglecting the (N+1)th-order moments and their derivatives. We applied the model to the translational flow on an inclined chute, obtaining the profiles of the solid volumetric fraction, the mean velocity, and all the second-order moments. The theoretical results have been compared with experimental data [E. Azanza, F. Chevoir, and P. Moucheront, J. Fluid Mech. 400, 199 (1999); T. G. Drake, J. Fluid Mech. 225, 121 (1991)] and all the features of the flow are reproduced by the model: the decreasing exponential profile of the solid volumetric fraction, the parabolic shape of the mean velocity, and the constancy of the granular temperature and of its components. In addition, the model predicts the normal pressure differences typical of granular materials.

  17. Oxygen consumption rates by different oenological tannins in a model wine solution.

    PubMed

    Pascual, Olga; Vignault, Adeline; Gombau, Jordi; Navarro, Maria; Gómez-Alonso, Sergio; García-Romero, Esteban; Canals, Joan Miquel; Hermosín-Gutíerrez, Isidro; Teissedre, Pierre-Louis; Zamora, Fernando

    2017-11-01

    The kinetics of oxygen consumption by different oenological tannins were measured in a model wine solution using a non-invasive method based on luminescence. The results indicate that the oxygen consumption rate follows second-order kinetics, depending on the tannin and oxygen concentrations. They also confirm that the oxygen consumption rate is influenced by temperature in accordance with the Arrhenius law. The indications are that ellagitannins are the fastest oxygen consumers of the different oenological tannins, followed in decreasing order by quebracho tannins, skin tannins, seed tannins and finally gallotannins. This methodology can therefore be proposed as an index for determining the effectiveness of different commercial tannins in protecting wines against oxidation. Copyright © 2017 Elsevier Ltd. All rights reserved.
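
    Read literally, the "second-order kinetics depending on tannin and oxygen concentrations" together with the Arrhenius temperature dependence reported above corresponds to a rate law of roughly the following form; this is a hedged reconstruction for orientation, not the authors' fitted model, and the symbols k, A and Ea are generic:

    \[
    -\frac{d[\mathrm{O_2}]}{dt} \;=\; k(T)\,[\mathrm{Tannin}]\,[\mathrm{O_2}],
    \qquad k(T) \;=\; A\,\exp\!\left(-\frac{E_a}{RT}\right).
    \]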

  18. TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis

    PubMed Central

    Ji, Zhicheng; Ji, Hongkai

    2016-01-01

    When analyzing single-cell RNA-seq data, constructing a pseudo-temporal path to order cells based on the gradual transition of their transcriptomes is a useful way to study gene expression dynamics in a heterogeneous cell population. Currently, a limited number of computational tools are available for this task, and quantitative methods for comparing different tools are lacking. Tools for Single Cell Analysis (TSCAN) is a software tool developed to better support in silico pseudo-Time reconstruction in Single-Cell RNA-seq ANalysis. TSCAN uses a cluster-based minimum spanning tree (MST) approach to order cells. Cells are first grouped into clusters and an MST is then constructed to connect cluster centers. Pseudo-time is obtained by projecting each cell onto the tree, and the ordered sequence of cells can be used to study dynamic changes of gene expression along the pseudo-time. Clustering cells before MST construction reduces the complexity of the tree space. This often leads to improved cell ordering. It also allows users to conveniently adjust the ordering based on prior knowledge. TSCAN has a graphical user interface (GUI) to support data visualization and user interaction. Furthermore, quantitative measures are developed to objectively evaluate and compare different pseudo-time reconstruction methods. TSCAN is available at https://github.com/zji90/TSCAN and as a Bioconductor package. PMID:27179027
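
    The cluster-then-MST ordering described above can be sketched in a few lines of Python. The snippet below is a minimal illustration of the idea using scikit-learn and SciPy, not the TSCAN package itself; the cluster count, the tie-breaking rule and all variable names are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order

def mst_pseudotime(expr, n_clusters=5, random_state=0):
    """Cluster cells, connect cluster centers with an MST, and order cells
    along a walk of the tree -- a toy version of the cluster-based MST idea."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(expr)
    centers = km.cluster_centers_

    # MST over cluster centers (dense Euclidean distances -> sparse tree).
    mst = minimum_spanning_tree(cdist(centers, centers))
    # Walk the tree from an (arbitrarily chosen) starting cluster.
    order, _ = depth_first_order(mst, i_start=0, directed=False)

    # Rank clusters by their position in the walk; within a cluster, break ties
    # by each cell's distance to its own cluster center (a crude surrogate for
    # projecting cells onto the tree edges).
    rank = {c: r for r, c in enumerate(order)}
    cell_rank = np.array([rank[c] for c in km.labels_], dtype=float)
    dist_to_own = np.linalg.norm(expr - centers[km.labels_], axis=1)
    tie_break = dist_to_own / (dist_to_own.max() + 1e-12)
    return np.argsort(np.argsort(cell_rank + 0.5 * tie_break))

# Example on synthetic data: 300 "cells" x 50 "genes" drifting along a trajectory.
rng = np.random.default_rng(1)
expr = np.cumsum(rng.normal(size=(300, 50)), axis=0)
print(mst_pseudotime(expr)[:10])
```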

  19. TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis.

    PubMed

    Ji, Zhicheng; Ji, Hongkai

    2016-07-27

    When analyzing single-cell RNA-seq data, constructing a pseudo-temporal path to order cells based on the gradual transition of their transcriptomes is a useful way to study gene expression dynamics in a heterogeneous cell population. Currently, a limited number of computational tools are available for this task, and quantitative methods for comparing different tools are lacking. Tools for Single Cell Analysis (TSCAN) is a software tool developed to better support in silico pseudo-Time reconstruction in Single-Cell RNA-seq ANalysis. TSCAN uses a cluster-based minimum spanning tree (MST) approach to order cells. Cells are first grouped into clusters and an MST is then constructed to connect cluster centers. Pseudo-time is obtained by projecting each cell onto the tree, and the ordered sequence of cells can be used to study dynamic changes of gene expression along the pseudo-time. Clustering cells before MST construction reduces the complexity of the tree space. This often leads to improved cell ordering. It also allows users to conveniently adjust the ordering based on prior knowledge. TSCAN has a graphical user interface (GUI) to support data visualization and user interaction. Furthermore, quantitative measures are developed to objectively evaluate and compare different pseudo-time reconstruction methods. TSCAN is available at https://github.com/zji90/TSCAN and as a Bioconductor package. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Numerical simulation for solution of space-time fractional telegraphs equations with local fractional derivatives via HAFSTM

    NASA Astrophysics Data System (ADS)

    Pandey, Rishi Kumar; Mishra, Hradyesh Kumar

    2017-11-01

    In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. This numerical technique is based on a coupling of the homotopy analysis method and the Sumudu transform. It shows a clear advantage over mesh methods such as the finite difference method, and also over polynomial methods similar to the perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain and interprets the results in the same sense.
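
    For orientation, one commonly studied form of the time-space fractional telegraph equation is reproduced below; the precise operators, parameters and local-fractional setting used in the paper above may differ, so this is a generic statement rather than the authors' exact model:

    \[
    \frac{\partial^{2\alpha} u}{\partial t^{2\alpha}}
    + 2a\,\frac{\partial^{\alpha} u}{\partial t^{\alpha}}
    + b\,u
    \;=\; c^{2}\,\frac{\partial^{2\beta} u}{\partial |x|^{2\beta}} + f(x,t),
    \qquad \tfrac{1}{2} < \alpha \le 1,\;\; \tfrac{1}{2} < \beta \le 1,
    \]

    with the time derivatives usually taken in the Caputo sense and the space derivative as a Riesz-type fractional operator.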

  1. Application of the enhanced homotopy perturbation method to solve the fractional-order Bagley-Torvik differential equation

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M.; Ghaderi, R.; Sheikhol Eslami, A.; Ranjbar, A.; Hosseinnia, S. H.; Momani, S.; Sadati, J.

    2009-10-01

    The enhanced homotopy perturbation method (EHPM) is applied for finding improved approximate solutions of the well-known Bagley-Torvik equation for three different cases. The main characteristic of the EHPM is using a stabilized linear part, which guarantees the stability and convergence of the overall solution. The results are finally compared with the Adams-Bashforth-Moulton numerical method, the Adomian decomposition method (ADM) and the fractional differential transform method (FDTM) to verify the performance of the EHPM.
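
    For reference, the Bagley-Torvik equation that the EHPM is applied to is conventionally written as follows (generic coefficients A, B, C; the fractional term of order 3/2 models the damping of a rigid plate immersed in a Newtonian fluid):

    \[
    A\,y''(t) \;+\; B\,D^{3/2} y(t) \;+\; C\,y(t) \;=\; f(t),
    \qquad y(0)=y_0,\;\; y'(0)=y_1,
    \]

    where D^{3/2} denotes a (Caputo or Riemann-Liouville) fractional derivative of order 3/2.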

  2. Clouds in the atmospheres of extrasolar planets. IV. On the scattering greenhouse effect of CO2 ice particles: Numerical radiative transfer studies

    NASA Astrophysics Data System (ADS)

    Kitzmann, D.; Patzer, A. B. C.; Rauer, H.

    2013-09-01

    Context. Owing to their wavelength-dependent absorption and scattering properties, clouds have a strong impact on the climate of planetary atmospheres. The potential greenhouse effect of CO2 ice clouds in the atmospheres of terrestrial extrasolar planets is of particular interest because it might influence the position and thus the extension of the outer boundary of the classic habitable zone around main sequence stars. Such a greenhouse effect, however, is a complicated function of the CO2 ice particles' optical properties. Aims: We study the radiative effects of CO2 ice particles obtained by different numerical treatments to solve the radiative transfer equation. To determine the effectiveness of the scattering greenhouse effect caused by CO2 ice clouds, the radiative transfer calculations are performed over the relevant wide range of particle sizes and optical depths, employing different numerical methods. Methods: We used Mie theory to calculate the optical properties of particle polydispersion. The radiative transfer calculations were done with a high-order discrete ordinate method (DISORT). Two-stream radiative transfer methods were used for comparison with previous studies. Results: The comparison between the results of a high-order discrete ordinate method and simpler two-stream approaches reveals large deviations in terms of a potential scattering efficiency of the greenhouse effect. The two-stream methods overestimate the transmitted and reflected radiation, thereby yielding a higher scattering greenhouse effect. For the particular case of a cool M-type dwarf, the CO2 ice particles show no strong effective scattering greenhouse effect by using the high-order discrete ordinate method, whereas a positive net greenhouse effect was found for the two-stream radiative transfer schemes. As a result, previous studies of the effects of CO2 ice clouds using two-stream approximations overrated the atmospheric warming caused by the scattering greenhouse effect. Consequently, the scattering greenhouse effect of CO2 ice particles seems to be less effective than previously estimated. In general, higher order radiative transfer methods are needed to describe the effects of CO2 ice clouds accurately as indicated by our numerical radiative transfer studies.

  3. Verification of Software: The Textbook and Real Problems

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2006-01-01

    The process of verification, or determining the order of accuracy of computational codes, can be problematic when working with large, legacy computational methods that have been used extensively in industry or government. Verification does not ensure that the computer program is producing a physically correct solution; it ensures merely that the observed order of accuracy of the solutions is the same as the theoretical order of accuracy. The Method of Manufactured Solutions (MMS) is one of several ways of determining the order of accuracy. MMS is used to verify a series of computer codes progressing in sophistication from "textbook" to "real life" applications. The degree of numerical precision in the computations considerably influenced the range of mesh density needed to achieve the theoretical order of accuracy, even for 1-D problems. The choice of manufactured solutions and mesh form shifted the observed order in specific areas but not in general. Solution residual (iterative) convergence was not always achieved for 2-D Euler manufactured solutions. L2-norm convergence differed from variable to variable; therefore, an observed order of accuracy could not be determined conclusively in all cases, the cause of which is currently under investigation.
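
    The MMS workflow summarized above (choose a smooth exact solution, derive the matching source term, and check that the discretization error decays at the theoretical rate under mesh refinement) can be illustrated on a 1-D model problem. The sketch below verifies a second-order central-difference discretization of -u'' = f; it is only a toy analogue of the "textbook" end of the codes discussed, not the codes themselves.

```python
import numpy as np

def observed_order():
    """Method of Manufactured Solutions on -u'' = f, u(0) = u(1) = 0.
    Manufactured solution u(x) = sin(pi x)  =>  f(x) = pi^2 sin(pi x)."""
    errors, hs = [], []
    for n in (16, 32, 64, 128):
        h = 1.0 / n
        x = np.linspace(0.0, 1.0, n + 1)
        u_exact = np.sin(np.pi * x)
        f = np.pi**2 * np.sin(np.pi * x)

        # Second-order central difference: (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f[i].
        A = (np.diag(2.0 * np.ones(n - 1))
             + np.diag(-np.ones(n - 2), 1)
             + np.diag(-np.ones(n - 2), -1)) / h**2
        u = np.zeros(n + 1)
        u[1:-1] = np.linalg.solve(A, f[1:-1])

        errors.append(np.max(np.abs(u - u_exact)))
        hs.append(h)

    # Observed order of accuracy between successive grids (should approach 2).
    for k in range(1, len(hs)):
        p = np.log(errors[k - 1] / errors[k]) / np.log(hs[k - 1] / hs[k])
        print(f"h = {hs[k]:.4f}   observed order ~ {p:.2f}")

observed_order()
```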

  4. Engineering applications and analysis of vibratory motion fourth order fluid film over the time dependent heated flat plate

    NASA Astrophysics Data System (ADS)

    Mohmand, Muhammad Ismail; Mamat, Mustafa Bin; Shah, Qayyum

    2017-07-01

    This article deals with the time-dependent analysis of the thermally conducting, magnetohydrodynamic (MHD) liquid film flow of a fourth-order fluid past a vertical, vibrating plate. The analysis is developed for higher-order fluids of complex nature. The governing equations have been modeled as nonlinear partial differential equations together with the physical boundary conditions. Two different analytical approaches, the Adomian decomposition method (ADM) and the optimal homotopy asymptotic method (OHAM), have been used to find series solutions of the problems. The solutions obtained via the two methods have been compared using graphs and tables, and excellent agreement was found. The effects of the embedded flow parameters on the solution have been analysed through graphical diagrams.

  5. Characterisation of a reference site for quantifying uncertainties related to soil sampling.

    PubMed

    Barbizzi, Sabrina; de Zorzi, Paolo; Belli, Maria; Pati, Alessandra; Sansone, Umberto; Stellato, Luisa; Barbina, Maria; Deluisa, Andrea; Menegon, Sandro; Coletti, Valter

    2004-01-01

    The paper reports a methodology adopted to address problems related to quality assurance in soil sampling. The SOILSAMP project, funded by the Environmental Protection Agency of Italy (APAT), is aimed at (i) establishing protocols for soil sampling in different environments; (ii) assessing uncertainties associated with different soil sampling methods in order to select the "fit-for-purpose" method; and (iii) qualifying, in terms of trace element spatial variability, a reference site for national and international inter-comparison exercises. Preliminary results and considerations are illustrated.

  6. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR overcomes the problems of long revisit times and low-resolution images. Because it can obtain abundant information flexibly, accurately and quickly, airborne repeat-pass InSAR is significant for deformation monitoring of shallow ground. In order to obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from master and slave images, accurate co-registration must be ensured. Because of side looking, repeated observation paths and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates on the master and slave images to correspond to ground resolution cells of different sizes. The mismatch is most obvious in the long-slant-range parts of the master and slave images. In order to resolve the different pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental results show that the proposed method leads to superior co-registration accuracy.

  7. Self-adaptive difference method for the effective solution of computationally complex problems of boundary layer theory

    NASA Technical Reports Server (NTRS)

    Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.

    1986-01-01

    An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.

  8. Radon Diffusion Measurement in Polyethylene based on Alpha Detection

    NASA Astrophysics Data System (ADS)

    Rau, Wolfgang

    2011-04-01

    We present a method to measure the diffusion of radon in solid materials based on the alpha decay of the radon daughter products. In contrast to usual diffusion measurements, which detect the radon that penetrates a thin barrier, we let the radon diffuse into the material and then measure the alpha decays of the radon daughter products in the material. We applied this method to regular and ultra-high-molecular-weight polyethylene and find diffusion lengths of the order of millimetres, as expected. However, the preliminary analysis shows significant differences between the two different approaches we have chosen. These differences may be explained by the different experimental conditions.

  9. Evaluation of MTANNs for eliminating false-positive with different computer aided pulmonary nodules detection software.

    PubMed

    Shi, Zhenghao; Ma, Jiejue; Feng, Yaning; He, Lifeng; Suzuki, Kenji

    2015-11-01

    The MTANN (Massive Training Artificial Neural Network) is a promising tool that has been applied in recent years to eliminate false positives in thoracic CT. In order to evaluate whether this method is feasible for eliminating the false positives of different CAD schemes, especially when applied to commercial CAD software, this paper evaluates the performance of the method for eliminating false positives produced by three different versions of commercial CAD software for lung nodule detection in chest radiographs. Experimental results demonstrate that the approach is useful in reducing false positives for different computer-aided lung nodule detection software in chest radiographs.

  10. Coherent manipulation of non-thermal spin order in optical nuclear polarization experiments

    NASA Astrophysics Data System (ADS)

    Buntkowsky, Gerd; Ivanov, Konstantin L.; Zimmermann, Herbert; Vieth, Hans-Martin

    2017-03-01

    Time resolved measurements of Optical Nuclear Polarization (ONP) have been performed on hyperpolarized triplet states in molecular crystals created by light excitation. Transfer of the initial electron polarization to nuclear spins has been studied in the presence of radiofrequency excitation; the experiments have been performed with different pulse sequences using different doped molecular systems. The experimental results clearly demonstrate the dominant role of coherent mechanisms of spin order transfer, which manifest themselves in well pronounced oscillations. These oscillations are of two types, precessions and nutations, having characteristic frequencies, which are the same for the different molecular systems and the pulse sequences applied. Hence, precessions and nutations constitute a general feature of polarization transfer in ONP experiments. In general, coherent manipulation of spin order transfer creates a powerful resource for improving the performance of the ONP method, which paves the way to strong signal enhancement in nuclear magnetic resonance.

  11. MS-Electronic Nose Performance Improvement Using GC Retention Times And 2-Way And 3-Way Data Processing Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burian, Cosmin; Llobet, Eduard; Vilanova, Xavier

    We have designed a challenging experimental sample set in the form of 20 solutions with a high degree of similarity in order to study whether the addition of chromatographic separation information improves the performance of regular MS-based electronic noses. In order to make an initial study of the approach, two different chromatographic methods were used. By processing the data of these experiments with 2- and 3-way algorithms, we have shown that the addition of chromatographic separation information improves the results compared to the 2-way analysis of mass spectra or total ion chromatograms treated separately. Our findings show that when the chromatographic peaks are resolved (longer measurement times), 2-way methods work better than 3-way methods, whereas in the case of a more challenging measurement (more co-eluted chromatograms, much faster GC-MS measurements) 3-way methods work better.

  12. Dispersive optical soliton solutions for higher order nonlinear Sasa-Satsuma equation in mono mode fibers via new auxiliary equation method

    NASA Astrophysics Data System (ADS)

    Khater, Mostafa M. A.; Seadawy, Aly R.; Lu, Dianchen

    2018-01-01

    In this research, we apply a new technique to a higher-order nonlinear Schrödinger equation that represents the propagation of short light pulses in monomode optical fibers and the evolution of slowly varying packets of quasi-monochromatic waves in weakly nonlinear dispersive media. The nonlinear Schrödinger equation is one of the basic models in fiber optics. We apply the new auxiliary equation method to the nonlinear Sasa-Satsuma equation to obtain new optical forms of solitary traveling wave solutions. Exact and solitary traveling wave solutions are obtained in different forms, such as trigonometric, hyperbolic, exponential and rational functions. The forms of the solutions presented in this research demonstrate the advantage of our new technique over almost thirteen powerful methods. The main merit of this method over the others is that it gives more general solutions with some free parameters.

  13. Electron-phonon coupling from finite differences

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2018-02-01

    The interaction between electrons and phonons underlies multiple phenomena in physics, chemistry, and materials science. Examples include superconductivity, electronic transport, and the temperature dependence of optical spectra. A first-principles description of electron-phonon coupling enables the study of the above phenomena with accuracy and material specificity, which can be used to understand experiments and to predict novel effects and functionality. In this topical review, we describe the first-principles calculation of electron-phonon coupling from finite differences. The finite differences approach provides several advantages compared to alternative methods, in particular (i) any underlying electronic structure method can be used, and (ii) terms beyond the lowest order in the electron-phonon interaction can be readily incorporated. But these advantages are associated with a large computational cost that has until recently prevented the widespread adoption of this method. We describe some recent advances, including nondiagonal supercells and thermal lines, that resolve these difficulties, and make the calculation of electron-phonon coupling from finite differences a powerful tool. We review multiple applications of the calculation of electron-phonon coupling from finite differences, including the temperature dependence of optical spectra, superconductivity, charge transport, and the role of defects in semiconductors. These examples illustrate the advantages of finite differences, with cases where semilocal density functional theory is not appropriate for the calculation of electron-phonon coupling and many-body methods such as the GW approximation are required, as well as examples in which higher-order terms in the electron-phonon interaction are essential for an accurate description of the relevant phenomena. We expect that the finite difference approach will play a central role in future studies of the electron-phonon interaction.
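
    The core finite-difference idea sketched in the review (displace the atoms along a phonon coordinate, recompute the electronic eigenvalue, and difference the results) reduces to a frozen-phonon stencil. In the sketch below the electronic-structure call is replaced by a toy polynomial; in a real workflow band_energy would wrap a DFT or GW calculation on the displaced supercell, and the amplitude and stencil are illustrative choices, not prescriptions from the review.

```python
import numpy as np

def band_energy(u):
    """Stand-in for an electronic-structure calculation: a band eigenvalue (eV)
    as a function of the frozen-phonon amplitude u. A real workflow would run
    DFT/GW on the supercell displaced by u along the phonon eigenvector here."""
    return 1.50 - 0.20 * u**2 + 0.03 * u**4   # toy model, even in u

def quadratic_coupling(amplitude=0.05):
    """Second derivative d^2(eps)/du^2 from a central finite difference; this is
    the quantity entering the lowest-order (quadratic) electron-phonon coupling."""
    e_plus = band_energy(+amplitude)
    e_minus = band_energy(-amplitude)
    e_zero = band_energy(0.0)
    return (e_plus - 2.0 * e_zero + e_minus) / amplitude**2

print(f"d2(eps)/du2 ~ {quadratic_coupling():.4f}")   # analytic value at u = 0 is -0.40
```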

  14. On the Relation Between Spherical Harmonics and Simplified Spherical Harmonics Methods

    NASA Astrophysics Data System (ADS)

    Coppa, G. G. M.; Giusti, V.; Montagnini, B.; Ravetto, P.

    2010-03-01

    The purpose of the paper is, first, to recall the proof that the A_N method and, therefore, the SP_{2N-1} method (of which A_N was shown to be a variant) are equivalent to the odd-order P_{2N-1} method, at least for a particular class of multi-region problems; namely, problems for which the total cross section has the same value in all regions and the scattering is assumed to be isotropic. By virtue of the introduction of quadrature formulas representing first-collision probabilities, this class is then enlarged in order to encompass systems in which the regions may have different total cross sections. Some examples are reported to numerically validate the procedure.

  15. [Tumor segmentation of brain MRI with adaptive bandwidth mean shift].

    PubMed

    Hou, Xiaowen; Liu, Qi

    2014-10-01

    In order to obtain an adaptive bandwidth for mean shift and thereby make tumor segmentation of brain magnetic resonance imaging (MRI) more accurate, this paper presents an improved mean shift method. Firstly, we made use of the spatial characteristics of the brain image to eliminate the impact of the skull on segmentation; then, based on the spatial agglomeration characteristics of the different brain tissues (including tumor), we applied edge points to obtain the optimal initial mean value and the corresponding adaptive bandwidth, in order to improve the accuracy of tumor segmentation. The experimental results showed that, in contrast to the fixed-bandwidth mean shift method, the proposed method could segment the tumor more accurately.

  16. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    PubMed

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
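
    As a concrete reference point for one of the schemes named above, the L1 approximation of a Caputo derivative of order α in (0, 1) on a uniform grid t_n = nτ is commonly written as follows (standard textbook form, not a result specific to this paper):

    \[
    \partial_t^{\alpha} u(t_n) \;\approx\; \frac{1}{\Gamma(2-\alpha)\,\tau^{\alpha}}
    \sum_{k=0}^{n-1} b_k \left( u^{\,n-k} - u^{\,n-k-1} \right),
    \qquad b_k = (k+1)^{1-\alpha} - k^{1-\alpha}.
    \]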

  17. GPU-accelerated computational tool for studying the effectiveness of asteroid disruption techniques

    NASA Astrophysics Data System (ADS)

    Zimmerman, Ben J.; Wie, Bong

    2016-10-01

    This paper presents the development of a new Graphics Processing Unit (GPU) accelerated computational tool for asteroid disruption techniques. Numerical simulations are completed using the high-order spectral difference (SD) method. Due to the compact nature of the SD method, it is well suited for implementation with the GPU architecture, hence solutions are generated at orders of magnitude faster than the Central Processing Unit (CPU) counterpart. A multiphase model integrated with the SD method is introduced, and several asteroid disruption simulations are conducted, including kinetic-energy impactors, multi-kinetic energy impactor systems, and nuclear options. Results illustrate the benefits of using multi-kinetic energy impactor systems when compared to a single impactor system. In addition, the effectiveness of nuclear options is observed.

  18. Optical frequency-domain chromatic dispersion measurement method for higher-order modes in an optical fiber.

    PubMed

    Ahn, Tae-Jung; Jung, Yongmin; Oh, Kyunghwan; Kim, Dug Young

    2005-12-12

    We propose a new chromatic dispersion measurement method for the higher-order modes of an optical fiber using optical frequency-modulated continuous-wave (FMCW) interferometry. An optical fiber which supports a few excited modes was prepared for our experiments. Three different guided modes of the fiber were identified by using far-field spatial beam profile measurements and confirmed with numerical mode analysis. By using the principle of conventional FMCW interferometry with a tunable external cavity laser, we have demonstrated that the chromatic dispersion of a few-mode optical fiber can be obtained directly and quantitatively as well as qualitatively. We have also compared our measurement results with those of the conventional modulation phase-shift method.

  19. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.

  20. Acoustic wayfinding: A method to measure the acoustic contrast of different paving materials for blind people.

    PubMed

    Secchi, Simone; Lauria, Antonio; Cellai, Gianfranco

    2017-01-01

    Acoustic wayfinding involves using a variety of auditory cues to create a mental map of the surrounding environment. For blind people, these auditory cues become the primary substitute for visual information in order to understand the features of the spatial context and orient themselves. This can include creating sound waves, such as by tapping a cane. This paper reports the results of research on the "acoustic contrast" parameter between paving materials functioning as a cue and the surrounding or adjacent surface functioning as a background. A number of different materials were selected in order to create a test path, and a procedure was defined for verifying the ability of blind people to distinguish different acoustic contrasts. A method is proposed for measuring the acoustic contrast generated by the impact of a cane tip on the ground, to provide blind people with environmental information on spatial orientation and wayfinding in urban places. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    NASA Astrophysics Data System (ADS)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (z_sort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳ 4 h⁻¹ Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
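
    A stripped-down, single-sub-volume version of the two constraints above (match the reference dN/dz and preserve the rank order of the uncertain redshifts) amounts to a rank-preserving quantile reassignment. The sketch below illustrates only that idea and is not the SORT code; the synthetic two-wall reference sample and all names are assumptions of the example.

```python
import numpy as np

def rank_preserving_recovery(z_uncertain, z_reference, seed=1):
    """Reassign uncertain redshifts to values drawn from the reference dN/dz,
    keeping the original rank order (the stochastic-ordering step of the idea)."""
    rng = np.random.default_rng(seed)
    draws = np.sort(rng.choice(z_reference, size=len(z_uncertain), replace=True))
    ranks = np.argsort(np.argsort(z_uncertain))   # rank of each uncertain redshift
    return draws[ranks]                           # i-th ranked object gets i-th ranked draw

# Toy example: two thin "walls" at z = 0.5 and z = 1.0 observed with large errors.
rng = np.random.default_rng(0)
z_true = np.concatenate([rng.normal(0.5, 0.01, 500), rng.normal(1.0, 0.01, 500)])
z_obs = z_true + rng.normal(0.0, 0.05, z_true.size)
# Here z_true doubles as the reference sample; in practice the reference would be
# a smaller spectroscopic subsample with precise redshifts.
z_rec = rank_preserving_recovery(z_obs, z_reference=z_true)
print(np.std(z_obs - z_true), np.std(z_rec - z_true))   # scatter is typically reduced
```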

  2. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to the excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to within millimeter-level accuracy using the proposed correction formulas.
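
    For scale, the first-order ionospheric group delay that the dual-frequency combination removes is the textbook expression below (TEC in electrons per square meter, frequency f in Hz); the second- and third-order terms discussed in the paper fall off as 1/f^3 and 1/f^4 and involve the geomagnetic field, so they are not reproduced here:

    \[
    \Delta\rho_{1} \;=\; \frac{40.3\,\mathrm{TEC}}{f^{2}} \quad [\mathrm{m}].
    \]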

  3. Mesoporous TiO2 and copper-modified TiO2 nanoparticles: A case study

    NASA Astrophysics Data System (ADS)

    Ajay Kumar, R.; Vasavi Dutt, V. G.; Rajesh, Ch.

    2018-02-01

    In this paper we report the synthesis of mesoporous titanium dioxide (M-TiO2) nanoparticles (NPs) and copper (Cu)-modified M-TiO2 NPs by the hydrothermal method at relatively low temperatures using cetyltrimethylammonium bromide (CTAB) as a template. In order to obtain ordered spherical particles and a better interaction between the cationic and anionic precursors, we have used titanium isopropoxide (TTIP) as the titanium source and CTAB as the surfactant. The modification of M-TiO2 with copper follows the impregnation method. The changes in the structural and optical properties of the NPs were estimated using different characterization techniques, such as X-ray diffraction, field emission scanning electron microscopy, the Brunauer-Emmett-Teller curve and UV-Vis absorption analysis. M-TiO2 and Cu-modified M-TiO2 exhibit the pure anatase crystalline phase and show no evidence of CuO formation. The nitrogen adsorption-desorption hysteresis reveals that the material is mesoporous. Several samples synthesized at different process temperatures were further studied in order to make them suitable for a wide range of applications.

  4. Probing Graphene χ((2)) Using a Gold Photon Sieve.

    PubMed

    Lobet, Michaël; Sarrazin, Michaël; Cecchet, Francesca; Reckinger, Nicolas; Vlad, Alexandru; Colomer, Jean-François; Lis, Dan

    2016-01-13

    The nonlinear second-harmonic optical activity of graphene covering a gold photon sieve was determined for different polarizations. The photon sieve consists of a subwavelength gold nanohole array placed on glass. It combines the benefits of efficient light trapping and surface plasmon propagation to unravel different elements of the graphene second-order susceptibility χ^(2). Those elements contribute efficiently to second-harmonic generation. In fact, the graphene-coated photon sieve produces a second-harmonic intensity at least two orders of magnitude higher than that of a bare, flat gold layer, with one order of magnitude coming from the plasmonic effect of the photon sieve; the remaining enhancement arises from the graphene layer itself. The measured second-harmonic generation yield, supplemented by semianalytical computations, provides an original method to constrain the graphene χ^(2) elements. The values obtained are |d_31 + d_33| ≤ 8.1 × 10^3 pm^2/V and |d_15| ≤ 1.4 × 10^6 pm^2/V for a second-harmonic signal at 780 nm. This original method can be applied to any kind of 2D material covering such a plasmonic structure.

  5. Investigation of the Impact of Different Terms in the Second Order Hamiltonian on Excitation Energies of Valence and Rydberg States.

    PubMed

    Tajti, Attila; Szalay, Péter G

    2016-11-08

    Describing electronically excited states of molecules accurately poses a challenging problem for theoretical methods. Popular second-order techniques like Linear Response CC2 (CC2-LR), Partitioned Equation-of-Motion MBPT(2) (P-EOM-MBPT(2)), or Equation-of-Motion CCSD(2) (EOM-CCSD(2)) often produce controversial results whose accuracy is ill-balanced between valence and Rydberg-type states. In this study, we connect the theory of these methods and, to investigate the origin of their different behavior, establish a series of intermediate variants. The accuracy of these variants for the excitation energies of singlet valence and Rydberg electronic states is benchmarked on a large sample against high-accuracy Linear Response CC3 references. The results reveal the role of individual terms of the second-order similarity-transformed Hamiltonian, and the reason for the poor performance of CC2-LR in the description of Rydberg states. We also clarify the importance of the T̂1 transformation employed in the CC2 procedure, which is found to be very small for vertical excitation energies.

  6. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.

  7. High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves

    NASA Technical Reports Server (NTRS)

    Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.

    2012-01-01

    In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.

  8. Differences in Characteristics of Online versus Traditional Students: Implications for Target Marketing

    ERIC Educational Resources Information Center

    Pentina, Iryna; Neeley, Concha

    2007-01-01

    This study provides insight for educators and administrators into differences between students enrolled in Web-based and traditional classes as online learning enters the growth stage of its product life cycle. We identify characteristics that differentiate online students from those who prefer traditional education methods in order to offer more…

  9. Characteristics of The Narrow Spectrum Beams Used in the Secondary Standard Dosimetry Laboratory at the Lebanese Atomic Energy Commission.

    PubMed

    Melhem, N; El Balaa, H; Younes, G; Al Kattar, Z

    2017-06-15

    The Secondary Standard Dosimetry Laboratory at the Lebanese Atomic Energy Commission has different calibration methods for the various types of dosimeters used in industrial, military and medical fields. The calibration is performed using different beams of X-rays (low and medium energy) and gamma radiation delivered by a Cesium-137 source. The Secondary Standard Dosimetry Laboratory in charge of calibration services uses different protocols for the determination of high and low air kerma rates and for the narrow and wide series. In order to perform this calibration work, it is very important to identify all the beam characteristics for the different types of sources and radiation qualities. The following work describes the methods used for the determination of the different beam characteristics and calibration coefficients with their uncertainties, in order to enhance the radiation protection of workers and patients in medical diagnosis and industrial X-ray applications. All the characteristics of the X-ray beams are determined for the narrow spectrum series in the 40-200 keV range, where the inherent filtration, the current intensity, the high voltage, the beam profile and the total uncertainty are the specific characteristics of these X-ray beams. X-ray software was developed in order to visualize the reference values according to the characteristics of each beam. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. The estimation of the load of non-point source nitrogen and phosphorus based on observation experiments and export coefficient method in Three Gorges Reservoir Area

    NASA Astrophysics Data System (ADS)

    Tong, X. X.; Hu, B.; Xu, W. S.; Liu, J. G.; Zhang, P. C.

    2017-12-01

    In this paper, the Three Gorges Reservoir Area (TGRA) was chosen as the study area. The export coefficients of different land-use types were calculated through observation experiments and a literature review, and the loads of non-point source (NPS) nitrogen and phosphorus from different pollution sources, such as farmland, decentralized livestock and poultry breeding, and domestic sources, were then estimated. The results are as follows: the pollution load from dry land is the main source of farmland pollution; the order of the total nitrogen load of the different pollution sources, from high to low, is livestock breeding pollution, domestic pollution, land-use pollution, while the order of the phosphorus load, from high to low, is land-use pollution, livestock breeding pollution, domestic pollution. Therefore, reasonable farmland management, effective control of dry land fertilization and of the sewage discharge of livestock breeding are the keys to the prevention and control of NPS nitrogen and phosphorus in the TGRA.
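
    In its simplest form, the export coefficient bookkeeping used above reduces to summing source-specific coefficients times source sizes; the generic statement (not the authors' exact parameterization) is

    \[
    L \;=\; \sum_{i=1}^{n} E_i\,A_i ,
    \]

    where L is the annual nutrient load, E_i the export coefficient of source i (e.g. kg N per hectare of dry land, per animal, or per capita per year) and A_i the corresponding area, animal count or population.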

  11. Hybrid robust predictive optimization method of power system dispatch

    DOEpatents

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.

  12. Derivation of cloud-free-region atmospheric motion vectors from FY-2E thermal infrared imagery

    NASA Astrophysics Data System (ADS)

    Wang, Zhenhui; Sui, Xinxiu; Zhang, Qing; Yang, Lu; Zhao, Hang; Tang, Min; Zhan, Yizhe; Zhang, Zhiguo

    2017-02-01

    The operational cloud-motion tracking technique fails to retrieve atmospheric motion vectors (AMVs) in areas lacking cloud; and while the water vapor shown in water vapor imagery can be used, the heights assigned to the retrieved AMVs are mostly in the upper troposphere. As the noise-equivalent temperature difference (NEdT) performance of the FY-2E split window (10.3-11.5 μm, 11.6-12.8 μm) channels has been improved, the weak signals representing the spatial texture of water vapor and aerosols in cloud-free areas can be strengthened with algorithms based on the difference principle and applied in calculating AMVs in the lower troposphere. This paper is a preliminary summary for this purpose, in which the principles and algorithm schemes of the temporal difference, split window difference and second-order difference (SD) methods are introduced. Results from simulation and case experiments are reported in order to verify and evaluate the methods, based on comparisons between the retrievals and the "truth". The results show that all three algorithms, though not perfect in some cases, generally work well. Moreover, the SD method appears to be the best at suppressing the surface temperature influence and clarifying the spatial texture of water vapor and aerosols. The accuracy with respect to the NCEP 800 hPa reanalysis data was found to be acceptable, as compared with the accuracy of the cloud motion vectors.

  13. New method for calculations of nanostructure kinetic stability at high temperature

    NASA Astrophysics Data System (ADS)

    Fedorov, A. S.; Kuzubov, A. A.; Visotin, M. A.; Tomilin, F. N.

    2017-10-01

    A new universal method is developed for determining the kinetic stability (KS) of nanostructures at high temperatures, when nanostructures can be destroyed by the breaking of chemical bonds due to thermal vibrations of the atoms. The method is based on calculating the probability for any bond in the structure to stretch beyond a limit value L_max, at which the bond breaks. Assuming the number of vibrations is very large and all of them are independent, and using the central limit theorem, an expression for the probability of a given bond elongating up to L_max is derived in order to determine the KS. It is shown that this expression leads to the effective Arrhenius formula, but unlike the standard transition state theory it allows one to find the contributions of different vibrations to a chemical bond cleavage. To determine the KS, only a calculation of the frequencies and eigenvectors of the vibrational modes in the ground state of the nanostructure is needed, while the transition states need not be found. The suggested method was tested by calculating the KS of bonds in some alkanes, octene isomers and narrow graphene nanoribbons of different types and widths at the temperature T = 1200 K. The probability of breaking the C-C bond in the center of these hydrocarbons is found to be significantly higher than at the ends of the molecules. It is also shown that the KS of the octene isomers decreases when the double C=C bond is moved to the end of the molecule, which agrees well with the experimental data. The KS of the narrowest graphene nanoribbons of different types varies by 1-2 orders of magnitude depending on the width and structure, while all of them are several orders of magnitude less stable at high temperature than the hydrocarbons and benzene.

  14. Linear combinations come alive in crossover designs.

    PubMed

    Shuster, Jonathan J

    2017-10-30

    Before learning anything about statistical inference in beginning service courses in biostatistics, students learn how to calculate the mean and variance of linear combinations of random variables. Practical precalculus examples of the importance of these exercises can be helpful for instructors, the target audience of this paper. We shall present applications to the "1-sample" and "2-sample" methods for randomized short-term 2-treatment crossover studies, where patients experience both treatments in random order with a "washout" between the active treatment periods. First, we show that the 2-sample method is preferred as it eliminates "conditional bias" when sample sizes by order differ and produces a smaller variance. We also demonstrate that it is usually advisable to use the differences in posttests (ignoring baseline and post washout values) rather than the differences between the changes in treatment from the start of the period to the end of the period ("delta of delta"). Although the intent is not to provide a definitive discussion of crossover designs, we provide a section and references to excellent alternative methods, where instructors can provide motivation to students to explore the topic in greater detail in future readings or courses. Copyright © 2017 John Wiley & Sons, Ltd.
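
    The variance facts the article leans on are the elementary rules for linear combinations. For two period-specific responses X and Y measured on the same patient (hence correlated), with generic coefficients a and b,

    \[
    \operatorname{Var}(aX + bY) \;=\; a^{2}\operatorname{Var}(X) + b^{2}\operatorname{Var}(Y) + 2ab\,\operatorname{Cov}(X,Y),
    \]

    so the within-patient difference X - Y has variance Var(X) + Var(Y) - 2 Cov(X,Y), which is small when the two periods are strongly positively correlated; this is the gain exploited by the crossover analyses described above.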

  15. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.

    2008-11-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
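
    Combining two classifiers' posterior outputs with the logical maximum, minimum and mean rules mentioned above is a one-line operation per rule. The sketch below uses scikit-learn stand-ins for the MLP and SVC on synthetic data, not the authors' seismic attributes or implementations; all names and settings are assumptions of the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic two-class stand-in for "chimney" vs "non-chimney" attribute vectors.
X, y = make_classification(n_samples=600, n_features=12, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
svc = SVC(probability=True, random_state=0).fit(Xtr, ytr)   # probability=True enables predict_proba

p_mlp = mlp.predict_proba(Xte)[:, 1]
p_svc = svc.predict_proba(Xte)[:, 1]

combined = {
    "max":  np.maximum(p_mlp, p_svc),   # permissive: flag if either classifier does
    "min":  np.minimum(p_mlp, p_svc),   # conservative: both classifiers must agree
    "mean": 0.5 * (p_mlp + p_svc),      # simple average of the two posteriors
}
for rule, p in combined.items():
    print(f"{rule:4s} rule accuracy: {np.mean((p > 0.5) == yte):.3f}")
```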

  16. Spatial eigensolution analysis of energy-stable flux reconstruction schemes and influence of the numerical flux on accuracy and robustness

    NASA Astrophysics Data System (ADS)

    Mengaldo, Gianmarco; De Grazia, Daniele; Moura, Rodrigo C.; Sherwin, Spencer J.

    2018-04-01

    This study focuses on the dispersion and diffusion characteristics of high-order energy-stable flux reconstruction (ESFR) schemes via the spatial eigensolution analysis framework proposed in [1]. The analysis is performed for five ESFR schemes, where the parameter 'c' dictating the properties of the specific scheme recovered is chosen such that it spans the entire class of ESFR methods, also referred to as VCJH schemes, proposed in [2]. In particular, we used five values of 'c', two that correspond to its lower and upper bounds and the others that identify three schemes that are linked to common high-order methods, namely the ESFR recovering two versions of discontinuous Galerkin methods and one recovering the spectral difference scheme. The performance of each scheme is assessed when using different numerical intercell fluxes (e.g. different levels of upwinding), ranging from "under-" to "over-upwinding". In contrast to the more common temporal analysis, the spatial eigensolution analysis framework adopted here allows one to grasp crucial insights into the diffusion and dispersion properties of FR schemes for problems involving non-periodic boundary conditions, typically found in open-flow problems, including turbulence, unsteady aerodynamics and aeroacoustics.

  17. Uranium phase diagram from first principles

    NASA Astrophysics Data System (ADS)

    Yanilkin, Alexey; Kruglov, Ivan; Migdal, Kirill; Oganov, Artem; Pokatashkin, Pavel; Sergeev, Oleg

    2017-06-01

    The work is devoted to the investigation of the uranium phase diagram up to a pressure of 1 TPa and a temperature of 15 kK based on density functional theory. First, a comparison of pseudopotential and full-potential calculations is carried out for different uranium phases. In the second step, the phase diagram at zero temperature is investigated by means of the USPEX program and pseudopotential calculations. Stable and metastable structures with close energies are selected. In order to obtain the phase diagram at finite temperatures, a preliminary selection of stable phases is made by free energy calculations based on the small-displacement method. For the remaining candidates, accurate values of the free energy are obtained by means of the thermodynamic integration method (TIM). For this purpose, quantum molecular dynamics simulations are carried out at different volumes and temperatures. Interatomic potentials based on machine learning are developed in order to treat the large systems and long times required for the TIM. The potentials reproduce the free energy with an accuracy of 1-5 meV/atom, which is sufficient for the prediction of phase transitions. The equilibrium curves of the different phases are obtained from the free energies. The melting curve is calculated by a modified Z-method with the developed potential.
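
    The thermodynamic integration step mentioned above rests on the standard identity for the free-energy difference between a reference system (λ = 0) and the target system (λ = 1), evaluated from ensemble averages along the coupling path; this is the generic textbook relation, not anything specific to the uranium potentials used:

    \[
    F_{\lambda=1} - F_{\lambda=0} \;=\; \int_{0}^{1}
    \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda .
    \]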

  18. Physical Activity and Music to Support Pre-School Children's Mathematics Learning

    ERIC Educational Resources Information Center

    Elofsson, Jessica; Englund Bohm, Anna; Jeppsson, Catarina; Samuelsson, Joakim

    2018-01-01

    In order to give all children equal opportunities in school, methods to prevent early differences are needed. The overall aim of this study was to investigate the effectiveness of two structured teaching methods: Math in Action, characterised by physical activity and music, and common numerical activities. Children (28 girls, 25 boys) were…

  19. Methods of Adapting Digital Content for the Learning Process via Mobile Devices

    ERIC Educational Resources Information Center

    Lopez, J. L. Gimenez; Royo, T. Magal; Laborda, Jesus Garcia; Calvo, F. Garde

    2009-01-01

    This article analyses different methods of adapting digital content for its delivery via mobile devices taking into account two aspects which are a fundamental part of the learning process; on the one hand, functionality of the contents, and on the other, the actual controlled navigation requirements that the learner needs in order to acquire high…

  20. A compact finite element method for elastic bodies

    NASA Technical Reports Server (NTRS)

    Rose, M. E.

    1984-01-01

    A nonconforming finite element method is described for treating linear equilibrium problems, and a convergence proof showing second-order accuracy is given. The close relationship to a related compact finite difference scheme due to Phillips and Rose is examined. A condensation technique is shown to preserve the compactness property and suggests an approach to a certain type of homogenization.
